diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crows Zero 2 Br Rip 720p Movies Torrents The Ultimate Collection of Fight Scenes and Night Club Music.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crows Zero 2 Br Rip 720p Movies Torrents The Ultimate Collection of Fight Scenes and Night Club Music.md deleted file mode 100644 index 2375540639209159b9b1646465d98c8899409901..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crows Zero 2 Br Rip 720p Movies Torrents The Ultimate Collection of Fight Scenes and Night Club Music.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Crows Zero 2: A Review of the Action-Packed Sequel

-

Introduction

-

If you are a fan of Japanese action films, you might have heard of Crows Zero, a 2007 film based on the manga Crows by Hiroshi Takahashi. The film follows the violent conflicts between rival gangs of students at Suzuran All-Boys High School, also known as "The School of Crows". The film was a commercial and critical success, and spawned a sequel in 2009, Crows Zero 2.

-

Crows Zero 2 Br Rip 720p Movies Torrents


DOWNLOAD ○○○ https://byltly.com/2uKAeU



-

What is Crows Zero 2?

-

Crows Zero 2 is a 2009 Japanese action film directed by Takashi Miike with a screenplay by Shogo Muto. It is the second film based on the manga Crows by Hiroshi Takahashi, and a direct sequel to 2007's Crows Zero. The film stars much of the cast from the first film, including Shun Oguri, Kyōsuke Yabe, Meisa Kuroki, and Takayuki Yamada reprising their roles. It was released in Japan on April 11, 2009.

-

Who are the main characters and actors?

-

The main characters and actors of Crows Zero 2 are:

| Character | Actor |
| --- | --- |
| Takiya Genji | Shun Oguri |
| Serizawa Tamao | Takayuki Yamada |
| Aizawa Ruka | Meisa Kuroki |
| Katagiri Ken | Kyōsuke Yabe |
| Tatsukawa Tokio | Kenta Kiritani |
| Tamura Chūta | Suzunosuke Tanaka |
| Izaki Shun | Sōsuke Takaoka |
| Takiya Hideo | Goro Kishitani |
| Rindaman / Hayashida Megumi | Motoki Fukami |
| Kirishima Hiromi | Shunsuke Daitō |
| Makise Takashi | Tsutomu Takahashi |
| Tsutsumoto Shōji | Yusuke Kamiji |
| Mikami Manabu and Takeshi | Yusuke Izaki and Hisato Izaki |
| Honjō Toshiaki | Ryō Hashizume |
| Sugihara Makoto | Yu Koyanagi |
| Tokaji Yūji | Kaname Endō |
| Kawanishi Noboru | Shinnosuke Abe |
| Bitō Makio and Tatsuya | Yoshiyuki Yamaguchi and Haruma Miura |
| Narumi Taiga | Nobuaki Kaneko |

Analysis

-

What are the strengths of the film?

-

One of the strengths of Crows Zero 2 is the action scenes, which are well-choreographed, realistic, and brutal. The film does not shy away from showing the blood and pain of the street fights, and the sound effects and camera work add to the impact. The film also showcases a variety of fighting styles and weapons, such as fists, kicks, bats, pipes, chains, knives, and even umbrellas. The action scenes are not only entertaining, but also serve to advance the plot and develop the characters.

-

What are the weaknesses of the film?

-

One of the weaknesses of Crows Zero 2 is its running time of over two hours. The film could have been trimmed by cutting some unnecessary scenes or subplots, such as the romance between Genji and Ruka (Meisa Kuroki), which does not add much to the story or the characters. The film also suffers from some pacing issues, especially in the first half, where it takes too long to set up the conflict and introduce the characters. It would have benefited from tighter editing and focus.

-

How does it compare to the first film and the manga?

-

Crows Zero 2 is a faithful adaptation of the manga by Hiroshi Takahashi, which is a prequel to his other manga series Crows and Worst. The film follows the manga closely, with some minor changes and additions. For example, the film adds a new character, Ryo Urushibara (Gou Ayano), who is a homage to Michael Jackson. The film also changes some details of the final battle between Suzuran and Hosen, such as the location and the outcome.

-


-

Crows Zero 2 is a direct sequel to Crows Zero, which was also directed by Takashi Miike. The film continues the story of Genji and his quest to conquer Suzuran High School. The film retains most of the cast and crew from the first film, as well as its style and tone. However, Crows Zero 2 is darker and more serious than Crows Zero, which had more comedy and humor. The film also has more violence and bloodshed than Crows Zero, which was more stylized and cartoonish.

-

Conclusion

-

What is the main message of the film?

-

The main message of Crows Zero 2 is that friendship and loyalty are more important than power and glory. The film shows that Genji learns to value his friends and allies more than his ambition to rule Suzuran. He realizes that he cannot achieve his goal alone, and that he needs to unite his school against a common enemy. He also learns to respect his rivals and enemies, such as Serizawa and Narumi, who share his passion for fighting. The film also shows that violence is not always the answer, and that sometimes peace and dialogue are better options.

-

Who should watch it and why?

-

Crows Zero 2 is a film for fans of Japanese action cinema, especially those who like street-gang movies. It offers plenty of excitement and thrills for viewers who enjoy realistic, brutal fight scenes. The film also features solid acting from its cast, with great moments from quite a few of the Suzuran kids. The comedy works too: several funny scenes draw genuine chuckles, and the film's overall quirkiness is fun to watch.

-

FAQs


-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Apk Miracle Thunder 2.82 Crack High Quality.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Apk Miracle Thunder 2.82 Crack High Quality.md deleted file mode 100644 index ec79744371950a679e9fa45d48167c2671d2da2a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Apk Miracle Thunder 2.82 Crack High Quality.md +++ /dev/null @@ -1,18 +0,0 @@ -
-

How to Download APK Miracle Thunder 2.82 Crack for Free

-

If you are looking for a way to download APK Miracle Thunder 2.82 crack for free, you have come to the right place. APK Miracle Thunder is a powerful tool that allows you to flash, unlock, repair, and root your Android devices. It supports a wide range of models and brands, such as Samsung, Huawei, Oppo, Vivo, Xiaomi, and more.

-

download apk miracle thunder 2.82 crack


Download Zip ————— https://byltly.com/2uKxkK



-

However, APK Miracle Thunder is not a free tool. You need to pay a license fee to use it. But don't worry, there is a way to download APK Miracle Thunder 2.82 crack for free and enjoy all its features without any limitations. In this article, we will show you how to do it in a few simple steps.

-
    -
  1. Download APK Miracle Thunder 2.82 crack from a reliable source. There are many websites that claim to offer APK Miracle Thunder 2.82 crack for free, but not all of them are trustworthy. Some of them may contain viruses, malware, or spyware that can harm your device or steal your data. To avoid this, you need to download APK Miracle Thunder 2.82 crack from a reliable source. We recommend using this link: https://example.com/download-apk-miracle-thunder-2-82-crack/
  2. -
  3. Install APK Miracle Thunder 2.82 crack on your computer. Once you have downloaded APK Miracle Thunder 2.82 crack from the link above, you need to install it on your computer. To do this, you need to extract the zip file and run the setup.exe file. Follow the instructions on the screen and complete the installation process.
  4. -
  5. Connect your Android device to your computer. After installing APK Miracle Thunder 2.82 crack on your computer, you need to connect your Android device to your computer using a USB cable. Make sure you enable USB debugging mode on your device before connecting it.
  6. -
  7. Select your device model and brand on APK Miracle Thunder 2.82 crack. Once your device is connected to your computer, you need to select your device model and brand on APK Miracle Thunder 2.82 crack. You can find them on the left panel of the tool.
  8. -
  9. Choose the action you want to perform on your device. APK Miracle Thunder 2.82 crack offers various actions that you can perform on your device, such as flash, unlock, repair, or root. You can choose the action you want to perform on the right panel of the tool.
  10. -
  11. Click on the start button and wait for the process to finish. After choosing the action you want to perform on your device, you need to click on the start button and wait for the process to finish. APK Miracle Thunder 2.82 crack will display the progress and status of the process on the screen.
  12. -
-

Congratulations! You have successfully downloaded and used APK Miracle Thunder 2.82 crack for free on your Android device. You can now enjoy all its features and benefits without any restrictions.

-

If you found this article helpful, please share it with your friends and family who may also want to download APK Miracle Thunder 2.82 crack for free.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Badrinath Ki Dulhania Movies 1080p Torrent The Story Behind the Making of the Movie.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Badrinath Ki Dulhania Movies 1080p Torrent The Story Behind the Making of the Movie.md deleted file mode 100644 index 6369f0a06de3c72d3933cdb86092df7ae419db54..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Badrinath Ki Dulhania Movies 1080p Torrent The Story Behind the Making of the Movie.md +++ /dev/null @@ -1,200 +0,0 @@ - -

Download Badrinath Ki Dulhania Movies 1080p Torrent

-

If you are a fan of Bollywood romantic comedy movies, you might have heard of Badrinath Ki Dulhania, a 2017 hit film starring Varun Dhawan and Alia Bhatt. But did you know that you can download Badrinath Ki Dulhania movies 1080p torrent and watch it on your device anytime and anywhere? In this article, we will tell you everything you need to know about downloading Badrinath Ki Dulhania movies 1080p torrent, including what the movie is about, why you should download it, where to download it, and how to do it safely and legally.

-

What is Badrinath Ki Dulhania?

-

Badrinath Ki Dulhania is a Hindi-language romantic comedy film written and directed by Shashank Khaitan and produced by Dharma Productions. It is a spiritual successor to Humpty Sharma Ki Dulhania (2014), which also starred Varun Dhawan and Alia Bhatt. The film was released on March 10, 2017, during the Holi weekend, and became a box office success, earning over ₹200.45 crores worldwide.

-

download Badrinath Ki Dulhania movies 1080p torrent


Download ✏ ✏ ✏ https://byltly.com/2uKw81



-

A brief summary of the plot

-

The film follows the story of Badrinath Bansal (Varun Dhawan), a wealthy but chauvinistic young man from Jhansi, who falls in love with Vaidehi Trivedi (Alia Bhatt), a smart and independent woman from Kota, who wants to become an air hostess. However, Vaidehi rejects his marriage proposal and runs away on their wedding day, leaving him humiliated and heartbroken. Badri then chases her to Singapore, where she works as a flight attendant, and tries to win her back. Along the way, he learns to respect her dreams and ambitions, while she learns to trust him and his love.

-

The cast and crew of the movie

-

The film features a talented cast of actors who deliver memorable performances. Here are some of the main cast members:

- Varun Dhawan as Badrinath Bansal
- Alia Bhatt as Vaidehi Trivedi

The film also has a talented crew behind the scenes who made the film possible. Here are some of the key crew members:

- Shashank Khaitan (writer and director)
- Dharma Productions (production company)
- Akhil Sachdeva, Tanishk Bagchi, and Amaal Mallik (music directors)

The reception and awards of the film

-

The film received positive reviews from critics and audiences alike, who praised its humor, romance, message, performances, music, and direction. It also received several awards and nominations at various ceremonies. Here are some of the accolades that the film won or was nominated for:

-
| Award | Category | Recipient(s) | Result |
| --- | --- | --- | --- |
| Filmfare Awards | Best Film | Dharma Productions | Nominated |
| Filmfare Awards | Best Director | Shashank Khaitan | Nominated |
| Filmfare Awards | Best Actor | Varun Dhawan | Nominated |
| Filmfare Awards | Best Actress | Alia Bhatt | Nominated |
| Filmfare Awards | Best Male Playback Singer | Arijit Singh for "Roke Na Ruke Naina" | Won |
| IIFA Awards | Best Actor (Male) | Varun Dhawan | Nominated |
| IIFA Awards | Best Actor (Female) | Alia Bhatt | Nominated |
| IIFA Awards | Best Music Director | Akhil Sachdeva, Tanishk Bagchi, Amaal Mallik for "Badrinath Ki Dulhania" | Nominated |
| IIFA Awards | Best Playback Singer (Male) | Arijit Singh for "Roke Na Ruke Naina" | Nominated |
| IIFA Awards | Best Playback Singer (Female) | Neha Kakkar for "Badri Ki Dulhania" | Nominated |
| Zee Cine Awards | Best Film | Dharma Productions | Nominated |
| Zee Cine Awards | Best Actor (Male) | Varun Dhawan | Nominated |
| Zee Cine Awards | Best Actor (Female) | Alia Bhatt | Nominated |
| Zee Cine Awards | Best Director | Shashank Khaitan | Nominated |
| Zee Cine Awards | Best Music Director | Akhil Sachdeva for "Badrinath Ki Dulhania" | Nominated |

Why download Badrinath Ki Dulhania movies 1080p torrent?

-

If you have enjoyed watching Badrinath Ki Dulhania in the theaters or on streaming platforms, you might want to download it and watch it again on your device. There are many reasons why downloading Badrinath Ki Dulhania movies 1080p torrent is a good idea. Here are some of them:

-


-

The benefits of downloading movies 1080p torrent

- You can watch the movie offline on your device, anytime and anywhere, without depending on a streaming connection.
- A 1080p file gives you high-quality video and audio on any screen.
- You can keep the file and re-watch it or share it with your friends and family.

The risks of downloading movies 1080p torrent

-

However, downloading Badrinath Ki Dulhania movies 1080p torrent also comes with some risks that you should be aware of. Here are some of them:

- Torrent files can carry malware or viruses that can harm your device or steal your data.
- Downloading copyrighted movies may violate the copyright laws of your country or region.
- Torrent sites are often blocked or taken down, and the files they offer can be fake or low quality.

How to download movies 1080p torrent safely and legally

-

Fortunately, there are ways to download Badrinath Ki Dulhania movies 1080p torrent safely and legally. Here are some tips that you should follow:

- Use a VPN service to protect your privacy and security while downloading.
- Scan every downloaded file with antivirus software before opening it.
- Prefer legal platforms such as Netflix or Prime Video, which let subscribers download the movie for offline viewing.
- Download torrents only from reliable and reputable websites, and check their reviews and ratings first.

Where to download Badrinath Ki Dulhania movies 1080p torrent?

-

Now that you know why and how to download Badrinath Ki Dulhania movies 1080p torrent, you might be wondering where to find them. There are many websites and apps that offer torrent files for movies, but not all of them are safe, legal, or reliable. To help you out, we have compiled a list of some of the best websites and apps to download Badrinath Ki Dulhania movies 1080p torrent. Here they are:

-

The best websites to download movies 1080p torrent

-

123Movies

-

123Movies is one of the most popular and widely used websites to watch and download movies online for free. It has a huge collection of movies from various genres, languages, countries, and years. You can easily find Badrinath Ki Dulhania on this website and download it as a 1080p torrent file. You can also stream the movie online without any registration or ads. However, you should be careful about the pop-ups and redirects that might lead you to malicious sites or downloads. You should also use a VPN service to access this website as it might be blocked in some regions due to legal issues.

-

The URL for this website is https://w1.123-movies.lol/movie/watch-badrinath-ki-dulhania-online-5763.

-

Netflix

-

Netflix is one of the most popular and trusted streaming platforms in the world. It offers a wide range of movies, shows, documentaries, and originals for its subscribers. You can watch Badrinath Ki Dulhania on Netflix with high-quality video and audio. You can also download the movie on your device using the Netflix app for offline viewing. However, you need to have a Netflix subscription to access this service. You also need to have enough storage space on your device to download the movie as a 1080p file.

-

The URL for this website is https://www.netflix.com/title/80180043.

-

Prime Video

-

Prime Video is another popular and trusted streaming platform that offers a variety of movies, shows, originals, and exclusives for its subscribers. You can watch Badrinath Ki Dulhania on Prime Video with high-quality video and audio. You can also download the movie on your device using the Prime Video app for offline viewing. However, you need to have a Prime Video subscription to access this service. You also need to have enough storage space on your device to download the movie as a 1080p file.

-

The URL for this website is https://www.primevideo.com/detail/Badrinath-Ki-Dulhania/0QSSI97L6FF0AN5EV3FEEWV298.

-

The best apps to download movies 1080p torrent

-

uTorrent

-

uTorrent is one of the most popular and widely used torrent clients in the world. It allows you to download and manage torrent files from various sources with ease and speed. You can use uTorrent to download Badrinath Ki Dulhania movies 1080p torrent from any website that offers it. You can also adjust the settings and preferences of uTorrent to optimize your downloading experience. However, you should be careful about the ads and offers that might appear on uTorrent as they might be harmful or unwanted. You should also use a VPN service to protect your privacy and security while using uTorrent.

-

The URL for this app is https://www.utorrent.com/.

-

BitTorrent

-

BitTorrent is another popular and widely used torrent client in the world. It allows you to download and manage torrent files from various sources with ease and speed. You can use BitTorrent to download Badrinath Ki Dulhania movies 1080p torrent from any website that offers it. You can also adjust the settings and preferences of BitTorrent to optimize your downloading experience. However, you should be careful about the ads and offers that might appear on BitTorrent as they might be harmful or unwanted. You should also use a VPN service to protect your privacy and security while using BitTorrent.

-

The URL for this app is https://www.bittorrent.com/.

-

Popcorn Time

-

Popcorn Time is a unique app that combines streaming and torrenting in one platform. It allows you to watch and download movies online for free using torrent files from various sources. You can use Popcorn Time to watch and download Badrinath Ki Dulhania movies 1080p torrent with high-quality video and audio. You can also choose from different subtitles and languages for your convenience. However, you should be aware that Popcorn Time is not legal in some countries and regions due to copyright issues. You should also use a VPN service to access Popcorn Time safely and anonymously. -

The URL for this app is https://popcorntime.app/. -

Conclusion

Badrinath Ki Dulhania is a fun and entertaining Bollywood romantic comedy film that stars Varun Dhawan and Alia Bhatt as two opposite characters who fall in love despite their differences and challenges. If you want to watch this movie again or share it with your friends and family, you can download Badrinath Ki Dulhania movies 1080p torrent and enjoy it on your device anytime and anywhere. However, you should be careful about the risks and responsibilities involved in downloading movies 1080p torrent, and follow the tips and suggestions we have given you in this article. We hope you have found this article helpful and informative, and we wish you a happy and safe downloading experience.

Frequently Asked Questions

-
    -
  1. What is the meaning of Badrinath Ki Dulhania?
  2. -

    Badrinath Ki Dulhania means Badrinath's bride in Hindi. It is the title of the movie and also the name of one of the songs in the movie.

    -
  3. Is Badrinath Ki Dulhania a sequel to Humpty Sharma Ki Dulhania?
  4. -

    Badrinath Ki Dulhania is not a sequel to Humpty Sharma Ki Dulhania, but a spiritual successor. It has a different story and characters, but shares the same genre and theme of love and marriage.

    -
  5. Where can I watch Badrinath Ki Dulhania online?
  6. -

    You can watch Badrinath Ki Dulhania online on streaming platforms like Netflix and Prime Video, if you have a subscription. You can also watch it online for free on websites like 123Movies, but you should be careful about the legality and safety of these sites.

    -
  7. How can I download Badrinath Ki Dulhania movies 1080p torrent?
  8. -

    You can download Badrinath Ki Dulhania movies 1080p torrent from websites that offer torrent files for movies, such as 123Movies. You will need a torrent client like uTorrent or BitTorrent to download the file. You will also need a VPN service to protect your privacy and security while downloading.

    -
  9. Is downloading Badrinath Ki Dulhania movies 1080p torrent legal and safe?
  10. -

    Downloading Badrinath Ki Dulhania movies 1080p torrent may not be legal in some countries and regions, as it may violate the copyright laws and rights of the creators and distributors of the movie. You should respect their wishes and support them by buying their products or services. Downloading Badrinath Ki Dulhania movies 1080p torrent may also not be safe, as you may encounter malware or viruses that can harm your device or data. You should use a reliable and reputable website and antivirus software to download the file safely; a short sketch for checking a downloaded file's hash follows this list.

    -
-
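To make the antivirus advice in the last answer concrete: one quick check is to look up a downloaded file's SHA-256 hash on VirusTotal before opening it. The sketch below is a minimal illustration in Python; it assumes VirusTotal's public v3 REST API and the requests package, and the API key and file name are placeholders:

```python
import hashlib
import requests  # pip install requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder: get a free key from virustotal.com

def sha256_of(path: str) -> str:
    """Hash the file locally, so the file itself never has to be uploaded."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

file_hash = sha256_of("downloaded-file.mkv")  # placeholder file name
response = requests.get(
    f"https://www.virustotal.com/api/v3/files/{file_hash}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)

if response.status_code == 404:
    # An unknown hash is not proof of safety; scan the file locally as well.
    print("VirusTotal has never seen this file.")
else:
    response.raise_for_status()
    stats = response.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"Malicious verdicts: {stats['malicious']}, suspicious: {stats['suspicious']}")
```

A 404 response only means the hash is unknown to VirusTotal, which is not evidence that the file is safe.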

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office for Free Cracked Pros Cons and Alternatives.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office for Free Cracked Pros Cons and Alternatives.md deleted file mode 100644 index 7240a8622607cb9f50caae0692c17cd4853a4dac..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office for Free Cracked Pros Cons and Alternatives.md +++ /dev/null @@ -1,40 +0,0 @@ - -

How to Download Microsoft Office for Free Cracked in 2023

-

Microsoft Office is one of the most popular and widely used productivity suites in the world. It includes applications such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive and Teams that help you create, edit, share and collaborate on various types of documents and projects. However, Microsoft Office is not free. You need to purchase a license key or a subscription to use it legally and access all its features and updates.

-

download microsoft office for free cracked


Downloadhttps://byltly.com/2uKxYa



-

But what if you don't want to pay for Microsoft Office? Is there a way to download it for free cracked and use it without any limitations? The answer is yes, but it comes with some risks and drawbacks. In this article, we will show you how to download Microsoft Office for free cracked in 2023, what are the pros and cons of doing so, and what are some safer and cheaper alternatives to consider.

-

How to Download Microsoft Office for Free Cracked in 2023

-

There are many websites and torrents that claim to offer Microsoft Office for free cracked. These are usually modified versions of the original software that have been hacked or cracked to bypass the activation process and remove the restrictions. Some of these versions may also include additional features or tools that are not available in the official release.

-

To download Microsoft Office for free cracked in 2023, you need to follow these steps:

-
    -
  1. Find a reliable source that offers Microsoft Office for free cracked. You can use a search engine or a torrent site to look for it. Make sure to check the reviews and ratings of the source before downloading anything.
  2. -
  3. Download the installation file or the ISO file of Microsoft Office for free cracked. You may need a torrent client or a download manager to do this.
  4. -
  5. Open the installation file or mount the ISO file and run the setup.exe file. Follow the on-screen instructions to install Microsoft Office for free cracked on your computer.
  6. -
  7. Activate Microsoft Office for free cracked using the crack or activator provided by the source. This may involve copying some files, running some commands, or using some tools.
  8. -
  9. Enjoy using Microsoft Office for free cracked on your computer.
  10. -
-

The Pros and Cons of Downloading Microsoft Office for Free Cracked

-

Downloading Microsoft Office for free cracked may seem like a tempting option, but it also comes with some disadvantages and risks that you should be aware of. Here are some of the pros and cons of downloading Microsoft Office for free cracked:

-

-

The Pros

- You can use Microsoft Office for free, without paying for a license key or a subscription.
- You may get access to extra features or tools that are not available in the official release.

The Cons

- The cracked copy may contain viruses, malware, or spyware that can harm your computer or steal your data.
- You will not receive official updates, patches, or support from Microsoft.
- Using cracked software may violate copyright laws in your country or region.


The Safer and Cheaper Alternatives to Downloading Microsoft Office for Free Cracked

-

If you want to use Microsoft Office legally and safely, you have some alternatives that are cheaper than buying a license key or a subscription. Here are some of them:

-
- Use the free web-based versions of the Office applications that run in your browser. As the FAQ below notes for PowerPoint Online, they cost nothing and work on any device, although they have fewer features than the desktop apps.
- Get a Microsoft 365 subscription, which spreads the cost over monthly or yearly payments instead of a one-time license fee.
- Buy a cheaper one-time edition such as Office Home & Student instead of the business editions.
| Pros | Cons |
| --- | --- |
| You might save some money or get a free copy of PowerPoint 2016. | You might get a fake, pirated, or infected copy of PowerPoint 2016 that could harm your computer or compromise your data. |
| You might get access to some features or functions that are not available in the official version of PowerPoint 2016. | You might miss out on some features or functions that are only available in the official version of PowerPoint 2016. |
| You might have more flexibility and control over the installation and activation process of PowerPoint 2016. | You might have more difficulty and risk in the installation and activation process of PowerPoint 2016. |
-

If you decide to download PowerPoint 2016 from other sources, you should take some precautions to verify the authenticity and security of the downloaded file. Here are some tips on how to do that:

-

- Download only from the official Microsoft website or another reputable source, and check its reviews and ratings first.
- If the source publishes a checksum for the file, compare it with a hash you compute locally before opening the file (a short sketch follows below).
- Scan the downloaded file with antivirus software before running it.
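The checksum tip above can be automated in a few lines. The sketch below is a minimal illustration in Python, not an official procedure; the installer name and the expected hash are placeholders that you would replace with the real values published by your download source:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values below are placeholders: substitute the real installer path and
# the checksum that the download source publishes.
downloaded_file = "powerpoint2016_setup.iso"
expected_sha256 = "<hash published by the source>"

actual = sha256_of(downloaded_file)
if actual == expected_sha256.lower():
    print("Checksum matches: the file is the one the source published.")
else:
    print(f"Checksum mismatch: got {actual}; do not run this file.")
```

Note that a matching checksum only proves the file was not corrupted or swapped in transit; it says nothing about whether the source itself is trustworthy.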

Tips and Tricks for Using PowerPoint 2016 on Windows 10

-

Now that you have downloaded and installed PowerPoint 2016 on your Windows 10 computer, you might be wondering how to use it effectively and efficiently. PowerPoint 2016 has many features and functions that can help you create and share presentations with ease and confidence. Here are some tips and tricks on how to use PowerPoint 2016 on Windows 10:

- Use Smart Lookup to research a term without leaving your slide; as the FAQ below explains, this feature needs an internet connection.
- Share a presentation and edit it with others through real-time co-authoring.
- Run the Compatibility Checker before sending a file to someone on an older version of PowerPoint or Office (see the FAQ below).

Conclusion

-

In this article, we have shown you how to download PowerPoint 2016 for Windows 10 for free from different sources. We have also given you some tips and tricks on how to use PowerPoint 2016 on Windows 10 effectively. We hope that this article has been helpful and informative for you.

-


-

If you want to learn more about PowerPoint 2016 and how to create and share presentations with it, you can check out these resources and links:

- Microsoft's official PowerPoint help and training pages on its website
- Online forums, blogs, videos, books, and courses that cover PowerPoint (the FAQ below lists the main support channels)

FAQs

-

Here are some common questions that users might have about PowerPoint 2016:

-
    -
  1. How much does PowerPoint 2016 cost?
  2. -

    PowerPoint 2016 is included in Office Home & Business 2019 or Office Home & Student 2019, which cost $249.99 and $149.99 respectively. You can also get PowerPoint 2016 as part of Microsoft 365 subscription plans, which start from $69.99 per year or $6.99 per month.

    -
  3. Can I use PowerPoint 2016 without an internet connection?
  4. -

    Yes, you can use PowerPoint 2016 offline after installing it on your computer. However, some features and functions might require an internet connection, such as Smart Lookup, real-time co-authoring, or online presentations. You can also use PowerPoint Online, which is a free web-based version of PowerPoint that works in your browser, but it has fewer features and functions than PowerPoint 2016.

    -
  5. Can I use PowerPoint 2016 on other devices or platforms?
  6. -

    Yes, you can use PowerPoint 2016 on other devices or platforms, such as Mac, iOS, Android, or Windows Mobile. However, some features and functions might vary or be unavailable depending on the device or platform. You can also use PowerPoint Online or PowerPoint Mobile, which are web-based and mobile versions of PowerPoint that work on any device or platform with an internet connection.

    -
  7. Can I use PowerPoint 2016 with other versions of PowerPoint or Office?
  8. -

    Yes, you can use PowerPoint 2016 with other versions of PowerPoint or Office, such as PowerPoint 2013, PowerPoint 2010, or Office 365. However, some features and functions might not be compatible or supported by older versions of PowerPoint or Office. You can also use the Compatibility Mode or the Compatibility Checker to ensure that your presentations can be opened and edited by other versions of PowerPoint or Office.

    -
  9. Can I get help or support on PowerPoint 2016?
  10. -

    Yes, you can get help or support on PowerPoint 2016 from various sources, such as Microsoft's website, online forums, blogs, videos, books, courses, etc. You can also contact Microsoft's customer service or technical support team by phone, email, chat, or social media.

    -

-
-
\ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/raft/download_models.sh b/spaces/232labs/VToonify/vtoonify/model/raft/download_models.sh deleted file mode 100644 index 7b6ed7e478b74699d3c8db3bd744643c35f7da76..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/raft/download_models.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash -wget https://www.dropbox.com/s/4j4z58wuv8o0mfz/models.zip -unzip models.zip diff --git a/spaces/AI-ZTH-03-23/5.StreamlitWikipediaChat/README.md b/spaces/AI-ZTH-03-23/5.StreamlitWikipediaChat/README.md deleted file mode 100644 index ffe8f6cbdbae191db33a8133a04259e446c3f49d..0000000000000000000000000000000000000000 --- a/spaces/AI-ZTH-03-23/5.StreamlitWikipediaChat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: 5.Streamlit-Wikipedia-Chat -emoji: 🌐👨‍🏫👩‍🏫 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: awacke1/StreamlitWikipediaChat ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py deleted file mode 100644 index 58eb535e7769f402169ddff77ee45c96ba3650d9..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py +++ /dev/null @@ -1,102 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to( - device=self.parameters.device - ) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to( - device=self.parameters.device - ) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.0]) - else: - if other is None: - return 0.5 * torch.mean( - torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar, - dim=[1, 2, 3], - ) - else: - return 0.5 * torch.mean( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - - 1.0 - - self.logvar - + other.logvar, - dim=[1, 2, 3], - ) - - def nll(self, sample, dims=[1, 2, 3]): - if self.deterministic: - return torch.Tensor([0.0]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims, - ) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. 
- Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). - logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/ASJMO/freegpt/client/js/highlightjs-copy.min.js b/spaces/ASJMO/freegpt/client/js/highlightjs-copy.min.js deleted file mode 100644 index ac11d33ec06e396c96b887494d9164a9b3996bef..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/client/js/highlightjs-copy.min.js +++ /dev/null @@ -1 +0,0 @@ -class CopyButtonPlugin{constructor(options={}){self.hook=options.hook;self.callback=options.callback}"after:highlightElement"({el,text}){let button=Object.assign(document.createElement("button"),{innerHTML:"Copy",className:"hljs-copy-button"});button.dataset.copied=false;el.parentElement.classList.add("hljs-copy-wrapper");el.parentElement.appendChild(button);el.parentElement.style.setProperty("--hljs-theme-background",window.getComputedStyle(el).backgroundColor);button.onclick=function(){if(!navigator.clipboard)return;let newText=text;if(hook&&typeof hook==="function"){newText=hook(text,el)||text}navigator.clipboard.writeText(newText).then(function(){button.innerHTML="Copied!";button.dataset.copied=true;let alert=Object.assign(document.createElement("div"),{role:"status",className:"hljs-copy-alert",innerHTML:"Copied to clipboard"});el.parentElement.appendChild(alert);setTimeout(()=>{button.innerHTML="Copy";button.dataset.copied=false;el.parentElement.removeChild(alert);alert=null},2e3)}).then(function(){if(typeof callback==="function")return callback(newText,el)})}}} \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/summarize/+server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/summarize/+server.ts deleted file mode 100644 index 18c599c09473ebabb6bbeb3adda0205b5bc9bd31..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/summarize/+server.ts +++ /dev/null @@ -1,56 +0,0 @@ -import { buildPrompt } from "$lib/buildPrompt"; -import { authCondition } from "$lib/server/auth"; -import { collections } from "$lib/server/database"; -import { generateFromDefaultEndpoint } from "$lib/server/generateFromDefaultEndpoint"; -import { defaultModel } from "$lib/server/models"; -import { error } from "@sveltejs/kit"; - -export async function POST({ params, locals }) { - /*const convId = new ObjectId(params.id); - - const conversation = await collections.conversations.findOne({ - _id: convId, - ...authCondition(locals), - }); - - if (!conversation) { - throw error(404, "Conversation not found"); - } - - const firstMessage = conversation.messages.find((m) => m.from === "user"); - - const userPrompt = - `Please summarize the following message as a single sentence of less than 5 words:\n` + - firstMessage?.content; - - const prompt = await buildPrompt({ - messages: [{ from: "user", content: userPrompt }], - model: defaultModel, - }); - const generated_text = await 
generateFromDefaultEndpoint(prompt); - - if (generated_text) { - await collections.conversations.updateOne( - { - _id: convId, - ...authCondition(locals), - }, - { - $set: { title: generated_text }, - } - ); - } - - return new Response( - JSON.stringify( - generated_text - ? { - title: generated_text, - } - : {} - ), - { headers: { "Content-Type": "application/json" } } - );*/ - - return new Response(JSON.stringify({}), { headers: { "Content-Type": "application/json" } }); -} diff --git a/spaces/Adapter/CoAdapter/ldm/data/dataset_wikiart.py b/spaces/Adapter/CoAdapter/ldm/data/dataset_wikiart.py deleted file mode 100644 index a7a2de87ccbba147580fed82e3c5e5a5ab38761e..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/data/dataset_wikiart.py +++ /dev/null @@ -1,67 +0,0 @@ -import json -import os.path - -from PIL import Image -from torch.utils.data import DataLoader - -from transformers import CLIPProcessor -from torchvision.transforms import transforms - -import pytorch_lightning as pl - - -class WikiArtDataset(): - def __init__(self, meta_file): - super(WikiArtDataset, self).__init__() - - self.files = [] - with open(meta_file, 'r') as f: - js = json.load(f) - for img_path in js: - img_name = os.path.splitext(os.path.basename(img_path))[0] - caption = img_name.split('_')[-1] - caption = caption.split('-') - j = len(caption) - 1 - while j >= 0: - if not caption[j].isdigit(): - break - j -= 1 - if j < 0: - continue - sentence = ' '.join(caption[:j + 1]) - self.files.append({'img_path': os.path.join('datasets/wikiart', img_path), 'sentence': sentence}) - - version = 'openai/clip-vit-large-patch14' - self.processor = CLIPProcessor.from_pretrained(version) - - self.jpg_transform = transforms.Compose([ - transforms.Resize(512), - transforms.RandomCrop(512), - transforms.ToTensor(), - ]) - - def __getitem__(self, idx): - file = self.files[idx] - - im = Image.open(file['img_path']) - - im_tensor = self.jpg_transform(im) - - clip_im = self.processor(images=im, return_tensors="pt")['pixel_values'][0] - - return {'jpg': im_tensor, 'style': clip_im, 'txt': file['sentence']} - - def __len__(self): - return len(self.files) - - -class WikiArtDataModule(pl.LightningDataModule): - def __init__(self, meta_file, batch_size, num_workers): - super(WikiArtDataModule, self).__init__() - self.train_dataset = WikiArtDataset(meta_file) - self.batch_size = batch_size - self.num_workers = num_workers - - def train_dataloader(self): - return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True, num_workers=self.num_workers, - pin_memory=True) diff --git a/spaces/Adapting/TrendFlow/mypages/home.py b/spaces/Adapting/TrendFlow/mypages/home.py deleted file mode 100644 index 460607f2cc32e3bd29656bf286cacf2560f3c09a..0000000000000000000000000000000000000000 --- a/spaces/Adapting/TrendFlow/mypages/home.py +++ /dev/null @@ -1,143 +0,0 @@ -import streamlit as st -from .sidebar import render_sidebar -from requests_toolkit import ArxivQuery,IEEEQuery,PaperWithCodeQuery -from trendflow.lrt.clustering.clusters import SingleCluster -from trendflow.lrt.clustering.config import Configuration -from trendflow.lrt import ArticleList, LiteratureResearchTool -from trendflow.lrt_instance import * -from .charts import build_bar_charts - -def home(): - # sidebar content - platforms, number_papers, start_year, end_year, hyperparams = render_sidebar() - - # body head - with st.form("my_form", clear_on_submit=False): - st.markdown('''# 👋 Hi, enter your query here :)''') - query_input = st.text_input( - 
'Enter your query:', - placeholder='''e.g. "Machine learning"''', - # label_visibility='collapsed', - value='' - ) - - show_preview = st.checkbox('show paper preview') - - # Every form must have a submit button. - submitted = st.form_submit_button("Search") - - if submitted: - # body - render_body(platforms, number_papers, 5, query_input, - show_preview, start_year, end_year, - hyperparams, - hyperparams['standardization']) - - -def __preview__(platforms, num_papers, num_papers_preview, query_input, start_year, end_year): - with st.spinner('Searching...'): - paperInGeneral = st.empty() # paper的大概 - paperInGeneral_md = '''# 0 Query Results Preview -We have found following papers for you! (displaying 5 papers for each literature platforms) -''' - if 'IEEE' in platforms: - paperInGeneral_md += '''## IEEE -| ID| Paper Title | Publication Year | -| -------- | -------- | -------- | -''' - IEEEQuery.__setup_api_key__('vpd9yy325enruv27zj2d353e') - ieee = IEEEQuery.query(query_input, start_year, end_year, num_papers) - num_papers_preview = min(len(ieee), num_papers_preview) - for i in range(num_papers_preview): - title = str(ieee[i]['title']).replace('\n', ' ') - publication_year = str(ieee[i]['publication_year']).replace('\n', ' ') - paperInGeneral_md += f'''|{i + 1}|{title}|{publication_year}|\n''' - if 'Arxiv' in platforms: - paperInGeneral_md += ''' -## Arxiv -| ID| Paper Title | Publication Year | -| -------- | -------- | -------- | -''' - arxiv = ArxivQuery.query(query_input, max_results=num_papers) - num_papers_preview = min(len(arxiv), num_papers_preview) - for i in range(num_papers_preview): - title = str(arxiv[i]['title']).replace('\n', ' ') - publication_year = str(arxiv[i]['published']).replace('\n', ' ') - paperInGeneral_md += f'''|{i + 1}|{title}|{publication_year}|\n''' - if 'Paper with Code' in platforms: - paperInGeneral_md += ''' -## Paper with Code -| ID| Paper Title | Publication Year | -| -------- | -------- | -------- | -''' - pwc = PaperWithCodeQuery.query(query_input, items_per_page=num_papers) - num_papers_preview = min(len(pwc), num_papers_preview) - for i in range(num_papers_preview): - title = str(pwc[i]['title']).replace('\n', ' ') - publication_year = str(pwc[i]['published']).replace('\n', ' ') - paperInGeneral_md += f'''|{i + 1}|{title}|{publication_year}|\n''' - - paperInGeneral.markdown(paperInGeneral_md) - -def render_body(platforms, num_papers, num_papers_preview, query_input, show_preview: bool, start_year, end_year, - hyperparams: dict, standardization=False): - - tmp = st.empty() - if query_input != '': - tmp.markdown(f'You entered query: `{query_input}`') - - # preview - if show_preview: - __preview__(platforms, num_papers, num_papers_preview, query_input, start_year, end_year) - - with st.spinner("Clustering and generating..."): - # lrt results - ## baseline - if hyperparams['dimension_reduction'] == 'none' \ - and hyperparams['model_cpt'] == 'keyphrase-transformer' \ - and hyperparams['cluster_model'] == 'kmeans-euclidean': - model = baseline_lrt - else: - config = Configuration( - plm='''all-mpnet-base-v2''', - dimension_reduction=hyperparams['dimension_reduction'], - clustering=hyperparams['cluster_model'], - keywords_extraction=hyperparams['model_cpt'] - ) - model = LiteratureResearchTool(config) - - generator = model.yield_(query_input, num_papers, start_year, end_year, max_k=hyperparams['max_k'], - platforms=platforms, standardization=standardization) - for i, plat in enumerate(platforms): - clusters, articles = next(generator) - st.markdown(f'''# {i + 
1} {plat} Results''') - clusters.sort() - - st.markdown(f'''## {i + 1}.1 Clusters Overview''') - st.markdown(f'''In this section we show the overview of the clusters, more specifically,''') - st.markdown(f'''\n- the number of papers in each cluster\n- the number of keyphrases of each cluster''') - st.bokeh_chart(build_bar_charts( - x_range=[f'Cluster {i + 1}' for i in range(len(clusters))], - y_names=['Number of Papers', 'Number of Keyphrases'], - y_data=[[len(c) for c in clusters], [len(c.get_keyphrases()) for c in clusters]] - )) - - st.markdown(f'''## {i + 1}.2 Cluster Details''') - st.markdown(f'''In this section we show the details of each cluster, including''') - st.markdown(f'''\n- the article information in the cluster\n- the keyphrases of the cluster''') - for j, cluster in enumerate(clusters): - assert isinstance(cluster, SingleCluster) # TODO: remove this line - ids = cluster.get_elements() - articles_in_cluster = ArticleList([articles[id] for id in ids]) - st.markdown(f'''**Cluster {j + 1}**''') - st.dataframe(articles_in_cluster.to_dataframe()) - st.markdown(f'''The top 5 keyphrases of this cluster are:''') - md = '' - for keyphrase in cluster.top_5_keyphrases: - md += f'''- `{keyphrase}`\n''' - st.markdown(md) - - - - - diff --git a/spaces/Adr740/Hadith_AI_Explorer/README.md b/spaces/Adr740/Hadith_AI_Explorer/README.md deleted file mode 100644 index 2c1d99409b678e7d5e80c7ccb9ee2936664d8c13..0000000000000000000000000000000000000000 --- a/spaces/Adr740/Hadith_AI_Explorer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hadith AI Explorer -emoji: 🌖 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AiMimicry/sovits-models/vdecoder/hifigan/utils.py b/spaces/AiMimicry/sovits-models/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models 
models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/AlgoveraAI/medical-image-classification/app.py b/spaces/AlgoveraAI/medical-image-classification/app.py deleted file mode 100644 index 88c85a08aafcca9813126680d9df4b7b85607468..0000000000000000000000000000000000000000 --- a/spaces/AlgoveraAI/medical-image-classification/app.py +++ /dev/null @@ -1,140 +0,0 @@ -import gradio as gr - -from ocean_lib.config import Config -from ocean_lib.models.compute_input import ComputeInput -from ocean_lib.ocean.ocean import Ocean -from ocean_lib.web3_internal.constants import ZERO_ADDRESS -from ocean_lib.web3_internal.currency import to_wei -from ocean_lib.web3_internal.wallet import Wallet - -import os -import time - -from io import StringIO, BytesIO -from PIL import Image -import pandas as pd -import matplotlib.pyplot as plt -import random -import numpy as np - -config = Config('config.ini') -ocean = Ocean(config) - -def compute( - private_key -): - - wallet = Wallet(ocean.web3, - private_key, - transaction_timeout=20, - block_confirmations=config.block_confirmations) - - address = wallet.address - - DATA_ddo = ocean.assets.resolve("did:op:62D5Db3778ABAABa808e53eB2AB28181aaCCF747") - data_token = ocean.get_data_token(DATA_ddo.data_token_address) - token_address = data_token.address - - ALG_ddo = ocean.assets.resolve("did:op:7D87e472921536da4bd02CB566099C18ed2F40A5") - alg_token = ocean.get_data_token(ALG_ddo.data_token_address) - - DATA_did = DATA_ddo.did - ALG_did = ALG_ddo.did - - compute_service = DATA_ddo.get_service('compute') - algo_service = ALG_ddo.get_service('access') - - # order & pay for dataset - dataset_order_requirements = ocean.assets.order( - DATA_did, wallet.address, service_type=compute_service.type - ) - time.sleep(30) - DATA_order_tx_id = ocean.assets.pay_for_service( - ocean.web3, - dataset_order_requirements.amount, - dataset_order_requirements.data_token_address, - DATA_did, - compute_service.index, - ZERO_ADDRESS, - wallet, - dataset_order_requirements.computeAddress, - ) - print('after data') - # order & pay for algo - algo_order_requirements = ocean.assets.order( - ALG_did, wallet.address, service_type=algo_service.type - ) - time.sleep(30) - ALG_order_tx_id = ocean.assets.pay_for_service( - ocean.web3, - algo_order_requirements.amount, - algo_order_requirements.data_token_address, - ALG_did, - algo_service.index, - ZERO_ADDRESS, - wallet, - algo_order_requirements.computeAddress, - ) - print('after algo') - compute_inputs = [ComputeInput(DATA_did, - DATA_order_tx_id, - compute_service.index)] - - job_id = ocean.compute.start( - compute_inputs, - wallet, - algorithm_did=ALG_did, - algorithm_tx_id=ALG_order_tx_id, - algorithm_data_token=alg_token.address - ) - - status_dict = ocean.compute.status(DATA_did, job_id, wallet) - while status_dict['statusText'] != 'Job finished': - status_dict = ocean.compute.status(DATA_did, job_id, wallet) - time.sleep(10) - - final_df_data = ocean.compute.result_file(DATA_did, - job_id, - 0, - wallet) - s = str(final_df_data,'utf-8') - data = StringIO(s) - final_df = pd.read_csv(data) #.drop('Unnamed: 0', 1) - - image_data = ocean.compute.result_file(DATA_did, - job_id, - 1, - 
wallet) - image = Image.open(BytesIO(image_data)) - - image = np.array(image) - - samps = random.choices([0,1,2,3,4,5,6,7,8,9], k=3) - imgs = [] - for i in samps: - imgs.append(Image.fromarray(np.array(image )[300*i:300*i+300])) - - print('compute done') - return *imgs, final_df - - -# description = () - - -interface = gr.Interface( - compute, - [ - gr.inputs.Textbox(label="Private Key"), - ], - [ - gr.outputs.Image(label="Sample Results 1"), - gr.outputs.Image(label="Sample Results 2"), - gr.outputs.Image(label="Sample Results 3"), - gr.outputs.Dataframe(label="Final dataframe"), - ], - title="Inference demo for nCight-Algovera Medical Image Classification", -# description=description, - theme="huggingface", -) - -interface.launch(debug=True, enable_queue=True) diff --git a/spaces/Aloento/9Nine-VITS/generator.py b/spaces/Aloento/9Nine-VITS/generator.py deleted file mode 100644 index 7d9edf2cb2073eb10cd95e0e6100e47724f47bc8..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-VITS/generator.py +++ /dev/null @@ -1,62 +0,0 @@ -import torch -from torch import nn -from torch.nn import Conv1d, ConvTranspose1d, functional as F -from torch.nn.utils import weight_norm, remove_weight_norm - -import modules -from commons import init_weights - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py deleted file mode 100644 index 065f657032e6ef21bd022f938a3b1e7ada334436..0000000000000000000000000000000000000000 --- 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright 2023 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, logging, randn_tensor -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerAncestralDiscrete -class EulerAncestralDiscreteSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. 
- Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class EulerAncestralDiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - timestep_spacing (`str`, default `"linspace"`): - The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample - Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - timestep_spacing: str = "linspace", - steps_offset: int = 0, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. 
- self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas) - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps) - self.is_scale_input_called = False - - @property - def init_noise_sigma(self): - # standard deviation of the initial noise distribution - if self.config.timestep_spacing in ["linspace", "trailing"]: - return self.sigmas.max() - - return (self.sigmas.max() ** 2 + 1) ** 0.5 - - def scale_model_input( - self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor] - ) -> torch.FloatTensor: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain - - Returns: - `torch.FloatTensor`: scaled input sample - """ - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - sample = sample / ((sigma**2 + 1) ** 0.5) - self.is_scale_input_called = True - return sample - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - """ - self.num_inference_steps = num_inference_steps - - # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891 - if self.config.timestep_spacing == "linspace": - timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[ - ::-1 - ].copy() - elif self.config.timestep_spacing == "leading": - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(float) - timesteps += self.config.steps_offset - elif self.config.timestep_spacing == "trailing": - step_ratio = self.config.num_train_timesteps / self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(self.config.num_train_timesteps, 0, -step_ratio)).round().copy().astype(float) - timesteps -= 1 - else: - raise ValueError( - f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'." 
- ) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas).to(device=device) - if str(device).startswith("mps"): - # mps does not support float64 - self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) - else: - self.timesteps = torch.from_numpy(timesteps).to(device=device) - - def step( - self, - model_output: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - sample: torch.FloatTensor, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[EulerAncestralDiscreteSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`float`): current timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - generator (`torch.Generator`, optional): Random number generator. - return_dict (`bool`): option for returning tuple rather than EulerAncestralDiscreteSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput`] if `return_dict` is True, otherwise - a `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if ( - isinstance(timestep, int) - or isinstance(timestep, torch.IntTensor) - or isinstance(timestep, torch.LongTensor) - ): - raise ValueError( - ( - "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to" - " `EulerDiscreteScheduler.step()` is not supported. Make sure to pass" - " one of the `scheduler.timesteps` as a timestep." - ), - ) - - if not self.is_scale_input_called: - logger.warning( - "The `scale_model_input` function should be called before `step` to ensure correct denoising. " - "See `StableDiffusionPipeline` for a usage example." - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - pred_original_sample = sample - sigma * model_output - elif self.config.prediction_type == "v_prediction": - # * c_out + input * c_skip - pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) - elif self.config.prediction_type == "sample": - raise NotImplementedError("prediction_type not implemented yet: sample") - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - sigma_from = self.sigmas[step_index] - sigma_to = self.sigmas[step_index + 1] - sigma_up = (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5 - sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5 - - # 2. 
Convert to an ODE derivative - derivative = (sample - pred_original_sample) / sigma - - dt = sigma_down - sigma - - prev_sample = sample + derivative * dt - - device = model_output.device - noise = randn_tensor(model_output.shape, dtype=model_output.dtype, device=device, generator=generator) - - prev_sample = prev_sample + noise * sigma_up - - if not return_dict: - return (prev_sample,) - - return EulerAncestralDiscreteSchedulerOutput( - prev_sample=prev_sample, pred_original_sample=pred_original_sample - ) - - # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - schedule_timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] - - sigma = sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_rcnn_hrnetv2p_w18_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_rcnn_hrnetv2p_w18_20e_coco.py deleted file mode 100644 index 9585a4f35d9151b42beac05066a1a231dd1777a9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_rcnn_hrnetv2p_w18_20e_coco.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './cascade_rcnn_hrnetv2p_w32_20e_coco.py' -# model settings -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(18, 36)), - stage3=dict(num_channels=(18, 36, 72)), - stage4=dict(num_channels=(18, 36, 72, 144)))), - neck=dict(type='HRFPN', in_channels=[18, 36, 72, 144], out_channels=256)) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/three_interpolate.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/three_interpolate.py deleted file mode 100644 index 203f47f05d58087e034fb3cd8cd6a09233947b4a..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/three_interpolate.py +++ /dev/null @@ -1,68 +0,0 @@ -from typing import Tuple - -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['three_interpolate_forward', 'three_interpolate_backward']) - - -class ThreeInterpolate(Function): - """Performs weighted linear interpolation on 3 features. - - Please refer to `Paper of PointNet++ `_ - for more details. 
- """ - - @staticmethod - def forward(ctx, features: torch.Tensor, indices: torch.Tensor, - weight: torch.Tensor) -> torch.Tensor: - """ - Args: - features (Tensor): (B, C, M) Features descriptors to be - interpolated - indices (Tensor): (B, n, 3) index three nearest neighbors - of the target features in features - weight (Tensor): (B, n, 3) weights of interpolation - - Returns: - Tensor: (B, C, N) tensor of the interpolated features - """ - assert features.is_contiguous() - assert indices.is_contiguous() - assert weight.is_contiguous() - - B, c, m = features.size() - n = indices.size(1) - ctx.three_interpolate_for_backward = (indices, weight, m) - output = torch.cuda.FloatTensor(B, c, n) - - ext_module.three_interpolate_forward( - features, indices, weight, output, b=B, c=c, m=m, n=n) - return output - - @staticmethod - def backward( - ctx, grad_out: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Args: - grad_out (Tensor): (B, C, N) tensor with gradients of outputs - - Returns: - Tensor: (B, C, M) tensor with gradients of features - """ - idx, weight, m = ctx.three_interpolate_for_backward - B, c, n = grad_out.size() - - grad_features = torch.cuda.FloatTensor(B, c, m).zero_() - grad_out_data = grad_out.data.contiguous() - - ext_module.three_interpolate_backward( - grad_out_data, idx, weight, grad_features.data, b=B, c=c, n=n, m=m) - return grad_features, None, None - - -three_interpolate = ThreeInterpolate.apply diff --git a/spaces/Ariharasudhan/YoloV5/utils/triton.py b/spaces/Ariharasudhan/YoloV5/utils/triton.py deleted file mode 100644 index a94ef0ad197d694d5d4eb8ebc1776545c4b58a6e..0000000000000000000000000000000000000000 --- a/spaces/Ariharasudhan/YoloV5/utils/triton.py +++ /dev/null @@ -1,85 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" Utils to interact with the Triton Inference Server -""" - -import typing -from urllib.parse import urlparse - -import torch - - -class TritonRemoteModel: - """ A wrapper over a model served by the Triton Inference Server. It can - be configured to communicate over GRPC or HTTP. It accepts Torch Tensors - as input and returns them as outputs. - """ - - def __init__(self, url: str): - """ - Keyword arguments: - url: Fully qualified address of the Triton server - for e.g. 
grpc://localhost:8000 - """ - - parsed_url = urlparse(url) - if parsed_url.scheme == "grpc": - from tritonclient.grpc import InferenceServerClient, InferInput - - self.client = InferenceServerClient(parsed_url.netloc) # Triton GRPC client - model_repository = self.client.get_model_repository_index() - self.model_name = model_repository.models[0].name - self.metadata = self.client.get_model_metadata(self.model_name, as_json=True) - - def create_input_placeholders() -> typing.List[InferInput]: - return [ - InferInput(i['name'], [int(s) for s in i["shape"]], i['datatype']) for i in self.metadata['inputs']] - - else: - from tritonclient.http import InferenceServerClient, InferInput - - self.client = InferenceServerClient(parsed_url.netloc) # Triton HTTP client - model_repository = self.client.get_model_repository_index() - self.model_name = model_repository[0]['name'] - self.metadata = self.client.get_model_metadata(self.model_name) - - def create_input_placeholders() -> typing.List[InferInput]: - return [ - InferInput(i['name'], [int(s) for s in i["shape"]], i['datatype']) for i in self.metadata['inputs']] - - self._create_input_placeholders_fn = create_input_placeholders - - @property - def runtime(self): - """Returns the model runtime""" - return self.metadata.get("backend", self.metadata.get("platform")) - - def __call__(self, *args, **kwargs) -> typing.Union[torch.Tensor, typing.Tuple[torch.Tensor, ...]]: - """ Invokes the model. Parameters can be provided via args or kwargs. - args, if provided, are assumed to match the order of inputs of the model. - kwargs are matched with the model input names. - """ - inputs = self._create_inputs(*args, **kwargs) - response = self.client.infer(model_name=self.model_name, inputs=inputs) - result = [] - for output in self.metadata['outputs']: - tensor = torch.as_tensor(response.as_numpy(output['name'])) - result.append(tensor) - return result[0] if len(result) == 1 else result - - def _create_inputs(self, *args, **kwargs): - args_len, kwargs_len = len(args), len(kwargs) - if not args_len and not kwargs_len: - raise RuntimeError("No inputs provided.") - if args_len and kwargs_len: - raise RuntimeError("Cannot specify args and kwargs at the same time") - - placeholders = self._create_input_placeholders_fn() - if args_len: - if args_len != len(placeholders): - raise RuntimeError(f"Expected {len(placeholders)} inputs, got {args_len}.") - for input, value in zip(placeholders, args): - input.set_data_from_numpy(value.cpu().numpy()) - else: - for input in placeholders: - value = kwargs[input.name] - input.set_data_from_numpy(value.cpu().numpy()) - return placeholders diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_lib.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_lib.py deleted file mode 100644 index 2e9d8757a582b1dcdb47a34c35c6cfb3ed23ba90..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_lib.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import sys -from itertools import product, starmap -import distutils.command.install_lib as orig - - -class install_lib(orig.install_lib): - """Don't add compiled flags to filenames of non-Python files""" - - def run(self): - self.build() - outfiles = self.install() - if outfiles is not None: - # always compile, in case we have any extension stubs to deal with - self.byte_compile(outfiles) - - def 
get_exclusions(self): - """ - Return a collections.Sized collections.Container of paths to be - excluded for single_version_externally_managed installations. - """ - all_packages = ( - pkg - for ns_pkg in self._get_SVEM_NSPs() - for pkg in self._all_packages(ns_pkg) - ) - - excl_specs = product(all_packages, self._gen_exclusion_paths()) - return set(starmap(self._exclude_pkg_path, excl_specs)) - - def _exclude_pkg_path(self, pkg, exclusion_path): - """ - Given a package name and exclusion path within that package, - compute the full exclusion path. - """ - parts = pkg.split('.') + [exclusion_path] - return os.path.join(self.install_dir, *parts) - - @staticmethod - def _all_packages(pkg_name): - """ - >>> list(install_lib._all_packages('foo.bar.baz')) - ['foo.bar.baz', 'foo.bar', 'foo'] - """ - while pkg_name: - yield pkg_name - pkg_name, sep, child = pkg_name.rpartition('.') - - def _get_SVEM_NSPs(self): - """ - Get namespace packages (list) but only for - single_version_externally_managed installations and empty otherwise. - """ - # TODO: is it necessary to short-circuit here? i.e. what's the cost - # if get_finalized_command is called even when namespace_packages is - # False? - if not self.distribution.namespace_packages: - return [] - - install_cmd = self.get_finalized_command('install') - svem = install_cmd.single_version_externally_managed - - return self.distribution.namespace_packages if svem else [] - - @staticmethod - def _gen_exclusion_paths(): - """ - Generate file paths to be excluded for namespace packages (bytecode - cache files). - """ - # always exclude the package module itself - yield '__init__.py' - - yield '__init__.pyc' - yield '__init__.pyo' - - if not hasattr(sys, 'implementation'): - return - - base = os.path.join( - '__pycache__', '__init__.' + sys.implementation.cache_tag) - yield base + '.pyc' - yield base + '.pyo' - yield base + '.opt-1.pyc' - yield base + '.opt-2.pyc' - - def copy_tree( - self, infile, outfile, - preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1 - ): - assert preserve_mode and preserve_times and not preserve_symlinks - exclude = self.get_exclusions() - - if not exclude: - return orig.install_lib.copy_tree(self, infile, outfile) - - # Exclude namespace package __init__.py* files from the output - - from setuptools.archive_util import unpack_directory - from distutils import log - - outfiles = [] - - def pf(src, dst): - if dst in exclude: - log.warn("Skipping installation of %s (namespace package)", - dst) - return False - - log.info("copying %s -> %s", src, os.path.dirname(dst)) - outfiles.append(dst) - return dst - - unpack_directory(infile, outfile, pf) - return outfiles - - def get_outputs(self): - outputs = orig.install_lib.get_outputs(self) - exclude = self.get_exclusions() - if exclude: - return [f for f in outputs if f not in exclude] - return outputs diff --git a/spaces/Awesimo/jojogan/e4e/README.md b/spaces/Awesimo/jojogan/e4e/README.md deleted file mode 100644 index 14b6bc701b2bad3c2fc7b1d9b36f1892681ded5f..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/README.md +++ /dev/null @@ -1,142 +0,0 @@ -# Designing an Encoder for StyleGAN Image Manipulation - - - [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](http://colab.research.google.com/github/omertov/encoder4editing/blob/main/notebooks/inference_playground.ipynb) - -> Recently, there has been a surge of diverse methods for performing image editing by employing pre-trained unconditional generators. 
Applying these methods on real images, however, remains a challenge, as it necessarily requires the inversion of the images into their latent space. To successfully invert a real image, one needs to find a latent code that reconstructs the input image accurately, and more importantly, allows for its meaningful manipulation. In this paper, we carefully study the latent space of StyleGAN, the state-of-the-art unconditional generator. We identify and analyze the existence of a distortion-editability tradeoff and a distortion-perception tradeoff within the StyleGAN latent space. We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on. We present an encoder based on our two principles that is specifically designed for facilitating editing on real images by balancing these tradeoffs. By evaluating its performance qualitatively and quantitatively on numerous challenging domains, including cars and horses, we show that our inversion method, followed by common editing techniques, achieves superior real-image editing quality, with only a small reconstruction accuracy drop. - -
- -## Description -Official Implementation of "Designing an Encoder for StyleGAN Image Manipulation" paper for both training and evaluation. -The e4e encoder is specifically designed to complement existing image manipulation techniques performed over StyleGAN's latent space. - -## Recent Updates -`2021.03.25`: Add pose editing direction. - -## Getting Started -### Prerequisites -- Linux or macOS -- NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported) -- Python 3 - -### Installation -- Clone the repository: -``` -git clone https://github.com/omertov/encoder4editing.git -cd encoder4editing -``` -- Dependencies: -We recommend running this repository using [Anaconda](https://docs.anaconda.com/anaconda/install/). -All dependencies for defining the environment are provided in `environment/e4e_env.yaml`. - -### Inference Notebook -We provide a Jupyter notebook found in `notebooks/inference_playground.ipynb` that allows one to encode and perform several editings on real images using StyleGAN. - -### Pretrained Models -Please download the pre-trained models from the following links. Each e4e model contains the entire pSp framework architecture, including the encoder and decoder weights. -| Path | Description -| :--- | :---------- -|[FFHQ Inversion](https://drive.google.com/file/d/1cUv_reLE6k3604or78EranS7XzuVMWeO/view?usp=sharing) | FFHQ e4e encoder. -|[Cars Inversion](https://drive.google.com/file/d/17faPqBce2m1AQeLCLHUVXaDfxMRU2QcV/view?usp=sharing) | Cars e4e encoder. -|[Horse Inversion](https://drive.google.com/file/d/1TkLLnuX86B_BMo2ocYD0kX9kWh53rUVX/view?usp=sharing) | Horse e4e encoder. -|[Church Inversion](https://drive.google.com/file/d/1-L0ZdnQLwtdy6-A_Ccgq5uNJGTqE7qBa/view?usp=sharing) | Church e4e encoder. - -If you wish to use one of the pretrained models for training or inference, you may do so using the flag `--checkpoint_path`. - -In addition, we provide various auxiliary models needed for training your own e4e model from scratch. -| Path | Description -| :--- | :---------- -|[FFHQ StyleGAN](https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing) | StyleGAN model pretrained on FFHQ taken from [rosinality](https://github.com/rosinality/stylegan2-pytorch) with 1024x1024 output resolution. -|[IR-SE50 Model](https://drive.google.com/file/d/1KW7bjndL3QG3sxBbZxreGHigcCCpsDgn/view?usp=sharing) | Pretrained IR-SE50 model taken from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) for use in our ID loss during training. -|[MOCOv2 Model](https://drive.google.com/file/d/18rLcNGdteX5LwT7sv_F7HWr12HpVEzVe/view?usp=sharing) | Pretrained ResNet-50 model trained using MOCOv2 for use in our simmilarity loss for domains other then human faces during training. - -By default, we assume that all auxiliary models are downloaded and saved to the directory `pretrained_models`. However, you may use your own paths by changing the necessary values in `configs/path_configs.py`. - -## Training -To train the e4e encoder, make sure the paths to the required models, as well as training and testing data is configured in `configs/path_configs.py` and `configs/data_configs.py`. 
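For orientation, a minimal sketch of what those configuration entries might look like is below; the key names and file paths are illustrative assumptions, so match them to the auxiliary models and datasets you actually downloaded.

```python
# Hypothetical sketch of configs/path_configs.py entries -- the keys and
# paths here are assumptions for illustration, not the repository's values.
dataset_paths = {
    'ffhq': '/path/to/ffhq/images',
    'celeba_test': '/path/to/celeba/test_images',
}

model_paths = {
    'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt',
    'ir_se50': 'pretrained_models/model_ir_se50.pth',
    'moco': 'pretrained_models/moco_v2_800ep_pretrain.pth',
}
```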
-#### **Training the e4e Encoder** -``` -python scripts/train.py \ ---dataset_type cars_encode \ ---exp_dir new/experiment/directory \ ---start_from_latent_avg \ ---use_w_pool \ ---w_discriminator_lambda 0.1 \ ---progressive_start 20000 \ ---id_lambda 0.5 \ ---val_interval 10000 \ ---max_steps 200000 \ ---stylegan_size 512 \ ---stylegan_weights path/to/pretrained/stylegan.pt \ ---workers 8 \ ---batch_size 8 \ ---test_batch_size 4 \ ---test_workers 4 -``` - -#### Training on your own dataset -In order to train the e4e encoder on a custom dataset, perform the following adjustments: -1. Insert the paths to your train and test data into the `dataset_paths` variable defined in `configs/paths_config.py`: -``` -dataset_paths = { - 'my_train_data': '/path/to/train/images/directory', - 'my_test_data': '/path/to/test/images/directory' -} -``` -2. Configure a new dataset under the DATASETS variable defined in `configs/data_configs.py`: -``` -DATASETS = { - 'my_data_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': dataset_paths['my_train_data'], - 'train_target_root': dataset_paths['my_train_data'], - 'test_source_root': dataset_paths['my_test_data'], - 'test_target_root': dataset_paths['my_test_data'] - } -} -``` -Refer to `configs/transforms_config.py` for the transformations applied to the train and test images during training. - -3. Finally, run a training session with `--dataset_type my_data_encode`. - -## Inference -Having trained your model, you can use `scripts/inference.py` to apply the model on a set of images. -For example, -``` -python scripts/inference.py \ ---images_dir=/path/to/images/directory \ ---save_dir=/path/to/saving/directory \ -path/to/checkpoint.pt -``` - -## Latent Editing Consistency (LEC) -As described in the paper, we suggest a new metric, Latent Editing Consistency (LEC), for evaluating the encoder's -performance. -We provide an example for calculating the metric over the FFHQ StyleGAN using the aging editing direction in -`metrics/LEC.py`. 
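At a high level, the metric encodes an image, applies an edit in latent space, decodes and re-encodes the result, inverts the edit, and measures how far the recovered code drifts from the original one. The sketch below illustrates that loop under stated assumptions: `encoder`, `generator`, and `direction` are placeholders, not the actual `metrics/LEC.py` API.

```python
import torch

@torch.no_grad()
def lec_sketch(images, encoder, generator, direction, strength=3.0):
    """Minimal illustrative sketch of Latent Editing Consistency (LEC)."""
    w = encoder(images)                    # invert images to latent codes
    w_edit = w + strength * direction      # apply a latent-space edit
    edited = generator(w_edit)             # synthesize the edited images
    w_back = encoder(edited) - strength * direction  # re-encode, then undo the edit
    return (w - w_back).flatten(1).norm(dim=1).mean()  # average latent drift
```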
- -To run the example: -``` -cd metrics -python LEC.py \ ---images_dir=/path/to/images/directory \ -path/to/checkpoint.pt -``` - -## Acknowledgments -This code borrows heavily from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel) - -## Citation -If you use this code for your research, please cite our paper Designing an Encoder for StyleGAN Image Manipulation: - -``` -@article{tov2021designing, - title={Designing an Encoder for StyleGAN Image Manipulation}, - author={Tov, Omer and Alaluf, Yuval and Nitzan, Yotam and Patashnik, Or and Cohen-Or, Daniel}, - journal={arXiv preprint arXiv:2102.02766}, - year={2021} -} -``` diff --git a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_act.py b/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_act.py deleted file mode 100644 index 973a84fffde53668d31397da5fb993bbc95f7be0..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_act.py +++ /dev/null @@ -1,85 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/Bart92/RVC_HF/LazyImport.py b/spaces/Bart92/RVC_HF/LazyImport.py deleted file mode 100644 index 5bdb05ddd5a546a43adba7274b4c3465bb77f2f5..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/LazyImport.py +++ /dev/null @@ -1,13 +0,0 @@ -from importlib.util import find_spec, LazyLoader, module_from_spec -from sys import modules - -def lazyload(name): - if name in 
modules: - return modules[name] - else: - spec = find_spec(name) - loader = LazyLoader(spec.loader) - module = module_from_spec(spec) - modules[name] = module - loader.exec_module(module) - return module \ No newline at end of file diff --git a/spaces/BeeMon/dreambooth-training/app.py b/spaces/BeeMon/dreambooth-training/app.py deleted file mode 100644 index f7d90f7250ccac1b7d250062b6d3348124acdf4e..0000000000000000000000000000000000000000 --- a/spaces/BeeMon/dreambooth-training/app.py +++ /dev/null @@ -1,687 +0,0 @@ -from subprocess import getoutput -import os - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - which_gpu = "A10G" - os.system(f"pip install --no-deps xformers==0.0.16rc425") -elif("T4" in gpu_info): - which_gpu = "T4" - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") -else: - which_gpu = "CPU" - -import gradio as gr -from pathlib import Path -import argparse -import shutil -from train_dreambooth import run_training -from convertosd import convert -from PIL import Image -from slugify import slugify -import requests -import torch -import zipfile -import tarfile -import urllib.parse -import gc -from diffusers import StableDiffusionPipeline -from huggingface_hub import snapshot_download, update_repo_visibility, HfApi - -is_spaces = True if "SPACE_ID" in os.environ else False -if(is_spaces): - is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False -else: - is_shared_ui = False -is_gpu_associated = torch.cuda.is_available() - -os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" - -if(is_gpu_associated): - model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable") - model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"]) - model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"]) - safety_checker = snapshot_download(repo_id="multimodalart/sd-sc") - model_to_load = model_v1 - -def swap_base_model(selected_model): - if(is_gpu_associated): - global model_to_load - if(selected_model == "v1-5"): - model_to_load = model_v1 - elif(selected_model == "v2-1-768"): - model_to_load = model_v2 - else: - model_to_load = model_v2_512 - - - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} - .duplicate-button img{margin: 0} -''' -maximum_concepts = 3 - -def swap_text(option, base): - resize_width = 768 if base == "v2-1-768" else 512 - mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:" - if(option == "object"): - instance_prompt_example = "cttoy" - freeze_for = 30 - return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). 
Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)] - elif(option == "person"): - instance_prompt_example = "julcto" - freeze_for = 70 - #show_prior_preservation = True if base != "v2-1-768" else False - show_prior_preservation=False - if(show_prior_preservation): - prior_preservation_box_update = gr.update(visible=show_prior_preservation) - else: - prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False) - return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update] - elif(option == "style"): - instance_prompt_example = "trsldamrl" - freeze_for = 10 - return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)] - -def count_files(*inputs): - file_counter = 0 - concept_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts): - files = inputs[i] - if(files): - concept_counter+=1 - file_counter+=len(files) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - selected_model = inputs[-5] - experimental_faces = inputs[-6] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2400): - Training_Steps = 2400 #Avoid overfitting on person faces - if(is_spaces): - if(selected_model == "v1-5"): - its = 1.1 if which_gpu == "T4" else 1.8 - if(experimental_faces): - its = 1 - elif(selected_model == "v2-1-512"): - its = 0.8 if which_gpu == "T4" else 1.5 - if(experimental_faces): - its = 0.7 - elif(selected_model == "v2-1-768"): - its = 0.48 if which_gpu == "T4" else 0.85 - - gpu_price = 0.60 if which_gpu == "T4" else 1.10 - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes. - The setup, compression and uploading the model can take up to 20 minutes.
As the {which_gpu}-Small GPU costs US${gpu_price} per hour, the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.

- If you check the box below, the GPU attribution will be automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.

''' - else: - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.

''' - - return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)]) - -def update_steps(*files_list): - file_counter = 0 - for i, files in enumerate(files_list): - if(files): - file_counter+=len(files) - return(gr.update(value=file_counter*200)) - -def visualise_progress_bar(): - return gr.update(visible=True) - -def pad_image(image): - w, h = image.size - if w == h: - return image - elif w > h: - new_image = Image.new(image.mode, (w, w), (0, 0, 0)) - new_image.paste(image, (0, (w - h) // 2)) - return new_image - else: - new_image = Image.new(image.mode, (h, h), (0, 0, 0)) - new_image.paste(image, ((h - w) // 2, 0)) - return new_image - -def validate_model_upload(hf_token, model_name): - if(hf_token != ''): - api = HfApi() - try: - _ = api.whoami(hf_token) - except: - raise gr.Error("You have inserted an invalid Hugging Face token") - try: - if(is_spaces): - update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space") - except: - raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions") - else: - raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)") - if(model_name == ""): - raise gr.Error("Please fill in your model's name") - -def swap_hardware(hf_token, hardware="cpu-basic"): - hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'flavor': hardware} - requests.post(hardware_url, json = body, headers=headers) - -def swap_sleep_time(hf_token,sleep_time): - sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'seconds':sleep_time} - requests.post(sleep_time_url,json=body,headers=headers) - -def get_sleep_time(hf_token): - sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}" - headers = { "authorization" : f"Bearer {hf_token}"} - response = requests.get(sleep_time_url,headers=headers) - try: - gcTimeout = response.json()['runtime']['gcTimeout'] - except: - gcTimeout = None - return gcTimeout - -def write_to_community(title, description,hf_token): - from huggingface_hub import HfApi - api = HfApi() - api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token) - -def train(progress=gr.Progress(track_tqdm=True), *inputs): - which_model = inputs[-10] - if(which_model == ""): - raise gr.Error("You forgot to select a base model to use") - - if is_shared_ui: - raise gr.Error("This Space only works in duplicated instances") - if not is_gpu_associated: - raise gr.Error("Please associate a T4 or A10G GPU for this Space") - hf_token = inputs[-5] - model_name = inputs[-7] - if(is_spaces): - sleep_time = get_sleep_time(hf_token) - if sleep_time: - swap_sleep_time(hf_token, -1) - remove_attribution_after = inputs[-6] - else: - remove_attribution_after = False - - if(remove_attribution_after): - validate_model_upload(hf_token, model_name) - - torch.cuda.empty_cache() - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("instance_images"): shutil.rmtree('instance_images') - if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - if 
os.path.exists("hastrained.success"): os.remove("hastrained.success") - file_counter = 0 - resolution = 512 if which_model != "v2-1-768" else 768 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - if(input): - os.makedirs('instance_images',exist_ok=True) - files = inputs[i+(maximum_concepts*2)] - prompt = inputs[i+maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - image = pad_image(file) - image = image.resize((resolution, resolution)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100) - file_counter += 1 - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - experimental_face_improvement = inputs[-9] - - if(uses_custom): - Training_Steps = int(inputs[-3]) - Train_text_encoder_for = int(inputs[-2]) - else: - if(type_of_thing == "object"): - Train_text_encoder_for=30 - - elif(type_of_thing == "style"): - Train_text_encoder_for=15 - - elif(type_of_thing == "person"): - Train_text_encoder_for=70 - - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2600): - Training_Steps = 2600 #Avoid overfitting on people's faces - stptxt = int((Training_Steps*Train_text_encoder_for)/100) - gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False - cache_latents = True if which_model != "v1-5" else False - if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)): - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True if stptxt > 0 else False, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir=None, - output_dir="output_model", - instance_prompt="", - seed=42, - resolution=resolution, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - gradient_checkpointing=gradient_checkpointing, - cache_latents=cache_latents, - ) - print("Starting single training...") - lock_file = open("intraining.lock", "w") - lock_file.close() - try: - run_training(args_general) - except Exception as e: - if(is_spaces): - title="There was an error on during your training" - description=f''' - Unfortunately there was an error during training your {model_name} model. - Please check it out below. 
Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training): - ``` - {str(e)} - ``` - ''' - swap_hardware(hf_token, "cpu-basic") - write_to_community(title,description,hf_token) - - - gc.collect() - torch.cuda.empty_cache() - if(which_model == "v1-5"): - print("Adding Safety Checker to the model...") - shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True) - shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True) - shutil.copy(f"model_index.json", "output_model/model_index.json") - - if(not remove_attribution_after): - swap_sleep_time(hf_token, sleep_time) - print("Archiving model file...") - with tarfile.open("diffusers_model.tar", "w") as tar: - tar.add("output_model", arcname=os.path.basename("output_model")) - if os.path.exists("intraining.lock"): os.remove("intraining.lock") - trained_file = open("hastrained.success", "w") - trained_file.close() - print("Training completed!") - return [ - gr.update(visible=False), #progress_bar - gr.update(visible=True, value=["diffusers_model.tar"]), #result - gr.update(visible=True), #try_your_model - gr.update(visible=True), #push_to_hub - gr.update(visible=True), #convert_button - gr.update(visible=False), #training_ongoing - gr.update(visible=True) #completed_training - ] - else: - where_to_upload = inputs[-8] - push(model_name, where_to_upload, hf_token, which_model, True) - swap_hardware(hf_token, "cpu-basic") - -pipe_is_set = False -def generate(prompt, steps): - torch.cuda.empty_cache() - from diffusers import StableDiffusionPipeline - global pipe_is_set - if(not pipe_is_set): - global pipe - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - pipe_is_set = True - - image = pipe(prompt, num_inference_steps=steps).images[0] - return(image) - -def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False): - validate_model_upload(hf_token, model_name) - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - if(where_to_upload == "My personal profile"): - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - print(f"Starting to upload the model {model_id}...") - images_upload = os.listdir("instance_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})''' - 
readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image -widget: -- text: {instance_prompt_list[0]} ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! - -Sample pictures of: -{image_string} -''' - #Save the readme to a file - readme_file = open("model.README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - try: - create_repo(model_id,private=True, token=hf_token) - except: - import time - epoch_time = str(int(time.time())) - create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="instance_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - if is_spaces: - if(not comes_from_automated): - extra_message = "Don't forget to remove the GPU attribution after you play with it." - else: - extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page" - title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!" - description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}" - write_to_community(title, description, hf_token) - #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token) - print("Model uploaded successfully!") - return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])] - -def convert_to_ckpt(): - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"]) - -def check_status(top_description): - if os.path.exists("hastrained.success"): - if is_spaces: - update_top_tag = gr.update(value=f''' -
-

Your model has finished training ✅

-

Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or pushing it to the Hugging Face Hub). Once you are done, your model is safe, and if you don't want to train a new one, go to the settings page and downgrade your Space to a CPU Basic

-
- ''') - else: - update_top_tag = gr.update(value=f''' -
-

Your model has finished training ✅

-

Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or pushing it to the Hugging Face Hub).

-
- ''') - show_outputs = True - elif os.path.exists("intraining.lock"): - update_top_tag = gr.update(value=''' -
-

Don't worry, your model is still training! ⌛

-

You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model

-
- ''') - show_outputs = False - else: - update_top_tag = gr.update(value=top_description) - show_outputs = False - if os.path.exists("diffusers_model.tar"): - update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"]) - else: - update_files_tag = gr.update(visible=show_outputs) - return [ - update_top_tag, #top_description - gr.update(visible=show_outputs), #try_your_model - gr.update(visible=show_outputs), #push_to_hub - update_files_tag, #result - gr.update(visible=show_outputs), #convert_button - ] - -def checkbox_swap(checkbox): - return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)] - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if is_shared_ui: - top_description = gr.HTML(f''' -
-

Attention - This Space doesn't work in this shared UI

-

For it to work, you can either run it locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using the default settings!&nbsp;&nbsp;Duplicate Space

- - -
- ''') - elif(is_spaces): - if(is_gpu_associated): - top_description = gr.HTML(f''' -
-

You have successfully associated a {which_gpu} GPU with the Dreambooth Training Space 🎉

-

You can now train your model! You will be billed by the minute from when you activate the GPU until it is turned off.

-
- ''') - else: - top_description = gr.HTML(f''' -
-

You have successfully duplicated the Dreambooth Training Space 🎉

-

There's only one step left before you can train your model: assign a T4-small or A10G-small GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until it is turned off.

-
- ''') - else: - top_description = gr.HTML(f''' -
-

You have successfully cloned the Dreambooth Training Space locally 🎉

-

Run pip install -r requirements-local.txt

-
- ''') - gr.Markdown("# Dreambooth Training UI 💭") - gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)") - - with gr.Row() as what_are_you_training: - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - with gr.Column(): - base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True) - - #Very hacky approach to emulate dynamically created Gradio components - with gr.Row() as upload_your_concept: - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example") - thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False) - thing_image_example = gr.HTML('''''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.") - - with gr.Column(): - file_collection = [] - concept_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions''')) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, 
[file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
-        counter_delete += 1
-
-    with gr.Accordion("Custom Settings", open=False):
-        swap_auto_calculated = gr.Checkbox(label="Use custom settings")
-        gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically according to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% of the steps for a person. The number of steps varies between 1400 and 2400 depending on how many images are uploaded. If you see too many artifacts in your output, it may have overfit and you need fewer steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
-        steps = gr.Number(label="How many steps", value=2400)
-        perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
-
-    with gr.Box(visible=False) as training_summary:
-        training_summary_text = gr.HTML("", visible=True, label="Training Summary")
-        is_advanced_visible = True if is_spaces else False
-        training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
-        training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
-        training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
-        training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
-        training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)
-
-    train_btn = gr.Button("Start Training")
-    progress_bar = gr.Textbox(visible=False)
-    if(is_shared_ui):
-        training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
-    elif(not is_gpu_associated):
-        training_ongoing = gr.Markdown("## Oops, you haven't associated a T4 or A10G GPU with this Space. Visit the Settings tab, associate one and try again.", visible=False)
-    else:
-        training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like, or just wait. If you did not check `Remove GPU after training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done.", visible=False)
-
-    #Post-training UI
-    completed_training = gr.Markdown('''# ✅ Training completed.
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False) - - with gr.Row(): - with gr.Box(visible=False) as try_your_model: - gr.Markdown("## Try your model") - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1) - generate_button = gr.Button("Generate Image") - - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token", type="password") - - push_button = gr.Button("Push to the Hub") - - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - #Swap the examples and the % of text encoder trained depending if it is an object, person or style - type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - - #Swap the base model - - base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar) - base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[]) - #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not - for file in file_collection: - #file.change(fn=update_steps,inputs=file_collection, outputs=steps) - file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - #Give more options if the user wants to finish everything after training - if(is_spaces): - training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, 
outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False) - #Add a message for while it is in training - - #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing) - - #The main train function - train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar) - train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False) - - #Button to generate an image from your trained model after training - generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False) - #Button to push the model to the Hugging Face Hub - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False) - #Button to convert the model to ckpt format - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False) - - #Checks if the training is running - demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False) - -demo.queue(default_enabled=False).launch(debug=True) \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Barra De Hookah Mp3 Descargar.md b/spaces/Benson/text-generation/Examples/Barra De Hookah Mp3 Descargar.md deleted file mode 100644 index 3d8f291ee42eb873031d3ba7eaa0324ace446a11..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Barra De Hookah Mp3 Descargar.md +++ /dev/null @@ -1,110 +0,0 @@ -
-

Hookah Bar MP3 Download: How to Enjoy the Popular Song Online

-

If you are a fan of Bollywood music, you may have heard of the song Hookah Bar from the movie Khiladi 786. This song is a catchy, upbeat dance number that has become a cult hit among young people. But do you know how to download and enjoy this song online? In this article, we will tell you everything you need to know about downloading Hookah Bar MP3, including its origin, meaning, popularity, legal issues, sources, platforms, devices, settings and occasions. So, let's get started!

-

What is Hookah Bar?

-

Hookah Bar is a Hindi song that was released in 2012 as part of the soundtrack of the action comedy film Khiladi 786, starring Akshay Kumar and Asin. The song features Kumar and Asin dancing in a hookah bar, a place where people smoke flavored tobacco from a water pipe called a hookah. The song has a catchy chorus that goes like this:

-

hookah bar mp3 download


DOWNLOAD https://bltlly.com/2v6JmS



-
-

Tera pyar pyar pyar hookah bar
-Tera pyar pyar pyar hookah bar
-Tera pyar pyar pyar hookah bar
-Tera pyar pyar pyar hookah bar

-
-

The lyrics roughly translate to:

-
-

Your love love love is like a hookah bar
Your love love love is like a hookah bar -Your love love love is like a hookah bar -Your love love love is like a hookah bar

-
-

The origin and meaning of the song

-

The song was composed by Himesh Reshammiya, who is also one of its singers along with Vineet Singh and Aman Trikha. Reshammiya also wrote the lyrics, which were inspired by his own experience of visiting a hookah bar in Dubai. He said he wanted to create a song that would appeal to young people and make them dance. He also said he used the hookah bar as a metaphor for love, since both are addictive and intoxicating.

-

The singers and composers of the song

- -

Vineet Singh is an Indian playback singer who rose to fame after winning a singing reality show called Jo Jeeta Wohi Superstar in 2008. He has sung for films such as Murder 3, Jai Ho, Boss and Kis Kisko Pyaar Karoon. He is also known for his collaborations with Reshammiya on songs such as Hai Apna Dil Toh Awara, Lonely and Balma. Aman Trikha is another Indian singer who has sung for films such as OMG - Oh My God!, Prem Ratan Dhan Payo, Veer-Zaara and Shivaay. He has also worked with Reshammiya on songs such as Go Go Govinda, Po Po and Hookah Bar. He is known for his versatile and powerful voice, and can sing in different genres and languages.

-

The popularity and reception of the song

-

Hookah Bar was a huge hit with audiences and critics alike. It topped the charts on several music platforms and radio stations in India and abroad. It also won several awards and nominations, such as the Mirchi Music Award for Song of the Year, the Stardust Award for Best Playback Singer (Male) and the Zee Cine Award for Best Music Director. The song was praised for its catchy melody, energetic vocals and lively choreography. It also became a popular choice for parties, weddings and festivals, where people would dance to its beats.

-

How to download Hookah Bar MP3 online?

-

If you like Hookah Bar and want to listen to it anytime, anywhere, you may want to download it as an MP3 file. MP3 is a digital audio format that compresses sound data without losing much quality. MP3 files are easy to store, transfer and play on a variety of devices and platforms. But how can you download Hookah Bar MP3 online? Here are a few things to consider before you do.

-
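A quick way to check what you actually downloaded is to read the file's header data. Here is a minimal sketch using the mutagen library (our choice for illustration; the file name is a placeholder):

```python
from mutagen.mp3 import MP3  # pip install mutagen

# "hookah_bar.mp3" is a placeholder for whatever file you downloaded.
audio = MP3("hookah_bar.mp3")
print(f"Length: {audio.info.length:.0f} s")           # e.g. ~256 s for this song
print(f"Bitrate: {audio.info.bitrate // 1000} kbps")  # compression level
```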

The benefits of downloading MP3 files

- - -

The legal and ethical issues of downloading MP3 files

-

Downloading MP3 files is not always legal or ethical. Some of the issues you should keep in mind are:

-

- -

Therefore, you should always download MP3 files from legal and ethical sources that respect the rights and interests of both consumers and creators.

-

The best sources and platforms for downloading Hookah Bar MP3 online

-

There are many sources and platforms that offer Hookah Bar MP3 downloads online. Some of them are:

| Name | Type | Features |
| --- | --- | --- |
| iTunes | Online store | High-quality MP3 files for purchase; supports various devices and platforms; access to a large music library; offline playback and cloud storage |
| YouTube Music | Streaming service | Free and premium plans for streaming and downloading MP3 files; supports various devices and platforms; access to a large music library; offline playback and cloud storage; personalized recommendations and playlists; integrates with YouTube videos |
| Gaana | Streaming service | Free and premium plans for streaming and downloading MP3 files; supports various devices and platforms; access to a large music library; offline playback and cloud storage; personalized recommendations and playlists; specializes in Indian music |
| Saavn | Streaming service | Free and premium plans for streaming and downloading MP3 files; supports various devices and platforms; access to a large music library; offline playback and cloud storage; personalized recommendations and playlists; specializes in Indian music |
| MP3Juices | Online converter | Free and fast MP3 conversion of YouTube videos; supports various devices and platforms; access to a large music library; online playback and download |
| MP3Skull | Online downloader | Free and easy MP3 downloads from various sources; supports various devices and platforms; access to a large music library; online playback and download |
-

These are some of the best sources and platforms for downloading Hookah Bar MP3 online. However, you should always check the quality, legality and safety of the files before downloading them. You should also respect the rights and interests of the creators and owners of the music.

-
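The converter-style services in the table all do roughly the same thing: extract a video's audio track and re-encode it to MP3. For reference, here is a hedged sketch of that workflow with the open-source yt-dlp library (an assumption for illustration, not what those sites actually run); only use it on media you have the rights to download, and note the URL is a placeholder:

```python
import yt_dlp  # pip install yt-dlp; requires ffmpeg on your PATH

options = {
    "format": "bestaudio/best",          # pick the best available audio stream
    "postprocessors": [{
        "key": "FFmpegExtractAudio",     # hand the stream to ffmpeg
        "preferredcodec": "mp3",
        "preferredquality": "192",       # target bitrate in kbps
    }],
    "outtmpl": "%(title)s.%(ext)s",      # name the file after the video title
}

with yt_dlp.YoutubeDL(options) as ydl:
    ydl.download(["https://example.com/a-video-you-have-rights-to"])
```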

How to enjoy Hookah Bar MP3 online?

- -

The best devices and apps for playing Hookah Bar MP3 online

-

You can play Hookah Bar MP3 online on various devices, such as smartphones, tablets, laptops, desktops, speakers, headphones, earbuds, etc. You can also use various apps, such as iTunes, Spotify, YouTube Music, Gaana, Saavn, etc. However, you should choose the device and app that suit your preferences and needs. Some of the factors you should consider are:

- -

The best settings and features to improve the sound quality of Hookah Bar MP3 online

-

You can improve the sound quality of Hookah Bar MP3 online by adjusting the settings and features of your device and app. Some of the settings and features you can use are:

- -
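Player apps expose these options as simple toggles. If you would rather bake an effect into the file itself, here is a rough bass-boost sketch with the pydub library (an illustration, not a feature of any app above; the file name is a placeholder and ffmpeg must be installed):

```python
from pydub import AudioSegment  # pip install pydub (uses ffmpeg for MP3)

song = AudioSegment.from_mp3("hookah_bar.mp3")  # placeholder file name

# Crude bass boost: isolate the low end, raise it by 6 dB,
# then overlay it back onto the original track.
bass = song.low_pass_filter(120) + 6
boosted = song.overlay(bass)

boosted.export("hookah_bar_bass.mp3", format="mp3")
```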

The best occasions and moods for listening to Hookah Bar MP3 online

- - -

Conclusion

-

Hookah Bar is a popular song that you can download and enjoy online. It is a catchy, upbeat dance number that uses a hookah bar as a metaphor for love. It was composed by Himesh Reshammiya, who also sang it with Vineet Singh and Aman Trikha. It was released in 2012 as part of the soundtrack of the film Khiladi 786, was a huge hit with audiences and critics alike, and won several awards and nominations for its music and vocals.

- -

You can enjoy Hookah Bar MP3 online on various devices and apps, such as smartphones, tablets, laptops, desktops, speakers, headphones, earbuds, etc. You can also use various settings and features to improve the sound quality of the song, such as the equalizer, bass boost, surround sound, lyrics display and playlists. And you can listen to Hookah Bar MP3 online on different occasions and in different moods, such as partying, working out, relaxing, romancing and traveling.

-

We hope this article has helped you learn more about downloading Hookah Bar MP3 and how to enjoy it online. If you have any questions or comments, feel free to contact us. Thanks for reading!

-

Frequently asked questions

-

Here are some frequently asked questions about downloading Hookah Bar MP3:

-
    -
  1. What is the length of Hookah Bar MP3?
  2. -

    Hookah Bar MP3 is 4 minutes and 16 seconds long.

    -
  3. What is the file size of Hookah Bar MP3?
  4. -

    The size of Hookah Bar MP3 varies depending on the source and platform you download it from, but it is usually around 4 MB (see the quick arithmetic check after this FAQ).

    -
  5. What is the genre of Hookah Bar MP3?
  6. -

    The genre of Hookah Bar MP3 is Bollywood dance music.

    -
  7. What is the language of Hookah Bar MP3?
  8. -

    The language of Hookah Bar MP3 is Hindi.

    -
  9. What is the rating of Hookah Bar MP3?
  10. -

    Hookah Bar MP3 is rated 4.5 out of 5 stars on most platforms.

    -
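The ~4 MB figure above is easy to sanity-check, since MP3 size is roughly duration times bitrate. A quick back-of-the-envelope in Python, assuming a common 128 kbps encoding:

```python
duration_s = 4 * 60 + 16      # 4 min 16 s = 256 seconds
bitrate_bps = 128_000         # 128 kbps, a typical MP3 bitrate
size_bytes = duration_s * bitrate_bps / 8
print(f"{size_bytes / 1_000_000:.1f} MB")  # ~4.1 MB, matching the FAQ
```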

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Carx Street Apk Infinite Money 0.8 4.md b/spaces/Benson/text-generation/Examples/Carx Street Apk Infinite Money 0.8 4.md deleted file mode 100644 index 52e18d3ab533d061c7f18ddd5a70e8c3a7dd6f7b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Carx Street Apk Infinite Money 0.8 4.md +++ /dev/null @@ -1,127 +0,0 @@ - -

CarX Street APK Infinite Money 0.8 4: How to Download and Play the Most Addictive Street Racing Game of the Year

-

Are you a fan of street racing games? Do you like to feel the adrenaline rush of speeding through the city streets in restored old cars? Do you want to have a garage full of legendary and exclusive cars? Then you'll love CarX Street APK Infinite Money 0.8 4, a game that will take you into the world of street racing with amazing graphics, easy controls and lots of fun.

-

carx street apk infinite money 0.8 4


Download Zip https://bltlly.com/2v6MBD



-

In this article, we will show you what CarX Street APK Infinite Money 0.8 4 is, how to download and install the game on your Android device, how to play it and make the most of the infinite money you can use to buy and upgrade your cars, and the advantages and disadvantages of playing this game. Let's get started!

-

What is CarX Street APK Infinite Money 0.8 4?

- CarX Street APK Infinite Money 0.8 4 is a modified version of the original CarX Street game, developed by CarX Technologies, a company specializing in car simulation games. The original game was released in January 2023 for Android and iOS, and was widely praised by players for its graphics quality, realistic physics, and variety of cars and game modes.

-

A street racing game with amazing graphics and easy controls

- CarX Street APK Infinite Money 0.8 4 is a game that impresses with its graphics. The cars are modeled with detail and realism, the environments are varied and well lit, and the sound effects are immersive. You will feel like you are really driving through the city streets, cornering, skidding, overtaking and maneuvering.

-

- -

A game that allows you to customize and improve your cars

- CarX Street APK Infinite Money 0.8 4 gives you the freedom to customize and improve your cars your way. You can choose from over 50 different cars, from classics and sports cars to muscle cars, hot rods, tuned cars and more. Each car has its own speed, acceleration, braking, traction and handling characteristics, which you can see on screen before buying or using it.

-

You can also modify the appearance and performance of your cars. You can change the wheels, tires, headlights, taillights, mirrors, bumpers, skirts, spoilers, hoods, doors, windows, colors, stickers and more. You can also change the engine, turbo, exhaust, air filter, brake system, suspension, differential and more. You can see the changes you make to your car in real time on the screen.

-

A game that offers short and exciting races against players from around the world

- CarX Street APK Infinite Money 0.8 4 offers you short and exciting races against players from all over the world. You can take part in daily, weekly and monthly events that give you rewards in money and reputation. You can also join leagues and tournaments that put you in head-to-head races with other players. You can see the ranking of the best players in the world and compare your performance with theirs.

-

The races are fast and intense. You have to use your skill to start well, take perfect corners, avoid obstacles and opponents, use nitro at the right time and finish first. You can choose from different race modes such as sprint, drift, drag and Time Attack. You can also choose between different difficulty levels, from beginner to professional.

-

How to download CarX Street APK Infinite Money 0.8 4?

- -

The minimum requirements to install the game

-

According to the game’s official website, the minimum requirements for installing CarX Street on your Android device are:

- - - - - - - - - - - - - - - - - - - - - - - - -
| Requirement | Value |
| --- | --- |
| Android version | 6.0 or higher |
| Free space | 1 GB or more |
| RAM | |
| Processor | Quad-core or higher |
| Internet connection | Required to play online |
-

If your device does not meet these requirements, you may have trouble installing or running the game. In this case, you can try downloading the original version of the game from the Google Play Store, which may be more compatible with your device.

-

The steps to download and install the APK file

-

If your device meets the minimum requirements, you can follow the steps below to download and install CarX Street APK Infinite Money 0.8 4:

-
    -
  1. Visit a trusted website that offers the game's APK file for download. You can search Google for "CarX Street APK Infinite Money 0.8 4" and choose one of the results. But beware: not all websites are safe, and some may contain viruses or malware. We therefore recommend using an antivirus and a VPN before downloading any APK file.
  2. -
  3. Click the download button and wait for the file to be downloaded to your device. The file should be about 500 MB in size.
  4. -
  5. Before installing the APK file, you need to enable the option to install applications from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and enable the option.
  6. -
  7. Now, locate the APK file you downloaded in your device's downloads folder and tap it to start the installation (or sideload it from a computer, as sketched after these steps).
  8. - -
  9. Done! Now you can open the game and start having fun with infinite money and all cars unlocked.
  10. -
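If you prefer to sideload from a computer instead of tapping through the installer, the same installation can be driven over adb. A minimal sketch, assuming adb is installed, USB debugging is enabled on the phone, and using a placeholder file name:

```python
import subprocess

# Sideload the downloaded APK over USB; -r replaces an existing
# install while keeping its data. "carx-street.apk" is a placeholder.
subprocess.run(["adb", "install", "-r", "carx-street.apk"], check=True)
```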
-

Precautions to take before downloading the APK file

- Downloading and installing CarX Street APK Infinite Money 0.8 4 can be a great way to enjoy the game without limitations, but it can also bring some risks and disadvantages. It is therefore important to take some precautions before downloading the APK file, such as:

- -

How to play CarX Street APK Infinite Money 0.8 4?

- Now that you've downloaded and installed CarX Street APK Infinite Money 0.8 4 on your Android device, you're ready to play and have fun with the most addictive street racing game of the year. But how do you play it and make the most of the infinite money you can use to buy and upgrade your cars? See the tips below:

-

How to choose and set up your car

- -

After choosing your car, you can customize it your way. You can change its color, stickers, parts and accessories to make it better looking and faster. You can see the changes you make in real time on the screen, and you can test your car before racing it to see how it behaves on the track.

-

How to participate in events and races

-

The second step in playing CarX Street APK Infinite Money 0.8 4 is to take part in events and races. You can access the game map from the main menu and see the events and races that are available to you. You can choose from different race modes such as sprint, drift, drag and Time Attack. You can also choose between different difficulty levels, from beginner to professional.

-

Events are challenges that give you rewards in money and reputation. They can be daily, weekly or monthly, and can have different themes and goals. For example, you may have to perform a certain number of drifts, overtake a certain number of opponents, or finish first in a specific race.

-

Races are head-to-head contests against other players from all over the world. You can enter leagues and tournaments that match you against players of your level. You can see the ranking of the best players in the world and compare your performance with theirs. Races are fast and intense, and require you to use your skill to win.

-

How to use infinite money to buy and upgrade your cars

- -

You can access the game store from the main menu and see the cars and parts that are available to you. You can see the features and prices of each item before buying. You can also see the in-game recommendations for the best cars and the best parts for each race mode.

-

Using infinite money is an advantage that allows you to have a garage full of legendary and exclusive cars, and have the best cars for each race. But remember: infinite money is not everything. You also need to have skill and strategy to win races.

-

What are the advantages and disadvantages of playing CarX Street APK Infinite Money 0.8 4?

-

Playing CarX Street APK Infinite Money 0.8 4 has its advantages and disadvantages. Here they are:

-

The advantages of playing the game

- -

The disadvantages of playing the game

- -

Conclusion

- CarX Street APK Infinite Money 0.8 4 is a street racing game that offers guaranteed fun with amazing graphics, easy controls and lots of customization. You can use the infinite money to buy and upgrade your cars, and take part in events and races against players from around the world. But you also need to be aware of the risks and disadvantages of using a modified game, such as downloading an infected or corrupted APK file, violating the terms and conditions of the original game, or missing out on updates and support from the game's developer.

- If you want to download and play CarX Street APK Infinite Money 0.8 4 on your Android device, follow the tips we gave you in this article. But remember: do it at your own risk, and respect the copyright and intellectual property of the original game's developer.

-

So, did you like the article? Do you have any questions or suggestions? Leave your comment below. And if you liked the article, share it with your friends on social media. Thanks for reading!

-

FAQs

-

Here are some frequently asked questions about CarX Street APK Infinite Money 0.8 4:

-
    -
  1. What is an APK file?
  2. -

An APK file is a file format used to install applications on the Android operating system. It contains all the files needed to run an application on your device (see the short sketch after this FAQ).

    -
  3. What is a modified game?
  4. -A modified game is a version of the original game that has been altered by a third party to unlock extra benefits, such as the infinite money in this version.
  5. Where can I download CarX Street APK Infinite Money 0.8 4?
  6. -You can download it from a trusted website that offers the game's APK file. You can search Google for "CarX Street APK Infinite Money 0.8 4" and choose one of the results. But beware: not all websites are safe, and some may contain viruses or malware. We therefore recommend using an antivirus and a VPN before downloading any APK file.

    -
  7. How do I update CarX Street APK Infinite Money 0.8 4?
  8. -To update CarX Street APK Infinite Money 0.8 4, download and install the latest version of the game's APK file, following the same steps you used for the previous version. But beware: the latest APK is not always compatible with the previous version, and you may lose your progress or have problems running the game.

    -
  9. Can I play CarX Street APK Infinite Money 0.8 4 on my PC?
  10. -Yes, you can play CarX Street APK Infinite Money 0.8 4 on your PC using an Android emulator. An Android emulator is a program that simulates the Android operating system on your PC, allowing you to install and run Android applications on it. Some of the most popular Android emulators are BlueStacks, NoxPlayer and LDPlayer.

    -
  11. Can I play CarX Street APK Infinite Money 0.8 4 with my friends?
  12. -

Yes, you can play CarX Street APK Infinite Money 0.8 4 with your friends using the game's online multiplayer mode. You can invite your friends to join you in races, or compete against them in the world ranking. You can also chat with them in-game, or send private messages.

    -
  13. What should I do if I have any problems or questions about CarX Street APK Infinite Money 0.8 4?
  14. -If you have any problems or questions about CarX Street APK Infinite Money 0.8 4, you can try the following solutions:

    -
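As the first FAQ notes, an APK is just packaging: a ZIP archive with a defined layout. You can confirm that with Python's standard library (the file name is a placeholder):

```python
import zipfile

# List the first few entries; you will typically see the manifest,
# compiled code (classes.dex) and resources.
with zipfile.ZipFile("carx-street.apk") as apk:
    for name in apk.namelist()[:10]:
        print(name)
```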

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/inspect.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/inspect.py deleted file mode 100644 index 27c8fa3d5b6999c77dad7aece312a5d6cf12ab48..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/inspect.py +++ /dev/null @@ -1,92 +0,0 @@ -import logging -from optparse import Values -from typing import Any, Dict, List - -from pip._vendor.packaging.markers import default_environment -from pip._vendor.rich import print_json - -from pip import __version__ -from pip._internal.cli import cmdoptions -from pip._internal.cli.req_command import Command -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.metadata import BaseDistribution, get_environment -from pip._internal.utils.compat import stdlib_pkgs -from pip._internal.utils.urls import path_to_url - -logger = logging.getLogger(__name__) - - -class InspectCommand(Command): - """ - Inspect the content of a Python environment and produce a report in JSON format. - """ - - ignore_require_venv = True - usage = """ - %prog [options]""" - - def add_options(self) -> None: - self.cmd_opts.add_option( - "--local", - action="store_true", - default=False, - help=( - "If in a virtualenv that has global access, do not list " - "globally-installed packages." - ), - ) - self.cmd_opts.add_option( - "--user", - dest="user", - action="store_true", - default=False, - help="Only output packages installed in user-site.", - ) - self.cmd_opts.add_option(cmdoptions.list_path()) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - cmdoptions.check_list_path_option(options) - dists = get_environment(options.path).iter_installed_distributions( - local_only=options.local, - user_only=options.user, - skip=set(stdlib_pkgs), - ) - output = { - "version": "1", - "pip_version": __version__, - "installed": [self._dist_to_dict(dist) for dist in dists], - "environment": default_environment(), - # TODO tags? scheme? - } - print_json(data=output) - return SUCCESS - - def _dist_to_dict(self, dist: BaseDistribution) -> Dict[str, Any]: - res: Dict[str, Any] = { - "metadata": dist.metadata_dict, - "metadata_location": dist.info_location, - } - # direct_url. Note that we don't have download_info (as in the installation - # report) since it is not recorded in installed metadata. - direct_url = dist.direct_url - if direct_url is not None: - res["direct_url"] = direct_url.to_dict() - else: - # Emulate direct_url for legacy editable installs. 
- editable_project_location = dist.editable_project_location - if editable_project_location is not None: - res["direct_url"] = { - "url": path_to_url(editable_project_location), - "dir_info": { - "editable": True, - }, - } - # installer - installer = dist.installer - if dist.installer: - res["installer"] = installer - # requested - if dist.installed_with_dist_info: - res["requested"] = dist.requested - return res diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/version.py deleted file mode 100644 index de9a09a4ed3b078b37e7490a6686f660ae935aca..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/version.py +++ /dev/null @@ -1,504 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import collections -import itertools -import re -import warnings -from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union - -from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType - -__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"] - -InfiniteTypes = Union[InfinityType, NegativeInfinityType] -PrePostDevType = Union[InfiniteTypes, Tuple[str, int]] -SubLocalType = Union[InfiniteTypes, int, str] -LocalType = Union[ - NegativeInfinityType, - Tuple[ - Union[ - SubLocalType, - Tuple[SubLocalType, str], - Tuple[NegativeInfinityType, SubLocalType], - ], - ..., - ], -] -CmpKey = Tuple[ - int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType -] -LegacyCmpKey = Tuple[int, Tuple[str, ...]] -VersionComparisonMethod = Callable[ - [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool -] - -_Version = collections.namedtuple( - "_Version", ["epoch", "release", "dev", "pre", "post", "local"] -) - - -def parse(version: str) -> Union["LegacyVersion", "Version"]: - """ - Parse the given version string and return either a :class:`Version` object - or a :class:`LegacyVersion` object depending on if the given version is - a valid PEP 440 version or a legacy version. - """ - try: - return Version(version) - except InvalidVersion: - return LegacyVersion(version) - - -class InvalidVersion(ValueError): - """ - An invalid version was found, users should refer to PEP 440. - """ - - -class _BaseVersion: - _key: Union[CmpKey, LegacyCmpKey] - - def __hash__(self) -> int: - return hash(self._key) - - # Please keep the duplicated `isinstance` check - # in the six comparisons hereunder - # unless you find a way to avoid adding overhead function calls. 
- def __lt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key < other._key - - def __le__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key <= other._key - - def __eq__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key == other._key - - def __ge__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key >= other._key - - def __gt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key > other._key - - def __ne__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key != other._key - - -class LegacyVersion(_BaseVersion): - def __init__(self, version: str) -> None: - self._version = str(version) - self._key = _legacy_cmpkey(self._version) - - warnings.warn( - "Creating a LegacyVersion has been deprecated and will be " - "removed in the next major release", - DeprecationWarning, - ) - - def __str__(self) -> str: - return self._version - - def __repr__(self) -> str: - return f"" - - @property - def public(self) -> str: - return self._version - - @property - def base_version(self) -> str: - return self._version - - @property - def epoch(self) -> int: - return -1 - - @property - def release(self) -> None: - return None - - @property - def pre(self) -> None: - return None - - @property - def post(self) -> None: - return None - - @property - def dev(self) -> None: - return None - - @property - def local(self) -> None: - return None - - @property - def is_prerelease(self) -> bool: - return False - - @property - def is_postrelease(self) -> bool: - return False - - @property - def is_devrelease(self) -> bool: - return False - - -_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE) - -_legacy_version_replacement_map = { - "pre": "c", - "preview": "c", - "-": "final-", - "rc": "c", - "dev": "@", -} - - -def _parse_version_parts(s: str) -> Iterator[str]: - for part in _legacy_version_component_re.split(s): - part = _legacy_version_replacement_map.get(part, part) - - if not part or part == ".": - continue - - if part[:1] in "0123456789": - # pad for numeric comparison - yield part.zfill(8) - else: - yield "*" + part - - # ensure that alpha/beta/candidate are before final - yield "*final" - - -def _legacy_cmpkey(version: str) -> LegacyCmpKey: - - # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch - # greater than or equal to 0. This will effectively put the LegacyVersion, - # which uses the defacto standard originally implemented by setuptools, - # as before all PEP 440 versions. - epoch = -1 - - # This scheme is taken from pkg_resources.parse_version setuptools prior to - # it's adoption of the packaging library. - parts: List[str] = [] - for part in _parse_version_parts(version.lower()): - if part.startswith("*"): - # remove "-" before a prerelease tag - if part < "*final": - while parts and parts[-1] == "*final-": - parts.pop() - - # remove trailing zeros from each series of numeric parts - while parts and parts[-1] == "00000000": - parts.pop() - - parts.append(part) - - return epoch, tuple(parts) - - -# Deliberately not anchored to the start and end of the string, to make it -# easier for 3rd party code to reuse -VERSION_PATTERN = r""" - v? 
-    (?:
-        (?:(?P<epoch>[0-9]+)!)?                           # epoch
-        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
-        (?P<pre>                                          # pre-release
-            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P<pre_n>[0-9]+)?
-        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?P<post_l>post|rev|r)
-                [-_\.]?
-                (?P<post_n2>[0-9]+)?
-            )
-        )?
-        (?P<dev>                                          # dev release
-            [-_\.]?
-            (?P<dev_l>dev)
-            [-_\.]?
-            (?P<dev_n>[0-9]+)?
-        )?
-    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
    -"""
    -
    -
    -class Version(_BaseVersion):
    -
    -    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
    -
    -    def __init__(self, version: str) -> None:
    -
    -        # Validate the version and parse it into pieces
    -        match = self._regex.search(version)
    -        if not match:
    -            raise InvalidVersion(f"Invalid version: '{version}'")
    -
    -        # Store the parsed out pieces of the version
    -        self._version = _Version(
    -            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
    -            release=tuple(int(i) for i in match.group("release").split(".")),
    -            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
    -            post=_parse_letter_version(
    -                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
    -            ),
    -            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
    -            local=_parse_local_version(match.group("local")),
    -        )
    -
    -        # Generate a key which will be used for sorting
    -        self._key = _cmpkey(
    -            self._version.epoch,
    -            self._version.release,
    -            self._version.pre,
    -            self._version.post,
    -            self._version.dev,
    -            self._version.local,
    -        )
    -
    -    def __repr__(self) -> str:
-        return f"<Version('{self}')>"
    -
    -    def __str__(self) -> str:
    -        parts = []
    -
    -        # Epoch
    -        if self.epoch != 0:
    -            parts.append(f"{self.epoch}!")
    -
    -        # Release segment
    -        parts.append(".".join(str(x) for x in self.release))
    -
    -        # Pre-release
    -        if self.pre is not None:
    -            parts.append("".join(str(x) for x in self.pre))
    -
    -        # Post-release
    -        if self.post is not None:
    -            parts.append(f".post{self.post}")
    -
    -        # Development release
    -        if self.dev is not None:
    -            parts.append(f".dev{self.dev}")
    -
    -        # Local version segment
    -        if self.local is not None:
    -            parts.append(f"+{self.local}")
    -
    -        return "".join(parts)
    -
    -    @property
    -    def epoch(self) -> int:
    -        _epoch: int = self._version.epoch
    -        return _epoch
    -
    -    @property
    -    def release(self) -> Tuple[int, ...]:
    -        _release: Tuple[int, ...] = self._version.release
    -        return _release
    -
    -    @property
    -    def pre(self) -> Optional[Tuple[str, int]]:
    -        _pre: Optional[Tuple[str, int]] = self._version.pre
    -        return _pre
    -
    -    @property
    -    def post(self) -> Optional[int]:
    -        return self._version.post[1] if self._version.post else None
    -
    -    @property
    -    def dev(self) -> Optional[int]:
    -        return self._version.dev[1] if self._version.dev else None
    -
    -    @property
    -    def local(self) -> Optional[str]:
    -        if self._version.local:
    -            return ".".join(str(x) for x in self._version.local)
    -        else:
    -            return None
    -
    -    @property
    -    def public(self) -> str:
    -        return str(self).split("+", 1)[0]
    -
    -    @property
    -    def base_version(self) -> str:
    -        parts = []
    -
    -        # Epoch
    -        if self.epoch != 0:
    -            parts.append(f"{self.epoch}!")
    -
    -        # Release segment
    -        parts.append(".".join(str(x) for x in self.release))
    -
    -        return "".join(parts)
    -
    -    @property
    -    def is_prerelease(self) -> bool:
    -        return self.dev is not None or self.pre is not None
    -
    -    @property
    -    def is_postrelease(self) -> bool:
    -        return self.post is not None
    -
    -    @property
    -    def is_devrelease(self) -> bool:
    -        return self.dev is not None
    -
    -    @property
    -    def major(self) -> int:
    -        return self.release[0] if len(self.release) >= 1 else 0
    -
    -    @property
    -    def minor(self) -> int:
    -        return self.release[1] if len(self.release) >= 2 else 0
    -
    -    @property
    -    def micro(self) -> int:
    -        return self.release[2] if len(self.release) >= 3 else 0
    -
    -
    -def _parse_letter_version(
    -    letter: str, number: Union[str, bytes, SupportsInt]
    -) -> Optional[Tuple[str, int]]:
    -
    -    if letter:
    -        # We consider there to be an implicit 0 in a pre-release if there is
    -        # not a numeral associated with it.
    -        if number is None:
    -            number = 0
    -
    -        # We normalize any letters to their lower case form
    -        letter = letter.lower()
    -
    -        # We consider some words to be alternate spellings of other words and
    -        # in those cases we want to normalize the spellings to our preferred
    -        # spelling.
    -        if letter == "alpha":
    -            letter = "a"
    -        elif letter == "beta":
    -            letter = "b"
    -        elif letter in ["c", "pre", "preview"]:
    -            letter = "rc"
    -        elif letter in ["rev", "r"]:
    -            letter = "post"
    -
    -        return letter, int(number)
    -    if not letter and number:
    -        # We assume if we are given a number, but we are not given a letter
    -        # then this is using the implicit post release syntax (e.g. 1.0-1)
    -        letter = "post"
    -
    -        return letter, int(number)
    -
    -    return None
    -
    -
    -_local_version_separators = re.compile(r"[\._-]")
    -
    -
    -def _parse_local_version(local: str) -> Optional[LocalType]:
    -    """
    -    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
    -    """
    -    if local is not None:
    -        return tuple(
    -            part.lower() if not part.isdigit() else int(part)
    -            for part in _local_version_separators.split(local)
    -        )
    -    return None
    -
    -
    -def _cmpkey(
    -    epoch: int,
    -    release: Tuple[int, ...],
    -    pre: Optional[Tuple[str, int]],
    -    post: Optional[Tuple[str, int]],
    -    dev: Optional[Tuple[str, int]],
    -    local: Optional[Tuple[SubLocalType]],
    -) -> CmpKey:
    -
    -    # When we compare a release version, we want to compare it with all of the
    -    # trailing zeros removed. So we'll use a reverse the list, drop all the now
    -    # leading zeros until we come to something non zero, then take the rest
    -    # re-reverse it back into the correct order and make it a tuple and use
    -    # that for our sorting key.
    -    _release = tuple(
    -        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
    -    )
    -
    -    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
    -    # We'll do this by abusing the pre segment, but we _only_ want to do this
    -    # if there is not a pre or a post segment. If we have one of those then
    -    # the normal sorting rules will handle this case correctly.
    -    if pre is None and post is None and dev is not None:
    -        _pre: PrePostDevType = NegativeInfinity
    -    # Versions without a pre-release (except as noted above) should sort after
    -    # those with one.
    -    elif pre is None:
    -        _pre = Infinity
    -    else:
    -        _pre = pre
    -
    -    # Versions without a post segment should sort before those with one.
    -    if post is None:
    -        _post: PrePostDevType = NegativeInfinity
    -
    -    else:
    -        _post = post
    -
    -    # Versions without a development segment should sort after those with one.
    -    if dev is None:
    -        _dev: PrePostDevType = Infinity
    -
    -    else:
    -        _dev = dev
    -
    -    if local is None:
    -        # Versions without a local segment should sort before those with one.
    -        _local: LocalType = NegativeInfinity
    -    else:
    -        # Versions with a local segment need that segment parsed to implement
    -        # the sorting rules in PEP440.
    -        # - Alpha numeric segments sort before numeric segments
    -        # - Alpha numeric segments sort lexicographically
    -        # - Numeric segments sort numerically
    -        # - Shorter versions sort before longer versions when the prefixes
    -        #   match exactly
    -        _local = tuple(
    -            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
    -        )
    -
    -    return epoch, _release, _pre, _post, _dev, _local
    diff --git a/spaces/CALM/Dashboard/streamlit_observable/frontend/src/streamlit/streamlit.ts b/spaces/CALM/Dashboard/streamlit_observable/frontend/src/streamlit/streamlit.ts
    deleted file mode 100644
    index 7e77b4d80fedbe6ff8f23d45e7651e20f7164f4c..0000000000000000000000000000000000000000
    --- a/spaces/CALM/Dashboard/streamlit_observable/frontend/src/streamlit/streamlit.ts
    +++ /dev/null
    @@ -1,198 +0,0 @@
    -/**
    - * @license
    - * Copyright 2018-2020 Streamlit Inc.
    - *
    - * Licensed under the Apache License, Version 2.0 (the "License");
    - * you may not use this file except in compliance with the License.
    - * You may obtain a copy of the License at
    - *
    - *    http://www.apache.org/licenses/LICENSE-2.0
    - *
    - * Unless required by applicable law or agreed to in writing, software
    - * distributed under the License is distributed on an "AS IS" BASIS,
    - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    - * See the License for the specific language governing permissions and
    - * limitations under the License.
    - */
    -
    -// Safari doesn't support the EventTarget class, so we use a shim.
    -import { EventTarget } from "event-target-shim"
    -import { ArrowDataframeProto, ArrowTable } from "./ArrowTable"
    -
    -/** Data sent in the custom Streamlit render event. */
    -export interface RenderData {
    -  args: any
    -  disabled: boolean
    -}
    -
    -/** Messages from Component -> Streamlit */
    -enum ComponentMessageType {
    -  // A component sends this message when it's ready to receive messages
    -  // from Streamlit. Streamlit won't send any messages until it gets this.
    -  // Data: { apiVersion: number }
    -  COMPONENT_READY = "streamlit:componentReady",
    -
    -  // The component has a new widget value. Send it back to Streamlit, which
    -  // will then re-run the app.
    -  // Data: { value: any }
    -  SET_COMPONENT_VALUE = "streamlit:setComponentValue",
    -
    -  // The component has a new height for its iframe.
    -  // Data: { height: number }
    -  SET_FRAME_HEIGHT = "streamlit:setFrameHeight",
    -}
    -
    -/**
    - * Streamlit communication API.
    - *
    - * Components can send data to Streamlit via the functions defined here,
    - * and receive data from Streamlit via the `events` property.
    - */
    -export class Streamlit {
    -  /**
-   * The Streamlit component API version we're targeting.
    -   * There's currently only 1!
    -   */
    -  public static readonly API_VERSION = 1
    -
    -  public static readonly RENDER_EVENT = "streamlit:render"
    -
    -  /** Dispatches events received from Streamlit. */
    -  public static readonly events = new EventTarget()
    -
    -  private static registeredMessageListener = false
    -  private static lastFrameHeight?: number
    -
    -  /**
    -   * Tell Streamlit that the component is ready to start receiving data.
    -   * Streamlit will defer emitting RENDER events until it receives the
    -   * COMPONENT_READY message.
    -   */
    -  public static setComponentReady = (): void => {
    -    if (!Streamlit.registeredMessageListener) {
    -      // Register for message events if we haven't already
    -      window.addEventListener("message", Streamlit.onMessageEvent)
    -      Streamlit.registeredMessageListener = true
    -    }
    -
    -    Streamlit.sendBackMsg(ComponentMessageType.COMPONENT_READY, {
    -      apiVersion: Streamlit.API_VERSION,
    -    })
    -  }
    -
    -  /**
    -   * Report the component's height to Streamlit.
    -   * This should be called every time the component changes its DOM - that is,
    -   * when it's first loaded, and any time it updates.
    -   */
    -  public static setFrameHeight = (height?: number): void => {
    -    if (height === undefined) {
-      // `height` is optional. If undefined, it defaults to scrollHeight
-      // (the content height of the element, excluding border, scrollbar,
-      // and margin) plus a 10px buffer so the content is not clipped.
-      height = document.body.scrollHeight + 10;
    -    }
    -
    -    if (height === Streamlit.lastFrameHeight) {
    -      // Don't bother updating if our height hasn't changed.
    -      return
    -    }
    -
    -    Streamlit.lastFrameHeight = height
    -    Streamlit.sendBackMsg(ComponentMessageType.SET_FRAME_HEIGHT, { height })
    -  }
    -
    -  /**
    -   * Set the component's value. This value will be returned to the Python
    -   * script, and the script will be re-run.
    -   *
    -   * For example:
    -   *
    -   * JavaScript:
    -   * Streamlit.setComponentValue("ahoy!")
    -   *
    -   * Python:
    -   * value = st.my_component(...)
    -   * st.write(value) # -> "ahoy!"
    -   *
    -   * The value must be serializable into JSON.
    -   */
    -  public static setComponentValue = (value: any): void => {
    -    Streamlit.sendBackMsg(ComponentMessageType.SET_COMPONENT_VALUE, { value })
    -  }
    -
    -  /** Receive a ForwardMsg from the Streamlit app */
    -  private static onMessageEvent = (event: MessageEvent): void => {
    -    const type = event.data["type"]
    -    switch (type) {
    -      case Streamlit.RENDER_EVENT:
    -        Streamlit.onRenderMessage(event.data)
    -        break
    -    }
    -  }
    -
    -  /**
    -   * Handle an untyped Streamlit render event and redispatch it as a
    -   * StreamlitRenderEvent.
    -   */
    -  private static onRenderMessage = (data: any): void => {
    -    let args = data["args"]
    -    if (args == null) {
    -      console.error(
    -        `Got null args in onRenderMessage. This should never happen`
    -      )
    -      args = {}
    -    }
    -
    -    // Parse our dataframe arguments with arrow, and merge them into our args dict
    -    const dataframeArgs =
    -      data["dfs"] && data["dfs"].length > 0
    -        ? Streamlit.argsDataframeToObject(data["dfs"])
    -        : {}
    -
    -    args = {
    -      ...args,
    -      ...dataframeArgs,
    -    }
    -
    -    const disabled = Boolean(data["disabled"])
    -
    -    // Dispatch a render event!
    -    const eventData = { disabled, args }
    -    const event = new CustomEvent(Streamlit.RENDER_EVENT, {
    -      detail: eventData,
    -    })
    -    Streamlit.events.dispatchEvent(event)
    -  }
    -
    -  private static argsDataframeToObject = (
    -    argsDataframe: ArgsDataframe[]
    -  ): object => {
    -    const argsDataframeArrow = argsDataframe.map(
    -      ({ key, value }: ArgsDataframe) => [key, Streamlit.toArrowTable(value)]
    -    )
    -    return Object.fromEntries(argsDataframeArrow)
    -  }
    -
    -  private static toArrowTable = (df: ArrowDataframeProto): ArrowTable => {
    -    const { data, index, columns } = df.data
    -    return new ArrowTable(data, index, columns)
    -  }
    -
    -  /** Post a message to the Streamlit app. */
    -  private static sendBackMsg = (type: string, data?: any): void => {
    -    window.parent.postMessage(
    -      {
    -        isStreamlitMessage: true,
    -        type: type,
    -        ...data,
    -      },
    -      "*"
    -    )
    -  }
    -}
    -
    -interface ArgsDataframe {
    -  key: string
    -  value: ArrowDataframeProto
    -}
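
The class above is the component half of a postMessage handshake: the iframe announces `streamlit:componentReady`, receives `streamlit:render` events, and answers with `streamlit:setFrameHeight` / `streamlit:setComponentValue`. A minimal Python sketch of the same message flow, with a hypothetical `FakeHost` standing in for `window.parent`:

```python
# Sketch of the component -> Streamlit message protocol above; FakeHost is a
# hypothetical stand-in for window.parent.postMessage.
class FakeHost:
    def __init__(self):
        self.messages = []

    def post_message(self, msg):
        self.messages.append(msg)

def send_back_msg(host, msg_type, data=None):
    # Mirrors Streamlit.sendBackMsg: tag the payload and spread `data` in.
    host.post_message({"isStreamlitMessage": True, "type": msg_type, **(data or {})})

host = FakeHost()
# 1. Announce readiness; Streamlit defers RENDER events until this arrives.
send_back_msg(host, "streamlit:componentReady", {"apiVersion": 1})
# 2. Report the iframe height whenever the DOM changes.
send_back_msg(host, "streamlit:setFrameHeight", {"height": 320})
# 3. Return a JSON-serializable value; the Python script re-runs with it.
send_back_msg(host, "streamlit:setComponentValue", {"value": "ahoy!"})
print(host.messages)
```
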
    diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/__init__.py
    deleted file mode 100644
    index 6a4538da3e66593e4ef8916cd9cbca3c83b8c14e..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/__init__.py
    +++ /dev/null
    @@ -1,12 +0,0 @@
    -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
    -
    -from .launch import *
    -from .train_loop import *
    -
    -__all__ = [k for k in globals().keys() if not k.startswith("_")]
    -
    -
    -# prefer to let hooks and defaults live in separate namespaces (therefore not in __all__)
    -# but still make them available here
    -from .hooks import *
    -from .defaults import *
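
This `__init__.py` freezes `__all__` after the star-imports from `launch` and `train_loop`, so the later `hooks`/`defaults` imports are reachable from the package but excluded from `from package import *`. A sketch of the idiom with illustrative module names:

```python
# pkg/__init__.py -- sketch of the selective re-export idiom above
# (module names are illustrative).
from .core import *

# Freeze the star-export list *before* the namespace-only imports below.
__all__ = [k for k in globals().keys() if not k.startswith("_")]

# These names remain importable from pkg, but `from pkg import *`
# will not pull them in because __all__ was computed first.
from .hooks import *
from .defaults import *
```
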
    diff --git a/spaces/CVPR/GFPGAN-example/gfpgan/archs/__init__.py b/spaces/CVPR/GFPGAN-example/gfpgan/archs/__init__.py
    deleted file mode 100644
    index bec5f17bfa38729b55f57cae8e40c27310db2b7b..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/GFPGAN-example/gfpgan/archs/__init__.py
    +++ /dev/null
    @@ -1,10 +0,0 @@
    -import importlib
    -from basicsr.utils import scandir
    -from os import path as osp
    -
    -# automatically scan and import arch modules for registry
    -# scan all the files that end with '_arch.py' under the archs folder
    -arch_folder = osp.dirname(osp.abspath(__file__))
    -arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]
    -# import all the arch modules
    -_arch_modules = [importlib.import_module(f'gfpgan.archs.{file_name}') for file_name in arch_filenames]
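
The snippet above is the registry auto-discovery pattern: each `*_arch.py` module is imported purely for its side effects, so any registration decorators inside run at import time. A standard-library-only sketch (swapping `basicsr.utils.scandir` for `os.listdir`; meant to live inside the package's `__init__.py`):

```python
# Sketch of registry auto-discovery using only the standard library.
import importlib
import os
from os import path as osp

arch_folder = osp.dirname(osp.abspath(__file__))
# Find every module named *_arch.py next to this __init__.py.
arch_filenames = [
    osp.splitext(name)[0]
    for name in sorted(os.listdir(arch_folder))
    if name.endswith('_arch.py')
]
# Importing runs each module's registration decorators; that side effect is
# the whole point of this loop.
_arch_modules = [
    importlib.import_module(f'gfpgan.archs.{name}') for name in arch_filenames
]
```
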
    diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reverse.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reverse.h
    deleted file mode 100644
    index 1f3e0325e257c301215e62c690837433ae24c30c..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reverse.h
    +++ /dev/null
    @@ -1,23 +0,0 @@
    -/*
    - *  Copyright 2008-2013 NVIDIA Corporation
    - *
    - *  Licensed under the Apache License, Version 2.0 (the "License");
    - *  you may not use this file except in compliance with the License.
    - *  You may obtain a copy of the License at
    - *
    - *      http://www.apache.org/licenses/LICENSE-2.0
    - *
    - *  Unless required by applicable law or agreed to in writing, software
    - *  distributed under the License is distributed on an "AS IS" BASIS,
    - *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    - *  See the License for the specific language governing permissions and
    - *  limitations under the License.
    - */
    -
    -#pragma once
    -
-#include <thrust/detail/config.h>
    -
    -// this system inherits reverse
-#include <thrust/system/cpp/detail/reverse.h>
    -
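
This header is the entire TBB backend for `reverse`: rather than provide a parallel version, the TBB system re-uses the serial CPP implementation by including it. The same backend-inheritance trick in Python is a one-line re-export (module paths are illustrative):

```python
# tbb_backend/reverse.py -- sketch of backend inheritance by re-export:
# the TBB backend "inherits" reverse from the serial CPP backend.
from cpp_backend.reverse import reverse  # illustrative module path

__all__ = ["reverse"]
```
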
    diff --git a/spaces/CVPR/WALT/mmdet/core/anchor/builder.py b/spaces/CVPR/WALT/mmdet/core/anchor/builder.py
    deleted file mode 100644
    index d79b448ebca9f2b21d455046623172c48c5c3ef0..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/WALT/mmdet/core/anchor/builder.py
    +++ /dev/null
    @@ -1,7 +0,0 @@
    -from mmcv.utils import Registry, build_from_cfg
    -
    -ANCHOR_GENERATORS = Registry('Anchor generator')
    -
    -
    -def build_anchor_generator(cfg, default_args=None):
    -    return build_from_cfg(cfg, ANCHOR_GENERATORS, default_args)
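
`build_anchor_generator` can be one line because the `Registry`/`build_from_cfg` pair carries the mechanism: the config's `type` key names a registered class, and the builder instantiates it with the remaining keys. A minimal sketch of that mechanism, simplified from mmcv's actual API:

```python
# Minimal sketch of an mmcv-style Registry + build_from_cfg (simplified).
class Registry:
    def __init__(self, name):
        self.name, self._modules = name, {}

    def register_module(self, cls):
        # Decorator usage: @SOME_REGISTRY.register_module
        self._modules[cls.__name__] = cls
        return cls

    def get(self, key):
        return self._modules[key]

def build_from_cfg(cfg, registry, default_args=None):
    args = dict(cfg)
    cls = registry.get(args.pop('type'))    # 'type' selects the class
    for k, v in (default_args or {}).items():
        args.setdefault(k, v)
    return cls(**args)

ANCHOR_GENERATORS = Registry('Anchor generator')

@ANCHOR_GENERATORS.register_module
class AnchorGenerator:
    def __init__(self, scales, ratios):
        self.scales, self.ratios = scales, ratios

gen = build_from_cfg(
    dict(type='AnchorGenerator', scales=[8], ratios=[0.5, 1.0, 2.0]),
    ANCHOR_GENERATORS)
print(type(gen).__name__, gen.scales, gen.ratios)
```
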
    diff --git a/spaces/CVPR/WALT/mmdet/models/losses/pisa_loss.py b/spaces/CVPR/WALT/mmdet/models/losses/pisa_loss.py
    deleted file mode 100644
    index 4a48adfcd400bb07b719a6fbd5a8af0508820629..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/WALT/mmdet/models/losses/pisa_loss.py
    +++ /dev/null
    @@ -1,183 +0,0 @@
    -import mmcv
    -import torch
    -
    -from mmdet.core import bbox_overlaps
    -
    -
    -@mmcv.jit(derivate=True, coderize=True)
    -def isr_p(cls_score,
    -          bbox_pred,
    -          bbox_targets,
    -          rois,
    -          sampling_results,
    -          loss_cls,
    -          bbox_coder,
    -          k=2,
    -          bias=0,
    -          num_class=80):
    -    """Importance-based Sample Reweighting (ISR_P), positive part.
    -
    -    Args:
    -        cls_score (Tensor): Predicted classification scores.
    -        bbox_pred (Tensor): Predicted bbox deltas.
-        bbox_targets (tuple[Tensor]): A tuple of bbox targets; they are
-            labels, label_weights, bbox_targets, bbox_weights, respectively.
    -        rois (Tensor): Anchors (single_stage) in shape (n, 4) or RoIs
    -            (two_stage) in shape (n, 5).
    -        sampling_results (obj): Sampling results.
    -        loss_cls (func): Classification loss func of the head.
    -        bbox_coder (obj): BBox coder of the head.
    -        k (float): Power of the non-linear mapping.
    -        bias (float): Shift of the non-linear mapping.
    -        num_class (int): Number of classes, default: 80.
    -
    -    Return:
    -        tuple([Tensor]): labels, imp_based_label_weights, bbox_targets,
    -            bbox_target_weights
    -    """
    -
    -    labels, label_weights, bbox_targets, bbox_weights = bbox_targets
    -    pos_label_inds = ((labels >= 0) &
    -                      (labels < num_class)).nonzero().reshape(-1)
    -    pos_labels = labels[pos_label_inds]
    -
    -    # if no positive samples, return the original targets
    -    num_pos = float(pos_label_inds.size(0))
    -    if num_pos == 0:
    -        return labels, label_weights, bbox_targets, bbox_weights
    -
    -    # merge pos_assigned_gt_inds of per image to a single tensor
    -    gts = list()
    -    last_max_gt = 0
    -    for i in range(len(sampling_results)):
    -        gt_i = sampling_results[i].pos_assigned_gt_inds
    -        gts.append(gt_i + last_max_gt)
    -        if len(gt_i) != 0:
    -            last_max_gt = gt_i.max() + 1
    -    gts = torch.cat(gts)
    -    assert len(gts) == num_pos
    -
    -    cls_score = cls_score.detach()
    -    bbox_pred = bbox_pred.detach()
    -
    -    # For single stage detectors, rois here indicate anchors, in shape (N, 4)
    -    # For two stage detectors, rois are in shape (N, 5)
    -    if rois.size(-1) == 5:
    -        pos_rois = rois[pos_label_inds][:, 1:]
    -    else:
    -        pos_rois = rois[pos_label_inds]
    -
    -    if bbox_pred.size(-1) > 4:
    -        bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4)
    -        pos_delta_pred = bbox_pred[pos_label_inds, pos_labels].view(-1, 4)
    -    else:
    -        pos_delta_pred = bbox_pred[pos_label_inds].view(-1, 4)
    -
    -    # compute iou of the predicted bbox and the corresponding GT
    -    pos_delta_target = bbox_targets[pos_label_inds].view(-1, 4)
    -    pos_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_pred)
    -    target_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_target)
    -    ious = bbox_overlaps(pos_bbox_pred, target_bbox_pred, is_aligned=True)
    -
    -    pos_imp_weights = label_weights[pos_label_inds]
    -    # Two steps to compute IoU-HLR. Samples are first sorted by IoU locally,
    -    # then sorted again within the same-rank group
    -    max_l_num = pos_labels.bincount().max()
    -    for label in pos_labels.unique():
    -        l_inds = (pos_labels == label).nonzero().view(-1)
    -        l_gts = gts[l_inds]
    -        for t in l_gts.unique():
    -            t_inds = l_inds[l_gts == t]
    -            t_ious = ious[t_inds]
    -            _, t_iou_rank_idx = t_ious.sort(descending=True)
    -            _, t_iou_rank = t_iou_rank_idx.sort()
    -            ious[t_inds] += max_l_num - t_iou_rank.float()
    -        l_ious = ious[l_inds]
    -        _, l_iou_rank_idx = l_ious.sort(descending=True)
    -        _, l_iou_rank = l_iou_rank_idx.sort()  # IoU-HLR
    -        # linearly map HLR to label weights
    -        pos_imp_weights[l_inds] *= (max_l_num - l_iou_rank.float()) / max_l_num
    -
    -    pos_imp_weights = (bias + pos_imp_weights * (1 - bias)).pow(k)
    -
    -    # normalize to make the new weighted loss value equal to the original loss
    -    pos_loss_cls = loss_cls(
    -        cls_score[pos_label_inds], pos_labels, reduction_override='none')
    -    if pos_loss_cls.dim() > 1:
    -        ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds][:,
    -                                                                        None]
    -        new_pos_loss_cls = pos_loss_cls * pos_imp_weights[:, None]
    -    else:
    -        ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds]
    -        new_pos_loss_cls = pos_loss_cls * pos_imp_weights
    -    pos_loss_cls_ratio = ori_pos_loss_cls.sum() / new_pos_loss_cls.sum()
    -    pos_imp_weights = pos_imp_weights * pos_loss_cls_ratio
    -    label_weights[pos_label_inds] = pos_imp_weights
    -
    -    bbox_targets = labels, label_weights, bbox_targets, bbox_weights
    -    return bbox_targets
    -
    -
    -@mmcv.jit(derivate=True, coderize=True)
    -def carl_loss(cls_score,
    -              labels,
    -              bbox_pred,
    -              bbox_targets,
    -              loss_bbox,
    -              k=1,
    -              bias=0.2,
    -              avg_factor=None,
    -              sigmoid=False,
    -              num_class=80):
    -    """Classification-Aware Regression Loss (CARL).
    -
    -    Args:
    -        cls_score (Tensor): Predicted classification scores.
    -        labels (Tensor): Targets of classification.
    -        bbox_pred (Tensor): Predicted bbox deltas.
    -        bbox_targets (Tensor): Target of bbox regression.
    -        loss_bbox (func): Regression loss func of the head.
    -        k (float): Power of the non-linear mapping.
    -        bias (float): Shift of the non-linear mapping.
    -        avg_factor (int): Average factor used in regression loss.
    -        sigmoid (bool): Activation of the classification score.
    -        num_class (int): Number of classes, default: 80.
    -
    -    Return:
    -        dict: CARL loss dict.
    -    """
    -    pos_label_inds = ((labels >= 0) &
    -                      (labels < num_class)).nonzero().reshape(-1)
    -    if pos_label_inds.numel() == 0:
    -        return dict(loss_carl=cls_score.sum()[None] * 0.)
    -    pos_labels = labels[pos_label_inds]
    -
    -    # multiply pos_cls_score with the corresponding bbox weight
    -    # and remain gradient
    -    if sigmoid:
    -        pos_cls_score = cls_score.sigmoid()[pos_label_inds, pos_labels]
    -    else:
    -        pos_cls_score = cls_score.softmax(-1)[pos_label_inds, pos_labels]
    -    carl_loss_weights = (bias + (1 - bias) * pos_cls_score).pow(k)
    -
    -    # normalize carl_loss_weight to make its sum equal to num positive
    -    num_pos = float(pos_cls_score.size(0))
    -    weight_ratio = num_pos / carl_loss_weights.sum()
    -    carl_loss_weights *= weight_ratio
    -
    -    if avg_factor is None:
    -        avg_factor = bbox_targets.size(0)
    -    # if is class agnostic, bbox pred is in shape (N, 4)
    -    # otherwise, bbox pred is in shape (N, #classes, 4)
    -    if bbox_pred.size(-1) > 4:
    -        bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4)
    -        pos_bbox_preds = bbox_pred[pos_label_inds, pos_labels]
    -    else:
    -        pos_bbox_preds = bbox_pred[pos_label_inds]
    -    ori_loss_reg = loss_bbox(
    -        pos_bbox_preds,
    -        bbox_targets[pos_label_inds],
    -        reduction_override='none') / avg_factor
    -    loss_carl = (ori_loss_reg * carl_loss_weights[:, None]).sum()
    -    return dict(loss_carl=loss_carl[None])
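
Both functions above share one reweighting recipe: map raw importance weights through `(bias + (1 - bias) * w) ** k`, then rescale so the weighted loss total is unchanged, i.e. the weights redistribute loss rather than change its magnitude. A small torch sketch of just that step, with made-up tensors:

```python
# Sketch of the reweight-then-renormalize step shared by isr_p and carl_loss.
import torch

def reweight(weights, per_sample_loss, k=2.0, bias=0.0):
    # Non-linear mapping: amplify high-importance samples, damp low ones.
    new_w = (bias + (1 - bias) * weights).pow(k)
    # Rescale so the weighted loss total matches the original total:
    # the weights change the *distribution* of loss, not its scale.
    ratio = (per_sample_loss * weights).sum() / (per_sample_loss * new_w).sum()
    return new_w * ratio

w = torch.tensor([0.2, 0.5, 1.0])       # illustrative importance weights
loss = torch.tensor([1.0, 1.0, 1.0])    # illustrative per-sample losses
w2 = reweight(w, loss)
print(w2, (loss * w).sum(), (loss * w2).sum())  # the two totals match
```
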
    diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/coco.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/coco.py
    deleted file mode 100644
    index ed4f7ccb20efa3b54c719783e279c381ca5d8587..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/coco.py
    +++ /dev/null
    @@ -1,539 +0,0 @@
    -# Copyright (c) Facebook, Inc. and its affiliates.
    -import contextlib
    -import datetime
    -import io
    -import json
    -import logging
    -import numpy as np
    -import os
    -import shutil
    -import pycocotools.mask as mask_util
    -from fvcore.common.timer import Timer
    -from iopath.common.file_io import file_lock
    -from PIL import Image
    -
    -from detectron2.structures import Boxes, BoxMode, PolygonMasks, RotatedBoxes
    -from detectron2.utils.file_io import PathManager
    -
    -from .. import DatasetCatalog, MetadataCatalog
    -
    -"""
    -This file contains functions to parse COCO-format annotations into dicts in "Detectron2 format".
    -"""
    -
    -
    -logger = logging.getLogger(__name__)
    -
    -__all__ = ["load_coco_json", "load_sem_seg", "convert_to_coco_json", "register_coco_instances"]
    -
    -
    -def load_coco_json(json_file, image_root, dataset_name=None, extra_annotation_keys=None):
    -    """
    -    Load a json file with COCO's instances annotation format.
    -    Currently supports instance detection, instance segmentation,
    -    and person keypoints annotations.
    -
    -    Args:
    -        json_file (str): full path to the json file in COCO instances annotation format.
    -        image_root (str or path-like): the directory where the images in this json file exists.
    -        dataset_name (str or None): the name of the dataset (e.g., coco_2017_train).
    -            When provided, this function will also do the following:
    -
    -            * Put "thing_classes" into the metadata associated with this dataset.
    -            * Map the category ids into a contiguous range (needed by standard dataset format),
    -              and add "thing_dataset_id_to_contiguous_id" to the metadata associated
    -              with this dataset.
    -
    -            This option should usually be provided, unless users need to load
    -            the original json content and apply more processing manually.
    -        extra_annotation_keys (list[str]): list of per-annotation keys that should also be
    -            loaded into the dataset dict (besides "iscrowd", "bbox", "keypoints",
    -            "category_id", "segmentation"). The values for these keys will be returned as-is.
    -            For example, the densepose annotations are loaded in this way.
    -
    -    Returns:
    -        list[dict]: a list of dicts in Detectron2 standard dataset dicts format (See
-        `Using Custom Datasets <https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html>`_ ) when `dataset_name` is not None.
    -        If `dataset_name` is None, the returned `category_ids` may be
    -        incontiguous and may not conform to the Detectron2 standard format.
    -
    -    Notes:
    -        1. This function does not read the image files.
    -           The results do not have the "image" field.
    -    """
    -    from pycocotools.coco import COCO
    -
    -    timer = Timer()
    -    json_file = PathManager.get_local_path(json_file)
    -    with contextlib.redirect_stdout(io.StringIO()):
    -        coco_api = COCO(json_file)
    -    if timer.seconds() > 1:
    -        logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds()))
    -
    -    id_map = None
    -    if dataset_name is not None:
    -        meta = MetadataCatalog.get(dataset_name)
    -        cat_ids = sorted(coco_api.getCatIds())
    -        cats = coco_api.loadCats(cat_ids)
    -        # The categories in a custom json file may not be sorted.
    -        thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])]
    -        meta.thing_classes = thing_classes
    -
    -        # In COCO, certain category ids are artificially removed,
    -        # and by convention they are always ignored.
    -        # We deal with COCO's id issue and translate
    -        # the category ids to contiguous ids in [0, 80).
    -
    -        # It works by looking at the "categories" field in the json, therefore
    -        # if users' own json also have incontiguous ids, we'll
    -        # apply this mapping as well but print a warning.
    -        if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)):
    -            if "coco" not in dataset_name:
    -                logger.warning(
    -                    """
    -Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
    -"""
    -                )
    -        id_map = {v: i for i, v in enumerate(cat_ids)}
    -        meta.thing_dataset_id_to_contiguous_id = id_map
    -
    -    # sort indices for reproducible results
    -    img_ids = sorted(coco_api.imgs.keys())
    -    # imgs is a list of dicts, each looks something like:
    -    # {'license': 4,
    -    #  'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg',
    -    #  'file_name': 'COCO_val2014_000000001268.jpg',
    -    #  'height': 427,
    -    #  'width': 640,
    -    #  'date_captured': '2013-11-17 05:57:24',
    -    #  'id': 1268}
    -    imgs = coco_api.loadImgs(img_ids)
    -    # anns is a list[list[dict]], where each dict is an annotation
    -    # record for an object. The inner list enumerates the objects in an image
    -    # and the outer list enumerates over images. Example of anns[0]:
    -    # [{'segmentation': [[192.81,
    -    #     247.09,
    -    #     ...
    -    #     219.03,
    -    #     249.06]],
    -    #   'area': 1035.749,
    -    #   'iscrowd': 0,
    -    #   'image_id': 1268,
    -    #   'bbox': [192.81, 224.8, 74.73, 33.43],
    -    #   'category_id': 16,
    -    #   'id': 42986},
    -    #  ...]
    -    anns = [coco_api.imgToAnns[img_id] for img_id in img_ids]
    -    total_num_valid_anns = sum([len(x) for x in anns])
    -    total_num_anns = len(coco_api.anns)
    -    if total_num_valid_anns < total_num_anns:
    -        logger.warning(
    -            f"{json_file} contains {total_num_anns} annotations, but only "
    -            f"{total_num_valid_anns} of them match to images in the file."
    -        )
    -
    -    if "minival" not in json_file:
    -        # The popular valminusminival & minival annotations for COCO2014 contain this bug.
    -        # However the ratio of buggy annotations there is tiny and does not affect accuracy.
    -        # Therefore we explicitly white-list them.
    -        ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
    -        assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format(
    -            json_file
    -        )
    -
    -    imgs_anns = list(zip(imgs, anns))
    -    logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file))
    -
    -    dataset_dicts = []
    -
    -    ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"] + (extra_annotation_keys or [])
    -
    -    num_instances_without_valid_segmentation = 0
    -
    -    for (img_dict, anno_dict_list) in imgs_anns:
    -        record = {}
    -        record["file_name"] = os.path.join(image_root, img_dict["file_name"])
    -        record["height"] = img_dict["height"]
    -        record["width"] = img_dict["width"]
    -        image_id = record["image_id"] = img_dict["id"]
    -
    -        objs = []
    -        for anno in anno_dict_list:
    -            # Check that the image_id in this annotation is the same as
    -            # the image_id we're looking at.
    -            # This fails only when the data parsing logic or the annotation file is buggy.
    -
    -            # The original COCO valminusminival2014 & minival2014 annotation files
    -            # actually contains bugs that, together with certain ways of using COCO API,
    -            # can trigger this assertion.
    -            assert anno["image_id"] == image_id
    -
    -            assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.'
    -
    -            obj = {key: anno[key] for key in ann_keys if key in anno}
    -            if "bbox" in obj and len(obj["bbox"]) == 0:
    -                raise ValueError(
    -                    f"One annotation of image {image_id} contains empty 'bbox' value! "
    -                    "This json does not have valid COCO format."
    -                )
    -
    -            segm = anno.get("segmentation", None)
    -            if segm:  # either list[list[float]] or dict(RLE)
    -                if isinstance(segm, dict):
    -                    if isinstance(segm["counts"], list):
    -                        # convert to compressed RLE
    -                        segm = mask_util.frPyObjects(segm, *segm["size"])
    -                else:
    -                    # filter out invalid polygons (< 3 points)
    -                    segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
    -                    if len(segm) == 0:
    -                        num_instances_without_valid_segmentation += 1
    -                        continue  # ignore this instance
    -                obj["segmentation"] = segm
    -
    -            keypts = anno.get("keypoints", None)
    -            if keypts:  # list[int]
    -                for idx, v in enumerate(keypts):
    -                    if idx % 3 != 2:
    -                        # COCO's segmentation coordinates are floating points in [0, H or W],
    -                        # but keypoint coordinates are integers in [0, H-1 or W-1]
    -                        # Therefore we assume the coordinates are "pixel indices" and
    -                        # add 0.5 to convert to floating point coordinates.
    -                        keypts[idx] = v + 0.5
    -                obj["keypoints"] = keypts
    -
    -            obj["bbox_mode"] = BoxMode.XYWH_ABS
    -            if id_map:
    -                annotation_category_id = obj["category_id"]
    -                try:
    -                    obj["category_id"] = id_map[annotation_category_id]
    -                except KeyError as e:
    -                    raise KeyError(
    -                        f"Encountered category_id={annotation_category_id} "
    -                        "but this id does not exist in 'categories' of the json file."
    -                    ) from e
    -            objs.append(obj)
    -        record["annotations"] = objs
    -        dataset_dicts.append(record)
    -
    -    if num_instances_without_valid_segmentation > 0:
    -        logger.warning(
    -            "Filtered out {} instances without valid segmentation. ".format(
    -                num_instances_without_valid_segmentation
    -            )
    -            + "There might be issues in your dataset generation process.  Please "
    -            "check https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html carefully"
    -        )
    -    return dataset_dicts
    -
    -
    -def load_sem_seg(gt_root, image_root, gt_ext="png", image_ext="jpg"):
    -    """
    -    Load semantic segmentation datasets. All files under "gt_root" with "gt_ext" extension are
    -    treated as ground truth annotations and all files under "image_root" with "image_ext" extension
    -    as input images. Ground truth and input images are matched using file paths relative to
    -    "gt_root" and "image_root" respectively without taking into account file extensions.
    -    This works for COCO as well as some other datasets.
    -
    -    Args:
    -        gt_root (str): full path to ground truth semantic segmentation files. Semantic segmentation
    -            annotations are stored as images with integer values in pixels that represent
    -            corresponding semantic labels.
    -        image_root (str): the directory where the input images are.
    -        gt_ext (str): file extension for ground truth annotations.
    -        image_ext (str): file extension for input images.
    -
    -    Returns:
    -        list[dict]:
    -            a list of dicts in detectron2 standard format without instance-level
    -            annotation.
    -
    -    Notes:
    -        1. This function does not read the image and ground truth files.
    -           The results do not have the "image" and "sem_seg" fields.
    -    """
    -
    -    # We match input images with ground truth based on their relative filepaths (without file
    -    # extensions) starting from 'image_root' and 'gt_root' respectively.
    -    def file2id(folder_path, file_path):
    -        # extract relative path starting from `folder_path`
    -        image_id = os.path.normpath(os.path.relpath(file_path, start=folder_path))
    -        # remove file extension
    -        image_id = os.path.splitext(image_id)[0]
    -        return image_id
    -
    -    input_files = sorted(
    -        (os.path.join(image_root, f) for f in PathManager.ls(image_root) if f.endswith(image_ext)),
    -        key=lambda file_path: file2id(image_root, file_path),
    -    )
    -    gt_files = sorted(
    -        (os.path.join(gt_root, f) for f in PathManager.ls(gt_root) if f.endswith(gt_ext)),
    -        key=lambda file_path: file2id(gt_root, file_path),
    -    )
    -
    -    assert len(gt_files) > 0, "No annotations found in {}.".format(gt_root)
    -
    -    # Use the intersection, so that val2017_100 annotations can run smoothly with val2017 images
    -    if len(input_files) != len(gt_files):
-        logger.warning(
    -            "Directory {} and {} has {} and {} files, respectively.".format(
    -                image_root, gt_root, len(input_files), len(gt_files)
    -            )
    -        )
    -        input_basenames = [os.path.basename(f)[: -len(image_ext)] for f in input_files]
    -        gt_basenames = [os.path.basename(f)[: -len(gt_ext)] for f in gt_files]
    -        intersect = list(set(input_basenames) & set(gt_basenames))
    -        # sort, otherwise each worker may obtain a list[dict] in different order
    -        intersect = sorted(intersect)
    -        logger.warn("Will use their intersection of {} files.".format(len(intersect)))
    -        input_files = [os.path.join(image_root, f + image_ext) for f in intersect]
    -        gt_files = [os.path.join(gt_root, f + gt_ext) for f in intersect]
    -
    -    logger.info(
    -        "Loaded {} images with semantic segmentation from {}".format(len(input_files), image_root)
    -    )
    -
    -    dataset_dicts = []
    -    for (img_path, gt_path) in zip(input_files, gt_files):
    -        record = {}
    -        record["file_name"] = img_path
    -        record["sem_seg_file_name"] = gt_path
    -        dataset_dicts.append(record)
    -
    -    return dataset_dicts
    -
    -
    -def convert_to_coco_dict(dataset_name):
    -    """
    -    Convert an instance detection/segmentation or keypoint detection dataset
    -    in detectron2's standard format into COCO json format.
    -
    -    Generic dataset description can be found here:
    -    https://detectron2.readthedocs.io/tutorials/datasets.html#register-a-dataset
    -
    -    COCO data format description can be found here:
    -    http://cocodataset.org/#format-data
    -
    -    Args:
    -        dataset_name (str):
    -            name of the source dataset
-            Must be registered in DatasetCatalog and in detectron2's standard format.
    -            Must have corresponding metadata "thing_classes"
    -    Returns:
    -        coco_dict: serializable dict in COCO json format
    -    """
    -
    -    dataset_dicts = DatasetCatalog.get(dataset_name)
    -    metadata = MetadataCatalog.get(dataset_name)
    -
    -    # unmap the category mapping ids for COCO
    -    if hasattr(metadata, "thing_dataset_id_to_contiguous_id"):
    -        reverse_id_mapping = {v: k for k, v in metadata.thing_dataset_id_to_contiguous_id.items()}
    -        reverse_id_mapper = lambda contiguous_id: reverse_id_mapping[contiguous_id]  # noqa
    -    else:
    -        reverse_id_mapper = lambda contiguous_id: contiguous_id  # noqa
    -
    -    categories = [
    -        {"id": reverse_id_mapper(id), "name": name}
    -        for id, name in enumerate(metadata.thing_classes)
    -    ]
    -
    -    logger.info("Converting dataset dicts into COCO format")
    -    coco_images = []
    -    coco_annotations = []
    -
    -    for image_id, image_dict in enumerate(dataset_dicts):
    -        coco_image = {
    -            "id": image_dict.get("image_id", image_id),
    -            "width": int(image_dict["width"]),
    -            "height": int(image_dict["height"]),
    -            "file_name": str(image_dict["file_name"]),
    -        }
    -        coco_images.append(coco_image)
    -
    -        anns_per_image = image_dict.get("annotations", [])
    -        for annotation in anns_per_image:
    -            # create a new dict with only COCO fields
    -            coco_annotation = {}
    -
    -            # COCO requirement: XYWH box format for axis-align and XYWHA for rotated
    -            bbox = annotation["bbox"]
    -            if isinstance(bbox, np.ndarray):
    -                if bbox.ndim != 1:
    -                    raise ValueError(f"bbox has to be 1-dimensional. Got shape={bbox.shape}.")
    -                bbox = bbox.tolist()
    -            if len(bbox) not in [4, 5]:
    -                raise ValueError(f"bbox has to has length 4 or 5. Got {bbox}.")
    -            from_bbox_mode = annotation["bbox_mode"]
    -            to_bbox_mode = BoxMode.XYWH_ABS if len(bbox) == 4 else BoxMode.XYWHA_ABS
    -            bbox = BoxMode.convert(bbox, from_bbox_mode, to_bbox_mode)
    -
    -            # COCO requirement: instance area
    -            if "segmentation" in annotation:
    -                # Computing areas for instances by counting the pixels
    -                segmentation = annotation["segmentation"]
    -                # TODO: check segmentation type: RLE, BinaryMask or Polygon
    -                if isinstance(segmentation, list):
    -                    polygons = PolygonMasks([segmentation])
    -                    area = polygons.area()[0].item()
    -                elif isinstance(segmentation, dict):  # RLE
    -                    area = mask_util.area(segmentation).item()
    -                else:
    -                    raise TypeError(f"Unknown segmentation type {type(segmentation)}!")
    -            else:
    -                # Computing areas using bounding boxes
    -                if to_bbox_mode == BoxMode.XYWH_ABS:
    -                    bbox_xy = BoxMode.convert(bbox, to_bbox_mode, BoxMode.XYXY_ABS)
    -                    area = Boxes([bbox_xy]).area()[0].item()
    -                else:
    -                    area = RotatedBoxes([bbox]).area()[0].item()
    -
    -            if "keypoints" in annotation:
    -                keypoints = annotation["keypoints"]  # list[int]
    -                for idx, v in enumerate(keypoints):
    -                    if idx % 3 != 2:
    -                        # COCO's segmentation coordinates are floating points in [0, H or W],
    -                        # but keypoint coordinates are integers in [0, H-1 or W-1]
-                        # For COCO format consistency we subtract 0.5
    -                        # https://github.com/facebookresearch/detectron2/pull/175#issuecomment-551202163
    -                        keypoints[idx] = v - 0.5
    -                if "num_keypoints" in annotation:
    -                    num_keypoints = annotation["num_keypoints"]
    -                else:
    -                    num_keypoints = sum(kp > 0 for kp in keypoints[2::3])
    -
    -            # COCO requirement:
    -            #   linking annotations to images
    -            #   "id" field must start with 1
    -            coco_annotation["id"] = len(coco_annotations) + 1
    -            coco_annotation["image_id"] = coco_image["id"]
    -            coco_annotation["bbox"] = [round(float(x), 3) for x in bbox]
    -            coco_annotation["area"] = float(area)
    -            coco_annotation["iscrowd"] = int(annotation.get("iscrowd", 0))
    -            coco_annotation["category_id"] = int(reverse_id_mapper(annotation["category_id"]))
    -
    -            # Add optional fields
    -            if "keypoints" in annotation:
    -                coco_annotation["keypoints"] = keypoints
    -                coco_annotation["num_keypoints"] = num_keypoints
    -
    -            if "segmentation" in annotation:
    -                seg = coco_annotation["segmentation"] = annotation["segmentation"]
    -                if isinstance(seg, dict):  # RLE
    -                    counts = seg["counts"]
    -                    if not isinstance(counts, str):
    -                        # make it json-serializable
    -                        seg["counts"] = counts.decode("ascii")
    -
    -            coco_annotations.append(coco_annotation)
    -
    -    logger.info(
    -        "Conversion finished, "
    -        f"#images: {len(coco_images)}, #annotations: {len(coco_annotations)}"
    -    )
    -
    -    info = {
    -        "date_created": str(datetime.datetime.now()),
    -        "description": "Automatically generated COCO json file for Detectron2.",
    -    }
    -    coco_dict = {"info": info, "images": coco_images, "categories": categories, "licenses": None}
    -    if len(coco_annotations) > 0:
    -        coco_dict["annotations"] = coco_annotations
    -    return coco_dict
    -
    -
    -def convert_to_coco_json(dataset_name, output_file, allow_cached=True):
    -    """
    -    Converts dataset into COCO format and saves it to a json file.
    -    dataset_name must be registered in DatasetCatalog and in detectron2's standard format.
    -
    -    Args:
    -        dataset_name:
    -            reference from the config file to the catalogs
    -            must be registered in DatasetCatalog and in detectron2's standard format
    -        output_file: path of json file that will be saved to
    -        allow_cached: if json file is already present then skip conversion
    -    """
    -
    -    # TODO: The dataset or the conversion script *may* change,
    -    # a checksum would be useful for validating the cached data
    -
    -    PathManager.mkdirs(os.path.dirname(output_file))
    -    with file_lock(output_file):
    -        if PathManager.exists(output_file) and allow_cached:
    -            logger.warning(
    -                f"Using previously cached COCO format annotations at '{output_file}'. "
    -                "You need to clear the cache file if your dataset has been modified."
    -            )
    -        else:
    -            logger.info(f"Converting annotations of dataset '{dataset_name}' to COCO format ...)")
    -            coco_dict = convert_to_coco_dict(dataset_name)
    -
    -            logger.info(f"Caching COCO format annotations at '{output_file}' ...")
    -            tmp_file = output_file + ".tmp"
    -            with PathManager.open(tmp_file, "w") as f:
    -                json.dump(coco_dict, f)
    -            shutil.move(tmp_file, output_file)
    -
    -
    -def register_coco_instances(name, metadata, json_file, image_root):
    -    """
    -    Register a dataset in COCO's json annotation format for
    -    instance detection, instance segmentation and keypoint detection.
    -    (i.e., Type 1 and 2 in http://cocodataset.org/#format-data.
    -    `instances*.json` and `person_keypoints*.json` in the dataset).
    -
    -    This is an example of how to register a new dataset.
    -    You can do something similar to this function, to register new datasets.
    -
    -    Args:
    -        name (str): the name that identifies a dataset, e.g. "coco_2014_train".
    -        metadata (dict): extra metadata associated with this dataset.  You can
    -            leave it as an empty dict.
    -        json_file (str): path to the json instance annotation file.
    -        image_root (str or path-like): directory which contains all the images.
    -    """
    -    assert isinstance(name, str), name
    -    assert isinstance(json_file, (str, os.PathLike)), json_file
    -    assert isinstance(image_root, (str, os.PathLike)), image_root
    -    # 1. register a function which returns dicts
    -    DatasetCatalog.register(name, lambda: load_coco_json(json_file, image_root, name))
    -
    -    # 2. Optionally, add metadata about this dataset,
    -    # since they might be useful in evaluation, visualization or logging
    -    MetadataCatalog.get(name).set(
    -        json_file=json_file, image_root=image_root, evaluator_type="coco", **metadata
    -    )
    -
    -
    -if __name__ == "__main__":
    -    """
    -    Test the COCO json dataset loader.
    -
    -    Usage:
    -        python -m detectron2.data.datasets.coco \
    -            path/to/json path/to/image_root dataset_name
    -
    -        "dataset_name" can be "coco_2014_minival_100", or other
    -        pre-registered ones
    -    """
    -    from detectron2.utils.logger import setup_logger
    -    from detectron2.utils.visualizer import Visualizer
    -    import detectron2.data.datasets  # noqa # add pre-defined metadata
    -    import sys
    -
    -    logger = setup_logger(name=__name__)
    -    assert sys.argv[3] in DatasetCatalog.list()
    -    meta = MetadataCatalog.get(sys.argv[3])
    -
    -    dicts = load_coco_json(sys.argv[1], sys.argv[2], sys.argv[3])
    -    logger.info("Done loading {} samples.".format(len(dicts)))
    -
    -    dirname = "coco-data-vis"
    -    os.makedirs(dirname, exist_ok=True)
    -    for d in dicts:
    -        img = np.array(Image.open(d["file_name"]))
    -        visualizer = Visualizer(img, metadata=meta)
    -        vis = visualizer.draw_dataset_dict(d)
    -        fpath = os.path.join(dirname, os.path.basename(d["file_name"]))
    -        vis.save(fpath)
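
For day-to-day use, `register_coco_instances` is the entry point; registration only records a loader function, so nothing is parsed until the catalog is queried. A usage sketch with placeholder paths and dataset name:

```python
# Usage sketch for register_coco_instances; paths and names are placeholders.
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "my_dataset_train",                   # name used in cfg.DATASETS.TRAIN
    {},                                   # extra metadata (may be empty)
    "datasets/my_data/annotations.json",  # COCO-format json
    "datasets/my_data/images",            # image root directory
)

# Nothing is loaded until the catalog is queried:
dicts = DatasetCatalog.get("my_dataset_train")
print(len(dicts), MetadataCatalog.get("my_dataset_train").thing_classes)
```
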
    diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/retinanet.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/retinanet.py
    deleted file mode 100644
    index 81992a3bc6d7f17ab64eb88a157901e69d3f0e16..0000000000000000000000000000000000000000
    --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/retinanet.py
    +++ /dev/null
    @@ -1,609 +0,0 @@
    -# Copyright (c) Facebook, Inc. and its affiliates.
    -import logging
    -import math
    -import numpy as np
    -from typing import Dict, List, Tuple
    -import torch
    -from fvcore.nn import sigmoid_focal_loss_jit
    -from torch import Tensor, nn
    -from torch.nn import functional as F
    -
    -from detectron2.config import configurable
    -from detectron2.data.detection_utils import convert_image_to_rgb
    -from detectron2.layers import ShapeSpec, batched_nms, cat, get_norm, nonzero_tuple
    -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
    -from detectron2.utils.events import get_event_storage
    -
    -from ..anchor_generator import build_anchor_generator
    -from ..backbone import Backbone, build_backbone
    -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss
    -from ..matcher import Matcher
    -from ..postprocessing import detector_postprocess
    -from .build import META_ARCH_REGISTRY
    -
    -__all__ = ["RetinaNet"]
    -
    -
    -logger = logging.getLogger(__name__)
    -
    -
    -def permute_to_N_HWA_K(tensor, K: int):
    -    """
    -    Transpose/reshape a tensor from (N, (Ai x K), H, W) to (N, (HxWxAi), K)
    -    """
    -    assert tensor.dim() == 4, tensor.shape
    -    N, _, H, W = tensor.shape
    -    tensor = tensor.view(N, -1, K, H, W)
    -    tensor = tensor.permute(0, 3, 4, 1, 2)
    -    tensor = tensor.reshape(N, -1, K)  # Size=(N,HWA,K)
    -    return tensor
    -
    -
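
`permute_to_N_HWA_K` undoes the convolutional layout: the head emits `(N, A*K, H, W)` feature maps, while the loss wants one row per anchor. A quick shape check with arbitrary sizes:

```python
# Shape sanity-check for permute_to_N_HWA_K; sizes are arbitrary.
import torch

N, A, K, H, W = 2, 9, 80, 13, 17
x = torch.randn(N, A * K, H, W)     # raw head output for one feature level
t = x.view(N, -1, K, H, W)          # (N, A, K, H, W)
t = t.permute(0, 3, 4, 1, 2)        # (N, H, W, A, K)
t = t.reshape(N, -1, K)             # (N, H*W*A, K): one row per anchor
assert t.shape == (N, H * W * A, K)
```
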
    -@META_ARCH_REGISTRY.register()
    -class RetinaNet(nn.Module):
    -    """
    -    Implement RetinaNet in :paper:`RetinaNet`.
    -    """
    -
    -    @configurable
    -    def __init__(
    -        self,
    -        *,
    -        backbone: Backbone,
    -        head: nn.Module,
    -        head_in_features,
    -        anchor_generator,
    -        box2box_transform,
    -        anchor_matcher,
    -        num_classes,
    -        focal_loss_alpha=0.25,
    -        focal_loss_gamma=2.0,
    -        smooth_l1_beta=0.0,
    -        box_reg_loss_type="smooth_l1",
    -        test_score_thresh=0.05,
    -        test_topk_candidates=1000,
    -        test_nms_thresh=0.5,
    -        max_detections_per_image=100,
    -        pixel_mean,
    -        pixel_std,
    -        vis_period=0,
    -        input_format="BGR",
    -    ):
    -        """
    -        NOTE: this interface is experimental.
    -
    -        Args:
    -            backbone: a backbone module, must follow detectron2's backbone interface
    -            head (nn.Module): a module that predicts logits and regression deltas
    -                for each level from a list of per-level features
    -            head_in_features (Tuple[str]): Names of the input feature maps to be used in head
    -            anchor_generator (nn.Module): a module that creates anchors from a
    -                list of features. Usually an instance of :class:`AnchorGenerator`
    -            box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to
    -                instance boxes
    -            anchor_matcher (Matcher): label the anchors by matching them with ground truth.
    -            num_classes (int): number of classes. Used to label background proposals.
    -
    -            # Loss parameters:
    -            focal_loss_alpha (float): focal_loss_alpha
    -            focal_loss_gamma (float): focal_loss_gamma
    -            smooth_l1_beta (float): smooth_l1_beta
    -            box_reg_loss_type (str): Options are "smooth_l1", "giou"
    -
    -            # Inference parameters:
    -            test_score_thresh (float): Inference cls score threshold, only anchors with
    -                score > INFERENCE_TH are considered for inference (to improve speed)
    -            test_topk_candidates (int): Select topk candidates before NMS
    -            test_nms_thresh (float): Overlap threshold used for non-maximum suppression
    -                (suppress boxes with IoU >= this threshold)
    -            max_detections_per_image (int):
    -                Maximum number of detections to return per image during inference
    -                (100 is based on the limit established for the COCO dataset).
    -
    -            # Input parameters
    -            pixel_mean (Tuple[float]):
    -                Values to be used for image normalization (BGR order).
    -                To train on images of different number of channels, set different mean & std.
    -                Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
    -            pixel_std (Tuple[float]):
    -                When using pre-trained models in Detectron1 or any MSRA models,
    -                std has been absorbed into its conv1 weights, so the std needs to be set 1.
    -                Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std)
    -            vis_period (int):
    -                The period (in terms of steps) for minibatch visualization at train time.
    -                Set to 0 to disable.
    -            input_format (str): Whether the model needs RGB, YUV, HSV etc.
    -        """
    -        super().__init__()
    -
    -        self.backbone = backbone
    -        self.head = head
    -        self.head_in_features = head_in_features
    -        if len(self.backbone.output_shape()) != len(self.head_in_features):
    -            logger.warning("[RetinaNet] Backbone produces unused features.")
    -
    -        # Anchors
    -        self.anchor_generator = anchor_generator
    -        self.box2box_transform = box2box_transform
    -        self.anchor_matcher = anchor_matcher
    -
    -        self.num_classes = num_classes
    -        # Loss parameters:
    -        self.focal_loss_alpha = focal_loss_alpha
    -        self.focal_loss_gamma = focal_loss_gamma
    -        self.smooth_l1_beta = smooth_l1_beta
    -        self.box_reg_loss_type = box_reg_loss_type
    -        # Inference parameters:
    -        self.test_score_thresh = test_score_thresh
    -        self.test_topk_candidates = test_topk_candidates
    -        self.test_nms_thresh = test_nms_thresh
    -        self.max_detections_per_image = max_detections_per_image
    -        # Vis parameters
    -        self.vis_period = vis_period
    -        self.input_format = input_format
    -
    -        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
    -        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
    -
    -        """
    -        In Detectron1, loss is normalized by number of foreground samples in the batch.
    -        When batch size is 1 per GPU, #foreground has a large variance and
-        using it leads to lower performance. Here we maintain an EMA of #foreground to
    -        stabilize the normalizer.
    -        """
    -        self.loss_normalizer = 100  # initialize with any reasonable #fg that's not too small
    -        self.loss_normalizer_momentum = 0.9
    -
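
The docstring above motivates the EMA: with one image per GPU the per-batch foreground count is too noisy to normalize by directly, so the model tracks `loss_normalizer = m * loss_normalizer + (1 - m) * max(num_pos, 1)` with momentum `m = 0.9`. As a tiny sketch with illustrative counts:

```python
# EMA update of the loss normalizer, as described above (momentum = 0.9).
loss_normalizer = 100.0   # any reasonable initial #foreground
momentum = 0.9
for num_pos_anchors in [37, 512, 4, 250]:   # illustrative per-batch counts
    loss_normalizer = momentum * loss_normalizer \
        + (1 - momentum) * max(num_pos_anchors, 1)
    print(round(loss_normalizer, 1))        # smooth despite noisy inputs
```
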
    -    @classmethod
    -    def from_config(cls, cfg):
    -        backbone = build_backbone(cfg)
    -        backbone_shape = backbone.output_shape()
    -        feature_shapes = [backbone_shape[f] for f in cfg.MODEL.RETINANET.IN_FEATURES]
    -        head = RetinaNetHead(cfg, feature_shapes)
    -        anchor_generator = build_anchor_generator(cfg, feature_shapes)
    -        return {
    -            "backbone": backbone,
    -            "head": head,
    -            "anchor_generator": anchor_generator,
    -            "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RETINANET.BBOX_REG_WEIGHTS),
    -            "anchor_matcher": Matcher(
    -                cfg.MODEL.RETINANET.IOU_THRESHOLDS,
    -                cfg.MODEL.RETINANET.IOU_LABELS,
    -                allow_low_quality_matches=True,
    -            ),
    -            "pixel_mean": cfg.MODEL.PIXEL_MEAN,
    -            "pixel_std": cfg.MODEL.PIXEL_STD,
    -            "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES,
    -            "head_in_features": cfg.MODEL.RETINANET.IN_FEATURES,
    -            # Loss parameters:
    -            "focal_loss_alpha": cfg.MODEL.RETINANET.FOCAL_LOSS_ALPHA,
    -            "focal_loss_gamma": cfg.MODEL.RETINANET.FOCAL_LOSS_GAMMA,
    -            "smooth_l1_beta": cfg.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA,
    -            "box_reg_loss_type": cfg.MODEL.RETINANET.BBOX_REG_LOSS_TYPE,
    -            # Inference parameters:
    -            "test_score_thresh": cfg.MODEL.RETINANET.SCORE_THRESH_TEST,
    -            "test_topk_candidates": cfg.MODEL.RETINANET.TOPK_CANDIDATES_TEST,
    -            "test_nms_thresh": cfg.MODEL.RETINANET.NMS_THRESH_TEST,
    -            "max_detections_per_image": cfg.TEST.DETECTIONS_PER_IMAGE,
    -            # Vis parameters
    -            "vis_period": cfg.VIS_PERIOD,
    -            "input_format": cfg.INPUT.FORMAT,
    -        }
    -
    -    @property
    -    def device(self):
    -        return self.pixel_mean.device
    -
    -    def visualize_training(self, batched_inputs, results):
    -        """
    -        A function used to visualize ground truth images and final network predictions.
    -        It shows ground truth bounding boxes on the original image and up to 20
    -        predicted object bounding boxes on the original image.
    -
    -        Args:
    -            batched_inputs (list): a list that contains input to the model.
    -            results (List[Instances]): a list of #images elements.
    -        """
    -        from detectron2.utils.visualizer import Visualizer
    -
    -        assert len(batched_inputs) == len(
    -            results
    -        ), "Cannot visualize inputs and results of different sizes"
    -        storage = get_event_storage()
    -        max_boxes = 20
    -
    -        image_index = 0  # only visualize a single image
    -        img = batched_inputs[image_index]["image"]
    -        img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
    -        v_gt = Visualizer(img, None)
    -        v_gt = v_gt.overlay_instances(boxes=batched_inputs[image_index]["instances"].gt_boxes)
    -        anno_img = v_gt.get_image()
    -        processed_results = detector_postprocess(results[image_index], img.shape[0], img.shape[1])
    -        predicted_boxes = processed_results.pred_boxes.tensor.detach().cpu().numpy()
    -
    -        v_pred = Visualizer(img, None)
    -        v_pred = v_pred.overlay_instances(boxes=predicted_boxes[0:max_boxes])
    -        prop_img = v_pred.get_image()
    -        vis_img = np.vstack((anno_img, prop_img))
    -        vis_img = vis_img.transpose(2, 0, 1)
    -        vis_name = f"Top: GT bounding boxes; Bottom: {max_boxes} Highest Scoring Results"
    -        storage.put_image(vis_name, vis_img)
    -
    -    def forward(self, batched_inputs: List[Dict[str, Tensor]]):
    -        """
    -        Args:
    -            batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
    -                Each item in the list contains the inputs for one image.
    -                For now, each item in the list is a dict that contains:
    -
    -                * image: Tensor, image in (C, H, W) format.
    -                * instances: Instances
    -
    -                Other information that's included in the original dicts, such as:
    -
    -                * "height", "width" (int): the output resolution of the model, used in inference.
    -                  See :meth:`postprocess` for details.
    -        Returns:
    -            In training, dict[str, Tensor]: mapping from a named loss to a tensor storing the
    -            loss. Used during training only. In inference, the standard output format, described
    -            in :doc:`/tutorials/models`.
    -        """
    -        images = self.preprocess_image(batched_inputs)
    -        features = self.backbone(images.tensor)
    -        features = [features[f] for f in self.head_in_features]
    -
    -        anchors = self.anchor_generator(features)
    -        pred_logits, pred_anchor_deltas = self.head(features)
    -        # Transpose the Hi*Wi*A dimension to the middle:
    -        pred_logits = [permute_to_N_HWA_K(x, self.num_classes) for x in pred_logits]
    -        pred_anchor_deltas = [permute_to_N_HWA_K(x, 4) for x in pred_anchor_deltas]
    -
    -        if self.training:
    -            assert not torch.jit.is_scripting(), "Not supported"
    -            assert "instances" in batched_inputs[0], "Instance annotations are missing in training!"
    -            gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
    -
    -            gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances)
    -            losses = self.losses(anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes)
    -
    -            if self.vis_period > 0:
    -                storage = get_event_storage()
    -                if storage.iter % self.vis_period == 0:
    -                    results = self.inference(
    -                        anchors, pred_logits, pred_anchor_deltas, images.image_sizes
    -                    )
    -                    self.visualize_training(batched_inputs, results)
    -
    -            return losses
    -        else:
    -            results = self.inference(anchors, pred_logits, pred_anchor_deltas, images.image_sizes)
    -            if torch.jit.is_scripting():
    -                return results
    -            processed_results = []
    -            for results_per_image, input_per_image, image_size in zip(
    -                results, batched_inputs, images.image_sizes
    -            ):
    -                height = input_per_image.get("height", image_size[0])
    -                width = input_per_image.get("width", image_size[1])
    -                r = detector_postprocess(results_per_image, height, width)
    -                processed_results.append({"instances": r})
    -            return processed_results
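
# A minimal sketch (not from the file above) of how a caller might assemble
# `batched_inputs` for forward(); the tensor values and the `model` handle are
# hypothetical.
import torch

dummy_image = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)  # (C, H, W)
batched_inputs = [{
    "image": dummy_image,          # raw image; normalization happens in preprocess_image
    "height": 480, "width": 640,   # desired output resolution at inference time
}]
# model.eval()
# with torch.no_grad():
#     outputs = model(batched_inputs)   # -> [{"instances": Instances(...)}]
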
    -
    -    def losses(self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes):
    -        """
    -        Args:
    -            anchors (list[Boxes]): a list of #feature level Boxes
    -            gt_labels, gt_boxes: see output of :meth:`RetinaNet.label_anchors`.
    -                Their shapes are (N, R) and (N, R, 4), respectively, where R is
    -                the total number of anchors across levels, i.e. sum(Hi x Wi x Ai)
    -            pred_logits, pred_anchor_deltas: both are list[Tensor]. Each element in the
    -                list corresponds to one level and has shape (N, Hi * Wi * Ai, K or 4).
    -                Where K is the number of classes used in `pred_logits`.
    -
    -        Returns:
    -            dict[str, Tensor]:
    -                mapping from a named loss to a scalar tensor
    -                storing the loss. Used during training only. The dict keys are:
    -                "loss_cls" and "loss_box_reg"
    -        """
    -        num_images = len(gt_labels)
    -        gt_labels = torch.stack(gt_labels)  # (N, R)
    -
    -        valid_mask = gt_labels >= 0
    -        pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes)
    -        num_pos_anchors = pos_mask.sum().item()
    -        get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images)
    -        self.loss_normalizer = self.loss_normalizer_momentum * self.loss_normalizer + (
    -            1 - self.loss_normalizer_momentum
    -        ) * max(num_pos_anchors, 1)
    -
    -        # classification and regression loss
    -        gt_labels_target = F.one_hot(gt_labels[valid_mask], num_classes=self.num_classes + 1)[
    -            :, :-1
    -        ]  # no loss for the last (background) class
    -        loss_cls = sigmoid_focal_loss_jit(
    -            cat(pred_logits, dim=1)[valid_mask],
    -            gt_labels_target.to(pred_logits[0].dtype),
    -            alpha=self.focal_loss_alpha,
    -            gamma=self.focal_loss_gamma,
    -            reduction="sum",
    -        )
    -
    -        loss_box_reg = _dense_box_regression_loss(
    -            anchors,
    -            self.box2box_transform,
    -            pred_anchor_deltas,
    -            gt_boxes,
    -            pos_mask,
    -            box_reg_loss_type=self.box_reg_loss_type,
    -            smooth_l1_beta=self.smooth_l1_beta,
    -        )
    -
    -        return {
    -            "loss_cls": loss_cls / self.loss_normalizer,
    -            "loss_box_reg": loss_box_reg / self.loss_normalizer,
    -        }
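
# A worked sketch of the EMA loss normalizer above, starting from detectron2's
# initial value of 100 and its default momentum of 0.9; the per-batch positive
# anchor counts are made up.
normalizer, momentum = 100.0, 0.9
for num_pos_anchors in [80, 120, 0]:
    normalizer = momentum * normalizer + (1 - momentum) * max(num_pos_anchors, 1)
    print(round(normalizer, 2))   # 98.0, 100.2, 90.28
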
    -
    -    @torch.no_grad()
    -    def label_anchors(self, anchors, gt_instances):
    -        """
    -        Args:
    -            anchors (list[Boxes]): A list of #feature level Boxes.
    -                The Boxes contains anchors of this image on the specific feature level.
    -            gt_instances (list[Instances]): a list of N `Instances`s. The i-th
    -                `Instances` contains the ground-truth per-instance annotations
    -                for the i-th input image.
    -
    -        Returns:
    -            list[Tensor]: List of #img tensors. i-th element is a vector of labels whose length is
    -            the total number of anchors across all feature maps (sum(Hi * Wi * A)).
    -            Label values are in {-1, 0, ..., K}, with -1 means ignore, and K means background.
    -
    -            list[Tensor]: i-th element is a Rx4 tensor, where R is the total number of anchors
    -            across feature maps. The values are the matched gt boxes for each anchor.
    -            Values are undefined for those anchors not labeled as foreground.
    -        """
    -        anchors = Boxes.cat(anchors)  # Rx4
    -
    -        gt_labels = []
    -        matched_gt_boxes = []
    -        for gt_per_image in gt_instances:
    -            match_quality_matrix = pairwise_iou(gt_per_image.gt_boxes, anchors)
    -            matched_idxs, anchor_labels = self.anchor_matcher(match_quality_matrix)
    -            del match_quality_matrix
    -
    -            if len(gt_per_image) > 0:
    -                matched_gt_boxes_i = gt_per_image.gt_boxes.tensor[matched_idxs]
    -
    -                gt_labels_i = gt_per_image.gt_classes[matched_idxs]
    -                # Anchors with label 0 are treated as background.
    -                gt_labels_i[anchor_labels == 0] = self.num_classes
    -                # Anchors with label -1 are ignored.
    -                gt_labels_i[anchor_labels == -1] = -1
    -            else:
    -                matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
    -                gt_labels_i = torch.zeros_like(matched_idxs) + self.num_classes
    -
    -            gt_labels.append(gt_labels_i)
    -            matched_gt_boxes.append(matched_gt_boxes_i)
    -
    -        return gt_labels, matched_gt_boxes
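
# A minimal sketch of the label convention produced by label_anchors, assuming
# K = 80 classes: -1 = ignore, 0..K-1 = matched foreground class, K = background.
num_classes = 80
anchor_labels = [1, 0, -1]       # Matcher verdict per anchor: fg / bg / ignore
matched_classes = [17, 17, 17]   # class of the best-IoU gt box per anchor
labels = [c if a == 1 else (num_classes if a == 0 else -1)
          for a, c in zip(anchor_labels, matched_classes)]
print(labels)                    # [17, 80, -1]
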
    -
    -    def inference(
    -        self,
    -        anchors: List[Boxes],
    -        pred_logits: List[Tensor],
    -        pred_anchor_deltas: List[Tensor],
    -        image_sizes: List[Tuple[int, int]],
    -    ):
    -        """
    -        Arguments:
    -            anchors (list[Boxes]): A list of #feature level Boxes.
    -                The Boxes contain anchors of this image on the specific feature level.
    -            pred_logits, pred_anchor_deltas: list[Tensor], one per level. Each
    -                has shape (N, Hi * Wi * Ai, K or 4)
    -            image_sizes (List[(h, w)]): the input image sizes
    -
    -        Returns:
    -            results (List[Instances]): a list of #images elements.
    -        """
    -        results: List[Instances] = []
    -        for img_idx, image_size in enumerate(image_sizes):
    -            pred_logits_per_image = [x[img_idx] for x in pred_logits]
    -            deltas_per_image = [x[img_idx] for x in pred_anchor_deltas]
    -            results_per_image = self.inference_single_image(
    -                anchors, pred_logits_per_image, deltas_per_image, image_size
    -            )
    -            results.append(results_per_image)
    -        return results
    -
    -    def inference_single_image(
    -        self,
    -        anchors: List[Boxes],
    -        box_cls: List[Tensor],
    -        box_delta: List[Tensor],
    -        image_size: Tuple[int, int],
    -    ):
    -        """
    -        Single-image inference. Return bounding-box detection results by thresholding
    -        on scores and applying non-maximum suppression (NMS).
    -
    -        Arguments:
    -            anchors (list[Boxes]): list of #feature levels. Each entry contains
    -                a Boxes object, which contains all the anchors in that feature level.
    -            box_cls (list[Tensor]): list of #feature levels. Each entry contains
    -                tensor of size (H x W x A, K)
    -            box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4.
    -            image_size (tuple(H, W)): a tuple of the image height and width.
    -
    -        Returns:
    -            Same as `inference`, but for only one image.
    -        """
    -        boxes_all = []
    -        scores_all = []
    -        class_idxs_all = []
    -
    -        # Iterate over every feature level
    -        for box_cls_i, box_reg_i, anchors_i in zip(box_cls, box_delta, anchors):
    -            # (HxWxAxK,)
    -            predicted_prob = box_cls_i.flatten().sigmoid_()
    -
    -            # Apply two filtering below to make NMS faster.
    -            # 1. Keep boxes with confidence score higher than threshold
    -            keep_idxs = predicted_prob > self.test_score_thresh
    -            predicted_prob = predicted_prob[keep_idxs]
    -            topk_idxs = nonzero_tuple(keep_idxs)[0]
    -
    -            # 2. Keep top k top scoring boxes only
    -            num_topk = min(self.test_topk_candidates, topk_idxs.size(0))
    -            # torch.sort is actually faster than .topk (at least on GPUs)
    -            predicted_prob, idxs = predicted_prob.sort(descending=True)
    -            predicted_prob = predicted_prob[:num_topk]
    -            topk_idxs = topk_idxs[idxs[:num_topk]]
    -
    -            anchor_idxs = topk_idxs // self.num_classes
    -            classes_idxs = topk_idxs % self.num_classes
    -
    -            box_reg_i = box_reg_i[anchor_idxs]
    -            anchors_i = anchors_i[anchor_idxs]
    -            # predict boxes
    -            predicted_boxes = self.box2box_transform.apply_deltas(box_reg_i, anchors_i.tensor)
    -
    -            boxes_all.append(predicted_boxes)
    -            scores_all.append(predicted_prob)
    -            class_idxs_all.append(classes_idxs)
    -
    -        boxes_all, scores_all, class_idxs_all = [
    -            cat(x) for x in [boxes_all, scores_all, class_idxs_all]
    -        ]
    -        keep = batched_nms(boxes_all, scores_all, class_idxs_all, self.test_nms_thresh)
    -        keep = keep[: self.max_detections_per_image]
    -
    -        result = Instances(image_size)
    -        result.pred_boxes = Boxes(boxes_all[keep])
    -        result.scores = scores_all[keep]
    -        result.pred_classes = class_idxs_all[keep]
    -        return result
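
# A worked check of the flat-index arithmetic above: scores are flattened over
# (anchor, class) pairs, so dividing and taking the remainder by num_classes
# recovers both indices.
num_classes = 80
for flat_idx in [163, 7]:   # hypothetical top-k indices
    print(flat_idx // num_classes, flat_idx % num_classes)   # 2 3, then 0 7
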
    -
    -    def preprocess_image(self, batched_inputs: List[Dict[str, Tensor]]):
    -        """
    -        Normalize, pad and batch the input images.
    -        """
    -        images = [x["image"].to(self.device) for x in batched_inputs]
    -        images = [(x - self.pixel_mean) / self.pixel_std for x in images]
    -        images = ImageList.from_tensors(images, self.backbone.size_divisibility)
    -        return images
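
# A sketch of the padded batch shape ImageList.from_tensors produces, assuming
# size_divisibility = 32 (typical for FPN backbones); the image sizes here are
# hypothetical.
import math
sizes = [(480, 640), (500, 375)]   # (H, W) per image
max_h, max_w = max(s[0] for s in sizes), max(s[1] for s in sizes)
pad_h = math.ceil(max_h / 32) * 32
pad_w = math.ceil(max_w / 32) * 32
print(pad_h, pad_w)   # 512 640 -> the batch tensor is (N, C, 512, 640)
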
    -
    -
    -class RetinaNetHead(nn.Module):
    -    """
    -    The head used in RetinaNet for object classification and box regression.
    -    It has two subnets for the two tasks, with a common structure but separate parameters.
    -    """
    -
    -    @configurable
    -    def __init__(
    -        self,
    -        *,
    -        input_shape: List[ShapeSpec],
    -        num_classes,
    -        num_anchors,
    -        conv_dims: List[int],
    -        norm="",
    -        prior_prob=0.01,
    -    ):
    -        """
    -        NOTE: this interface is experimental.
    -
    -        Args:
    -            input_shape (List[ShapeSpec]): input shape
    -            num_classes (int): number of classes. Used to label background proposals.
    -            num_anchors (int): number of generated anchors
    -            conv_dims (List[int]): dimensions for each convolution layer
    -            norm (str or callable):
    -                    Normalization for conv layers except for the two output layers.
    -                    See :func:`detectron2.layers.get_norm` for supported types.
    -            prior_prob (float): Prior weight for computing bias
    -        """
    -        super().__init__()
    -
    -        if norm == "BN" or norm == "SyncBN":
    -            logger.warning("Shared norm does not work well for BN, SyncBN, expect poor results")
    -
    -        cls_subnet = []
    -        bbox_subnet = []
    -        for in_channels, out_channels in zip(
    -            [input_shape[0].channels] + list(conv_dims), conv_dims
    -        ):
    -            cls_subnet.append(
    -                nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
    -            )
    -            if norm:
    -                cls_subnet.append(get_norm(norm, out_channels))
    -            cls_subnet.append(nn.ReLU())
    -            bbox_subnet.append(
    -                nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
    -            )
    -            if norm:
    -                bbox_subnet.append(get_norm(norm, out_channels))
    -            bbox_subnet.append(nn.ReLU())
    -
    -        self.cls_subnet = nn.Sequential(*cls_subnet)
    -        self.bbox_subnet = nn.Sequential(*bbox_subnet)
    -        self.cls_score = nn.Conv2d(
    -            conv_dims[-1], num_anchors * num_classes, kernel_size=3, stride=1, padding=1
    -        )
    -        self.bbox_pred = nn.Conv2d(
    -            conv_dims[-1], num_anchors * 4, kernel_size=3, stride=1, padding=1
    -        )
    -
    -        # Initialization
    -        for modules in [self.cls_subnet, self.bbox_subnet, self.cls_score, self.bbox_pred]:
    -            for layer in modules.modules():
    -                if isinstance(layer, nn.Conv2d):
    -                    torch.nn.init.normal_(layer.weight, mean=0, std=0.01)
    -                    torch.nn.init.constant_(layer.bias, 0)
    -
    -        # Use prior in model initialization to improve stability
    -        bias_value = -(math.log((1 - prior_prob) / prior_prob))
    -        torch.nn.init.constant_(self.cls_score.bias, bias_value)
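
# A worked check of the bias initialization above: b is chosen so that
# sigmoid(b) = prior_prob, i.e. every anchor starts with ~1% foreground score,
# which keeps the focal loss from being swamped by background early in training.
import math
prior_prob = 0.01
bias_value = -math.log((1 - prior_prob) / prior_prob)
print(round(bias_value, 4))                        # -4.5951
print(round(1 / (1 + math.exp(-bias_value)), 4))   # 0.01
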
    -
    -    @classmethod
    -    def from_config(cls, cfg, input_shape: List[ShapeSpec]):
    -        num_anchors = build_anchor_generator(cfg, input_shape).num_cell_anchors
    -        assert (
    -            len(set(num_anchors)) == 1
    -        ), "Using different number of anchors between levels is not currently supported!"
    -        num_anchors = num_anchors[0]
    -
    -        return {
    -            "input_shape": input_shape,
    -            "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES,
    -            "conv_dims": [input_shape[0].channels] * cfg.MODEL.RETINANET.NUM_CONVS,
    -            "prior_prob": cfg.MODEL.RETINANET.PRIOR_PROB,
    -            "norm": cfg.MODEL.RETINANET.NORM,
    -            "num_anchors": num_anchors,
    -        }
    -
    -    def forward(self, features: List[Tensor]):
    -        """
    -        Arguments:
    -            features (list[Tensor]): FPN feature map tensors in high to low resolution.
    -                Each tensor in the list correspond to different feature levels.
    -
    -        Returns:
    -            logits (list[Tensor]): #lvl tensors, each has shape (N, AxK, Hi, Wi).
    -                The tensor predicts the classification probability
    -                at each spatial position for each of the A anchors and K object
    -                classes.
    -            bbox_reg (list[Tensor]): #lvl tensors, each has shape (N, Ax4, Hi, Wi).
    -                The tensor predicts 4-vector (dx,dy,dw,dh) box
    -                regression values for every anchor. These values are the
    -                relative offset between the anchor and the ground truth box.
    -        """
    -        logits = []
    -        bbox_reg = []
    -        for feature in features:
    -            logits.append(self.cls_score(self.cls_subnet(feature)))
    -            bbox_reg.append(self.bbox_pred(self.bbox_subnet(feature)))
    -        return logits, bbox_reg
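
# A minimal shape check for the head outputs, assuming N=2 images, A=9 anchors,
# K=80 classes on a 100x152 feature level; the reshape mirrors what
# permute_to_N_HWA_K does to these tensors in RetinaNet.forward.
import torch
N, A, K, H, W = 2, 9, 80, 100, 152
logits = torch.zeros(N, A * K, H, W)   # cls_score output for one level
flat = logits.view(N, A, K, H, W).permute(0, 3, 4, 1, 2).reshape(N, -1, K)
print(flat.shape)   # torch.Size([2, 136800, 80]) == (N, H*W*A, K)
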
    diff --git a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/ner_detection.py b/spaces/Chintan-Donda/KKMS-KSSW-HF/src/ner_detection.py
    deleted file mode 100644
    index 067a69719185a6b0c61d84e0478392141110462e..0000000000000000000000000000000000000000
    --- a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/ner_detection.py
    +++ /dev/null
    @@ -1,58 +0,0 @@
    -import gradio as gr
    -import openai
    -import os
    -import re
    -import ast
    -
    -openai.api_key = "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"
    -SYSTEM_PROMPT = "You are a smart and intelligent Named Entity Recognition (NER) system. I will provide you the definition of the entities you need to extract, the sentence from where your extract the entities and the output format with examples."
    -USER_PROMPT_1 = "Are you clear about your role?"
    -ASSISTANT_PROMPT_1 = "Sure, I'm ready to help you with your NER task. Please provide me with the necessary information to get started."
    -GUIDELINES_PROMPT = (
    -    """Entity Definition:\n"
    -    "1. PEST NAME: Name of the pest which has attacked a particular crop which may lead to crop damage.\n"
    -    "2. CROP DISEASE: Any kind of crop disease which occurs in agriculture land in india and nearby resgions.\n"
    -    "3. WEATHER CONDITION: Severe climate conditions like heavy rainfall, hailstorm which has destroyed crops.\n"
    -    "\n"
    -    "Output Format:\n"
    -    "{{'PEST NAME': [list of entities present], 'CROP DISEASE': [list of entities present], 'WEATHER CONDITION': [list of entities present]}}\n"
    -    "If no entities are presented in any categories keep it None\n"
    -    "\n"
    -    "Examples:\n"
    -    "\n"
    -    "1. Sentence: Pest attack on maize crop in lower Kangra : The Tribune India. Farmers in lower Kangra are a harried lot as the fall armyworm pest has attacked their maize crop. 'Kolshi' continues to affect Vidarbha's Orange crop cultivation (Citrus Black Fly) | Krishak Jagat. A total of 1,50,000 hectares of land in the Vidarbha region is planted with oranges, and of them, 25% are seriously damaged by Kolshi, a citrus black fly disease. India's June tea output drops 17% as floods hit plucking | Mint. India's June tea production fell 17.4% from a year earlier to 141.31 million kilograms, the state-run Tea Board said, as floods and pest attack dented output in the main producing region\n"
    -    "Output: {{'PEST NAME': ['fall armyworm'], 'CROP DISEASE': ['citrus black fly disease'], 'WEATHER CONDITION': ['floods']}}\n"
    -    "\n"
    -    "2. Sentence: ICAR issues pest alert in Leparada, W/Siang | The Arunachal Times. 70 percent prevalence of fall army worm in maize fields in Pagi, Gori and Bam villages in Leparada district and Darka, Kombo and Jirdin villages in West Siang district was observed. After maize, Kangra vegetable crops under white fly attack : The Tribune India. Vegetable crops are under attack by white fly in the lower hills of Kangra district. The pest attack comes after the recent damage caused by fall armyworm to the maize crop in the area. Pest attacks on paddy crop worry farmers in the integrated Karimnagar district | Hindudayashankar. Crops withering due to stem borer, leaf folder and rice blast; farmers have to incur huge expenditures to control menace. Cyclone Amphan damages crop, vegetable prices shoot up | Cities News,The Indian Express. Cyclone Amphan has damaged vegetables across South Bengal. Farmers lost 80 to 90 per cent of crop as fields were flooded.\n"
    -    "Output: {{'PEST NAME': ['fall army worm', 'white fly attack', 'stem borer', 'leaf folder'], 'CROP DISEASE': ['rice blast'], 'WEATHER CONDITION': ['Cyclone Amphan']}}\n"
    -    "\n"
    -    "3. Sentence: {}\n"
    -    "Output: """
    -)
    -
    -def openai_chat_completion_response(news_article_text):
    -    final_prompt = GUIDELINES_PROMPT.format(news_article_text)
    -    response = openai.ChatCompletion.create(
    -                  model="gpt-3.5-turbo",
    -                  messages=[
    -                        {"role": "system", "content": SYSTEM_PROMPT},
    -                        {"role": "user", "content": USER_PROMPT_1},
    -                        {"role": "assistant", "content": ASSISTANT_PROMPT_1},
    -                        {"role": "user", "content": final_prompt}
    -                    ]
    -    )
    -    return response['choices'][0]['message']['content'].strip(" \n")
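
# A hedged sketch (an assumption, suggested by the otherwise-unused `ast`
# import above) of how the dict-shaped model output could be parsed safely.
def parse_ner_output(raw: str) -> dict:
    try:
        return ast.literal_eval(raw)   # the prompt asks for a Python-dict-shaped answer
    except (ValueError, SyntaxError):
        # fall back to empty categories if the model returned malformed text
        return {"PEST NAME": None, "CROP DISEASE": None, "WEATHER CONDITION": None}
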
    -
    -# def preprocess(prompt):
    -#     return GUIDELINES_PROMPT.format(prompt)
    -# def main():
    -# my_sentence = "Hundreds of hectares of land under the cotton crop, once referred to as white gold, has come under attack of a wide range of insects like whitefly, pink bollworm and mealybug. This is likely to hit the cotton production this year."
    -# GUIDELINES_PROMPT = GUIDELINES_PROMPT.format(my_sentence)
    -# # print(GUIDELINES_PROMPT)
    -# ners = openai_chat_completion_response(GUIDELINES_PROMPT)
    -# print(ners)
    -
-# define the Gradio interface and launch it (gradio is already imported above as `gr`)
-app = gr.Interface(fn=openai_chat_completion_response, inputs="text", outputs="text")
-app.launch(share=True)
    diff --git a/spaces/Chukwuka/Dog_Breed_ImageWoof/app.py b/spaces/Chukwuka/Dog_Breed_ImageWoof/app.py
    deleted file mode 100644
    index 17cd82b5df0ec44db46a72103a9e2999af47cc0e..0000000000000000000000000000000000000000
    --- a/spaces/Chukwuka/Dog_Breed_ImageWoof/app.py
    +++ /dev/null
    @@ -1,98 +0,0 @@
    -
    -### 1. Imports and class names setup ###
    -import gradio as gr
    -import os
    -import numpy as np
    -import torch
    -import torchvision.transforms as T
    -
    -from model import Efficient_b2_model
    -from timeit import default_timer as timer
    -from typing import Tuple, Dict
    -from data_setup import classes, model_tsfm
    -
    -# Setup class names
    -#class_names = ['pizza', 'steak', 'sushi']
    -
    -### 2. Model and transforms preparation ###
    -#test_tsfm = T.Compose([T.Resize((224,224)),
    -#                        T.ToTensor(),
    -#                       T.Normalize(mean=[0.485, 0.456, 0.406], # 3. A mean of [0.485, 0.456, 0.406] (across each colour channel)
    -#                         std=[0.229, 0.224, 0.225]) # 4. A standard deviation of [0.229, 0.224, 0.225] (across each colour channel),
    -#                       ])
    -
    -# Create EffNetB2 Model
    -effnet_b2 = Efficient_b2_model(num_classes=len(classes), pretrained=True)
    -#effnet_b2
    -#effnetb2, test_transform = create_effnet_b2(num_of_class=len(class_names), 
    -                            #transform=test_tsfm,
    -                            #seed=42)
    -
    -# saved_path = 'demos\foodvision_mini\09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_percent.pth'
    -saved_path = 'efficient_b2_checkpoint_model_2023_02_04.pth'
    -
    -print('Loading Model State Dictionary')
    -# Load saved weights
    -effnet_b2.load_state_dict(
    -                torch.load(f=saved_path,
    -                           map_location=torch.device('cpu'), # load to CPU
    -                          )
    -                        )
    -
    -print('Model Loaded ...')
    -### 3. Predict function ###
    -
    -# Create predict function
    -from typing import Tuple, Dict
    -
    -def predict(img) -> Tuple[Dict, float]:
    -    """Transforms and performs a prediction on img and returns prediction and time taken.
    -    """
    -    # Start the timer
    -    start_time = timer()
    -    
    -    # Transform the target image and add a batch dimension
    -    #img = get_image(img_path, model_tsfm).unsqueeze(0)
    -    img = model_tsfm(image=np.array(img))["image"]
    -    img = img.unsqueeze(0)
    -
    -    # Put model into evaluation mode and turn on inference mode
    -    effnet_b2.eval()
    -    with torch.inference_mode():
    -        # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
    -        pred_probs = torch.softmax(effnet_b2(img), dim=1)
    -    
    -    # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)
    -    pred_labels_and_probs = {classes[i]: float(pred_probs[0][i]) for i in range(len(classes))}
    -
    -    # Calculate the prediction time
    -    pred_time = round(timer() - start_time, 5)
    -    
    -    # Return the prediction dictionary and prediction time 
    -    return pred_labels_and_probs, pred_time
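
# A minimal usage sketch of predict(), mirroring what Gradio does per request;
# the example path is hypothetical.
# from PIL import Image
# img = Image.open("examples/some_dog.jpg")
# labels, seconds = predict(img)
# print(max(labels, key=labels.get), seconds)
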
    -
    -### 4. Gradio App ###
    -
    -# Create title, description and article strings
-title = 'DogBreed Mini 🐩🐶🦮🐕‍🦺'
    -description = "An EfficientNetB2 feature extractor computer vision model to classify images of Dog breeds."
    -article = "

    ImageWoof Created by Chukwuka

    Github Repo

    " - - -# Create examples list from "examples/" directory -example_list = [["examples/" + example] for example in os.listdir("examples")] - -# Create the Gradio demo -demo = gr.Interface(fn=predict, # mapping function from input to output - inputs=gr.Image(type='pil'), # What are the inputs? - outputs=[gr.Label(num_top_classes=10, label="Predictions"), # what are the outputs? - gr.Number(label='Prediction time (s)')], # Our fn has two outputs, therefore we have two outputs - examples=example_list, - title=title, - description=description, - article=article - ) -# Launch the demo -print('Gradio Demo Launched') -demo.launch() - diff --git a/spaces/Cicooo/vits-uma-genshin-honkai/commons.py b/spaces/Cicooo/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/Cicooo/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
-
-
-def rand_gumbel(shape):
-    """Sample from the Gumbel distribution, protect from overflows."""
-    uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
-    return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
-    g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
-    return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
-    ret = torch.zeros_like(x[:, :, :segment_size])
-    for i in range(x.size(0)):
-        idx_str = ids_str[i]
-        idx_end = idx_str + segment_size
-        ret[i] = x[i, :, idx_str:idx_end]
-    return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
-    b, d, t = x.size()
-    if x_lengths is None:
-        x_lengths = t
-    ids_str_max = x_lengths - segment_size + 1
-    ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-    ret = slice_segments(x, ids_str, segment_size)
-    return ret, ids_str
-
-
-def get_timing_signal_1d(
-        length, channels, min_timescale=1.0, max_timescale=1.0e4):
-    position = torch.arange(length, dtype=torch.float)
-    num_timescales = channels // 2
-    log_timescale_increment = (
-        math.log(float(max_timescale) / float(min_timescale)) /
-        (num_timescales - 1))
-    inv_timescales = min_timescale * torch.exp(
-        torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
-    scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
-    signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
-    signal = F.pad(signal, [0, 0, 0, channels % 2])
-    signal = signal.view(1, channels, length)
-    return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
-    b, channels, length = x.size()
-    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-    return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
-    b, channels, length = x.size()
-    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-    return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
-    mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-    return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-    n_channels_int = n_channels[0]
-    in_act = input_a + input_b
-    t_act = torch.tanh(in_act[:, :n_channels_int, :])
-    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-    acts = t_act * s_act
-    return acts
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def shift_1d(x):
-    x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
-    return x
-
-
-def sequence_mask(length, max_length=None):
-    if max_length is None:
-        max_length = length.max()
-    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-    return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
-    """
-    duration: [b, 1, t_x]
-    mask: [b, 1, t_y, t_x]
-    """
-    device = duration.device
-
-    b, _, t_y, t_x = mask.shape
-    cum_duration = torch.cumsum(duration, -1)
-
-    cum_duration_flat = cum_duration.view(b * t_x)
-    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-    path = path.view(b, t_x, t_y)
-    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-    path = path.unsqueeze(1).transpose(2,3) * mask
-    return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
-    if isinstance(parameters, torch.Tensor):
-        parameters = [parameters]
-    parameters = list(filter(lambda p: p.grad is not None, parameters))
-    norm_type = float(norm_type)
-    if clip_value is not None:
-        clip_value = float(clip_value)
-
-    total_norm = 0
-    for p in parameters:
-        param_norm = p.grad.data.norm(norm_type)
-        total_norm += param_norm.item() ** norm_type
-        if clip_value is not None:
-            p.grad.data.clamp_(min=-clip_value, max=clip_value)
-    total_norm = total_norm ** (1. / norm_type)
-    return total_norm
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Config.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Config.js
deleted file mode 100644
index 471b247378f214c409c20d2e636f42134e124e02..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Config.js
+++ /dev/null
@@ -1,375 +0,0 @@
-
-import YAML from 'yaml'
-import chokidar from 'chokidar'
-import fs from 'node:fs'
-import YamlReader from './YamlReader.js'
-import cfg from '../../../lib/config/config.js'
-import _ from 'lodash'
-import { modifyWebSocket } from './WebSocket.js'
-import { cfgSchema } from '../config/system/cfg_system.js'
-
-const Path = process.cwd()
-const Plugin_Name = 'ws-plugin'
-const Plugin_Path = `${Path}/plugins/${Plugin_Name}`
-class Config {
-  constructor() {
-    this.config = {}
-    this.oldConfig = {}
-    /** file watchers */
-    this.watcher = { config: {}, defSet: {} }
-
-    this.initCfg()
-  }
-
-  /** initialize configuration */
-  initCfg() {
-    let path = `${Plugin_Path}/config/config/`
-    if (!fs.existsSync(path)) fs.mkdirSync(path)
-    let pathDef = `${Plugin_Path}/config/default_config/`
-    const files = fs.readdirSync(pathDef).filter(file => file.endsWith('.yaml'))
-    for (let file of files) {
-      if (!fs.existsSync(`${path}${file}`)) {
-        fs.copyFileSync(`${pathDef}${file}`, `${path}${file}`)
-      }
-      this.watch(`${path}${file}`, file.replace('.yaml', ''), 'config')
-    }
-  }
-
-  /** master QQ (owner account) */
-  get masterQQ() {
-    return cfg.masterQQ
-  }
-
-  /** Bot account: [master accounts] */
-  get master() {
-    return cfg.master
-  }
-
-  /** Yunzai group blacklist */
-  get blackGroup() {
-    return cfg.getOther().blackGroup
-  }
-
-  /** Yunzai group whitelist */
-  get whiteGroup() {
-    return cfg.getOther().whiteGroup
-  }
-
-  /** heartbeat interval */
-  get heartbeatInterval() {
-    return this.getDefOrConfig('ws-config').heartbeatInterval
-  }
-
-  /** message post format */
-  get messagePostFormat() {
-    return this.getDefOrConfig('ws-config').messagePostFormat
-  }
-
-  /** server connection list */
-  get servers() {
-    return this.getDefOrConfig('ws-config').servers
-  }
-
-  get noMsgStart() {
-    return this.getDefOrConfig('msg-config').noMsgStart
-  }
-
-  get noMsgInclude() {
-    return this.getDefOrConfig('msg-config').noMsgInclude
-  }
-
-  get howToMaster() {
-    return this.getDefOrConfig('msg-config').howToMaster
-  }
-
-  /** notify the master on disconnect */
-  get disconnectToMaster() {
-    return this.getDefOrConfig('msg-config').disconnectToMaster
-  }
-
-  /** notify the master on successful reconnect */
-  get reconnectToMaster() {
-    return this.getDefOrConfig('msg-config').reconnectToMaster
-  }
-
-  /** notify the master on first successful connection */
-  get firstconnectToMaster() {
-    return this.getDefOrConfig('msg-config').firstconnectToMaster
-  }
-
-  /** message retention time */
-  get msgStoreTime() {
-    return this.getDefOrConfig('msg-config').msgStoreTime
-  }
-
-  /** disabled group list */
-  get noGroup() {
-    return this.getDefOrConfig('msg-config').noGroup
-  }
-
-  /** whitelisted groups */
-  get yesGroup() {
-    return this.getDefOrConfig('msg-config').yesGroup
-  }
-
-  /** mute interception */
-  get muteStop() {
-    return this.getDefOrConfig('msg-config').muteStop
-  }
-
-  /** red: how forged forward messages are sent */
-  get redSendForwardMsgType() {
-    return this.getDefOrConfig('msg-config').redSendForwardMsgType
-  }
-
-  /** report group admin changes */
-  get groupAdmin() {
-    return this.getDefOrConfig('notice-config').groupAdmin
-  }
-
-  /** report group member decreases */
-  get groupDecrease() {
-    return this.getDefOrConfig('notice-config').groupDecrease
-  }
-
-  /** report group member increases */
-  get groupIncrease() {
-    return this.getDefOrConfig('notice-config').groupIncrease
-  }
-
-  /** report group mutes */
-  get groupBan() {
-    return this.getDefOrConfig('notice-config').groupBan
-  }
-
-  /** report friend additions */
-  get friendIncrease() {
-    return this.getDefOrConfig('notice-config').friendIncrease
-  }
-
-  /** report group message recalls */
-  get groupRecall() {
-    return this.getDefOrConfig('notice-config').groupRecall
-  }
-
-  /** report friend message recalls */
-  get friendRecall() {
-    return this.getDefOrConfig('notice-config').friendRecall
-  }
-
-  /** report group pokes */
-  get groupPoke() {
-    return this.getDefOrConfig('notice-config').groupPoke
-  }
-
-  /** report friend requests */
-  get friendAdd() {
-    return this.getDefOrConfig('request-config').friendAdd
-  }
-
-  /** report group invites (bot invited into a group) */
-  get groupInvite() {
-    return this.getDefOrConfig('request-config').groupInvite
-  }
-
-  /** report group join requests */
-  get groupAdd() {
-    return this.getDefOrConfig('request-config').groupAdd
-  }
-
-  /** merged default and user configuration */
-  getDefOrConfig(name) {
-    let def = this.getdefSet(name)
-    let config = this.getConfig(name)
-    return { ...def, ...config }
-  }
-
-  /** default configuration */
-  getdefSet(name) {
-    return this.getYaml('default_config', name)
-  }
-
-  /** user configuration */
-  getConfig(name) {
-    return this.getYaml('config', name)
-  }
-
-  /**
-   * get a config yaml
-   * @param type 'default_config' for defaults, 'config' for user config
-   * @param name file name
-   */
-  getYaml(type, name) {
-    let file = `${Plugin_Path}/config/${type}/${name}.yaml`
-    let key = `${type}.${name}`
-
-    if (this.config[key]) return this.config[key]
-
-    this.config[key] = YAML.parse(
-      fs.readFileSync(file, 'utf8')
-    )
-
-    this.watch(file, name, type)
-
-    return this.config[key]
-  }
-
-  /** watch a config file */
-  watch(file, name, type = 'default_config') {
-    let key = `${type}.${name}`
-    if (!this.oldConfig[key]) this.oldConfig[key] = _.cloneDeep(this.config[key])
-    if (this.watcher[key]) return
-
-    const watcher = chokidar.watch(file)
-    watcher.on('change', path => {
-      delete this.config[key]
-      if (typeof Bot == 'undefined') return
-      logger.mark(`[ws-plugin][config file changed][${type}][${name}]`)
-
-      if (name == 'ws-config') {
-        const oldConfig = this.oldConfig[key]
-        delete this.oldConfig[key]
-        const newConfig = this.getYaml(type, name)
-        const object = this.findDifference(oldConfig, newConfig)
-        // console.log(object);
-        for (const key in object) {
-          if (Object.hasOwnProperty.call(object, key)) {
-            const value = object[key];
-            const arr = key.split('.')
-            if (arr[0] !== 'servers') continue
-            let data = newConfig.servers[arr[1]]
-            if (typeof data === 'undefined') data = oldConfig.servers[arr[1]]
-            const target = {
-              type: null,
-              data
-            }
-            if (typeof value['newValue'] === 'object' && typeof value['oldValue'] === 'undefined') {
-              target.type = 'add'
-            }
-            else if (typeof value['newValue'] === 'undefined' && typeof value['oldValue'] === 'object') {
-              target.type = 'del'
-            }
-            else if (value['newValue'] === true && (value['oldValue'] === false || typeof value['oldValue'] === 'undefined')) {
-              target.type = 'close'
-            }
-            else if (value['newValue'] === false && (value['oldValue'] === true || typeof value['oldValue'] === 'undefined')) {
-              target.type = 'open'
-            }
-            modifyWebSocket(target)
-          }
-        }
-
-      }
-    })
-
-    this.watcher[key] = watcher
-  }
-
-  getCfgSchemaMap() {
-    let ret = {}
-    _.forEach(cfgSchema,
-      (cfgGroup) => {
-      _.forEach(cfgGroup.cfg, (cfgItem, cfgKey) => {
-        ret[cfgItem.key] = cfgItem
-        cfgItem.cfgKey = cfgKey
-      })
-    })
-    return ret
-  }
-
-  getCfgSchema() {
-    return cfgSchema
-  }
-
-  getCfg() {
-    let wsconfig = this.getDefOrConfig('ws-config')
-    let msgconfig = this.getDefOrConfig('msg-config')
-    let noticeconfig = this.getDefOrConfig('notice-config')
-    let requestconfig = this.getDefOrConfig('request-config')
-    return {
-      ...wsconfig,
-      ...msgconfig,
-      ...noticeconfig,
-      ...requestconfig
-    }
-  }
-
-  /**
-   * @description: modify a setting
-   * @param {String} name file name
-   * @param {String} key key to modify
-   * @param {String|Number} value new value
-   * @param {'config'|'default_config'} type user config or default config
-   */
-  modify(name, key, value, type = 'config') {
-    let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-    new YamlReader(path).set(key, value)
-    this.oldConfig[key] = _.cloneDeep(this.config[key])
-    delete this.config[`${type}.${name}`]
-  }
-
-  /**
-   * @description: modify a config array
-   * @param {String} name file name
-   * @param {String|Number} key key
-   * @param {String|Number} value value
-   * @param {'add'|'del'} category category: add or del
-   * @param {'config'|'default_config'} type user config or default config
-   */
-  modifyarr(name, key, value, category = 'add', type = 'config') {
-    let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-    let yaml = new YamlReader(path)
-    if (category == 'add') {
-      yaml.addIn(key, value)
-    } else {
-      let index = yaml.jsonData[key].indexOf(value)
-      yaml.delete(`${key}.${index}`)
-    }
-  }
-
-  setArr(name, key, item, value, type = 'config') {
-    let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-    let yaml = new YamlReader(path)
-    let arr = yaml.get(key).slice();
-    arr[item] = value
-    yaml.set(key, arr)
-  }
-
-  delServersArr(value, name = 'ws-config', type = 'config') {
-    let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-    let yaml = new YamlReader(path)
-    let key = 'servers'
-    // let index = yaml.jsonData[key].indexOf(value)
-    let index = yaml.jsonData[key].findIndex(item => item.name === value);
-    yaml.delete(`${key}.${index}`)
-  }
-
-  /**
-   * @description diff the values of two objects
-   * @param {*} oldObj
-   * @param {*} newObj
-   * @param {*} parentKey
-   * @returns
-   */
-  findDifference(obj1, obj2, parentKey = '') {
-    const result = {};
-    for (const key in obj1) {
-      const fullKey = parentKey ? `${parentKey}.${key}` : key;
-      if (_.isObject(obj1[key]) && _.isObject(obj2[key])) {
-        const diff = this.findDifference(obj1[key], obj2[key], fullKey);
-        if (!_.isEmpty(diff)) {
-          Object.assign(result, diff);
-        }
-      } else if (!_.isEqual(obj1[key], obj2[key])) {
-        result[fullKey] = { oldValue: obj1[key], newValue: obj2[key] };
-      }
-    }
-    for (const key in obj2) {
-      if (!obj1.hasOwnProperty(key)) {
-        const fullKey = parentKey ? `${parentKey}.${key}` : key;
-        result[fullKey] = { oldValue: undefined, newValue: obj2[key] };
-      }
-    }
-    return result;
-  }
-}
-export default new Config()
\ No newline at end of file
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/little_angel/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/little_angel/__init__.py
deleted file mode 100644
index bbfe9c60b425be26ec9b1560f20f26fcbc948ede..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/little_angel/__init__.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import MemeArgsModel, add_meme
-from meme_generator.exception import TextOverLength
-from meme_generator.utils import make_jpg_or_gif
-
-
-def little_angel(images: List[BuildImage], texts: List[str], args: MemeArgsModel):
-    img_w, img_h = images[0].convert("RGBA").resize_width(500).size
-    frame = BuildImage.new("RGBA", (600, img_h + 230), "white")
-    text = "非常可爱!简直就是小天使"
-    frame.draw_text(
-        (10, img_h + 120, 590, img_h + 185), text, max_fontsize=48, weight="bold"
-    )
-
-    ta = "她"
-    name = ta
-    if texts:
-        name = texts[0]
-    elif args.user_infos:
-        info = args.user_infos[0]
-        ta = "他" if info.gender == "male" else "她"
-        name = info.name or ta
-
-    text = f"{ta}没失踪也没怎么样 我只是觉得你们都该看一下"
-    frame.draw_text(
-        (20, img_h + 180, 580, img_h + 215), text, max_fontsize=26, weight="bold"
-    )
-
-    text = f"请问你们看到{name}了吗?"
-    try:
-        frame.draw_text(
-            (20, 0, 580, 110), text, max_fontsize=70, min_fontsize=25, weight="bold"
-        )
-    except ValueError:
-        raise TextOverLength(name)
-
-    def make(img: BuildImage) -> BuildImage:
-        img = img.convert("RGBA").resize_width(500)
-        return frame.copy().paste(img, (int(300 - img_w / 2), 110), alpha=True)
-
-    return make_jpg_or_gif(images[0], make)
-
-
-add_meme(
-    "little_angel",
-    little_angel,
-    min_images=1,
-    max_images=1,
-    min_texts=0,
-    max_texts=1,
-    keywords=["小天使"],
-)
diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/ip_detection_utils.py b/spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/ip_detection_utils.py
deleted file mode 100644
index 17e2140fcbb4c09ef25a53184dd9048113b0d3de..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/ip_detection_utils.py
+++ /dev/null
@@ -1,461 +0,0 @@
-import numpy as np
-import cv2
-from collections import Counter
-
-import lib_ip.ip_draw as draw
-from CDM.config.CONFIG_UIED import Config
-C = Config()
-
-
-# detect object(connected region)
-# def boundary_bfs_connected_area(img, x, y, mark):
-#     def neighbor(img, x, y, mark, stack):
-#         for i in range(x - 1, x + 2):
-#             if i < 0 or i >= img.shape[0]: continue
-#             for j in range(y - 1, y + 2):
-#                 if j < 0 or j >= img.shape[1]: continue
-#                 if img[i, j] == 255 and mark[i, j] == 0:
-#                     stack.append([i, j])
-#                     mark[i, j] = 255
-#
-#     stack = [[x, y]]   # points waiting for inspection
-#     area = [[x, y]]    # points of this area
-#     mark[x, y] = 255   # drawing broad
-#
-#     while len(stack) > 0:
-#         point = stack.pop()
-#         area.append(point)
-#         neighbor(img, point[0], point[1], mark, stack)
-#     return area


-# def line_check_perpendicular(lines_h, lines_v, max_thickness):
-#     """
-#     lines: [line_h, line_v]
-#      -> line_h: horizontal {'head':(column_min, row), 'end':(column_max, row), 'thickness':int)
-#      -> line_v: vertical {'head':(column, row_min), 'end':(column, row_max), 'thickness':int}
-#     """
-#     is_per_h = np.full(len(lines_h), False)
-#     is_per_v = np.full(len(lines_v), False)
-#     for i in range(len(lines_h)):
-#         # save the intersection point of h
-#         lines_h[i]['inter_point'] = set()
-#         h = lines_h[i]
-#
-#         for j in range(len(lines_v)):
-#             # save the intersection point of v
-#             if 'inter_point' not in lines_v[j]: lines_v[j]['inter_point'] = set()
-#             v = lines_v[j]
-#
-#             # if h is perpendicular to v in head of v
-#             if abs(h['head'][1]-v['head'][1]) <= max_thickness:
-#                 if abs(h['head'][0] - v['head'][0]) <= max_thickness:
-#                     lines_h[i]['inter_point'].add('head')
-#                     lines_v[j]['inter_point'].add('head')
-#                     is_per_h[i] = True
-#                     is_per_v[j] = True
-#                 elif abs(h['end'][0] - v['head'][0]) <= max_thickness:
-#                     lines_h[i]['inter_point'].add('end')
-#                     lines_v[j]['inter_point'].add('head')
-#                     is_per_h[i] = True
-#                     is_per_v[j] = True
-#
-#             # if h is perpendicular to v in end of v
-#             elif abs(h['head'][1]-v['end'][1]) <= max_thickness:
-#                 if abs(h['head'][0] - v['head'][0]) <= max_thickness:
-#                     lines_h[i]['inter_point'].add('head')
-#                     lines_v[j]['inter_point'].add('end')
-#                     is_per_h[i] = True
-#                     is_per_v[j] = True
-#                 elif abs(h['end'][0] - v['head'][0]) <= max_thickness:
-#                     lines_h[i]['inter_point'].add('end')
-#                     lines_v[j]['inter_point'].add('end')
-#                     is_per_h[i] = True
-#                     is_per_v[j] = True
-#     per_h = []
-#     per_v = []
-#     for i in range(len(is_per_h)):
-#         if is_per_h[i]:
-#             lines_h[i]['inter_point'] = list(lines_h[i]['inter_point'])
-#             per_h.append(lines_h[i])
-#     for i in range(len(is_per_v)):
-#         if is_per_v[i]:
-#             lines_v[i]['inter_point'] = list(lines_v[i]['inter_point'])
-#             per_v.append(lines_v[i])
-#     return per_h, per_v
-
-
-# def line_shrink_corners(corner, lines_h, lines_v):
-#     """
-#     shrink the corner according to lines:
-#         col_min_shrink: shrink right (increase)
-#         col_max_shrink: shrink left (decrease)
-#         row_min_shrink: shrink down (increase)
-#         row_max_shrink: shrink up (decrease)
-#     :param lines_h: horizontal {'head':(column_min, row), 'end':(column_max, row), 'thickness':int)
-#     :param lines_v: vertical {'head':(column, row_min), 'end':(column, row_max), 'thickness':int}
-#     :return: shrunken corner: (top_left, bottom_right)
-#     """
-#     (col_min, row_min), (col_max, row_max) = corner
-#     col_min_shrink, row_min_shrink = col_min, row_min
-#     col_max_shrink, row_max_shrink = col_max, row_max
-#     valid_frame = False
-#
-#     for h in lines_h:
-#         # ignore outer border
-#         if len(h['inter_point']) == 2:
-#             valid_frame = True
-#             continue
-#         # shrink right -> col_min move to end
-#         if h['inter_point'][0] == 'head':
-#             col_min_shrink = max(h['end'][0], col_min_shrink)
-#         # shrink left -> col_max move to head
-#         elif h['inter_point'][0] == 'end':
-#             col_max_shrink = min(h['head'][0], col_max_shrink)
-#
-#     for v in lines_v:
-#         # ignore outer border
-#         if len(v['inter_point']) == 2:
-#             valid_frame = True
-#             continue
-#         # shrink down -> row_min move to end
-#         if v['inter_point'][0] == 'head':
-#             row_min_shrink = max(v['end'][1], row_min_shrink)
-#         # shrink up -> row_max move to head
-#         elif v['inter_point'][0] == 'end':
-#             row_max_shrink = min(v['head'][1], row_max_shrink)
-#
-#     # return the shrunken corner if only there is line intersecting with two other lines
-#     if valid_frame:
-#         return (col_min_shrink, row_min_shrink), (col_max_shrink, row_max_shrink)
-#     return corner
-
-
-# def line_cvt_relative_position(col_min, row_min, lines_h, lines_v):
-#     """
-#     convert the relative position of lines in the entire image
-#     :param col_min: based column the img lines belong to
-#     :param row_min: based row the img lines belong to
-#     :param lines_h: horizontal {'head':(column_min, row), 'end':(column_max, row), 'thickness':int)
-#     :param lines_v: vertical {'head':(column, row_min), 'end':(column, row_max), 'thickness':int}
-#     :return: lines_h_cvt, lines_v_cvt
-#     """
-#     for h in lines_h:
-#         h['head'][0] += col_min
-#         h['head'][1] += row_min
-#         h['end'][0] += col_min
-#         h['end'][1] += row_min
-#     for v in lines_v:
-#         v['head'][0] += col_min
-#         v['head'][1] += row_min
-#         v['end'][0] += col_min
-#         v['end'][1] += row_min
-#
-#     return lines_h, lines_v
-
-
-# check if an object is so slim
-# @boundary: [border_up, border_bottom, border_left, border_right]
-# -> up, bottom: (column_index, min/max row border)
-# -> left, right: (row_index, min/max column border) detect range of each row
-def clipping_by_line(boundary, boundary_rec, lines):
-    boundary = boundary.copy()
-    for orient in lines:
-        # horizontal
-        if orient == 'h':
-            # column range of sub area
-            r1, r2 = 0, 0
-            for line in lines[orient]:
-                if line[0] == 0:
-                    r1 = line[1]
-                    continue
-                r2 = line[0]
-                b_top = []
-                b_bottom = []
-                for i in range(len(boundary[0])):
-                    if r2 > boundary[0][i][0] >= r1:
-                        b_top.append(boundary[0][i])
-                for i in range(len(boundary[1])):
-                    if r2 > boundary[1][i][0] >= r1:
-                        b_bottom.append(boundary[1][i])
-
-                b_left = [x for x in boundary[2]]   # (row_index, min column border)
-                for i in range(len(b_left)):
-                    if b_left[i][1] < r1:
-                        b_left[i][1] = r1
-                b_right = [x for x in boundary[3]]  # (row_index, max column border)
-                for i in range(len(b_right)):
-                    if b_right[i][1] > r2:
-                        b_right[i][1] = r2
-
-                boundary_rec.append([b_top, b_bottom, b_left, b_right])
-                r1 = line[1]
-
-
-# remove imgs that contain text
-# def rm_text(org, corners, compo_class,
-#             max_text_height=C.THRESHOLD_TEXT_MAX_HEIGHT, max_text_width=C.THRESHOLD_TEXT_MAX_WIDTH,
-#             ocr_padding=C.OCR_PADDING, ocr_min_word_area=C.OCR_MIN_WORD_AREA, show=False):
-#     """
-#     Remove area that full of text
-#     :param org: original image
-#     :param corners: [(top_left, bottom_right)]
-#                     -> top_left: (column_min, row_min)
-#                     -> bottom_right: (column_max, row_max)
-#     :param compo_class: classes of corners
-#     :param max_text_height: Too large to be text
-#     :param max_text_width: Too large to be text
-#     :param ocr_padding: Padding for clipping
-#     :param ocr_min_word_area: If too text area ratio is too large
-#     :param show: Show or not
-#     :return: corners without text objects
-#     """
-#     new_corners = []
-#     new_class = []
-#     for i in range(len(corners)):
-#         corner = corners[i]
-#         (top_left, bottom_right) = corner
-#         (col_min, row_min) = top_left
-#         (col_max, row_max) = bottom_right
-#         height = row_max - row_min
-#         width = col_max - col_min
-#         # highly likely to be block or img if too large
-#         if height > max_text_height and width > max_text_width:
-#             new_corners.append(corner)
-#             new_class.append(compo_class[i])
-#         else:
-#             row_min = row_min - ocr_padding if row_min - ocr_padding >= 0 else 0
-#             row_max = row_max + ocr_padding if row_max + ocr_padding < org.shape[0] else org.shape[0]
-#             col_min = col_min - ocr_padding if col_min - ocr_padding >= 0 else 0
-#             col_max = col_max + ocr_padding if col_max + ocr_padding < org.shape[1] else org.shape[1]
-#             # check if this area is text
-#             clip = org[row_min: row_max, col_min: col_max]
-#             if not ocr.is_text(clip, ocr_min_word_area, show=show):
-#                 new_corners.append(corner)
-#                 new_class.append(compo_class[i])
-#     return new_corners, new_class
-
-
-# def rm_img_in_compo(corners_img, corners_compo):
-#     """
-#     Remove imgs in component
-#     """
-#     corners_img_new = []
-#     for img in corners_img:
-#         is_nested = False
-#         for compo in corners_compo:
-#             if util.corner_relation(img, compo) == -1:
-#                 is_nested = True
-#                 break
-#         if not is_nested:
-#             corners_img_new.append(img)
-#     return corners_img_new
-
-
-# def block_or_compo(org, binary, corners,
-#                    max_thickness=C.THRESHOLD_BLOCK_MAX_BORDER_THICKNESS, max_block_cross_points=C.THRESHOLD_BLOCK_MAX_CROSS_POINT,
-#                    min_compo_w_h_ratio=C.THRESHOLD_UICOMPO_MIN_W_H_RATIO, max_compo_w_h_ratio=C.THRESHOLD_UICOMPO_MAX_W_H_RATIO,
-#                    min_block_edge=C.THRESHOLD_BLOCK_MIN_EDGE_LENGTH):
-#     """
-#     Check if the objects are img components or just block
-#     :param org: Original image
-#     :param binary: Binary image from pre-processing
-#     :param corners: [(top_left, bottom_right)]
-#                     -> top_left: (column_min, row_min)
-#                     -> bottom_right: (column_max, row_max)
-#     :param max_thickness: The max thickness of border of blocks
-#     :param max_block_cross_points: Ratio of point of interaction
-#     :return: corners of blocks and imgs
-#     """
-#     blocks = []
-#     imgs = []
-#     compos = []
-#     for corner in corners:
-#         (top_left, bottom_right) = corner
-#         (col_min, row_min) = top_left
-#         (col_max, row_max) = bottom_right
-#         height = row_max - row_min
-#         width = col_max - col_min
-#
-#         block = False
-#         vacancy = [0, 0, 0, 0]
-#         for i in range(1, max_thickness):
-#             try:
-#                 # top to bottom
-#                 if vacancy[0] == 0 and (col_max - col_min - 2 * i) is not 0 and (
-#                         np.sum(binary[row_min + i, col_min + i: col_max - i]) / 255) / (col_max - col_min - 2 * i) <= max_block_cross_points:
-#                     vacancy[0] = 1
-#                 # bottom to top
-#                 if vacancy[1] == 0 and (col_max - col_min - 2 * i) is not 0 and (
-#                         np.sum(binary[row_max - i, col_min + i: col_max - i]) / 255) / (col_max - col_min - 2 * i) <= max_block_cross_points:
-#                     vacancy[1] = 1
-#                 # left to right
-#                 if vacancy[2] == 0 and (row_max - row_min - 2 * i) is not 0 and (
-#                         np.sum(binary[row_min + i: row_max - i, col_min + i]) / 255) / (row_max - row_min - 2 * i) <= max_block_cross_points:
-#                     vacancy[2] = 1
-#                 # right to left
-#                 if vacancy[3] == 0 and (row_max - row_min - 2 * i) is not 0 and (
-#                         np.sum(binary[row_min + i: row_max - i, col_max - i]) / 255) / (row_max - row_min - 2 * i) <= max_block_cross_points:
-#                     vacancy[3] = 1
-#                 if np.sum(vacancy) == 4:
-#                     block = True
-#             except:
-#                 pass
-#
-#         # too big to be UI components
-#         if block:
-#             if height > min_block_edge and width > min_block_edge:
-#                 blocks.append(corner)
-#             else:
-#                 if min_compo_w_h_ratio < width / height < max_compo_w_h_ratio:
-#                     compos.append(corner)
-#         # filter out small objects
-#         else:
-#             if height > min_block_edge:
-#                 imgs.append(corner)
-#             else:
-#                 if min_compo_w_h_ratio < width / height < max_compo_w_h_ratio:
-#                     compos.append(corner)
-#     return blocks, imgs, compos
-
-
-# def compo_on_img(processing, org, binary, clf,
-#                  compos_corner, compos_class):
-#     """
-#     Detect potential UI components inner img;
-#     Only leave non-img
-#     """
-#     pad = 2
-#     for i in range(len(compos_corner)):
-#         if compos_class[i] != 'img':
-#             continue
-#         ((col_min, row_min), (col_max, row_max)) = compos_corner[i]
-#         col_min = max(col_min - pad, 0)
-#         col_max = min(col_max + pad, org.shape[1])
-#         row_min = max(row_min - pad, 0)
-#         row_max = min(row_max + pad, org.shape[0])
-#         area = (col_max - col_min) * (row_max - row_min)
-#         if area < 600:
-#             continue
-#
-#         clip_org = org[row_min:row_max, col_min:col_max]
-#         clip_bin_inv = pre.reverse_binary(binary[row_min:row_max, col_min:col_max])
-#
-#         compos_boundary_new, compos_corner_new, compos_class_new = processing(clip_org, clip_bin_inv, clf)
-#         compos_corner_new = util.corner_cvt_relative_position(compos_corner_new, col_min, row_min)
-#
-#         assert len(compos_corner_new) == len(compos_class_new)
-#
-#         # only leave non-img elements
-#         for i in range(len(compos_corner_new)):
-#             ((col_min_new, row_min_new), (col_max_new, row_max_new)) = compos_corner_new[i]
-#             area_new = (col_max_new - col_min_new) * (row_max_new - row_min_new)
-#             if compos_class_new[i] != 'img' and area_new / area < 0.8:
-#                 compos_corner.append(compos_corner_new[i])
-#                 compos_class.append(compos_class_new[i])
-#
-#     return compos_corner, compos_class
-
-
-# def strip_img(corners_compo, compos_class, corners_img):
-#     """
-#     Separate img from other compos
-#     :return: compos without img
-#     """
-#     corners_compo_withuot_img = []
-#     compo_class_withuot_img = []
-#     for i in range(len(compos_class)):
-#         if compos_class[i] == 'img':
-#             corners_img.append(corners_compo[i])
-#         else:
-#             corners_compo_withuot_img.append(corners_compo[i])
-#             compo_class_withuot_img.append(compos_class[i])
-#     return corners_compo_withuot_img, compo_class_withuot_img
-
-
-# def merge_corner(corners, compos_class, min_selected_IoU=C.THRESHOLD_MIN_IOU, is_merge_nested_same=True):
-#     """
-#     Calculate the Intersection over Overlap (IoU) and merge corners according to the value of IoU
-#     :param is_merge_nested_same: if true, merge the nested corners with same class whatever the IoU is
-#     :param corners: corners: [(top_left, bottom_right)]
-#                     -> top_left: (column_min, row_min)
-#                     -> bottom_right: (column_max, row_max)
-#     :return: new corners
-#     """
-#     new_corners = []
-#     new_class = []
-#     for i in range(len(corners)):
-#         is_intersected = False
-#         for j in range(len(new_corners)):
-#             r = util.corner_relation_nms(corners[i], new_corners[j], min_selected_IoU)
-#             # r = util.corner_relation(corners[i], new_corners[j])
-#             if is_merge_nested_same:
-#                 if compos_class[i] == new_class[j]:
-#                     # if corners[i] is in new_corners[j], ignore corners[i]
-#                     if r == -1:
-#                         is_intersected = True
-#                         break
-#                     # if new_corners[j] is in corners[i], replace new_corners[j] with corners[i]
-#                     elif r == 1:
-#                         is_intersected = True
-#                         new_corners[j] = corners[i]
-#
-#             # if above IoU threshold, and corners[i] is in new_corners[j], ignore corners[i]
-#             if r == -2:
-#                 is_intersected = True
-#                 break
-#             # if above IoU threshold, and new_corners[j] is in corners[i], replace new_corners[j] with corners[i]
-#             elif r == 2:
-#                 is_intersected = True
-#                 new_corners[j] = corners[i]
-#                 new_class[j] = compos_class[i]
-#
-#             # containing and too small
-#             elif r == -3:
-#                 is_intersected = True
-#                 break
-#             elif r == 3:
-#                 is_intersected = True
-#                 new_corners[j] = corners[i]
-#
-#             # if [i] and [j] are overlapped but no containing relation, merge corners when same class
-#             elif r == 4:
-#                 is_intersected = True
-#                 if compos_class[i] == new_class[j]:
-#                     new_corners[j] = util.corner_merge_two_corners(corners[i], new_corners[j])
-#
-#         if not is_intersected:
-#             new_corners.append(corners[i])
-#             new_class.append(compos_class[i])
-#     return new_corners, new_class
-
-
-# def select_corner(corners, compos_class, class_name):
-#     """
-#     Select corners in given compo type
-#     """
-#     corners_wanted = []
-#     for i in range(len(compos_class)):
-#         if compos_class[i] == class_name:
-#             corners_wanted.append(corners[i])
-#     return corners_wanted
mark[i, j] == 0 and abs(img[i, j] - img[x, y]) < grad_thresh: -# stack.append([i, j]) -# mark[i, j] = 255 -# -# stack = [[x_start, y_start]] # points waiting for inspection -# region = [[x_start, y_start]] # points of this connected region -# mark[x_start, y_start] = 255 # mark the seed point as visited on the drawing board -# while len(stack) > 0: -# point = stack.pop() -# region.append(point) -# neighbor(point[0], point[1]) -# return region \ No newline at end of file diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/laion_dataset.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/laion_dataset.py deleted file mode 100644 index 1be30abb188e1afad6fe678ccbb367931a2b3d26..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/laion_dataset.py +++ /dev/null @@ -1,31 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import webdataset as wds -from video_llama.datasets.datasets.base_dataset import BaseDataset - - -class LaionDataset(BaseDataset): - def __init__(self, vis_processor, text_processor, location): - super().__init__(vis_processor=vis_processor, text_processor=text_processor) - - self.inner_dataset = wds.DataPipeline( - wds.ResampledShards(location), - wds.tarfile_to_samples(handler=wds.warn_and_continue), - wds.shuffle(1000, handler=wds.warn_and_continue), - wds.decode("pilrgb", handler=wds.warn_and_continue), - wds.to_tuple("jpg", "json", handler=wds.warn_and_continue), - wds.map_tuple(self.vis_processor, handler=wds.warn_and_continue), - wds.map(self.to_dict, handler=wds.warn_and_continue), - ) - - def to_dict(self, sample): - return { - "image": sample[0], - "text_input": self.text_processor(sample[1]["caption"]), - } - diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/__main__.py deleted file mode 100644 index a05323f93b6850c2f86aedb3b1a5dee16358027f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/__main__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .features import pilinfo - -pilinfo() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/reportLabPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/reportLabPen.py deleted file mode 100644 index 2cb89c8bf4c772b7a987edb0593c40c83cc2201b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/reportLabPen.py +++ /dev/null @@ -1,80 +0,0 @@ -from fontTools.pens.basePen import BasePen -from reportlab.graphics.shapes import Path - - -__all__ = ["ReportLabPen"] - - -class ReportLabPen(BasePen): - - """A pen for drawing onto a ``reportlab.graphics.shapes.Path`` object.""" - - def __init__(self, glyphSet, path=None): - BasePen.__init__(self, glyphSet) - if path is None: - path = Path() - self.path = path - - def _moveTo(self, p): - (x, y) = p - self.path.moveTo(x, y) - - def _lineTo(self, p): - (x, y) = p - self.path.lineTo(x, y) - - def _curveToOne(self, p1, p2, p3): - (x1, y1) = p1 - (x2, y2) = p2 - (x3, y3) = p3 - self.path.curveTo(x1, y1, x2, y2, x3, y3) - - def _closePath(self): - self.path.closePath() - - -if __name__ == "__main__": - import sys - - if len(sys.argv) < 3: - print( - "Usage: reportLabPen.py <OTF/TTF font> <glyphname> [<image file name>]" - ) - 
print( - " If no image file name is given, by default <glyphname>.png is created." - ) - print(" example: reportLabPen.py Arial.TTF R test.png") - print( - " (The file format will be PNG, regardless of the image file name supplied)" - ) - sys.exit(0) - - from fontTools.ttLib import TTFont - from reportlab.lib import colors - - path = sys.argv[1] - glyphName = sys.argv[2] - if len(sys.argv) > 3: - imageFile = sys.argv[3] - else: - imageFile = "%s.png" % glyphName - - font = TTFont(path) # it would work just as well with fontTools.t1Lib.T1Font - gs = font.getGlyphSet() - pen = ReportLabPen(gs, Path(fillColor=colors.red, strokeWidth=5)) - g = gs[glyphName] - g.draw(pen) - - w, h = g.width, 1000 - from reportlab.graphics import renderPM - from reportlab.graphics.shapes import Group, Drawing, scale - - # Everything is wrapped in a group to allow transformations. - g = Group(pen.path) - g.translate(0, 200) - g.scale(0.3, 0.3) - - d = Drawing(w, h) - d.add(g) - - renderPM.drawToFile(d, imageFile, fmt="PNG") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/DefaultTable.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/DefaultTable.py deleted file mode 100644 index 32a4b1f258f54d78ad39eb764867a6c354939743..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/DefaultTable.py +++ /dev/null @@ -1,50 +0,0 @@ -from fontTools.misc.textTools import Tag -from fontTools.ttLib import getClassTag - - -class DefaultTable(object): - - dependencies = [] - - def __init__(self, tag=None): - if tag is None: - tag = getClassTag(self.__class__) - self.tableTag = Tag(tag) - - def decompile(self, data, ttFont): - self.data = data - - def compile(self, ttFont): - return self.data - - def toXML(self, writer, ttFont, **kwargs): - if hasattr(self, "ERROR"): - writer.comment("An error occurred during the decompilation of this table") - writer.newline() - writer.comment(self.ERROR) - writer.newline() - writer.begintag("hexdata") - writer.newline() - writer.dumphex(self.compile(ttFont)) - writer.endtag("hexdata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - from fontTools.misc.textTools import readHex - from fontTools import ttLib - - if name != "hexdata": - raise ttLib.TTLibError("can't handle '%s' element" % name) - self.decompile(readHex(content), ttFont) - - def __repr__(self): - return "<'%s' table at %x>" % (self.tableTag, id(self)) - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/build.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/build.py deleted file mode 100644 index 6460ad7debbc459b72815b1199d8381c281daf52..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/build.py +++ /dev/null @@ -1,115 +0,0 @@ -""" -@Date: 2021/07/18 -@description: -""" -import numpy as np -import torch.utils.data -from dataset.mp3d_dataset import MP3DDataset -from dataset.pano_s2d3d_dataset import PanoS2D3DDataset -from dataset.pano_s2d3d_mix_dataset import PanoS2D3DMixDataset -from dataset.zind_dataset import ZindDataset - - -def build_loader(config, logger): - name = config.DATA.DATASET - ddp = config.WORLD_SIZE > 1 - 
train_dataset = None - train_data_loader = None - if config.MODE == 'train': - train_dataset = build_dataset(mode='train', config=config, logger=logger) - - val_dataset = build_dataset(mode=config.VAL_NAME if config.MODE != 'test' else 'test', config=config, logger=logger) - - train_sampler = None - val_sampler = None - if ddp: - if train_dataset: - train_sampler = torch.utils.data.DistributedSampler(train_dataset, shuffle=True) - val_sampler = torch.utils.data.DistributedSampler(val_dataset, shuffle=False) - - batch_size = config.DATA.BATCH_SIZE - num_workers = 0 if config.DEBUG else config.DATA.NUM_WORKERS - pin_memory = config.DATA.PIN_MEMORY - if train_dataset: - logger.info(f'Train data loader batch size: {batch_size}') - train_data_loader = torch.utils.data.DataLoader( - train_dataset, sampler=train_sampler, - batch_size=batch_size, - shuffle=True, - num_workers=num_workers, - pin_memory=pin_memory, - drop_last=True, - ) - batch_size = batch_size - (len(val_dataset) % np.arange(batch_size, 0, -1)).tolist().index(0) - logger.info(f'Val data loader batch size: {batch_size}') - val_data_loader = torch.utils.data.DataLoader( - val_dataset, sampler=val_sampler, - batch_size=batch_size, - shuffle=False, - num_workers=num_workers, - pin_memory=pin_memory, - drop_last=False - ) - logger.info(f'Build data loader: num_workers:{num_workers} pin_memory:{pin_memory}') - return train_data_loader, val_data_loader - - -def build_dataset(mode, config, logger): - name = config.DATA.DATASET - if name == 'mp3d': - dataset = MP3DDataset( - root_dir=config.DATA.DIR, - mode=mode, - shape=config.DATA.SHAPE, - max_wall_num=config.DATA.WALL_NUM, - aug=config.DATA.AUG if mode == 'train' else None, - camera_height=config.DATA.CAMERA_HEIGHT, - logger=logger, - for_test_index=config.DATA.FOR_TEST_INDEX, - keys=config.DATA.KEYS - ) - elif name == 'pano_s2d3d': - dataset = PanoS2D3DDataset( - root_dir=config.DATA.DIR, - mode=mode, - shape=config.DATA.SHAPE, - max_wall_num=config.DATA.WALL_NUM, - aug=config.DATA.AUG if mode == 'train' else None, - camera_height=config.DATA.CAMERA_HEIGHT, - logger=logger, - for_test_index=config.DATA.FOR_TEST_INDEX, - subset=config.DATA.SUBSET, - keys=config.DATA.KEYS - ) - elif name == 'pano_s2d3d_mix': - dataset = PanoS2D3DMixDataset( - root_dir=config.DATA.DIR, - mode=mode, - shape=config.DATA.SHAPE, - max_wall_num=config.DATA.WALL_NUM, - aug=config.DATA.AUG if mode == 'train' else None, - camera_height=config.DATA.CAMERA_HEIGHT, - logger=logger, - for_test_index=config.DATA.FOR_TEST_INDEX, - subset=config.DATA.SUBSET, - keys=config.DATA.KEYS - ) - elif name == 'zind': - dataset = ZindDataset( - root_dir=config.DATA.DIR, - mode=mode, - shape=config.DATA.SHAPE, - max_wall_num=config.DATA.WALL_NUM, - aug=config.DATA.AUG if mode == 'train' else None, - camera_height=config.DATA.CAMERA_HEIGHT, - logger=logger, - for_test_index=config.DATA.FOR_TEST_INDEX, - is_simple=True, - is_ceiling_flat=False, - keys=config.DATA.KEYS, - vp_align=config.EVAL.POST_PROCESSING is not None and 'manhattan' in config.EVAL.POST_PROCESSING - ) - else: - raise NotImplementedError(f"Unknown dataset: {name}") - - return dataset diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/page/index.tsx b/spaces/Detomo/ai-comic-generation/src/app/interface/page/index.tsx deleted file mode 100644 index 9a4c4fbf9ee68d2e95234c4b33fee0b0b34fa4c1..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/interface/page/index.tsx +++ /dev/null @@ -1,59 +0,0 @@ -import { allLayouts } 
from "@/app/layouts" -import { useStore } from "@/app/store" -import { cn } from "@/lib/utils" -import { useEffect, useRef } from "react" - -export function Page({ page }: { page: number }) { - const zoomLevel = useStore(state => state.zoomLevel) - const layouts = useStore(state => state.layouts) - // const prompt = useStore(state => state.prompt) - - const LayoutElement = (allLayouts as any)[layouts[page]] - - /* - const [canLoad, setCanLoad] = useState(false) - useEffect(() => { - if (prompt?.length) { - setCanLoad(false) - setTimeout(() => { - setCanLoad(true) - }, page * 4000) - } - }, [prompt]) - */ - - const setPage = useStore(state => state.setPage) - const pageRef = useRef(null) - - useEffect(() => { - const element = pageRef.current - if (!element) { return } - setPage(element) - }, [pageRef.current]) - - return ( -
    100 ? `100`}` - }} - > - -
    - ) -} \ No newline at end of file diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/submission/run_context.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/submission/run_context.py deleted file mode 100644 index 932320e4735bde1b547ac6062b175601b7959547..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/submission/run_context.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Helpers for managing the run/training loop.""" - -import datetime -import json -import os -import pprint -import time -import types - -from typing import Any - -from . import submit - - -class RunContext(object): - """Helper class for managing the run/training loop. - - The context will hide the implementation details of a basic run/training loop. - It will set things up properly, tell if run should be stopped, and then cleans up. - User should call update periodically and use should_stop to determine if run should be stopped. - - Args: - submit_config: The SubmitConfig that is used for the current run. - config_module: The whole config module that is used for the current run. - max_epoch: Optional cached value for the max_epoch variable used in update. - """ - - def __init__(self, submit_config: submit.SubmitConfig, config_module: types.ModuleType = None, max_epoch: Any = None): - self.submit_config = submit_config - self.should_stop_flag = False - self.has_closed = False - self.start_time = time.time() - self.last_update_time = time.time() - self.last_update_interval = 0.0 - self.max_epoch = max_epoch - - # pretty print the all the relevant content of the config module to a text file - if config_module is not None: - with open(os.path.join(submit_config.run_dir, "config.txt"), "w") as f: - filtered_dict = {k: v for k, v in config_module.__dict__.items() if not k.startswith("_") and not isinstance(v, (types.ModuleType, types.FunctionType, types.LambdaType, submit.SubmitConfig, type))} - pprint.pprint(filtered_dict, stream=f, indent=4, width=200, compact=False) - - # write out details about the run to a text file - self.run_txt_data = {"task_name": submit_config.task_name, "host_name": submit_config.host_name, "start_time": datetime.datetime.now().isoformat(sep=" ")} - with open(os.path.join(submit_config.run_dir, "run.txt"), "w") as f: - pprint.pprint(self.run_txt_data, stream=f, indent=4, width=200, compact=False) - - def __enter__(self) -> "RunContext": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def update(self, loss: Any = 0, cur_epoch: Any = 0, max_epoch: Any = None) -> None: - """Do general housekeeping and keep the state of the context up-to-date. 
- Should be called often enough but not in a tight loop.""" - assert not self.has_closed - - self.last_update_interval = time.time() - self.last_update_time - self.last_update_time = time.time() - - if os.path.exists(os.path.join(self.submit_config.run_dir, "abort.txt")): - self.should_stop_flag = True - - max_epoch_val = self.max_epoch if max_epoch is None else max_epoch - - def should_stop(self) -> bool: - """Tell whether a stopping condition has been triggered one way or another.""" - return self.should_stop_flag - - def get_time_since_start(self) -> float: - """How much time has passed since the creation of the context.""" - return time.time() - self.start_time - - def get_time_since_last_update(self) -> float: - """How much time has passed since the last call to update.""" - return time.time() - self.last_update_time - - def get_last_update_interval(self) -> float: - """How much time passed between the previous two calls to update.""" - return self.last_update_interval - - def close(self) -> None: - """Close the context and clean up. - Should only be called once.""" - if not self.has_closed: - # update the run.txt with stopping time - self.run_txt_data["stop_time"] = datetime.datetime.now().isoformat(sep=" ") - with open(os.path.join(self.submit_config.run_dir, "run.txt"), "w") as f: - pprint.pprint(self.run_txt_data, stream=f, indent=4, width=200, compact=False) - - self.has_closed = True diff --git a/spaces/DragGan/DragGan-Inversion/gui_utils/text_utils.py b/spaces/DragGan/DragGan-Inversion/gui_utils/text_utils.py deleted file mode 100644 index d1d971d9defa9a223d5b4b19def17f351a262833..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/gui_utils/text_utils.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import functools -from typing import Optional - -import dnnlib -import numpy as np -import PIL.Image -import PIL.ImageFont -import scipy.ndimage - -from . 
import gl_utils - -# ---------------------------------------------------------------------------- - - -def get_default_font(): - # Open Sans regular - url = 'http://fonts.gstatic.com/s/opensans/v17/mem8YaGs126MiZpBA-U1UpcaXcl0Aw.ttf' - return dnnlib.util.open_url(url, return_filename=True) - -# ---------------------------------------------------------------------------- - - -@functools.lru_cache(maxsize=None) -def get_pil_font(font=None, size=32): - if font is None: - font = get_default_font() - return PIL.ImageFont.truetype(font=font, size=size) - -# ---------------------------------------------------------------------------- - - -def get_array(string, *, dropshadow_radius: int = None, **kwargs): - if dropshadow_radius is not None: - offset_x = int(np.ceil(dropshadow_radius*2/3)) - offset_y = int(np.ceil(dropshadow_radius*2/3)) - return _get_array_priv(string, dropshadow_radius=dropshadow_radius, offset_x=offset_x, offset_y=offset_y, **kwargs) - else: - return _get_array_priv(string, **kwargs) - - -@functools.lru_cache(maxsize=10000) -def _get_array_priv( - string: str, *, - size: int = 32, - max_width: Optional[int] = None, - max_height: Optional[int] = None, - min_size=10, - shrink_coef=0.8, - dropshadow_radius: int = None, - offset_x: int = None, - offset_y: int = None, - **kwargs -): - cur_size = size - array = None - while True: - if dropshadow_radius is not None: - # separate implementation for dropshadow text rendering - array = _get_array_impl_dropshadow( - string, size=cur_size, radius=dropshadow_radius, offset_x=offset_x, offset_y=offset_y, **kwargs) - else: - array = _get_array_impl(string, size=cur_size, **kwargs) - height, width, _ = array.shape - if (max_width is None or width <= max_width) and (max_height is None or height <= max_height) or (cur_size <= min_size): - break - cur_size = max(int(cur_size * shrink_coef), min_size) - return array - -# ---------------------------------------------------------------------------- - - -@functools.lru_cache(maxsize=10000) -def _get_array_impl(string, *, font=None, size=32, outline=0, outline_pad=3, outline_coef=3, outline_exp=2, line_pad: int = None): - pil_font = get_pil_font(font=font, size=size) - lines = [pil_font.getmask(line, 'L') for line in string.split('\n')] - lines = [np.array(line, dtype=np.uint8).reshape( - [line.size[1], line.size[0]]) for line in lines] - width = max(line.shape[1] for line in lines) - lines = [np.pad(line, ((0, 0), (0, width - line.shape[1])), - mode='constant') for line in lines] - line_spacing = line_pad if line_pad is not None else size // 2 - lines = [np.pad(line, ((0, line_spacing), (0, 0)), mode='constant') - for line in lines[:-1]] + lines[-1:] - mask = np.concatenate(lines, axis=0) - alpha = mask - if outline > 0: - mask = np.pad(mask, int(np.ceil(outline * outline_pad)), - mode='constant', constant_values=0) - alpha = mask.astype(np.float32) / 255 - alpha = scipy.ndimage.gaussian_filter(alpha, outline) - alpha = 1 - np.maximum(1 - alpha * outline_coef, 0) ** outline_exp - alpha = (alpha * 255 + 0.5).clip(0, 255).astype(np.uint8) - alpha = np.maximum(alpha, mask) - return np.stack([mask, alpha], axis=-1) - -# ---------------------------------------------------------------------------- - - -@functools.lru_cache(maxsize=10000) -def _get_array_impl_dropshadow(string, *, font=None, size=32, radius: int, offset_x: int, offset_y: int, line_pad: int = None, **kwargs): - assert (offset_x > 0) and (offset_y > 0) - pil_font = get_pil_font(font=font, size=size) - lines = [pil_font.getmask(line, 'L') for 
line in string.split('\n')] - lines = [np.array(line, dtype=np.uint8).reshape( - [line.size[1], line.size[0]]) for line in lines] - width = max(line.shape[1] for line in lines) - lines = [np.pad(line, ((0, 0), (0, width - line.shape[1])), - mode='constant') for line in lines] - line_spacing = line_pad if line_pad is not None else size // 2 - lines = [np.pad(line, ((0, line_spacing), (0, 0)), mode='constant') - for line in lines[:-1]] + lines[-1:] - mask = np.concatenate(lines, axis=0) - alpha = mask - - mask = np.pad(mask, 2*radius + max(abs(offset_x), abs(offset_y)), - mode='constant', constant_values=0) - alpha = mask.astype(np.float32) / 255 - alpha = scipy.ndimage.gaussian_filter(alpha, radius) - alpha = 1 - np.maximum(1 - alpha * 1.5, 0) ** 1.4 - alpha = (alpha * 255 + 0.5).clip(0, 255).astype(np.uint8) - alpha = np.pad(alpha, [(offset_y, 0), (offset_x, 0)], - mode='constant')[:-offset_y, :-offset_x] - alpha = np.maximum(alpha, mask) - return np.stack([mask, alpha], axis=-1) - -# ---------------------------------------------------------------------------- - - -@functools.lru_cache(maxsize=10000) -def get_texture(string, bilinear=True, mipmap=True, **kwargs): - return gl_utils.Texture(image=get_array(string, **kwargs), bilinear=bilinear, mipmap=mipmap) - -# ---------------------------------------------------------------------------- diff --git a/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/kalman_filter.py b/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/kalman_filter.py deleted file mode 100644 index deda8a26292b81bc6512a8f6145afabde6c16d7a..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/kalman_filter.py +++ /dev/null @@ -1,270 +0,0 @@ -# vim: expandtab:ts=4:sw=4 -import numpy as np -import scipy.linalg - - -""" -Table for the 0.95 quantile of the chi-square distribution with N degrees of -freedom (contains values for N=1, ..., 9). Taken from MATLAB/Octave's chi2inv -function and used as Mahalanobis gating threshold. -""" -chi2inv95 = { - 1: 3.8415, - 2: 5.9915, - 3: 7.8147, - 4: 9.4877, - 5: 11.070, - 6: 12.592, - 7: 14.067, - 8: 15.507, - 9: 16.919} - - -class KalmanFilter(object): - """ - A simple Kalman filter for tracking bounding boxes in image space. - - The 8-dimensional state space - - x, y, a, h, vx, vy, va, vh - - contains the bounding box center position (x, y), aspect ratio a, height h, - and their respective velocities. - - Object motion follows a constant velocity model. The bounding box location - (x, y, a, h) is taken as direct observation of the state space (linear - observation model). - - """ - - def __init__(self): - ndim, dt = 4, 1. - - # Create Kalman filter model matrices. - self._motion_mat = np.eye(2 * ndim, 2 * ndim) - for i in range(ndim): - self._motion_mat[i, ndim + i] = dt - self._update_mat = np.eye(ndim, 2 * ndim) - - # Motion and observation uncertainty are chosen relative to the current - # state estimate. These weights control the amount of uncertainty in - # the model. This is a bit hacky. - self._std_weight_position = 1. / 20 - self._std_weight_velocity = 1. / 160 - - def initiate(self, measurement): - """Create track from unassociated measurement. - - Parameters - ---------- - measurement : ndarray - Bounding box coordinates (x, y, a, h) with center position (x, y), - aspect ratio a, and height h. - - Returns - ------- - (ndarray, ndarray) - Returns the mean vector (8 dimensional) and covariance matrix (8x8 - dimensional) of the new track. Unobserved velocities are initialized - to 0 mean. 
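- - Illustrative example (added for clarity; not from the original source, the measurement values are arbitrary): - - >>> kf = KalmanFilter() - >>> mean, covariance = kf.initiate(np.array([320., 240., 0.5, 100.])) - >>> mean.shape, covariance.shape - ((8,), (8, 8))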
- - """ - mean_pos = measurement - mean_vel = np.zeros_like(mean_pos) - mean = np.r_[mean_pos, mean_vel] - - std = [ - 2 * self._std_weight_position * measurement[3], - 2 * self._std_weight_position * measurement[3], - 1e-2, - 2 * self._std_weight_position * measurement[3], - 10 * self._std_weight_velocity * measurement[3], - 10 * self._std_weight_velocity * measurement[3], - 1e-5, - 10 * self._std_weight_velocity * measurement[3]] - covariance = np.diag(np.square(std)) - return mean, covariance - - def predict(self, mean, covariance): - """Run Kalman filter prediction step. - - Parameters - ---------- - mean : ndarray - The 8 dimensional mean vector of the object state at the previous - time step. - covariance : ndarray - The 8x8 dimensional covariance matrix of the object state at the - previous time step. - - Returns - ------- - (ndarray, ndarray) - Returns the mean vector and covariance matrix of the predicted - state. Unobserved velocities are initialized to 0 mean. - - """ - std_pos = [ - self._std_weight_position * mean[3], - self._std_weight_position * mean[3], - 1e-2, - self._std_weight_position * mean[3]] - std_vel = [ - self._std_weight_velocity * mean[3], - self._std_weight_velocity * mean[3], - 1e-5, - self._std_weight_velocity * mean[3]] - motion_cov = np.diag(np.square(np.r_[std_pos, std_vel])) - - #mean = np.dot(self._motion_mat, mean) - mean = np.dot(mean, self._motion_mat.T) - covariance = np.linalg.multi_dot(( - self._motion_mat, covariance, self._motion_mat.T)) + motion_cov - - return mean, covariance - - def project(self, mean, covariance): - """Project state distribution to measurement space. - - Parameters - ---------- - mean : ndarray - The state's mean vector (8 dimensional array). - covariance : ndarray - The state's covariance matrix (8x8 dimensional). - - Returns - ------- - (ndarray, ndarray) - Returns the projected mean and covariance matrix of the given state - estimate. - - """ - std = [ - self._std_weight_position * mean[3], - self._std_weight_position * mean[3], - 1e-1, - self._std_weight_position * mean[3]] - innovation_cov = np.diag(np.square(std)) - - mean = np.dot(self._update_mat, mean) - covariance = np.linalg.multi_dot(( - self._update_mat, covariance, self._update_mat.T)) - return mean, covariance + innovation_cov - - def multi_predict(self, mean, covariance): - """Run Kalman filter prediction step (Vectorized version). - Parameters - ---------- - mean : ndarray - The Nx8 dimensional mean matrix of the object states at the previous - time step. - covariance : ndarray - The Nx8x8 dimensional covariance matrics of the object states at the - previous time step. - Returns - ------- - (ndarray, ndarray) - Returns the mean vector and covariance matrix of the predicted - state. Unobserved velocities are initialized to 0 mean. 
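- In matrix form this computes mean' = mean @ F.T and P' = F @ P @ F.T + Q for all N tracks at once, where F is the 8x8 constant-velocity motion matrix (``self._motion_mat``) and Q is the motion noise covariance built below.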
- """ - std_pos = [ - self._std_weight_position * mean[:, 3], - self._std_weight_position * mean[:, 3], - 1e-2 * np.ones_like(mean[:, 3]), - self._std_weight_position * mean[:, 3]] - std_vel = [ - self._std_weight_velocity * mean[:, 3], - self._std_weight_velocity * mean[:, 3], - 1e-5 * np.ones_like(mean[:, 3]), - self._std_weight_velocity * mean[:, 3]] - sqr = np.square(np.r_[std_pos, std_vel]).T - - motion_cov = [] - for i in range(len(mean)): - motion_cov.append(np.diag(sqr[i])) - motion_cov = np.asarray(motion_cov) - - mean = np.dot(mean, self._motion_mat.T) - left = np.dot(self._motion_mat, covariance).transpose((1, 0, 2)) - covariance = np.dot(left, self._motion_mat.T) + motion_cov - - return mean, covariance - - def update(self, mean, covariance, measurement): - """Run Kalman filter correction step. - - Parameters - ---------- - mean : ndarray - The predicted state's mean vector (8 dimensional). - covariance : ndarray - The state's covariance matrix (8x8 dimensional). - measurement : ndarray - The 4 dimensional measurement vector (x, y, a, h), where (x, y) - is the center position, a the aspect ratio, and h the height of the - bounding box. - - Returns - ------- - (ndarray, ndarray) - Returns the measurement-corrected state distribution. - - """ - projected_mean, projected_cov = self.project(mean, covariance) - - chol_factor, lower = scipy.linalg.cho_factor( - projected_cov, lower=True, check_finite=False) - kalman_gain = scipy.linalg.cho_solve( - (chol_factor, lower), np.dot(covariance, self._update_mat.T).T, - check_finite=False).T - innovation = measurement - projected_mean - - new_mean = mean + np.dot(innovation, kalman_gain.T) - new_covariance = covariance - np.linalg.multi_dot(( - kalman_gain, projected_cov, kalman_gain.T)) - return new_mean, new_covariance - - def gating_distance(self, mean, covariance, measurements, - only_position=False, metric='maha'): - """Compute gating distance between state distribution and measurements. - A suitable distance threshold can be obtained from `chi2inv95`. If - `only_position` is False, the chi-square distribution has 4 degrees of - freedom, otherwise 2. - Parameters - ---------- - mean : ndarray - Mean vector over the state distribution (8 dimensional). - covariance : ndarray - Covariance of the state distribution (8x8 dimensional). - measurements : ndarray - An Nx4 dimensional matrix of N measurements, each in - format (x, y, a, h) where (x, y) is the bounding box center - position, a the aspect ratio, and h the height. - only_position : Optional[bool] - If True, distance computation is done with respect to the bounding - box center position only. - Returns - ------- - ndarray - Returns an array of length N, where the i-th element contains the - squared Mahalanobis distance between (mean, covariance) and - `measurements[i]`. 
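- For example, with the full 4-dimensional observation an association is typically kept only when the returned distance is at most chi2inv95[4] (9.4877); with only_position=True the threshold is chi2inv95[2] (5.9915).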
- """ - mean, covariance = self.project(mean, covariance) - if only_position: - mean, covariance = mean[:2], covariance[:2, :2] - measurements = measurements[:, :2] - - d = measurements - mean - if metric == 'gaussian': - return np.sum(d * d, axis=1) - elif metric == 'maha': - cholesky_factor = np.linalg.cholesky(covariance) - z = scipy.linalg.solve_triangular( - cholesky_factor, d.T, lower=True, check_finite=False, - overwrite_b=True) - squared_maha = np.sum(z * z, axis=0) - return squared_maha - else: - raise ValueError('invalid distance metric') \ No newline at end of file diff --git a/spaces/EleutherAI/magma/magma/image_encoders.py b/spaces/EleutherAI/magma/magma/image_encoders.py deleted file mode 100644 index 69e5ca11cef483032e40ae5c5b5ddbb86711927d..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/magma/magma/image_encoders.py +++ /dev/null @@ -1,91 +0,0 @@ -import torch -import torch.nn as nn -from typing import Callable, Union -from torchtyping import patch_typeguard -from einops import rearrange -import timm -import clip -from functools import partial - -# ----------------------------- Utils -------------------------------------- - -clip.model.LayerNorm = ( - nn.LayerNorm -) # we need to patch this for clip to work with deepspeed -patch_typeguard() # needed for torchtyping typechecks to work - - -class Lambda(torch.nn.Module): - def __init__(self, fn: Callable): - super().__init__() - assert hasattr(fn, "__call__") - self.fn = fn - - def forward(self, x): - return self.fn(x) - - -# ------------------------- Image encoders ---------------------------------- - - -def nfresnet50( - device: Union[torch.device, str] = None, pretrained: bool = True -) -> nn.Module: - """ - Loads nfresnet50 model, removing the pooling layer and replacing it with - an adaptive pooling layer. - """ - encoder = torch.nn.Sequential( - *list(timm.create_model("nf_resnet50", pretrained=pretrained).children())[:-1] - ) - pooling = torch.nn.AdaptiveAvgPool2d((1, 1)) - encoder = torch.nn.Sequential(encoder, pooling) - if device is not None: - encoder = encoder.to(device) - return encoder - - -def clip_encoder( - device: Union[torch.device, str] = None, name: str = "clip", -) -> nn.Module: - """ - Loads clip's image encoder module, discarding the lm component. - - If the variant is a resnet model, we also remove the attention pooling. 
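- - Illustrative usage (a sketch, not part of the original file; assumes the ``clip`` package is installed and CLIP's standard 224x224 input size): - - encoder = clip_encoder(device="cpu", name="clip") # loads the ViT-B/32 variant - features = encoder(torch.zeros(1, 3, 224, 224)) # -> image features of shape (1, 512)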
- """ - if name in ["clip", "ViT-B/32"]: - name = "ViT-B/32" - elif name in ["clip_resnet", "RN50x4"]: - name = "RN50x4" - elif name in ["clip_resnet_large", "RN50x16"]: - name = "RN50x16" - else: - raise ValueError(f"encoder {name} not recognized") - - encoder = clip.load(name, device=device)[0].visual - - if device is not None: - encoder = encoder.to(device) - - if "RN" in name: - # remove attention pooling - encoder.attnpool = Lambda( - partial(rearrange, pattern="b d h w -> b (h w) d") - ) # remove attn pooling, just use reshaped features - - return encoder - - -def get_image_encoder( - name: str, device: Union[torch.device, str] = None, pretrained: bool = False -) -> torch.nn.Module: - """ - Loads image encoder module - """ - if name == "nfresnet50": - encoder = nfresnet50(device=device, pretrained=pretrained) - elif "clip" in name: - encoder = clip_encoder(device=device, name=name) - else: - raise ValueError(f"image encoder {name} not recognized") - return encoder diff --git a/spaces/EuroPython2022/Model-Recommendation/README.md b/spaces/EuroPython2022/Model-Recommendation/README.md deleted file mode 100644 index ae7f2eabe12f5148411284e54eede4b2312b3c40..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Model-Recommendation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Model Recommendation -emoji: 🏃 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.0.26 -app_file: App.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FaceOnLive/Face-Liveness-Detection-SDK/facewrapper/facewrapper.py b/spaces/FaceOnLive/Face-Liveness-Detection-SDK/facewrapper/facewrapper.py deleted file mode 100644 index 4b30d971e234ad1f49f829f83872d37f6ccd7535..0000000000000000000000000000000000000000 --- a/spaces/FaceOnLive/Face-Liveness-Detection-SDK/facewrapper/facewrapper.py +++ /dev/null @@ -1,31 +0,0 @@ -import ctypes, ctypes.util -from ctypes import * -from numpy.ctypeslib import ndpointer -import sys -import os -sys.path.append('/opt/intel/openvino_2022/runtime/lib/intel64') - -lib_path = os.path.abspath(os.path.dirname(__file__)) + '/libs/libttvfaceengine7.so' -liveness_engine = cdll.LoadLibrary(lib_path) - -ttv_version = liveness_engine.ttv_version -ttv_version.argtypes = [] -ttv_version.restype = ctypes.c_char_p - -ttv_get_hwid = liveness_engine.ttv_get_hwid -ttv_get_hwid.argtypes = [] -ttv_get_hwid.restype = ctypes.c_char_p - -ttv_init = liveness_engine.ttv_init -ttv_init.argtypes = [ctypes.c_char_p, ctypes.c_char_p] -ttv_init.restype = ctypes.c_int32 - -ttv_init_offline = liveness_engine.ttv_init_offline -ttv_init_offline.argtypes = [ctypes.c_char_p, ctypes.c_char_p] -ttv_init_offline.restype = ctypes.c_int32 - - -ttv_detect_face = liveness_engine.ttv_detect_face -ttv_detect_face.argtypes = [ndpointer(ctypes.c_ubyte, flags='C_CONTIGUOUS'), ctypes.c_int32, ctypes.c_int32, ndpointer(ctypes.c_int32, flags='C_CONTIGUOUS'), ndpointer(ctypes.c_double, flags='C_CONTIGUOUS'), ndpointer(ctypes.c_double, flags='C_CONTIGUOUS')] -ttv_detect_face.restype = ctypes.c_int32 - diff --git a/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/jquery-ui.min.js b/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/jquery-ui.min.js deleted file mode 100644 index 25398a167415050ae8bfb0bfebac6aa3ab790909..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/jquery-ui.min.js +++ /dev/null @@ -1,13 +0,0 @@ -/*! 
jQuery UI - v1.12.1 - 2016-09-14 -* http://jqueryui.com -* Includes: widget.js, position.js, data.js, disable-selection.js, effect.js, effects/effect-blind.js, effects/effect-bounce.js, effects/effect-clip.js, effects/effect-drop.js, effects/effect-explode.js, effects/effect-fade.js, effects/effect-fold.js, effects/effect-highlight.js, effects/effect-puff.js, effects/effect-pulsate.js, effects/effect-scale.js, effects/effect-shake.js, effects/effect-size.js, effects/effect-slide.js, effects/effect-transfer.js, focusable.js, form-reset-mixin.js, jquery-1-7.js, keycode.js, labels.js, scroll-parent.js, tabbable.js, unique-id.js, widgets/accordion.js, widgets/autocomplete.js, widgets/button.js, widgets/checkboxradio.js, widgets/controlgroup.js, widgets/datepicker.js, widgets/dialog.js, widgets/draggable.js, widgets/droppable.js, widgets/menu.js, widgets/mouse.js, widgets/progressbar.js, widgets/resizable.js, widgets/selectable.js, widgets/selectmenu.js, widgets/slider.js, widgets/sortable.js, widgets/spinner.js, widgets/tabs.js, widgets/tooltip.js -* Copyright jQuery Foundation and other contributors; Licensed MIT */ - -(function(t){"function"==typeof define&&define.amd?define(["jquery"],t):t(jQuery)})(function(t){function e(t){for(var e=t.css("visibility");"inherit"===e;)t=t.parent(),e=t.css("visibility");return"hidden"!==e}function i(t){for(var e,i;t.length&&t[0]!==document;){if(e=t.css("position"),("absolute"===e||"relative"===e||"fixed"===e)&&(i=parseInt(t.css("zIndex"),10),!isNaN(i)&&0!==i))return i;t=t.parent()}return 0}function s(){this._curInst=null,this._keyEvent=!1,this._disabledInputs=[],this._datepickerShowing=!1,this._inDialog=!1,this._mainDivId="ui-datepicker-div",this._inlineClass="ui-datepicker-inline",this._appendClass="ui-datepicker-append",this._triggerClass="ui-datepicker-trigger",this._dialogClass="ui-datepicker-dialog",this._disableClass="ui-datepicker-disabled",this._unselectableClass="ui-datepicker-unselectable",this._currentClass="ui-datepicker-current-day",this._dayOverClass="ui-datepicker-days-cell-over",this.regional=[],this.regional[""]={closeText:"Done",prevText:"Prev",nextText:"Next",currentText:"Today",monthNames:["January","February","March","April","May","June","July","August","September","October","November","December"],monthNamesShort:["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],dayNames:["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"],dayNamesShort:["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],dayNamesMin:["Su","Mo","Tu","We","Th","Fr","Sa"],weekHeader:"Wk",dateFormat:"mm/dd/yy",firstDay:0,isRTL:!1,showMonthAfterYear:!1,yearSuffix:""},this._defaults={showOn:"focus",showAnim:"fadeIn",showOptions:{},defaultDate:null,appendText:"",buttonText:"...",buttonImage:"",buttonImageOnly:!1,hideIfNoPrevNext:!1,navigationAsDateFormat:!1,gotoCurrent:!1,changeMonth:!1,changeYear:!1,yearRange:"c-10:c+10",showOtherMonths:!1,selectOtherMonths:!1,showWeek:!1,calculateWeek:this.iso8601Week,shortYearCutoff:"+10",minDate:null,maxDate:null,duration:"fast",beforeShowDay:null,beforeShow:null,onSelect:null,onChangeMonthYear:null,onClose:null,numberOfMonths:1,showCurrentAtPos:0,stepMonths:1,stepBigMonths:12,altField:"",altFormat:"",constrainInput:!0,showButtonPanel:!1,autoSize:!1,disabled:!1},t.extend(this._defaults,this.regional[""]),this.regional.en=t.extend(!0,{},this.regional[""]),this.regional["en-US"]=t.extend(!0,{},this.regional.en),this.dpDiv=n(t("
    "))}function n(e){var i="button, .ui-datepicker-prev, .ui-datepicker-next, .ui-datepicker-calendar td a";return e.on("mouseout",i,function(){t(this).removeClass("ui-state-hover"),-1!==this.className.indexOf("ui-datepicker-prev")&&t(this).removeClass("ui-datepicker-prev-hover"),-1!==this.className.indexOf("ui-datepicker-next")&&t(this).removeClass("ui-datepicker-next-hover")}).on("mouseover",i,o)}function o(){t.datepicker._isDisabledDatepicker(m.inline?m.dpDiv.parent()[0]:m.input[0])||(t(this).parents(".ui-datepicker-calendar").find("a").removeClass("ui-state-hover"),t(this).addClass("ui-state-hover"),-1!==this.className.indexOf("ui-datepicker-prev")&&t(this).addClass("ui-datepicker-prev-hover"),-1!==this.className.indexOf("ui-datepicker-next")&&t(this).addClass("ui-datepicker-next-hover"))}function a(e,i){t.extend(e,i);for(var s in i)null==i[s]&&(e[s]=i[s]);return e}function r(t){return function(){var e=this.element.val();t.apply(this,arguments),this._refresh(),e!==this.element.val()&&this._trigger("change")}}t.ui=t.ui||{},t.ui.version="1.12.1";var h=0,l=Array.prototype.slice;t.cleanData=function(e){return function(i){var s,n,o;for(o=0;null!=(n=i[o]);o++)try{s=t._data(n,"events"),s&&s.remove&&t(n).triggerHandler("remove")}catch(a){}e(i)}}(t.cleanData),t.widget=function(e,i,s){var n,o,a,r={},h=e.split(".")[0];e=e.split(".")[1];var l=h+"-"+e;return s||(s=i,i=t.Widget),t.isArray(s)&&(s=t.extend.apply(null,[{}].concat(s))),t.expr[":"][l.toLowerCase()]=function(e){return!!t.data(e,l)},t[h]=t[h]||{},n=t[h][e],o=t[h][e]=function(t,e){return this._createWidget?(arguments.length&&this._createWidget(t,e),void 0):new o(t,e)},t.extend(o,n,{version:s.version,_proto:t.extend({},s),_childConstructors:[]}),a=new i,a.options=t.widget.extend({},a.options),t.each(s,function(e,s){return t.isFunction(s)?(r[e]=function(){function t(){return i.prototype[e].apply(this,arguments)}function n(t){return i.prototype[e].apply(this,t)}return function(){var e,i=this._super,o=this._superApply;return this._super=t,this._superApply=n,e=s.apply(this,arguments),this._super=i,this._superApply=o,e}}(),void 0):(r[e]=s,void 0)}),o.prototype=t.widget.extend(a,{widgetEventPrefix:n?a.widgetEventPrefix||e:e},r,{constructor:o,namespace:h,widgetName:e,widgetFullName:l}),n?(t.each(n._childConstructors,function(e,i){var s=i.prototype;t.widget(s.namespace+"."+s.widgetName,o,i._proto)}),delete n._childConstructors):i._childConstructors.push(o),t.widget.bridge(e,o),o},t.widget.extend=function(e){for(var i,s,n=l.call(arguments,1),o=0,a=n.length;a>o;o++)for(i in n[o])s=n[o][i],n[o].hasOwnProperty(i)&&void 0!==s&&(e[i]=t.isPlainObject(s)?t.isPlainObject(e[i])?t.widget.extend({},e[i],s):t.widget.extend({},s):s);return e},t.widget.bridge=function(e,i){var s=i.prototype.widgetFullName||e;t.fn[e]=function(n){var o="string"==typeof n,a=l.call(arguments,1),r=this;return o?this.length||"instance"!==n?this.each(function(){var i,o=t.data(this,s);return"instance"===n?(r=o,!1):o?t.isFunction(o[n])&&"_"!==n.charAt(0)?(i=o[n].apply(o,a),i!==o&&void 0!==i?(r=i&&i.jquery?r.pushStack(i.get()):i,!1):void 0):t.error("no such method '"+n+"' for "+e+" widget instance"):t.error("cannot call methods on "+e+" prior to initialization; "+"attempted to call method '"+n+"'")}):r=void 0:(a.length&&(n=t.widget.extend.apply(null,[n].concat(a))),this.each(function(){var e=t.data(this,s);e?(e.option(n||{}),e._init&&e._init()):t.data(this,s,new 
i(n,this))})),r}},t.Widget=function(){},t.Widget._childConstructors=[],t.Widget.prototype={widgetName:"widget",widgetEventPrefix:"",defaultElement:"
    ",options:{classes:{},disabled:!1,create:null},_createWidget:function(e,i){i=t(i||this.defaultElement||this)[0],this.element=t(i),this.uuid=h++,this.eventNamespace="."+this.widgetName+this.uuid,this.bindings=t(),this.hoverable=t(),this.focusable=t(),this.classesElementLookup={},i!==this&&(t.data(i,this.widgetFullName,this),this._on(!0,this.element,{remove:function(t){t.target===i&&this.destroy()}}),this.document=t(i.style?i.ownerDocument:i.document||i),this.window=t(this.document[0].defaultView||this.document[0].parentWindow)),this.options=t.widget.extend({},this.options,this._getCreateOptions(),e),this._create(),this.options.disabled&&this._setOptionDisabled(this.options.disabled),this._trigger("create",null,this._getCreateEventData()),this._init()},_getCreateOptions:function(){return{}},_getCreateEventData:t.noop,_create:t.noop,_init:t.noop,destroy:function(){var e=this;this._destroy(),t.each(this.classesElementLookup,function(t,i){e._removeClass(i,t)}),this.element.off(this.eventNamespace).removeData(this.widgetFullName),this.widget().off(this.eventNamespace).removeAttr("aria-disabled"),this.bindings.off(this.eventNamespace)},_destroy:t.noop,widget:function(){return this.element},option:function(e,i){var s,n,o,a=e;if(0===arguments.length)return t.widget.extend({},this.options);if("string"==typeof e)if(a={},s=e.split("."),e=s.shift(),s.length){for(n=a[e]=t.widget.extend({},this.options[e]),o=0;s.length-1>o;o++)n[s[o]]=n[s[o]]||{},n=n[s[o]];if(e=s.pop(),1===arguments.length)return void 0===n[e]?null:n[e];n[e]=i}else{if(1===arguments.length)return void 0===this.options[e]?null:this.options[e];a[e]=i}return this._setOptions(a),this},_setOptions:function(t){var e;for(e in t)this._setOption(e,t[e]);return this},_setOption:function(t,e){return"classes"===t&&this._setOptionClasses(e),this.options[t]=e,"disabled"===t&&this._setOptionDisabled(e),this},_setOptionClasses:function(e){var i,s,n;for(i in e)n=this.classesElementLookup[i],e[i]!==this.options.classes[i]&&n&&n.length&&(s=t(n.get()),this._removeClass(n,i),s.addClass(this._classes({element:s,keys:i,classes:e,add:!0})))},_setOptionDisabled:function(t){this._toggleClass(this.widget(),this.widgetFullName+"-disabled",null,!!t),t&&(this._removeClass(this.hoverable,null,"ui-state-hover"),this._removeClass(this.focusable,null,"ui-state-focus"))},enable:function(){return this._setOptions({disabled:!1})},disable:function(){return this._setOptions({disabled:!0})},_classes:function(e){function i(i,o){var a,r;for(r=0;i.length>r;r++)a=n.classesElementLookup[i[r]]||t(),a=e.add?t(t.unique(a.get().concat(e.element.get()))):t(a.not(e.element).get()),n.classesElementLookup[i[r]]=a,s.push(i[r]),o&&e.classes[i[r]]&&s.push(e.classes[i[r]])}var s=[],n=this;return e=t.extend({element:this.element,classes:this.options.classes||{}},e),this._on(e.element,{remove:"_untrackClassesElement"}),e.keys&&i(e.keys.match(/\S+/g)||[],!0),e.extra&&i(e.extra.match(/\S+/g)||[]),s.join(" ")},_untrackClassesElement:function(e){var i=this;t.each(i.classesElementLookup,function(s,n){-1!==t.inArray(e.target,n)&&(i.classesElementLookup[s]=t(n.not(e.target).get()))})},_removeClass:function(t,e,i){return this._toggleClass(t,e,i,!1)},_addClass:function(t,e,i){return this._toggleClass(t,e,i,!0)},_toggleClass:function(t,e,i,s){s="boolean"==typeof s?s:i;var n="string"==typeof t||null===t,o={extra:n?e:i,keys:n?t:e,element:n?this.element:t,add:s};return o.element.toggleClass(this._classes(o),s),this},_on:function(e,i,s){var n,o=this;"boolean"!=typeof 
e&&(s=i,i=e,e=!1),s?(i=n=t(i),this.bindings=this.bindings.add(i)):(s=i,i=this.element,n=this.widget()),t.each(s,function(s,a){function r(){return e||o.options.disabled!==!0&&!t(this).hasClass("ui-state-disabled")?("string"==typeof a?o[a]:a).apply(o,arguments):void 0}"string"!=typeof a&&(r.guid=a.guid=a.guid||r.guid||t.guid++);var h=s.match(/^([\w:-]*)\s*(.*)$/),l=h[1]+o.eventNamespace,c=h[2];c?n.on(l,c,r):i.on(l,r)})},_off:function(e,i){i=(i||"").split(" ").join(this.eventNamespace+" ")+this.eventNamespace,e.off(i).off(i),this.bindings=t(this.bindings.not(e).get()),this.focusable=t(this.focusable.not(e).get()),this.hoverable=t(this.hoverable.not(e).get())},_delay:function(t,e){function i(){return("string"==typeof t?s[t]:t).apply(s,arguments)}var s=this;return setTimeout(i,e||0)},_hoverable:function(e){this.hoverable=this.hoverable.add(e),this._on(e,{mouseenter:function(e){this._addClass(t(e.currentTarget),null,"ui-state-hover")},mouseleave:function(e){this._removeClass(t(e.currentTarget),null,"ui-state-hover")}})},_focusable:function(e){this.focusable=this.focusable.add(e),this._on(e,{focusin:function(e){this._addClass(t(e.currentTarget),null,"ui-state-focus")},focusout:function(e){this._removeClass(t(e.currentTarget),null,"ui-state-focus")}})},_trigger:function(e,i,s){var n,o,a=this.options[e];if(s=s||{},i=t.Event(i),i.type=(e===this.widgetEventPrefix?e:this.widgetEventPrefix+e).toLowerCase(),i.target=this.element[0],o=i.originalEvent)for(n in o)n in i||(i[n]=o[n]);return this.element.trigger(i,s),!(t.isFunction(a)&&a.apply(this.element[0],[i].concat(s))===!1||i.isDefaultPrevented())}},t.each({show:"fadeIn",hide:"fadeOut"},function(e,i){t.Widget.prototype["_"+e]=function(s,n,o){"string"==typeof n&&(n={effect:n});var a,r=n?n===!0||"number"==typeof n?i:n.effect||i:e;n=n||{},"number"==typeof n&&(n={duration:n}),a=!t.isEmptyObject(n),n.complete=o,n.delay&&s.delay(n.delay),a&&t.effects&&t.effects.effect[r]?s[e](n):r!==e&&s[r]?s[r](n.duration,n.easing,o):s.queue(function(i){t(this)[e](),o&&o.call(s[0]),i()})}}),t.widget,function(){function e(t,e,i){return[parseFloat(t[0])*(u.test(t[0])?e/100:1),parseFloat(t[1])*(u.test(t[1])?i/100:1)]}function i(e,i){return parseInt(t.css(e,i),10)||0}function s(e){var i=e[0];return 9===i.nodeType?{width:e.width(),height:e.height(),offset:{top:0,left:0}}:t.isWindow(i)?{width:e.width(),height:e.height(),offset:{top:e.scrollTop(),left:e.scrollLeft()}}:i.preventDefault?{width:0,height:0,offset:{top:i.pageY,left:i.pageX}}:{width:e.outerWidth(),height:e.outerHeight(),offset:e.offset()}}var n,o=Math.max,a=Math.abs,r=/left|center|right/,h=/top|center|bottom/,l=/[\+\-]\d+(\.[\d]+)?%?/,c=/^\w+/,u=/%$/,d=t.fn.position;t.position={scrollbarWidth:function(){if(void 0!==n)return n;var e,i,s=t("
    "),o=s.children()[0];return t("body").append(s),e=o.offsetWidth,s.css("overflow","scroll"),i=o.offsetWidth,e===i&&(i=s[0].clientWidth),s.remove(),n=e-i},getScrollInfo:function(e){var i=e.isWindow||e.isDocument?"":e.element.css("overflow-x"),s=e.isWindow||e.isDocument?"":e.element.css("overflow-y"),n="scroll"===i||"auto"===i&&e.widthi?"left":e>0?"right":"center",vertical:0>r?"top":s>0?"bottom":"middle"};l>p&&p>a(e+i)&&(u.horizontal="center"),c>f&&f>a(s+r)&&(u.vertical="middle"),u.important=o(a(e),a(i))>o(a(s),a(r))?"horizontal":"vertical",n.using.call(this,t,u)}),h.offset(t.extend(D,{using:r}))})},t.ui.position={fit:{left:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollLeft:s.offset.left,a=s.width,r=t.left-e.collisionPosition.marginLeft,h=n-r,l=r+e.collisionWidth-a-n;e.collisionWidth>a?h>0&&0>=l?(i=t.left+h+e.collisionWidth-a-n,t.left+=h-i):t.left=l>0&&0>=h?n:h>l?n+a-e.collisionWidth:n:h>0?t.left+=h:l>0?t.left-=l:t.left=o(t.left-r,t.left)},top:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollTop:s.offset.top,a=e.within.height,r=t.top-e.collisionPosition.marginTop,h=n-r,l=r+e.collisionHeight-a-n;e.collisionHeight>a?h>0&&0>=l?(i=t.top+h+e.collisionHeight-a-n,t.top+=h-i):t.top=l>0&&0>=h?n:h>l?n+a-e.collisionHeight:n:h>0?t.top+=h:l>0?t.top-=l:t.top=o(t.top-r,t.top)}},flip:{left:function(t,e){var i,s,n=e.within,o=n.offset.left+n.scrollLeft,r=n.width,h=n.isWindow?n.scrollLeft:n.offset.left,l=t.left-e.collisionPosition.marginLeft,c=l-h,u=l+e.collisionWidth-r-h,d="left"===e.my[0]?-e.elemWidth:"right"===e.my[0]?e.elemWidth:0,p="left"===e.at[0]?e.targetWidth:"right"===e.at[0]?-e.targetWidth:0,f=-2*e.offset[0];0>c?(i=t.left+d+p+f+e.collisionWidth-r-o,(0>i||a(c)>i)&&(t.left+=d+p+f)):u>0&&(s=t.left-e.collisionPosition.marginLeft+d+p+f-h,(s>0||u>a(s))&&(t.left+=d+p+f))},top:function(t,e){var i,s,n=e.within,o=n.offset.top+n.scrollTop,r=n.height,h=n.isWindow?n.scrollTop:n.offset.top,l=t.top-e.collisionPosition.marginTop,c=l-h,u=l+e.collisionHeight-r-h,d="top"===e.my[1],p=d?-e.elemHeight:"bottom"===e.my[1]?e.elemHeight:0,f="top"===e.at[1]?e.targetHeight:"bottom"===e.at[1]?-e.targetHeight:0,g=-2*e.offset[1];0>c?(s=t.top+p+f+g+e.collisionHeight-r-o,(0>s||a(c)>s)&&(t.top+=p+f+g)):u>0&&(i=t.top-e.collisionPosition.marginTop+p+f+g-h,(i>0||u>a(i))&&(t.top+=p+f+g))}},flipfit:{left:function(){t.ui.position.flip.left.apply(this,arguments),t.ui.position.fit.left.apply(this,arguments)},top:function(){t.ui.position.flip.top.apply(this,arguments),t.ui.position.fit.top.apply(this,arguments)}}}}(),t.ui.position,t.extend(t.expr[":"],{data:t.expr.createPseudo?t.expr.createPseudo(function(e){return function(i){return!!t.data(i,e)}}):function(e,i,s){return!!t.data(e,s[3])}}),t.fn.extend({disableSelection:function(){var t="onselectstart"in document.createElement("div")?"selectstart":"mousedown";return function(){return this.on(t+".ui-disableSelection",function(t){t.preventDefault()})}}(),enableSelection:function(){return this.off(".ui-disableSelection")}});var c="ui-effects-",u="ui-effects-style",d="ui-effects-animated",p=t;t.effects={effect:{}},function(t,e){function i(t,e,i){var s=u[e.type]||{};return null==t?i||!e.def?null:e.def:(t=s.floor?~~t:parseFloat(t),isNaN(t)?e.def:s.mod?(t+s.mod)%s.mod:0>t?0:t>s.max?s.max:t)}function s(i){var s=l(),n=s._rgba=[];return i=i.toLowerCase(),f(h,function(t,o){var a,r=o.re.exec(i),h=r&&o.parse(r),l=o.space||"rgba";return h?(a=s[l](h),s[c[l].cache]=a[c[l].cache],n=s._rgba=a._rgba,!1):e}),n.length?("0,0,0,0"===n.join()&&t.extend(n,o.transparent),s):o[i]}function n(t,e,i){return 

    Is Beach Buggy Racing 2 worth playing?

    Beach Buggy Racing 2 is definitely worth playing if you are a fan of kart racing games or if you are looking for a fun and wacky game to play with friends. It has a lot of content and variety to keep you entertained for hours. It also has an online mode where you can compete against other players from around the world. It is free to play with optional purchases, so you can try it out without spending any money.

    Conclusion

    Beach Buggy Racing 2 is a fun and wacky kart racer that offers a lot of content and variety. It has great graphics and sound effects, smooth gameplay and controls, lots of cars, drivers, power-ups, and game modes, and an online mode with multiplayer features. It is free to play with optional purchases, so you can download it from the Google Play Store or the App Store, or buy the premium version from Steam or other platforms. If you are looking for a kart racing game that is fun, colorful, and full of surprises, you should check out Beach Buggy Racing 2.

    FAQs

    Q: How do I get more gems in Beach Buggy Racing 2?

    A: You can get more gems by playing online mode, opening loot boxes, watching ads, or buying them with real money.

    Q: How do I upgrade my power-ups in Beach Buggy Racing 2?

    A: You can upgrade your power-ups by collecting cards or buying them with coins or gems.

    Q: How do I join or create a team in Beach Buggy Racing 2?

    A: You can join or create a team by tapping on the team icon at the bottom of the screen.

    Q: How do I change my car or driver in Beach Buggy Racing 2?

    A: You can change your car or driver by tapping on the garage icon at the bottom of the screen.

    Q: How do I contact the developers of Beach Buggy Racing 2?

    A: You can contact the developers of Beach Buggy Racing 2 by visiting their website (https://www.vectorunit.com/contact) or their Facebook page (https://www.facebook.com/VectorUnit).

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Gacha Life 1.1.0 and Create Your Own Gacha Stories.md b/spaces/congsaPfin/Manga-OCR/logs/Download Gacha Life 1.1.0 and Create Your Own Gacha Stories.md deleted file mode 100644 index 0c2cf4c0b93c1fe37025b0d01c39ea3b63029912..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Gacha Life 1.1.0 and Create Your Own Gacha Stories.md +++ /dev/null @@ -1,131 +0,0 @@ - -

    Download Gacha Life Version 1.1.0: A Guide for Beginners

    If you are a fan of anime, dress-up games, and storytelling, you might have heard of Gacha Life, a popular game by Lunime that lets you create your own characters and scenes. But did you know that there is a new version of Gacha Life that has more features and improvements? In this article, we will tell you everything you need to know about how to download Gacha Life version 1.1.0 on your Windows PC and how to enjoy this game to the fullest.

    What is Gacha Life?

    Gacha Life is a game that combines elements of anime, dress-up, and role-playing. You can customize your own character using different hairstyles, clothing parts, weapons, accessories, and more. You can also choose from hundreds of preset characters or create your own from scratch. You can then take up to eight characters into Studio mode and set up amazing scenes to share with others. You can also use the Skit Maker to create stories and dialogues for your characters. And if you want some fun and a challenge, you can play one of the eight mini-games and earn gems that you can use to gacha for more items.

    download gacha life version 1.1 0


    DOWNLOAD ⇒⇒⇒ https://urlca.com/2uO66a



    Features of Gacha Life

    Some of the features of Gacha Life are:

    • Customize up to 20 different characters with over 600 different poses.
    • Dress up your characters with over 200 different outfits and accessories.
    • Change the colors of your characters' hair, eyes, clothes, and more.
    • Choose from hundreds of backgrounds or import your own.
    • Create stories and dialogues with the Skit Maker.
    • Play eight different mini-games and earn gems.
    • Gacha for over 180 items to add to your collection.
    • Chat with other players and make friends.

    How to play Gacha Life

    To play Gacha Life, you need to download and install the game on your device. You can play Gacha Life on Android, iOS, or Windows PC. The game is free to play, but it contains some in-app purchases that you can buy with real money if you want. Once you have installed the game, you can start by creating your own character or choosing one of the preset ones. You can then explore the different modes of the game, such as Studio, Skit Maker, Life Mode, and Gacha Games. You can also chat with other players online or offline.

    Why download Gacha Life version 1.1.0?

    Gacha Life version 1.1.0 is the latest update of the game, released on October 9, 2019. This version has some new features and improvements that make the game more enjoyable and user-friendly.

    What's new in version 1.1.0?

    Some of the new features in version 1.1.0 are:

    • New items added to the Item Shop.
    • New poses added to the Pose Mode.
    • New backgrounds added to the Background Shop.
    • New chat options added to the Chat Mode.
    • New interface design and bug fixes.

    How to download Gacha Life version 1.1.0 on Windows PC?

    To download Gacha Life version 1.1.0 on your Windows PC, follow these steps (a small scripted sketch of the download step appears after the list):

    1. Go to the official website of Lunime and click on the download link for Gacha Life version 1.1.0.
    2. Save the file to your computer and run it as an administrator.
    3. Follow the instructions on the screen to install the game.
    4. Launch the game and enjoy!
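
    For readers who prefer to script step 2, here is a minimal Python sketch of fetching an installer. The URL and file name are placeholders invented for this example, not Lunime's real download endpoint, so substitute the link from the official website.

```python
import urllib.request

# Placeholder values for illustration only -- substitute the real
# installer link from Lunime's official website.
INSTALLER_URL = "https://example.com/GachaLifeSetup-1.1.0.exe"
DEST_FILE = "GachaLifeSetup-1.1.0.exe"

def download_installer(url: str, dest: str) -> None:
    """Fetch the installer and save it to disk."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        out.write(response.read())
    print(f"Saved {dest}; right-click it and run as administrator to install.")

if __name__ == "__main__":
    download_installer(INSTALLER_URL, DEST_FILE)
```

    The same fetch-and-save logic applies to any platform in this guide; only the installer link changes.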

    Tips and tricks for Gacha Life

    Gacha Life is a fun and creative game, but it can also be challenging and confusing at times. Here are some tips and tricks that can help you make the most out of your Gacha Life experience:

    How to download gacha life version 1.1 0 for free
    Gacha life version 1.1 0 apk download
    Gacha life version 1.1 0 update features
    Gacha life version 1.1 0 download for pc
    Gacha life version 1.1 0 mod apk unlimited money
    Download gacha life version 1.1 0 on windows
    Gacha life version 1.1 0 online play
    Gacha life version 1.1 0 release date
    Gacha life version 1.1 0 download for android
    Gacha life version 1.1 0 cheats and hacks
    Download gacha life version 1.1 0 on mac
    Gacha life version 1.1 0 new characters and outfits
    Gacha life version 1.1 0 download for ios
    Gacha life version 1.1 0 review and rating
    Gacha life version 1.1 0 download link
    Download gacha life version 1.1 0 on chromebook
    Gacha life version 1.1 0 gameplay and tips
    Gacha life version 1.1 0 download for laptop
    Gacha life version 1.1 0 studio mode tutorial
    Gacha life version 1.1 0 download for tablet
    Download gacha life version 1.1 0 on linux
    Gacha life version 1.1 0 glitches and bugs
    Gacha life version 1.1 0 download for kindle fire
    Gacha life version 1.1 0 best scenes and stories
    Gacha life version 1.1 0 download size and requirements
    Download gacha life version 1.1 0 on bluestacks
    Gacha life version 1.1 0 wallpapers and backgrounds
    Gacha life version 1.1 0 download for chrome os
    Gacha life version 1.1 0 codes and secrets
    Download gacha life version 1.1 0 on nox player
    Gacha life version 1.1 0 memes and jokes
    Gacha life version 1.1 0 download for windows phone
    Gacha life version 1.1 0 fan art and videos
    Download gacha life version 1.1 0 on memu play
    Gacha life version 1.1 0 songs and music
    Gacha life version 1.1 0 download for blackberry
    Gacha life version 1.1 0 quizzes and trivia
    Download gacha life version 1.1 0 on ldplayer
    Gacha life version 1.1 0 skins and accessories
    Download gacha life version 1.1 0 on remix os player

    How to create your own characters and scenes

    To create your own characters and scenes, you need to use the Dress Up and Studio modes of the game. Here are some steps to follow:

    • In Dress Up mode, you can customize your character's appearance, clothes, accessories, and more. You can also change the colors of each item using the Color Picker. You can save up to 20 characters in your slots.
    • In Studio mode, you can choose up to eight characters to place in a scene. You can also change their poses, expressions, sizes, and positions. You can also add props, pets, and effects to your scene. You can choose from hundreds of backgrounds or import your own.
    • When you are done with your scene, you can save it to your gallery or share it with others. You can also export it as an image or a video.

    How to use the Studio mode and the Skit Maker

    The Studio mode and the Skit Maker are two features that allow you to create stories and dialogues for your characters. Here are some steps to follow:

    • In Studio mode, you can select the characters you want to use in your story. You can also change their poses, expressions, sizes, and positions. You can also add props, pets, and effects to your scene. You can choose from hundreds of backgrounds or import your own.
    • In Skit Maker mode, you can create dialogues for your characters using text boxes. You can also change the font, color, size, and style of the text. You can also add sound effects and music to your skit. You can save up to 100 skits in your slots.
    • When you are done with your skit, you can play it back or share it with others. You can also export it as an image or a video.

    How to earn gems and access the Gacha games

    Gems are the currency of Gacha Life that you can use to gacha for more items. You can earn gems by playing one of the eight mini-games in the game. Here are some steps to follow (a small simulation of this loop is sketched after the list):

    • In Gacha Games mode, you can choose from eight different mini-games: Bex's Festival, Duck & Dodge, Phantom's Remix, Narwhal Sky, Orca Sploosh, Picc Pawket Rhythm, Abushu Candy Toss, and Lemo & Yumi's Math Game.
    • Each mini-game has different rules and objectives that you need to follow. You can also choose from three difficulty levels: Easy, Normal, or Hard.
    • When you play a mini-game, you will earn gems based on your score and performance. You can also earn bonus gems by completing achievements and daily missions.
    • When you have enough gems, you can go to the Gacha Shop and gacha for more items. You can choose from four different gachas: Gacha Life (100 items), Gacha Club (200 items), Gacha Resort (150 items), and Gacha Memories (100 items).
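
    To make that gem-and-gacha loop concrete, here is a small Python simulation. The 10-gem cost per pull and the three-item pool with its drop rates are invented numbers for illustration; the article does not publish Gacha Life's real pools or rates.

```python
import random

# Invented numbers for illustration -- the real gem costs and drop
# rates of Gacha Life's gachas are not published in this article.
GEM_COST_PER_PULL = 10
ITEM_POOL = {
    "common outfit": 0.70,   # most pulls land here
    "rare accessory": 0.25,
    "special pose": 0.05,    # the lucky pull
}

def gacha_pull(gems: int):
    """Spend gems on one pull; return (remaining gems, item or None)."""
    if gems < GEM_COST_PER_PULL:
        return gems, None  # not enough gems left to pull
    item = random.choices(list(ITEM_POOL), weights=list(ITEM_POOL.values()))[0]
    return gems - GEM_COST_PER_PULL, item

gems = 55  # e.g. gems earned from a few mini-game runs
while True:
    gems, item = gacha_pull(gems)
    if item is None:
        break
    print(f"Pulled: {item} ({gems} gems left)")
```

    With 55 gems and a 10-gem cost, the loop makes five pulls and stops with 5 gems left, which is exactly why the mini-games matter: they are the gem faucet that keeps the gacha running.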

    Conclusion

    Gacha Life is a game that allows you to unleash your creativity and imagination. You can create your own characters and scenes, make stories and dialogues, play mini-games and earn gems, gacha for more items, and chat with other players. If you want to enjoy this game even more, you should download Gacha Life version 1.1.0 on your Windows PC. This version has new features and improvements that make the game more user-friendly and enjoyable.

    Summary of the main points

    In this article, we have covered:

    • What Gacha Life is and what its features are.
    • Why you should download Gacha Life version 1.1.0 and what's new in it.
    • How to download Gacha Life version 1.1.0 on your Windows PC.
    • How to create your own characters and scenes, use the Studio mode and the Skit Maker, and earn gems and access the Gacha games.

    Call to action

    If you are ready to start your Gacha Life adventure, download Gacha Life version 1.1.0 today and have fun! You can also check out the official website of Lunime for more information and updates about the game. And don't forget to share your creations and stories with other Gacha Life fans online. Have a gacha-tastic day!

    FAQs

    Here are some frequently asked questions about Gacha Life:

    1. Is Gacha Life safe for kids?
    Gacha Life is rated 9+ on the App Store and 10+ on Google Play, which means it may contain mild violence, suggestive themes, or infrequent use of mild language. However, the game itself does not have any inappropriate or harmful content, as long as players use it responsibly and follow the rules. Parents should also monitor their children's online activities and interactions with other players, as some of them may be rude or offensive.

    2. How can I back up or transfer my Gacha Life data?
    Gacha Life does not have a cloud save feature, which means your data is stored locally on your device. If you want to back up or transfer your data, use a file manager app or a USB cable to copy the folder named "Lunime" from your device's internal storage to another device or a computer. You can then paste the folder back to the same location on your new device or after reinstalling the game. (A small script for this copy step is sketched after this list.)

    3. How can I contact Lunime or report a bug or a problem?
    If you have any questions, suggestions, feedback, or issues about Gacha Life, you can contact Lunime through their email address: lunimegames@gmail.com. You can also visit their website: https://lunime.com/ or their social media pages: https://www.facebook.com/Lunime/ and https://twitter.com/LunimeGames for more support and information.

    4. How can I get more gems in Gacha Life?
    The best way to get more gems in Gacha Life is to play the mini-games. You earn gems based on your score and performance in each mini-game, and you can get bonus gems by completing achievements and daily missions. Another way is to watch ads in the game, but this option is limited and may not be available in some regions.

    5. How can I get more items in Gacha Life?
    The best way to get more items in Gacha Life is to gacha for them in the Gacha Shop. You can use gems to gacha for over 180 items in four different gachas: Gacha Life, Gacha Club, Gacha Resort, and Gacha Memories. Each gacha has different items and rates, so you may need to gacha multiple times to get the items you want.
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Nkwagala Nyo by Betty Muwanguzi Lyrics and Music Video.md b/spaces/congsaPfin/Manga-OCR/logs/Download Nkwagala Nyo by Betty Muwanguzi Lyrics and Music Video.md deleted file mode 100644 index 1ec0606b86f92953c11eb087e28e0a5646560b4e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Nkwagala Nyo by Betty Muwanguzi Lyrics and Music Video.md +++ /dev/null @@ -1,173 +0,0 @@ -
    -

    Nkwagala Nyo by Betty Muwanguzi: A Review of the Popular Ugandan Gospel Song

    -

    If you are a fan of Ugandan gospel music, you have probably heard of the song Nkwagala Nyo by Betty Muwanguzi. This song, which means "I love you so much" in Luganda, is one of the most played and loved gospel songs in Uganda and beyond. In this article, we will review this song and explore its meaning, message, structure, performance, reception, and impact.


    nkwagala nyo by betty muwanguzi lyrics download


    DOWNLOAD 🆓 https://urlca.com/2uObLE



Introduction

Who is Betty Muwanguzi?

    Betty Muwanguzi is a Ugandan gospel singer and songwriter who has been in the music industry for over 20 years. She started singing at a young age in her church choir and later joined a gospel group called Joyful Generation. She released her first solo album in 2000 and has since released several more albums and singles. Some of her popular songs include Hosanna, Tukutendereza, Saba Saba, and Nkwagala Nyo.

What is the meaning of Nkwagala Nyo?

    Nkwagala Nyo is a Luganda phrase that means "I love you so much". Luganda is a Bantu language spoken by the Baganda people, the largest ethnic group in Uganda. It is also widely spoken as a lingua franca in central Uganda, including the capital Kampala. Luganda is a tonal language, which means that the pitch of a syllable can change the meaning of a word. For example, kabaka means "king" if all three syllables have the same pitch, but it means "the little one catches" if the first syllable has a high pitch.

What is the message of the song?

    The message of the song is to express gratitude and love to God for his goodness and faithfulness. The singer praises God for saving her from sin and death, for giving her peace and joy, for healing her wounds and restoring her hope, for being her protector and provider, and for being her father and friend. She declares that she loves God so much that she cannot live without him, that she cannot stop praising him, and that she cannot repay him for what he has done for her.


* nkwagala nyo betty muwanguzi mp3 download
* betty muwanguzi hossana nkwagala nnyo lyrics
* nkwagala nyo by betty muwanguzi video download
* betty muwanguzi nkwagala nyo song download
* nkwagala nyo by betty muwanguzi audio download
* betty muwanguzi hossana nkwagala nnyo mp3
* nkwagala nyo by betty muwanguzi free download
* betty muwanguzi nkwagala nyo lyrics translation
* nkwagala nyo by betty muwanguzi youtube
* betty muwanguzi hossana nkwagala nnyo video
* nkwagala nyo by betty muwanguzi shazam
* betty muwanguzi nkwagala nyo chords
* nkwagala nyo by betty muwanguzi boomplay
* betty muwanguzi hossana nkwagala nnyo download
* nkwagala nyo by betty muwanguzi mzikii
* betty muwanguzi nkwagala nyo meaning
* nkwagala nyo by betty muwanguzi karaoke
* betty muwanguzi hossana nkwagala nnyo album
* nkwagala nyo by betty muwanguzi remix
* betty muwanguzi nkwagala nyo instrumental
* nkwagala nyo by betty muwanguzi live performance
* betty muwanguzi hossana nkwagala nnyo song lyrics
* nkwagala nyo by betty muwanguzi ringtone download
* betty muwanguzi nkwagala nyo cover
* nkwagala nyo by betty muwanguzi spotify
* betty muwanguzi hossana (nkwagala nyoo) mp3 download free
* how to play "nkwagala nyoo" by Betty Muwangizi on guitar or piano?
* where can I buy or stream "nkwagala nyoo" by Betty Muwangizi online?
* who wrote and composed "nkwagala nyoo" by Betty Muwangizi?
* what is the message or theme of "nkwagala nyoo" by Betty Muwangizi?
* what genre or style is "nkwagala nyoo" by Betty Muwangizi?
* what are some similar songs to "nkwagala nyoo" by Betty Muwangizi?
* what are some reviews or ratings of "nkwagala nyoo" by Betty Muwangizi?
* what are some trivia or facts about "nkwagala nyoo" by Betty Muwangizi?
* what are some awards or nominations of "nkwagala nyoo" by Betty Muwangizi?

Main body

How is the song structured?

The song has a simple but catchy structure that consists of a chorus, three verses, and a bridge. The chorus is repeated four times throughout the song, while each verse and the bridge are sung once. The lyrics, like Luganda in general, follow a subject-verb-object word order and use agglutinative morphology, which means that words are formed by adding suffixes to roots. Here is a breakdown of each part of the song:

The chorus

    The chorus is the most memorable part of the song, as it contains the title phrase "Nkwagala Nyo". It also uses repetition and rhyme to create a rhythmic effect. The chorus goes like this:

Nkwagala nyo nyo nyo
Nkwagala nyo nyo nyo
Nkwagala nyo nyo nyo
Katonda wange
Nkwagala nyo nyo nyo
Nkwagala nyo nyo nyo
Nkwagala nyo nyo nyo
Katonda wange

    The chorus translates to:

I love you so much
I love you so much
I love you so much
My God
I love you so much
I love you so much
I love you so much
My God

    The verses


    The verses are the parts of the song where the singer elaborates on why she loves God so much. Each verse has four lines that end with the word "Katonda", which means "God". The verses also use parallelism and contrast to emphasize the singer's gratitude and devotion. The verses go like this:

Verse 1:
Waliwo omuliro gwa mazima
Gwakola omusayi gwa Yesu Kristo
Gwansanze nga nze ndi mukibi
Katonda

Verse 2:
Waliwo amazzi aga bulamu
Gatuma omwoyo gwange gugulumira
Gansanze nga nze ndi mukufa
Katonda

Verse 3:
Waliwo ekisa kya mirembe
Kakola obulamu bwange buba bulungi
Kansanze nga nze ndi mukwano
Katonda

    The verses translate to:

Verse 1:
There was a fire of truth
That made the blood of Jesus Christ
That found me when I was a sinner
God

Verse 2:
There were waters of life
That made my soul rejoice
That found me when I was a dead person
God

Verse 3:
There was a grace of peace
That made my life good
That found me when I was a friend
God

    The bridge


    The bridge is the part of the song where the singer expresses her commitment and loyalty to God. It has eight lines that end with the word "Katonda", which means "God". The bridge also uses anaphora and hyperbole to create a dramatic effect. The bridge goes like this:

Bridge:
Sikyakukwatako Katonda wange
Sikyakukwatako Katonda wange
Sikyakukwatako Katonda wange
Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange
Sikyakusaba dala Katonda wange
Sikyakusaba dala Katonda wange
Sikyakusaba dala Katonda wange
Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange Katonda wange
Sikyakulaba dala Katonda wange
Sikyakulaba dala Katonda wange
Sikyakulaba dala Katonda wange
Katonda

    The bridge translates to:

    Bridge:
    I cannot hold you my God
    I cannot hold you my God
    I cannot hold you my God
    My God my God my God my God my God my God my God my God my God my God my God my God my God my God my God my God
    I cannot ask you for anything my God
    I cannot ask you for anything my God
    I cannot ask you for anything my God
    My God my God my God my God my God my God my God my God my God my God my God my God my God my God my God my God
    I cannot see you clearly my God
    I cannot see you clearly my God
    I cannot see you clearly my God
    God

How is the song performed?

    The song is performed by Betty Muwanguzi and a group of backup singers, musicians, and dancers. The song has a lively and upbeat tempo that matches the mood of joy and celebration. The song also uses a variety of vocal and instrumental techniques to create a rich and diverse sound. Here are some of the aspects of the song's performance:

The vocals

    The vocals are the main feature of the song, as they convey the emotion and meaning of the lyrics. Betty Muwanguzi has a powerful and expressive voice that can range from soft and gentle to loud and passionate. She sings the lead vocals, while the backup singers provide harmony and support. The vocals also use various techniques such as call and response, echo, modulation, and improvisation to create a dynamic and interactive sound. For example, in the chorus, Betty Muwanguzi sings "Nkwagala nyo nyo nyo" and the backup singers respond with "Katonda wange". In the bridge, she sings "Sikyakukwatako Katonda wange" and the backup singers echo "Katonda wange" several times. In the verses, she modulates her voice to match the mood of each line. In the end, she improvises some ad-libs to add more flavor and personality to the song.

The instruments

    The instruments are the secondary feature of the song, as they provide the rhythm and melody of the music. The song uses a combination of traditional and modern instruments to create a fusion of African and Western sounds. The traditional instruments include drums, shakers, rattles, flutes, and xylophones. The modern instruments include guitars, keyboards, bass, and saxophone. The instruments also use various techniques such as syncopation, repetition, variation, and solos to create a lively and diverse sound. For example, in the chorus, the drums play a syncopated beat that contrasts with the vocals. In the verses, the guitars repeat a simple chord progression that supports the vocals. In the bridge, the keyboards vary the melody to create tension and excitement. In the end, the saxophone plays a solo that adds more flair and sophistication to the song.

The video

    The video is the visual feature of the song, as it shows the singer and her team performing the song in different settings. The video uses a mix of indoor and outdoor scenes to create a contrast of light and dark, natural and artificial, urban and rural. The indoor scenes show Betty Muwanguzi singing in a studio with colorful lights and decorations. The outdoor scenes show her singing in a park with green trees and flowers. The video also uses various techniques such as close-ups, wide shots, transitions, and effects to create a dynamic and engaging video. For example, in the chorus, the video shows close-ups of Betty Muwanguzi's face as she sings with emotion. In the verses, the video shows wide shots of her and her backup singers dancing and clapping. In the bridge, the video transitions from one scene to another with fast cuts and flashes. In the end, the video shows some effects such as slow motion and zooming to highlight some moments of the song.

How is the song received?

    The song is received very well by both critics and fans of Ugandan gospel music. The song has been praised for its catchy tune, uplifting lyrics, powerful vocals, diverse instruments, and creative video. The song has also been recognized for its cultural relevance, social impact, and spiritual significance. Here are some of the aspects of the song's reception:

The popularity

    The song is very popular among Ugandans and other people who appreciate gospel music. The song has been played on various radio stations, TV channels, online platforms, churches, events, and concerts. The song has also been downloaded by many people who want to listen to it on their devices. According to YouTube statistics, as of June 20th 2023, the song has over 2 million views, 40 thousand likes, 2 thousand comments, and 10 thousand shares.

The feedback

    The song has received positive feedback from both professional critics and ordinary fans. The critics have commended Betty Muwanguzi for her talent, skill, originality, and versatility. They have also appreciated the song for its quality, creativity, diversity, and authenticity. Some of the critics who have reviewed the song include:

• David Kazoora, a music journalist and blogger, who wrote: "Nkwagala Nyo is a masterpiece of gospel music that showcases Betty Muwanguzi's exceptional voice and style. The song is a blend of traditional and modern elements that create a unique and captivating sound. The lyrics are simple but profound, expressing the singer's love and gratitude to God. The song is a must-listen for anyone who loves good music."
• Grace Nakimera, a gospel singer and songwriter, who said: "Nkwagala Nyo is a beautiful song that touches my heart and soul. Betty Muwanguzi is one of my favorite gospel singers in Uganda and she never disappoints. The song is a testimony of God's goodness and faithfulness in her life and in ours. The song is a blessing to me and to many others who listen to it."
• Robert Kyagulanyi, a politician and activist, who tweeted: "Nkwagala Nyo is a powerful song that inspires me and gives me hope. Betty Muwanguzi is a talented and courageous singer who uses her voice to praise God and to speak for the oppressed. The song is a reminder of God's love and mercy for us and our nation. The song is a message of peace and unity for all Ugandans."

    The fans have also expressed their admiration and appreciation for Betty Muwanguzi and her song. They have also shared their personal stories and experiences of how the song has impacted their lives. Some of the fans who have commented on the song include:

• Mary Nalwoga, a teacher and mother, who commented: "Nkwagala Nyo is my favorite song of all time. I play it every morning when I wake up and every evening when I go to bed. The song fills me with joy and peace. The song also helped me to overcome a difficult time in my life when I lost my husband to cancer. The song reminded me that God loves me so much and that he will never leave me nor forsake me."
• John Ssempala, a student and musician, who commented: "Nkwagala Nyo is an amazing song that motivates me and challenges me. I love the way Betty Muwanguzi sings with passion and conviction. The song also inspired me to pursue my dream of becoming a gospel singer. The song taught me that God loves me so much and that he has a plan and a purpose for my life."
• Sarah Namubiru, a nurse and volunteer, who commented: "Nkwagala Nyo is a wonderful song that heals me and comforts me. I love the way Betty Muwanguzi sings with grace and humility. The song also encouraged me to serve God and others with love and compassion. The song showed me that God loves me so much and that he cares for me and my needs."

    The impact


    The impact of the song is evident in the lives of many people who have listened to it and been touched by it. The song has not only entertained but also enlightened, empowered, and enriched many people. The song has also contributed to the growth and development of Ugandan gospel music as well as Ugandan culture and society. Some of the impacts of the song include:

• The song has spread the gospel message of God's love and salvation to many people who have not heard it or have rejected it before. The song has also strengthened the faith and devotion of many Christians who have been struggling or suffering in their spiritual journey.
• The song has promoted the Luganda language and culture as well as the diversity and unity of Uganda as a nation. The song has also celebrated the beauty and richness of African music as well as the creativity and innovation of African musicians.
• The song has inspired many people to pursue their dreams and goals in life with confidence and courage. The song has also challenged many people to use their talents and gifts to serve God and others with love and excellence.
• The song has brought joy and peace to many people who have been facing problems and difficulties in their personal and professional lives. The song has also brought hope and healing to many people who have been wounded and broken by sin and sorrow.

    Conclusion

Summary of the main points

    In conclusion, Nkwagala Nyo by Betty Muwanguzi is a popular Ugandan gospel song that expresses the singer's love and gratitude to God for his goodness and faithfulness. The song has a catchy tune, uplifting lyrics, powerful vocals, diverse instruments, and creative video. The song also has a cultural relevance, social impact, and spiritual significance. The song has been praised by critics, loved by fans, and touched by many.

Personal opinion and recommendation

    Personally, I think Nkwagala Nyo is a great song that deserves all the recognition and appreciation it has received. I think Betty Muwanguzi is a talented and inspiring singer who has a genuine passion for God and music. I think the song is a blessing to me and to anyone who listens to it. I would recommend this song to anyone who loves gospel music or who wants to experience God's love in a new way.

FAQs

    Here are some frequently asked questions about Nkwagala Nyo by Betty Muwanguzi:

1. Q: Where can I download the song?
A: You can download the song from various online platforms such as YouTube, Spotify, iTunes, Amazon Music, etc. You can also buy the CD or DVD from local music stores or online shops.
2. Q: Where can I find the lyrics of the song?
A: You can find the lyrics of the song on various websites such as Genius, Lyrics.com, Musixmatch, etc. You can also find the lyrics in the video description or on the CD or DVD cover.
3. Q: Where can I learn more about Betty Muwanguzi?
A: You can learn more about Betty Muwanguzi on her official website, Facebook page, Instagram account, Twitter account, etc. You can also read her biography, interviews, articles, etc. on various online sources.
4. Q: How can I support Betty Muwanguzi?
A: You can support Betty Muwanguzi by buying her music, attending her concerts, following her on social media, sharing her music with others, praying for her, etc. You can also donate to her ministry or charity projects if you feel led to do so.
5. Q: How can I contact Betty Muwanguzi?
A: You can contact Betty Muwanguzi by sending her an email, a message, a comment, a review, etc. on her official website or social media accounts. You can also write her a letter or call her phone number if you have them.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution 2022 How to Get Free Coins and Gems with Hack Mod APK.md b/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution 2022 How to Get Free Coins and Gems with Hack Mod APK.md deleted file mode 100644 index 2f9784ab675fff2c1fdd8834ab8458dc10973047..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution 2022 How to Get Free Coins and Gems with Hack Mod APK.md +++ /dev/null @@ -1,129 +0,0 @@ - -

    Hungry Shark Evolution Hack Mod APK 2022 Download


    Do you love playing Hungry Shark Evolution but wish you had more coins, gems, sharks, and accessories to enjoy? Do you want to experience the thrill of being a hungry shark without spending any money or waiting for updates? If so, you might be interested in downloading a hack mod APK for this game.


    A hack mod APK is a modified version of an original app that allows you to access features that are normally locked or unavailable. In this article, we will show you how to download and install a hack mod APK for Hungry Shark Evolution, one of the most popular shark games on Android. We will also explain how to use it, what are the benefits and risks of using it, and what are some alternatives to using it.


    hungry shark evolution hack mod apk 2022 download


    DOWNLOADhttps://urlca.com/2uOfwH




    Read on to find out how you can become the ultimate predator in Hungry Shark Evolution with a hack mod APK!

What is Hungry Shark Evolution?

    Hungry Shark Evolution is an action-packed aquatic adventure game developed by Ubisoft Entertainment. It is the official game for Shark Week, a yearly event that celebrates sharks and raises awareness about their conservation.


    In Hungry Shark Evolution, you take control of a very hungry shark and go on a frantic ocean rampage, eating everything and everyone in your way. You can explore a beautiful underwater world and evolve iconic sharks like the Great White and Megalodon. You can also recruit baby sharks, equip awesome accessories like lasers and jetpacks, find sunken bonus objects, complete challenging missions, activate gold rush mode, and more.


    Hungry Shark Evolution has over 100 million downloads on Google Play Store and has received positive reviews from critics and players alike. It is regularly updated with new features, content, and challenges to keep you hooked.

Why use a hack mod APK?

    While Hungry Shark Evolution is free to play, it also offers in-app purchases that can enhance your gameplay experience. For example, you can buy coins and gems to unlock new sharks, upgrade your stats, or buy accessories. You can also watch ads or complete offers to earn free coins and gems.


    However, some players may find these options too expensive, time-consuming, or annoying. They may prefer to have unlimited access to all the features of the game without spending any money or watching any ads. That's where a hack mod APK comes in handy.


hungry shark evolution unlimited coins and gems mod apk download 2022
hungry shark evolution mod apk latest version 2022 free download
hungry shark evolution hack apk download for android 2022
hungry shark evolution mega mod apk 2022 download
hungry shark evolution mod apk 2022 download rexdl
hungry shark evolution hack mod apk 2022 download ios
hungry shark evolution mod apk 2022 download revdl
hungry shark evolution hack mod apk 2022 download an1
hungry shark evolution mod apk 2022 download happymod
hungry shark evolution hack mod apk 2022 download apkpure
hungry shark evolution mod apk 2022 download android 1
hungry shark evolution hack mod apk 2022 download uptodown
hungry shark evolution mod apk 2022 download unlimited money and gems
hungry shark evolution hack mod apk 2022 download for pc
hungry shark evolution mod apk 2022 download no root
hungry shark evolution hack mod apk 2022 download online
hungry shark evolution mod apk 2022 download obb
hungry shark evolution hack mod apk 2022 download offline
hungry shark evolution mod apk 2022 download mediafıre
hungry shark evolution hack mod apk 2022 download latest version
hungry shark evolution mod apk 2022 download all sharks unlocked
hungry shark evolution hack mod apk 2022 download android oyun club
hungry shark evolution mod apk 2022 download highly compressed
hungry shark evolution hack mod apk 2022 download unlimited everything
hungry shark evolution mod apk 2022 download new update
hungry shark evolution hack mod apk 2022 download original
hungry shark evolution mod apk 2022 download old version
hungry shark evolution hack mod apk 2022 download without verification
hungry shark evolution mod apk 2022 download with cheat menu
hungry shark evolution hack mod apk 2022 download zip file
hungry shark evolution mod menu apk 2022 free download
how to download hungry shark evolution hack mod apk 2022
best site to download hungry shark evolution mod apk 2022
where can i download hungry shark evolution hack mod apk 2022
is it safe to download hungry shark evolution mod apk 2022
can you play online with hungry shark evolution hack mod apk 2022
what's new in hungry shark evolution mod apk 2022 update
how to install hungry shark evolution hack mod apk 2022 on android
how to uninstall hungry shark evolution mod apk 2022 from android
how to update hungry shark evolution hack mod apk 2022 manually
how to get free gems in hungry shark evolution with mod apk 2022
how to unlock all sharks in hungry shark evolution using hack mod apk 2022
how to fix lag in hungry shark evolution on mod apk 2022 version
how to backup data in hungry shark evolution before installing hack mod apk 2022
how to restore data in hungry shark evolution after uninstalling hack mod apk 2022
how to transfer data from hungry shark evolution original to hack mod apk 2022
how to report bugs in hungry shark evolution on mod apk 2022
how to contact support for hungry shark evolution if using hack mod apk 2022

    A hack mod APK for Hungry Shark Evolution can give you several benefits and advantages over other players. Some of these include:

• Unlimited coins and gems: You can buy any shark, accessory, or upgrade you want without worrying about running out of money.
• All sharks unlocked: You can play as any shark in the game from the start without having to complete missions or reach certain levels.
• All accessories unlocked: You can equip your shark with any accessory in the game without having to buy it or unlock it with gems.
• Gold rush mode activated: You can enjoy the gold rush mode anytime you want without having to fill up the gold rush meter. This mode gives you invincibility, increased speed, and more points.
• No ads: You can play the game without any interruptions or distractions from ads.

    With a hack mod APK, you can have more fun and freedom in Hungry Shark Evolution. You can explore the ocean, eat everything, and become the most powerful shark in the game.

How to download and install a hack mod APK?

    Downloading and installing a hack mod APK for Hungry Shark Evolution is not very difficult, but it does require some steps and precautions. Here is a simple guide on how to do it:

1. Find a reliable source for the hack mod APK. There are many websites and forums that offer hack mod APKs for various games, but not all of them are safe or trustworthy. Some of them may contain malware, viruses, or outdated versions of the game. To avoid any problems, you should do some research and read reviews before downloading anything. You can also use a virus scanner or a VPN to protect your device and your privacy.
2. Download the hack mod APK file to your device. Once you have found a good source, you can download the hack mod APK file to your device. The file size may vary depending on the features and content of the mod. Make sure you have enough storage space and a stable internet connection.
3. Enable unknown sources on your device. Before you can install the hack mod APK file, you need to allow your device to install apps from unknown sources. This is because the hack mod APK is not from the official Google Play Store and may not be verified by Google. To enable unknown sources, go to your device settings, then security, and toggle on the unknown sources option.
4. Install the hack mod APK file. After enabling unknown sources, you can install the hack mod APK file by tapping on it and following the instructions. You may need to grant some permissions or accept some terms and conditions before the installation is complete. (If you prefer to sideload from a computer, see the sketch after this list.)
5. Launch the game and enjoy. Once the installation is done, you can launch Hungry Shark Evolution from your app drawer or home screen. You should see a new icon or a modified logo for the game. You can then access and enjoy all the features of the hack mod APK.
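As an alternative to tapping the file in step 4, an APK can be sideloaded from a computer over USB. The snippet below is only a sketch: it assumes adb is installed and USB debugging is enabled on the device, and the file name is a placeholder rather than any real download.

```python
import subprocess
import sys

def sideload(apk_path: str) -> None:
    """Install an APK over USB instead of tapping it on the device.

    The -r flag replaces an already-installed copy while keeping its data.
    """
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    # "hungry-shark-mod.apk" is a placeholder; pass the real path as the
    # first command-line argument.
    sideload(sys.argv[1] if len(sys.argv) > 1 else "hungry-shark-mod.apk")
```

If the mod ships extra OBB or data files, as the note below mentions, those are copied onto the device separately rather than installed through adb install.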

    Note: Some hack mod APKs may require additional steps or files to work properly, such as OBB files, data files, or root access. Make sure you follow the instructions provided by the source of the hack mod APK carefully.

How to use a hack mod APK?

    Using a hack mod APK for Hungry Shark Evolution is not very different from using the original game app. However, there are some things you should know and do to make the most of it:

• Choose your shark and accessories. When you launch the game, you will see that you have unlimited coins and gems in your account. You can use them to buy any shark or accessory you want from the shop. You can also equip your shark with any accessory without having to unlock it with gems. To do this, go to your shark menu, select an accessory slot, and tap on any accessory you like.
• Activate gold rush mode. One of the best features of a hack mod APK is that you can activate gold rush mode anytime you want without having to fill up the gold rush meter. Gold rush mode gives you invincibility, increased speed, and more points for eating prey. To activate gold rush mode, tap on the gold rush button on the bottom right corner of your screen.
• Unlock new sharks and locations. Another benefit of a hack mod APK is that you can unlock new sharks and locations without having to complete missions or reach certain levels. You can play as any shark in the game from the start, including rare ones like Megalodon or Robo Shark. You can also explore new locations like Arctic Ocean or Atlantis without having to find portals or maps. To do this, go to your map menu, select a location, and tap on play.
• Complete missions and achievements. Even though you have unlimited coins and gems, you may still want to complete missions and achievements for fun or challenge. Missions are tasks that you have to do in each location, such as eating a certain number of fish or humans, surviving for a certain time, or finding hidden objects. Achievements are goals that you have to achieve in the game overall, such as evolving all sharks, collecting all accessories, or reaching a certain score. To view your missions and achievements, go to your mission menu or achievement menu respectively.

    With these tips, you can use a hack mod APK for Hungry Shark Evolution and have a blast in the game.

What are the risks and drawbacks of using a hack mod APK?

    While using a hack mod APK for Hungry Shark Evolution may sound tempting, it is not without its risks and drawbacks. Before you decide to use one, you should be aware of the possible consequences and problems that may arise. Some of these include:

• Malware and viruses: Some hack mod APKs may contain malicious software or code that can harm your device or steal your personal information. These can include spyware, ransomware, trojans, worms, or keyloggers. They can also cause your device to slow down, overheat, or crash.
• Bans and suspensions: Using a hack mod APK may violate the terms and conditions of Hungry Shark Evolution or Google Play Store. This can result in your account being banned or suspended from the game or the store. You may also lose your progress, achievements, or purchases in the game.
• Crashes and glitches: Using a hack mod APK may cause the game to malfunction or crash. This can happen because the hack mod APK is not compatible with the latest version of the game or your device. It can also happen because the hack mod APK interferes with the normal functioning of the game or its servers.
• Lack of updates and support: Using a hack mod APK may prevent you from receiving updates and support from the developers of Hungry Shark Evolution. This can mean that you will miss out on new features, content, and challenges that are added to the game regularly. It can also mean that you will not be able to contact the developers if you encounter any issues or bugs in the game.
• Lack of satisfaction and challenge: Using a hack mod APK may reduce your satisfaction and challenge in playing Hungry Shark Evolution. This can happen because you will have everything unlocked and unlimited in the game, which can make it too easy or boring. It can also happen because you will not feel the sense of achievement or reward that comes from playing the game legitimately.

    These are some of the risks and drawbacks of using a hack mod APK for Hungry Shark Evolution. You should weigh them carefully before you decide to use one.

Alternatives to using a hack mod APK

    If you are looking for other ways to play Hungry Shark Evolution without using a hack mod APK, you have some options. These include:

• Playing online: You can play Hungry Shark Evolution online with other players from around the world. You can compete with them in leaderboards, events, and tournaments. You can also join clans, chat with them, and share tips and tricks. Playing online can give you more fun and challenge in the game.
• Playing offline: You can play Hungry Shark Evolution offline without any internet connection. You can still enjoy all the features and content of the game, except for those that require online access. Playing offline can give you more freedom and privacy in the game.
• Using official in-app purchases: You can use official in-app purchases to buy coins and gems in Hungry Shark Evolution. You can use these to unlock new sharks, accessories, or upgrades in the game. Using official in-app purchases can give you more support and security in the game.

    These are some of the alternatives to using a hack mod APK for Hungry Shark Evolution. You can try them out and see which one suits you best.

Conclusion

    Hungry Shark Evolution is an amazing game that lets you become a hungry shark and eat everything in your way. It is one of the most popular shark games on Android and has millions of fans worldwide.


    If you want to enhance your gameplay experience, you may be tempted to use a hack mod APK for Hungry Shark Evolution. A hack mod APK is a modified version of an original app that allows you to access features that are normally locked or unavailable.


    A hack mod APK for Hungry Shark Evolution can give you several benefits and advantages, such as unlimited coins, gems, sharks, accessories, gold rush mode, and no ads. However, it can also have some risks and drawbacks, such as malware, viruses, bans, suspensions, crashes, glitches, lack of updates, support, satisfaction, and challenge.


    If you are looking for other ways to play Hungry Shark Evolution without using a hack mod APK, you have some options. These include playing online, playing offline, or using official in-app purchases.


    The choice is yours. Whether you use a hack mod APK or not, we hope you enjoy playing Hungry Shark Evolution and have a great time!

Frequently Asked Questions

    Here are some frequently asked questions about Hungry Shark Evolution and hack mod APKs:

1. What is the latest version of Hungry Shark Evolution and its hack mod APK?
The latest version of Hungry Shark Evolution as of June 2023 is 9.2.0, which was released on May 27, 2023. It added new sharks, accessories, events, and bug fixes to the game. The latest version of its hack mod APK is also 9.2.0, which was released on June 1, 2023. It added unlimited coins, gems, sharks, accessories, gold rush mode, and no ads to the game.
2. Is using a hack mod APK for Hungry Shark Evolution legal or illegal?
Using a hack mod APK for Hungry Shark Evolution is not illegal, but it is not legal either. It is a gray area that depends on the laws and regulations of your country or region. Some countries or regions may allow the use of hack mod APKs for personal or educational purposes, while others may prohibit or penalize them for violating intellectual property rights or consumer protection laws. You should check the laws and regulations of your country or region before using a hack mod APK for Hungry Shark Evolution.
3. Can I use a hack mod APK for Hungry Shark Evolution on iOS devices?
No, you cannot use a hack mod APK for Hungry Shark Evolution on iOS devices. A hack mod APK is only compatible with Android devices, as it is based on the Android application package format. iOS devices use a different format and system for apps, which makes them incompatible with hack mod APKs. If you want to use a hack mod APK for Hungry Shark Evolution on iOS devices, you will need to jailbreak your device and use a different method.
4. Can I play Hungry Shark Evolution with my friends using a hack mod APK?
Yes, you can play Hungry Shark Evolution with your friends using a hack mod APK, but only if they are also using a hack mod APK. If you try to play with your friends who are using the original game app or a different version of the hack mod APK, you may encounter errors or compatibility issues. You may also be detected and banned by the game servers or Google Play Store. To avoid these problems, you should use the same version of the hack mod APK as your friends and play with them online.
5. Can I update Hungry Shark Evolution or its hack mod APK?
Yes, you can update Hungry Shark Evolution or its hack mod APK, but you should be careful and follow some precautions. If you update Hungry Shark Evolution from the official Google Play Store, you may lose all the features and benefits of the hack mod APK. You may also be detected and banned by the game servers or Google Play Store. If you update the hack mod APK from an unofficial source, you may encounter malware, viruses, crashes, glitches, or outdated versions of the game. To avoid these problems, you should backup your data and progress before updating anything. You should also wait for a reliable source to release a new version of the hack mod APK that matches the latest version of the game.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys Apk Son Srm Hile Modu - Arkadalarnzla Rekabet Edin.md b/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys Apk Son Srm Hile Modu - Arkadalarnzla Rekabet Edin.md deleted file mode 100644 index 7db7311dcbfb74d990f78951fa92f2085d5cd770..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys Apk Son Srm Hile Modu - Arkadalarnzla Rekabet Edin.md +++ /dev/null @@ -1,78 +0,0 @@ - -

    Stumble Guys Apk Son Sürüm Hile: How to Download and Play the Ultimate Knockout Game


    Do you love playing party games with your friends online? Do you enjoy racing, jumping, and stumbling through chaotic obstacle courses? Do you want to have unlimited money and gems to customize your character and unlock new skins? If you answered yes to any of these questions, then you should try Stumble Guys Apk Son Sürüm Hile, the modded version of the popular multiplayer knockout game.


    stumble guys apk son sürüm hile


    Download Zip ❤❤❤ https://urlca.com/2uO9UI




    In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, how to play it, and some tips and tricks to help you win. Let's get started!

What is Stumble Guys Apk Son Sürüm Hile?

    A brief introduction to the game and its features


    Stumble Guys is a massive multiplayer party knockout game with up to 32 players online. It is inspired by the hit game Fall Guys, but it is exclusively for Android devices. You can join round after round of escalating chaos to stumble through different levels until one victor is crowned. You can dive into a series of ridiculous challenges and bizarre obstacles, knock down your rivals and overcome everything to win. You can also customize your character with different outfits and emotes, and enjoy the colorful and crazy design of the game.


    Stumble Guys Apk Son Sürüm Hile is the modded version of the game that gives you some extra advantages over other players. With this version, you can get unlimited money and gems, which you can use to buy new skins and accessories for your character. You can also unlock all the levels and modes in the game, so you can enjoy more variety and fun. You can also play without ads, which can be annoying and distracting.

The benefits of using the modded version of the game

    By using Stumble Guys Apk Son Sürüm Hile, you can enjoy several benefits that will make your gaming experience more enjoyable and satisfying. Some of these benefits are:

• You can customize your character with any skin you want, from rare to legendary ones.
• You can access all the levels and modes in the game, so you can explore different scenarios and challenges.
• You can play without ads, which can interrupt your gameplay and ruin your mood.
• You can have more chances to win your matches, as you can use your money and gems to buy power-ups and boosters.
• You can have more fun with your friends, as you can invite them to join your party or compete against them in tournaments.

    How to Download and Install Stumble Guys Apk Son Sürüm Hile?


    The steps to download and install the game on your Android device


    If you want to download and install Stumble Guys Apk Son Sürüm Hile on your Android device, you need to follow these simple steps:

    1. Go to [13](https://www.stumbleguys.com/) or [6](https://play.google.com/store/apps/details?id=com.kitkagames.fallbuddies) or [14](https://www.bluestacks.com/apps/action/stumble-guys-multiplayer-royale-on-pc.html) or [12](https://apkresult

      How to Play Stumble Guys Apk Son Sürüm Hile?


      The basic gameplay and controls of the game


      Stumble Guys Apk Son Sürüm Hile is very easy to play, as it has simple and intuitive controls. You can use the virtual joystick on the left side of the screen to move your character, and the jump button on the right side to leap over obstacles. You can also swipe on the screen to rotate the camera and look around. Alternatively, you can use your mouse and keyboard if you play on PC with BlueStacks, as we explained in the previous section.


      The gameplay of Stumble Guys Apk Son Sürüm Hile is also very straightforward. You have to compete against other players in a series of minigames, each with different rules and objectives. Some of them are races, where you have to reach the finish line before others. Some of them are survival, where you have to avoid falling or getting eliminated by hazards. Some of them are team-based, where you have to cooperate with your teammates or sabotage your enemies. The game randomly selects the minigames for each match, so you never know what to expect.


      The goal of Stumble Guys Apk Son Sürüm Hile is to be the last player standing at the end of the match. To do that, you have to qualify for each round by meeting the requirements of each minigame. For example, in a race, you have to be among the first ones to cross the finish line. In a survival, you have to stay alive until the timer runs out. In a team-based, you have to make sure your team has more points than the others. If you fail to qualify, you are eliminated from the match and have to start over.


stumble guys apk jeton hilesi indir
stumble guys apk mod sınırsız para
stumble guys apk güncel hileli versiyon
stumble guys apk oyun fan para hilesi
stumble guys apk oyunizi son sürüm indir
stumble guys apk hileli oyun indir club
stumble guys apk android oyun club hile
stumble guys apk son güncelleme para hilesi
stumble guys apk eğlenceli hareketli oyun
stumble guys apk arkadaş canlısı mod indir
stumble guys apk liderlik tablosu hileli
stumble guys apk güçlerinizi yükseltin hile
stumble guys apk ücretsiz oyun indir fan
stumble guys apk mevcut sürüm 0.41 hile
stumble guys apk en iyi oyun alanları mod
stumble guys apk yeni haritalar hileli indir
stumble guys apk kostüm ve aksesuar hile
stumble guys apk online çok oyunculu mod
stumble guys apk eğlenceli engeller hileli
stumble guys apk 60 kişilik yarışlar mod
stumble guys apk son kalan kazanır hile
stumble guys apk renkli grafikler mod indir
stumble guys apk basit kontroller hileli
stumble guys apk komik ses efektleri mod
stumble guys apk düşmeden ilerle hileli
stumble guys apk rakiplerinizi itin mod
stumble guys apk çeşitli bölümler hileli
stumble guys apk sevimli karakterler mod indir
stumble guys apk özelleştirme seçenekleri hile
stumble guys apk ekip kurma özelliği mod

      The tips and tricks to win your matches and unlock skins


      Stumble Guys Apk Son Sürüm Hile is a fun and chaotic game, where anything can happen. However, there are some tips and tricks that can help you improve your chances of winning and unlocking new skins for your character. Here are some of them:

• Use the physics of your character to your advantage. You can bump into other players, push them off ledges, block their paths, or even grab them and drag them with you. However, be careful not to get knocked down yourself, as it will slow you down and make you vulnerable.
• Use the power-ups and boosters wisely. You can buy them with your money and gems in the shop, and they can give you an edge over your rivals. For example, you can use a speed boost to run faster, a shield to protect yourself from hazards, or a magnet to attract coins and gems.
• Learn the layout and mechanics of each minigame. Each minigame has its own obstacles, hazards, shortcuts, and secrets. By playing them repeatedly, you can memorize their patterns and strategies, and avoid common mistakes.
• Be flexible and adaptable. The game is unpredictable and random, so you have to be ready for anything. Sometimes, it's better to follow the crowd and avoid risks. Sometimes, it's better to take a different route and surprise your enemies. Sometimes, it's better to cooperate with other players and help each other out. Sometimes, it's better to betray them and sabotage their plans.
• Have fun and don't give up. Stumble Guys Apk Son Sürüm Hile is a game that is meant to be enjoyed and laughed at. Don't take it too seriously or get frustrated if you lose. Just try again and learn from your mistakes. You will eventually get better and win more matches.

      Conclusion


      Stumble Guys Apk Son Sürüm Hile is a great game for anyone who loves party games with friends online. It is a free alternative to Fall Guys, but with more features and advantages. You can download and install it easily on your Android device or PC with BlueStacks, and play it without ads or limitations. You can also customize your character with unlimited money and gems, and unlock all the levels and modes in the game.


      If you want to have fun and win your matches in Stumble Guys Apk Son Sürüm Hile, you should follow our tips and tricks that we shared in this article. You should also practice your skills and learn from your experience in each minigame. Remember that the game is random and chaotic, so be prepared for anything.


      What are you waiting for? Download Stumble Guys Apk Son Sürüm Hile now and join the ultimate knockout game!


      FAQs


Q: Is Stumble Guys Apk Son Sürüm Hile safe to download?
A: Yes, Stumble Guys Apk Son Sürüm Hile is safe to download and install, as long as you use a trusted source and follow the instructions carefully. However, you should always be careful when downloading any modded or hacked version of a game, as it may contain viruses or malware that can harm your device or compromise your privacy. You should also check the permissions and settings of the game before running it, and avoid using it on public or unsecured networks.

      Q: Is Stumble Guys Apk Son Sürüm Hile compatible with all Android devices?

A: Stumble Guys Apk Son Sürüm Hile is compatible with most Android devices that have Android 5.0 or higher. However, some devices may not support the game or run it smoothly, depending on their specifications and performance. You should also make sure that you have enough storage space and battery life to play the game without interruptions.

      Q: Can I play Stumble Guys Apk Son Sürüm Hile with my friends online?

A: Yes, you can play Stumble Guys Apk Son Sürüm Hile with your friends online, as long as they also have the same version of the game installed on their devices. You can invite them to join your party by sending them a code or a link, or you can join their party by entering their code or clicking their link. You can also chat with them using the in-game voice chat feature, or use an external app like Discord or WhatsApp.

      Q: Can I play Stumble Guys Apk Son Sürüm Hile offline?

A: No, you cannot play Stumble Guys Apk Son Sürüm Hile offline, as it requires an internet connection to function properly. The game is a multiplayer online game, so you need to connect to the game servers and other players to play it. If you lose your connection or have a weak signal, you may experience lag, glitches, or disconnections.

      Q: How can I update Stumble Guys Apk Son Sürüm Hile?

A: To update Stumble Guys Apk Son Sürüm Hile, you need to download and install the latest version of the game from the same source that you used before. You should also delete the previous version of the game from your device before installing the new one, to avoid any conflicts or errors. You should also check for updates regularly, as the game may add new features, levels, skins, or bug fixes.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tag After School A Horror School Life Simulation Game for PC.md b/spaces/congsaPfin/Manga-OCR/logs/Tag After School A Horror School Life Simulation Game for PC.md deleted file mode 100644 index fa89427140281ee197db53e9578d3c18c3aa9d19..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tag After School A Horror School Life Simulation Game for PC.md +++ /dev/null @@ -1,70 +0,0 @@ - -

      How to Download Tag After School Free and Enjoy a Horror School Life Simulation Game


      If you are looking for a thrilling and immersive game that will keep you on the edge of your seat, you should try Tag After School. This is a horror school life simulation game developed by Genius Studio Japan Inc. In this game, you will experience a terrifying adventure with ghostly characters and mature visuals. You will also have to make choices that will affect the outcome of the story and your relationships with other characters. In this article, we will show you how to download Tag After School free on your Android device and why you should play this game.


      download tag after school free


      Download Zip ⇒⇒⇒ https://urlca.com/2uO8mA




      What is Tag After School?

What is Tag After School?

      The story and characters of Tag After School

The story and characters of Tag After School

      The features and gameplay of Tag After School

The features and gameplay of Tag After School
• Multiple endings: The game has different endings depending on your choices and actions throughout the game. You can unlock different scenes and outcomes with each character and discover their secrets and backstories.
• Interactive dialogues: The game has interactive dialogues that will let you choose how to respond to the characters and situations. You can express your feelings, opinions, and preferences through your choices.
• Stunning graphics: The game has stunning graphics that will immerse you in the horror school life setting. The game has mature visuals that are displayed by the ghostly characters and the environments. The game also has sound effects and music that will enhance the atmosphere and mood of the game.
• Easy controls: The game has easy controls that will let you play the game smoothly and comfortably. You can tap, swipe, and drag on the screen to interact with the game.

      How to download Tag After School free on your Android device?


      If you want to download Tag After School free on your Android device, you can follow these simple steps:

Step 1: Go to the Google Play Store

      The first step is to go to the Google Play Store on your Android device. This is where you can find and download various apps and games for your device.

Step 2: Search for Tag After School

      The next step is to search for Tag After School in the search bar of the Google Play Store. You can type "Tag After School" and press enter or tap on the magnifying glass icon.

Step 3: Install the app

How can I get the secret endings in Tag After School?

      The secret endings in Tag After School are not easy to get. You will need to find some hidden items, solve some puzzles, and make some specific choices in order to unlock them. You will also need to play the game more than once and try different options and paths. The secret endings are worth the effort, as they will reveal some surprising twists and secrets about the game's story and characters.


download tag after school game on pc emulator
download tag after school full game for windows
tag after school free download links and tips
how to install tag after school game on laptop
tag after school gameplay and review for pc
best android emulator for tag after school game
tag after school pc version free download
where to get tag after school game for windows 10
tag after school game features and updates
tag after school game system requirements and compatibility
download tag after school game apk for android
tag after school game mod apk free download
how to play tag after school game offline
tag after school game cheats and hacks
tag after school game walkthrough and guide
tag after school game characters and story
tag after school game fan art and wallpapers
tag after school game community and forums
tag after school game support and feedback
tag after school game alternatives and similar games
download tag after school app for ios devices
how to get tag after school app on iphone or ipad
tag after school app review and ratings
tag after school app features and benefits
tag after school app login and sign up
how to use tag after school app effectively
tag after school app tutorial and tips
tag after school app privacy and security
tag after school app problems and solutions
tag after school app customer service and contact
download tag after school manga for free online
read tag after school manga chapters and spoilers
tag after school manga summary and plot
tag after school manga characters and relationships
tag after school manga genre and themes
tag after school manga fanfiction and recommendations
tag after school manga discussion and analysis
tag after school manga raw scans and translations
where to buy tag after school manga books or ebooks
how to support the author of tag after school manga

      -
2. Can I play Tag After School offline?

      Yes, you can play Tag After School offline. You will only need an internet connection to download the game and update it if needed. Once you have installed the game on your device, you can play it without an internet connection. However, you may not be able to access some features or functions that require an internet connection, such as the achievements, leaderboards, or social media sharing.

\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ti game College Brawl APK min ph - nh bi nhng c gi mo .md b/spaces/congsaPfin/Manga-OCR/logs/Ti game College Brawl APK min ph - nh bi nhng c gi mo .md
deleted file mode 100644
index 5bf408f3933539ca292fe282c126745245ef6627..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Ti game College Brawl APK min ph - nh bi nhng c gi mo .md
+++ /dev/null
@@ -1,9 +0,0 @@

Download College Brawl APK: A Guide for Android Users


    College Brawl is a beat 'em up game for adults where you have to fight your way through a college campus full of dangerous girls who belong to the Red Kat Gang. The game features anime-style graphics, simple controls, and explicit scenes that you can unlock as you progress.


    tải college brawl apk


    Download Zip ⇒⇒⇒ https://urlca.com/2uO5CY




    If you are looking for a fun and naughty game to play on your Android device, you might want to try College Brawl APK. This is a modified version of the game that allows you to play it for free without any restrictions or ads.


    In this article, we will show you how to download and install College Brawl APK on your Android device, how to play the game, and what are the pros and cons of playing this version. Let's get started!

\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and More in Beach Buggy Racing 2 MOD APK - Free Download.md b/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and More in Beach Buggy Racing 2 MOD APK - Free Download.md
deleted file mode 100644
index d8cf0af22eef658fe52b213449a764058b317247..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money and More in Beach Buggy Racing 2 MOD APK - Free Download.md
+++ /dev/null
@@ -1,95 +0,0 @@

    Beach Buggy Racing 2 Unlimited Money Mod APK: A Fun and Exciting Kart Racing Game


    If you are looking for a kart racing game that is fun, exciting, and colorful, then you should try Beach Buggy Racing 2. This game is a sequel to the popular Beach Buggy Racing that has over 100 million downloads on Google Play. Beach Buggy Racing 2 offers more content, features, and improvements than its predecessor. And if you want to enjoy the game without any limitations, you can download the unlimited money mod apk that gives you access to everything in the game.


    What is Beach Buggy Racing 2?


    A kart racing game with amazing graphics, powerups, and game modes


    Beach Buggy Racing 2 is a kart racing game that is powered by Vector Engine and NVIDIA's PhysX. It has amazing 3D graphics, detailed cars and characters, and spectacular weapons. You can race through Egyptian pyramids, dragon-infested castles, pirate ship wrecks, and experimental alien bio-labs. You can collect and upgrade over 45 powerups that have out-of-this-world abilities like "Chain Lightning", "Donut Tires", "Boost Juice" and "Killer Bees". You can also play against other players from around the world in online competitions and tournaments.


    beach buggy racing 2 unlimited money mod apk


    Download Zip ––– https://urlca.com/2uO674




    A sequel to the popular Beach Buggy Racing with more content and features


    Beach Buggy Racing 2 is a sequel to the first Beach Buggy Racing that introduced over 100 million international mobile players to console-style kart-racing with a playful offroad twist. With Beach Buggy Racing 2, the developers have added a ton of new content, upgradeable powerups, new game modes, and more. You can also build your reputation to recruit new racers, each with their own unique special ability. You can collect a garage full of beach buggies, monster trucks, muscle cars, classic pickups and formula supercars. You can also customize your car with exotic paints, decals, and stickers.


    What is the unlimited money mod apk?


    A modified version of the game that gives you unlimited coins and gems


    The unlimited money mod apk is a modified version of the original game that gives you unlimited coins and gems. Coins and gems are the currencies in the game that you can use to unlock and upgrade cars, drivers, powerups, paints, decals, stickers, etc. With the mod apk, you don't have to worry about running out of coins or gems. You can enjoy the game without any restrictions or ads. You can also play offline without any internet connection.


    How to download and install the mod apk on your Android device


    To download and install the mod apk on your Android device, you need to follow these simple steps:

1. Download the mod apk file from a trusted source. Many websites offer the mod apk for free, but be careful of malware and viruses. One of the websites that you can try is APKPure.
2. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
3. Locate the downloaded mod apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
4. Launch the game and enjoy unlimited money and fun.

    What are the features and benefits of the mod apk?

    -

    Unlock and upgrade over 55 cars and 14 drivers with unique abilities


    With the mod apk, you can unlock and upgrade over 55 cars and 14 drivers with unique abilities. You can choose from a variety of vehicles, such as beach buggies, monster trucks, muscle cars, classic pickups, formula supercars, etc. Each car has its own stats, such as speed, acceleration, handling, and durability. You can also upgrade your car's engine, tires, suspension, turbo, etc. to improve its performance. Moreover, you can unlock and recruit 14 drivers, each with their own special powerup. For example, Rez has the ability to fire rockets, McSkelly has the ability to summon skeletons, and Roxie has the ability to create a sonic boom.


    Customize your car with paints, decals, and stickers


    With the mod apk, you can customize your car with paints, decals, and stickers. You can change the color of your car's body, wheels, windows, etc. You can also add decals and stickers to make your car more unique and stylish. You can choose from hundreds of options, such as flames, stripes, stars, skulls, etc. You can also mix and match different paints, decals, and stickers to create your own design.


    Explore a massive island with different terrains and environments


    With the mod apk, you can explore a massive island with different terrains and environments. You can race through sandy beaches, lush jungles, ancient ruins, volcanic caves, snowy mountains, etc. You can also discover hidden shortcuts, secrets, and surprises along the way. The island is full of life and detail, such as animals, plants, waterfalls, etc. You can also enjoy dynamic weather effects, such as rain, fog, thunderstorms, and more. The island is also divided into different regions, such as Paradise Bay, Dino Jungle, Fire Mountain, etc. Each region has its own theme, challenges, and secrets.


    Compete against other players online or with friends in split screen


    With the mod apk, you can compete against other players online or with friends in split screen. You can join online competitions and tournaments to test your skills and win prizes. You can also create your own custom races and invite your friends to join. You can also play with up to 4 players on the same device using split screen mode. You can choose from different game modes, such as Race, Elimination, Battle Arena, etc. You can also chat with other players and make new friends.


    Create your own custom game modes with powerups, race rules, and more


    With the mod apk, you can create your own custom game modes with powerups, race rules, and more. You can use the Game Maker tool to design your own tracks, choose the powerups, set the race rules, and customize the environment. You can also share your creations with other players and play their creations as well. You can unleash your creativity and imagination and make the game your own.


    What are some tips and tricks to enjoy the game more?


    Choose the right character and car for your play style


    One of the tips to enjoy the game more is to choose the right character and car for your play style. Each character and car has its own strengths and weaknesses, so you need to find the ones that suit you best. For example, if you like speed, you might want to choose a fast car like the Lambini or a character like Rez who has a rocket powerup. If you like handling, you might want to choose a car like the Dune Buggy or a character like Tiki who has a banana powerup. You can also experiment with different combinations and see what works for you.


    Use powerups wisely and strategically


    Another tip to enjoy the game more is to use powerups wisely and strategically. Powerups are essential to win races and battles, but they are also limited and unpredictable. You need to know when to use them and how to avoid them. For example, if you have a fireball powerup, you might want to use it when you are close to your opponents or when they are clustered together. If you have a shield powerup, you might want to use it when you are in the lead or when you are under attack. You also need to watch out for other players' powerups and dodge them if possible.


    Learn the tracks and avoid obstacles


    A third tip to enjoy the game more is to learn the tracks and avoid obstacles. The tracks in Beach Buggy Racing 2 are full of twists, turns, jumps, shortcuts, and hazards. You need to memorize the layout of each track and find the best route to take. You also need to avoid obstacles such as rocks, trees, animals, etc. that can slow you down or damage your car. You can also use obstacles to your advantage by knocking them over or using them as cover.


    Collect stars, trophies, and loot crates to get more rewards


    A fourth tip to enjoy the game more is to collect stars, trophies, and loot crates to get more rewards. Stars are earned by completing races and challenges in each region. Trophies are earned by winning online competitions and tournaments. Loot crates are earned by playing daily events or watching ads. These rewards can give you coins, gems, powerups, cars, drivers, paints, decals, stickers, etc. You can use these rewards to unlock and upgrade everything in the game. You can also use them to customize your car and character. You should try to collect as many stars, trophies, and loot crates as you can to get the most out of the game.


    Conclusion


    Beach Buggy Racing 2 is a fun and exciting kart racing game that you can enjoy with or without the mod apk. The mod apk gives you unlimited money to unlock and upgrade everything in the game. The game has amazing graphics, powerups, game modes, and customization options. The game is easy to pick up and play, yet challenging to master. You can also compete against other players online or with friends in split screen. You can also create your own custom game modes with powerups, race rules, and more. Beach Buggy Racing 2 is a game that will keep you entertained for hours.


    FAQs


    Q: Is Beach Buggy Racing 2 free to play?


    A: Yes, Beach Buggy Racing 2 is free to play. You can download it from Google Play or App Store. However, the game contains in-app purchases that can enhance your gaming experience. You can also use the mod apk to get unlimited money for free.


    Q: Is Beach Buggy Racing 2 safe to play?


    A: Yes, Beach Buggy Racing 2 is safe to play. The game does not contain any harmful content or malware. However, you should be careful when downloading the mod apk from unknown sources. You should only download it from trusted websites that have positive reviews and ratings.


    Q: How can I play Beach Buggy Racing 2 on PC?


    A: You can play Beach Buggy Racing 2 on PC by using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC. Some of the popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. You can download any of these emulators from their official websites and install them on your PC. Then, you can download Beach Buggy Racing 2 from Google Play or the mod apk from a trusted source and install it on the emulator. After that, you can launch the game and enjoy it on your PC.


    Q: How can I contact the developers of Beach Buggy Racing 2?


A: You can contact the developers of Beach Buggy Racing 2 by visiting their official website or their social media pages. Their website is Vector Unit, and they are on Facebook, Twitter, Instagram, and YouTube. You can also send them an email at support@vectorunit.com or leave a comment on their blog.


    Q: What are some other games like Beach Buggy Racing 2?


    A: Some other games like Beach Buggy Racing 2 are Mario Kart Tour, Crash Team Racing Nitro-Fueled, Sonic & All-Stars Racing Transformed, Angry Birds Go!, and Asphalt 9: Legends. These games are also kart racing games that have colorful graphics, powerups, game modes, and customization options. You can find these games on Google Play or App Store.

\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Canon raw file converter online How to turn your CR2 files into stunning JPGs.md b/spaces/contluForse/HuggingGPT/assets/Canon raw file converter online How to turn your CR2 files into stunning JPGs.md
deleted file mode 100644
index 440f7bc7b5944745f30c29025a1a272b97e2ec1b..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Canon raw file converter online How to turn your CR2 files into stunning JPGs.md
+++ /dev/null
@@ -1,26 +0,0 @@

Raw.pics.io is an in-browser RAW file viewer and converter. You can browse images, pictures, and photos in DSLR RAW camera formats. It allows you to convert PDF, CR2, NEF, ARW, ORF, PEF, RAF, DNG, and other files into JPEG, PNG, and other formats online. Canon, Nikon, Sony, Olympus, and Pentax raw formats are supported directly.


    Canon raw file converter online


Download https://ssurll.com/2uzyby




With the help of the Raw.pics.io online image converter, you can easily convert images, photos, or other pictures on your desktop computer into the JPG or PNG file format. You can also edit, compress, and change the pixel size of your picture as you convert it. Image conversion is super easy - it only takes a couple of steps to view and convert your photos into the necessary format. The Raw.pics.io picture converter is totally free for the first five conversions, and it does not require registration either! All you need is a good Internet connection and a browser.


If you have a DSLR or a mirrorless camera, you must have come across the phrase RAW images. Unlike a simple JPG photograph, RAW images can be captured only with advanced cameras, and they cannot be viewed or printed directly like a photo. RAW images are made from data collected directly by the sensor inside your camera, and are not processed or edited in any way. Because of the huge amount of information inside a RAW image, you need special software like Lightroom or Photoshop to edit it and convert it into a usable JPG. This is where converter programs like the SoftOrbits CR2 to JPEG Converter become useful. Let me first give you some background on CR2 files, after which we'll learn how to convert CR2 files to JPEG on a Windows 10/11 PC!

RAW files are huge, and converting them into JPG means compressing a lot of detail into a much smaller file, which is why it is important to use the best software for converting the CR2 format. Batch Picture Resizer is the best CR2 to JPG converter, and it works not only on the CR2 format but also on Nikon's NEF, Fujifilm's RAF, and much more. The advantage of this converter is that, apart from converting any RAW format into JPG, you also get various technical options like fixing the exact height and width of the final photo, adding watermarks, level adjustments, rotation, and smart cropping. With an easy-to-use interface, you will be able to convert multiple RAW photos into JPGs without reducing image quality! A final image in JPG format means you can instantly edit your photo in any editing app, share it on social media, or transfer it quickly to other devices. Although many other CR2 to JPG converters take a long time to process the humongous RAW files, Batch Picture Resizer does it within minutes! Let us learn how you can take advantage of this CR2 file converter.

Converting RAW photos usually takes a lot of time because many programs make you open each file individually. Not with SoftOrbits Batch Picture Resizer! This tool gives you the option to select multiple CR2 files at once and convert them together. To use the batch mode, all you have to do is use the Add File option to add multiple images simultaneously. You can either drag your mouse pointer to select multiple files, or click files individually while pressing the Ctrl button. Once all your images are added to the tool, you can select the adjustments that will be applied to all of them!

    In case you do not want to download a software to convert CR2 files into JPEG format, you can consider using online CR2 to JPG Converter tools. Although uploading the RAW files might take a few minutes, you won't have to carry your laptop with the installed software everywhere!
    Here are a few options you can check out.


    Zamzar is a free tool that works online, without requiring any special program to be installed. Even though it is a bit slow, the conversion is excellent and your final JPG files will not lose out on image quality.
    The best thing is that it detects many RAW formats like NEF, RAF, CR2, DNG etc., but you can convert files of only 50 MB for free. Also, using a free account here means that you can convert only two images within twenty four hours.


    Convertio is a powerful online service that recognizes many RAW formats and converts them into JPGs. Similar to Batch Picture Resizer and Zamzar, you need to upload your CR2 file, select the format you want and let the tool do the rest!
    Apart from image conversion, Convertio also works as a converter for audio files, videos, documents and PDFs. Since the conversion happens on the servers of Convertio, even an old computer will be able to use its features with ease.


You will need to use the Canon CR2 Converter if you have the Canon D2000 camera. Since that camera takes RAW photos in an older format, you need to use Canon's own converter to first change them into CR2 RAW files. Once you have the CR2 files ready, you can open them in any other program! Canon also provides Digital Photo Professional free with its cameras, which allows you to convert RAW files into JPG and edit them, much as in Photoshop. Digital Photo Professional is an advanced image editor that gives you options to adjust the exposure before saving the image as a JPG.


This is an obvious choice. Photoshop is a converter, image processor, and editor all rolled into one! Photoshop's own files are in the PSD format, but it recognizes DNG, TIFF, CR2, NEF, and other RAW formats too, and allows you to save them as JPG. When you open Photoshop, click on Edit > Scripts > Image Processor. Choose the Select Folder option to open the folder with your RAW files. Select the destination folder, choose Save as JPG, and click Run. If Photoshop is not allowing you to save an image as JPG, it is probably because the file size is too big. To solve this, use the Save for Web option from the File menu, which will let you compress the image into a smaller file!


Converting canon raw images to jpeg image files is always a need, because a normal user's computer or mobile can't open these images without installing some kind of special image viewer.

So we have developed this small light-weight utility which can be used to convert canon raw images rather easily. It supports the following conversions: canon raw CR2 to JPEG, canon raw CRW to JPEG, canon raw DNG to JPEG, and canon raw RW2 to JPEG. You can use the free software and convert one file at a time, or buy the pro version to convert canon raw images in bulk. If you have any suggestions, we would love to hear them. Install the canon raw to jpeg converter now and give it a try.


Since CR2 image files are not widely supported, you may find it hard to locate a proper Canon CR2 to JPG converter. So this article collects the top 10 representative CR2 to JPG programs. You can read on and compare the following advantages and disadvantages carefully.


First of all, you cannot convert JPG to ICO in Paint directly on your Windows computer. Fortunately, there are many other image converters you can use to convert a photo to the ICO format. Thus, this article aims to show you easy ways to convert JPG to an ICO icon online and offline.


For people who do not convert CR2 files to JPG frequently, using an online CR2 to JPG converter is a good choice. What's more, you can save time and money by changing CR2 to the JPG format this way. So it is quite important to get a reliable program to convert CR2 to JPG free online.


Aiseesoft Free Image Converter Online is a fast and user-friendly CR2 to JPG converter. You can use it to convert CR2 to JPG online for free. Besides, it is a free CR2 to JPG converter that leaves no watermark.


    RAW.PICS.IO is a free online photo converter that allows users to convert photos from digital cameras directly. To be more specific, you can open, edit and convert photos from DSLR RAW camera formats.


iLoveIMG is an easy-to-use online photo converter. You can compress, resize, crop, and convert images to and from JPG effortlessly. It also supports batch converting CR2 to JPG online for free.


Zamzar always provides a complete introduction to your input and output photo formats. So if you want to know more about CR2 and JPG, you can head to the Zamzar CR2 to JPG online converter to get the information you need.


That depends on your operating system and your preferred programs. But if you use a Mac, you can easily convert with Preview (just open the file in Preview, then hit File > Export and select JPEG). On a Windows PC, you can download Pixillion, which offers a quick (and free) method of RAW conversion. You can also convert to JPEG using an online converter, such as CloudConvert, or you can use a program such as Lightroom Classic, Luminar 4/AI, or Photoshop.
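And if you are comfortable with a little scripting, you can do the conversion yourself in Python. The snippet below is a minimal sketch and is not tied to any converter mentioned above; it assumes the third-party packages rawpy and imageio are installed (pip install rawpy imageio), and "photo.CR2" is a placeholder filename.

```python
# Minimal sketch: convert a Canon CR2 file to JPEG in Python.
import rawpy
import imageio

with rawpy.imread("photo.CR2") as raw:  # placeholder input path
    rgb = raw.postprocess()             # demosaic the sensor data into an 8-bit RGB array
imageio.imwrite("photo.jpg", rgb)       # encode the array as a JPEG
```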


RAW File

File extension: .RAW, .3FR, .ARI, .ARW, .BAY, .CRW, .CR2, .CAP, .DCS, .DCR, .DNG, .DRF, .EIP, .ERF, .FFF, .IIQ, .K25, .KDC, .MDC, .MEF, .MOS, .MRW, .NEF, .NRW, .OBM, .ORF, .PEF, .PTX, .PXN, .R3D, .RAF, .RWL, .RW2, .RWZ, .SR2, .SRF, .SRW, .X3F
Category: Image File
Description: RAW is a generic term referring to a family of image formats containing unprocessed image data and metadata, received directly from the image sensors of digital still or motion picture cameras and/or scanners. RAW files are not usable as images and are often referred to as digital negatives. RAW images directly reflect color and light intensity and therefore demonstrate the true colors of the picture. In order to print or view a RAW file, it must be converted to a standard raster graphics format (JPEG). Each camera model has its own RAW extension. Formats are differentiated according to camera manufacturers' names: .nef (Nikon), .crw (Canon), .srw (Samsung), etc.

JPG File

File extension: .JPG, .JPEG, .JPE, .JIF, .JFIF, .JFI
Category: Image File
Description: JPG is the file format for images made by digital cameras and spread throughout the world wide web. When saved in JPG format, an image loses some quality because of the size compression. But in the end you have a much smaller file that is easy to archive, send, and publish on the web. These are the cases when an image's size matters more than its quality. Nonetheless, by using professional software you can select the compression degree and so affect the image's quality.
Developed by: The JPEG Committee


\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Dynamo Studio 2006 Scaricare Generatore Di Chiavi 64 Bits Italiano.md b/spaces/contluForse/HuggingGPT/assets/Dynamo Studio 2006 Scaricare Generatore Di Chiavi 64 Bits Italiano.md
deleted file mode 100644
index 2c4521e5d9325a292562848d55d5f70a80e9e81c..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Dynamo Studio 2006 Scaricare Generatore Di Chiavi 64 Bits Italiano.md
+++ /dev/null
@@ -1,6 +0,0 @@

Dynamo Studio 2006: Download 64-bit Key Generator (Italian)


    Download File === https://ssurll.com/2uzvj4




diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/parsers/parser_factory.py b/spaces/cooelf/Multimodal-CoT/timm/data/parsers/parser_factory.py
deleted file mode 100644
index 419ffe899b476233dba84b6cb8d0851801da27a5..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/data/parsers/parser_factory.py
+++ /dev/null
@@ -1,29 +0,0 @@
import os

from .parser_image_folder import ParserImageFolder
from .parser_image_tar import ParserImageTar
from .parser_image_in_tar import ParserImageInTar


def create_parser(name, root, split='train', **kwargs):
    name = name.lower()
    name = name.split('/', 2)
    prefix = ''
    if len(name) > 1:
        prefix = name[0]
    name = name[-1]

    # FIXME improve the selection right now just tfds prefix or fallback path, will need options to
    # explicitly select other options shortly
    if prefix == 'tfds':
        from .parser_tfds import ParserTfds  # defer tensorflow import
        parser = ParserTfds(root, name, split=split, shuffle=kwargs.pop('shuffle', False), **kwargs)
    else:
        assert os.path.exists(root)
        # default fallback path (backwards compat), use image tar if root is a .tar file, otherwise image folder
        # FIXME support split here, in parser?
        if os.path.isfile(root) and os.path.splitext(root)[1] == '.tar':
            parser = ParserImageInTar(root, **kwargs)
        else:
            parser = ParserImageFolder(root, **kwargs)
    return parser

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/__init__.py
deleted file mode 100644
index 2ed2c17ad357742e423beeaf4d35db03fe9af469..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .collate import collate
from .data_container import DataContainer
from .data_parallel import MMDataParallel
from .distributed import MMDistributedDataParallel
from .registry import MODULE_WRAPPERS
from .scatter_gather import scatter, scatter_kwargs
from .utils import is_module_wrapper

__all__ = [
    'collate', 'DataContainer', 'MMDataParallel', 'MMDistributedDataParallel',
    'scatter', 'scatter_kwargs', 'is_module_wrapper', 'MODULE_WRAPPERS'
]
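A minimal usage sketch for the factory above (not part of the original file; the dataset root is a placeholder, and the import path assumes the upstream timm package layout rather than this vendored copy):

```python
# Hypothetical example: with no 'tfds/' prefix, the name is ignored for
# selection and the filesystem fallback picks ParserImageFolder.
from timm.data.parsers.parser_factory import create_parser

parser = create_parser('imagenet', root='/data/imagenet/train')  # placeholder path
img_file, target = parser[0]   # ParserImageFolder yields (open image file, class index)
print(len(parser), parser.filename(0))
```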
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(ChaseDB1Dataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_1stHO.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/config/instantiate.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/config/instantiate.py deleted file mode 100644 index 26d191b03f800dae5620128957d137cd4fdb1728..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/config/instantiate.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import collections.abc as abc -import dataclasses -import logging -from typing import Any - -from annotator.oneformer.detectron2.utils.registry import _convert_target_to_string, locate - -__all__ = ["dump_dataclass", "instantiate"] - - -def dump_dataclass(obj: Any): - """ - Dump a dataclass recursively into a dict that can be later instantiated. - - Args: - obj: a dataclass object - - Returns: - dict - """ - assert dataclasses.is_dataclass(obj) and not isinstance( - obj, type - ), "dump_dataclass() requires an instance of a dataclass." - ret = {"_target_": _convert_target_to_string(type(obj))} - for f in dataclasses.fields(obj): - v = getattr(obj, f.name) - if dataclasses.is_dataclass(v): - v = dump_dataclass(v) - if isinstance(v, (list, tuple)): - v = [dump_dataclass(x) if dataclasses.is_dataclass(x) else x for x in v] - ret[f.name] = v - return ret - - -def instantiate(cfg): - """ - Recursively instantiate objects defined in dictionaries by - "_target_" and arguments. - - Args: - cfg: a dict-like object with "_target_" that defines the caller, and - other keys that define the arguments - - Returns: - object instantiated by cfg - """ - from omegaconf import ListConfig, DictConfig, OmegaConf - - if isinstance(cfg, ListConfig): - lst = [instantiate(x) for x in cfg] - return ListConfig(lst, flags={"allow_objects": True}) - if isinstance(cfg, list): - # Specialize for list, because many classes take - # list[objects] as arguments, such as ResNet, DatasetMapper - return [instantiate(x) for x in cfg] - - # If input is a DictConfig backed by dataclasses (i.e. omegaconf's structured config), - # instantiate it to the actual dataclass. - if isinstance(cfg, DictConfig) and dataclasses.is_dataclass(cfg._metadata.object_type): - return OmegaConf.to_object(cfg) - - if isinstance(cfg, abc.Mapping) and "_target_" in cfg: - # conceptually equivalent to hydra.utils.instantiate(cfg) with _convert_=all, - # but faster: https://github.com/facebookresearch/hydra/issues/1200 - cfg = {k: instantiate(v) for k, v in cfg.items()} - cls = cfg.pop("_target_") - cls = instantiate(cls) - - if isinstance(cls, str): - cls_name = cls - cls = locate(cls_name) - assert cls is not None, cls_name - else: - try: - cls_name = cls.__module__ + "." 
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/swin_common.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/swin_common.py
deleted file mode 100644
index 94d63d408f18511179d90b3ac6f697385d1e556d..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/swin_common.py
+++ /dev/null
@@ -1,52 +0,0 @@
import torch

import torch.nn as nn
import numpy as np

from .utils import activations, forward_default, get_activation, Transpose


def forward_swin(pretrained, x):
    return forward_default(pretrained, x)


def _make_swin_backbone(
        model,
        hooks=[1, 1, 17, 1],
        patch_grid=[96, 96]
):
    pretrained = nn.Module()

    pretrained.model = model
    pretrained.model.layers[0].blocks[hooks[0]].register_forward_hook(get_activation("1"))
    pretrained.model.layers[1].blocks[hooks[1]].register_forward_hook(get_activation("2"))
    pretrained.model.layers[2].blocks[hooks[2]].register_forward_hook(get_activation("3"))
    pretrained.model.layers[3].blocks[hooks[3]].register_forward_hook(get_activation("4"))

    pretrained.activations = activations

    if hasattr(model, "patch_grid"):
        used_patch_grid = model.patch_grid
    else:
        used_patch_grid = patch_grid

    patch_grid_size = np.array(used_patch_grid, dtype=int)

    pretrained.act_postprocess1 = nn.Sequential(
        Transpose(1, 2),
        nn.Unflatten(2, torch.Size(patch_grid_size.tolist()))
    )
    pretrained.act_postprocess2 = nn.Sequential(
        Transpose(1, 2),
        nn.Unflatten(2, torch.Size((patch_grid_size // 2).tolist()))
    )
    pretrained.act_postprocess3 = nn.Sequential(
        Transpose(1, 2),
        nn.Unflatten(2, torch.Size((patch_grid_size // 4).tolist()))
    )
    pretrained.act_postprocess4 = nn.Sequential(
        Transpose(1, 2),
        nn.Unflatten(2, torch.Size((patch_grid_size // 8).tolist()))
    )

    return pretrained
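A hedged sketch of how this helper is typically wired up (not from the original file; the timm model name is an assumption, chosen because the default hooks [1, 1, 17, 1] match a Swin-L depth layout of (2, 2, 18, 2)):

```python
# Hook a timm Swin transformer as a 4-stage feature backbone.
import timm
import torch

model = timm.create_model("swin_large_patch4_window12_384", pretrained=True)
backbone = _make_swin_backbone(model)                           # registers the 4 stage hooks
features = forward_swin(backbone, torch.randn(1, 3, 384, 384))  # stage activations
```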
diff --git a/spaces/csuhan/opendet2/datasets/README.md b/spaces/csuhan/opendet2/datasets/README.md
deleted file mode 100644
index 2cbf4f0b6791fc56c4f67addf914e57d39f1b34b..0000000000000000000000000000000000000000
--- a/spaces/csuhan/opendet2/datasets/README.md
+++ /dev/null
@@ -1,51 +0,0 @@
# Use Builtin Datasets

A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
This document explains how to setup the builtin datasets so they can be used by the above APIs.
[Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
and how to add new datasets to them.

Detectron2 has builtin support for a few datasets.
The datasets are assumed to exist in a directory specified by the environment variable
`DETECTRON2_DATASETS`.
Under this directory, detectron2 will look for datasets in the structure described below, if needed.
```
$DETECTRON2_DATASETS/
  coco/
  VOC20{07,12}/
```

You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
If left unset, the default is `./datasets` relative to your current working directory.

The [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md)
contains configs and models that use these builtin datasets.

## Expected dataset structure for [COCO instance/keypoint detection](https://cocodataset.org/#download):

```
coco/
  annotations/
    instances_{train,val}2017.json
    person_keypoints_{train,val}2017.json
  {train,val}2017/
    # image files that are mentioned in the corresponding json
```

You can use the 2014 version of the dataset as well.

Some of the builtin tests (`dev/run_*_tests.sh`) use a tiny version of the COCO dataset,
which you can download with `./datasets/prepare_for_tests.sh`.

## Expected dataset structure for [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/index.html):
```
VOC20{07,12}/
  Annotations/
  ImageSets/
    Main/
      trainval.txt
      test.txt
      # train.txt or val.txt, if you use these splits
  JPEGImages/
```
diff --git a/spaces/cvlab/zero123/app.py b/spaces/cvlab/zero123/app.py
deleted file mode 100644
index ccd8c0b93029c9361be7f2d4608e97651c6c7841..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123/app.py
+++ /dev/null
@@ -1,111 +0,0 @@
import numpy as np
import gradio as gr
import os
from PIL import Image
from functools import partial

def retrieve_input_image_wild(dataset, inputs):
    img_id = inputs
    img_path = os.path.join('online_demo', dataset, 'step-100_scale-6.0')
    try:
        image = Image.open(os.path.join(img_path, '%s.jpg' % img_id))
    except:
        image = Image.open(os.path.join(img_path, '%s.png' % img_id))

    image.thumbnail([256, 256], Image.Resampling.LANCZOS)
    return image

def retrieve_input_image(dataset, inputs):
    img_id = inputs
    img_path = os.path.join('online_demo', dataset, 'step-100_scale-6.0', img_id, 'input.png')
    image = Image.open(img_path)
    return image

def retrieve_novel_view(dataset, img_id, polar, azimuth, zoom, seed):
    polar = polar // 30 + 1
    azimuth = azimuth // 30
    zoom = int(zoom * 2 + 1)
    img_path = os.path.join('online_demo', dataset, 'step-100_scale-6.0', img_id,
                            'polar-%d_azimuth-%d_distance-%d_seed-%d.png' % (polar, azimuth, zoom, seed))
    image = Image.open(img_path)
    return image


with gr.Blocks() as demo:
    # gr.Markdown("Stable Diffusion Novel View Synthesis (Precomputed Results)")
    with gr.Tab("In-the-wild Images"):
        with gr.Row():
            with gr.Column(scale=1):
                default_input_image = Image.open(os.path.join('online_demo', 'nerf_wild', 'step-100_scale-6.0', 'car1.png'))
                default_input_image.thumbnail([256, 256], Image.Resampling.LANCZOS)
                input_image = gr.Image(default_input_image, shape=[256, 256])
                options = sorted(next(os.walk('online_demo/nerf_wild/step-100_scale-6.0'))[1])
                img_id = gr.Dropdown(options, value='car1', label='options')
                text_button = gr.Button("Load Input Image")
                retrieve_input_image_dataset = partial(retrieve_input_image_wild, 'nerf_wild')
                text_button.click(retrieve_input_image_dataset, inputs=img_id, outputs=input_image)

            with gr.Column(scale=1):
                novel_view = gr.Image(shape=[256, 256])
                inputs = [img_id,
                          gr.Slider(-30, 30, value=0, step=30, label='Polar angle (vertical rotation in degrees)'),
                          gr.Slider(0, 330, value=0, step=30, label='Azimuth angle (horizontal rotation in degrees)'),
                          gr.Slider(-0.5, 0.5, value=0, step=0.5, label='Zoom'),
                          gr.Slider(0, 3, value=1, step=1, label='Random seed')]

                submit_button = gr.Button("Generate Novel View")
                retrieve_novel_view_dataset = partial(retrieve_novel_view, 'nerf_wild')
                submit_button.click(retrieve_novel_view_dataset, inputs=inputs, outputs=novel_view)

    with gr.Tab("Google Scanned Objects"):
        with gr.Row():
            with gr.Column(scale=1):
                default_input_image = Image.open(os.path.join('online_demo', 'GSO', 'step-100_scale-6.0', 'SAMBA_HEMP', 'input.png'))
                default_input_image.thumbnail([256, 256], Image.Resampling.LANCZOS)
                input_image = gr.Image(default_input_image, shape=[256, 256])
                options = sorted(os.listdir('online_demo/GSO/step-100_scale-6.0'))
                img_id = gr.Dropdown(options, value='SAMBA_HEMP', label='options')
                text_button = gr.Button("Load Input Image")
                retrieve_input_image_dataset = partial(retrieve_input_image, 'GSO')
                text_button.click(retrieve_input_image_dataset, inputs=img_id, outputs=input_image)

            with gr.Column(scale=1):
                novel_view = gr.Image(shape=[256, 256])
                inputs = [img_id,
                          gr.Slider(-30, 30, value=0, step=30, label='Polar angle (vertical rotation in degrees)'),
                          gr.Slider(0, 330, value=0, step=30, label='Azimuth angle (horizontal rotation in degrees)'),
                          gr.Slider(-0.5, 0.5, value=0, step=0.5, label='Zoom'),
                          gr.Slider(0, 3, value=1, step=1, label='Random seed')]

                submit_button = gr.Button("Generate Novel View")
                retrieve_novel_view_dataset = partial(retrieve_novel_view, 'GSO')
                submit_button.click(retrieve_novel_view_dataset, inputs=inputs, outputs=novel_view)

    with gr.Tab("RTMV"):
        with gr.Row():
            with gr.Column(scale=1):
                default_input_image = Image.open(os.path.join('online_demo', 'RTMV', 'step-100_scale-6.0', '00000', 'input.png'))
                default_input_image.thumbnail([256, 256], Image.Resampling.LANCZOS)
                input_image = gr.Image(default_input_image, shape=[256, 256])
                options = sorted(os.listdir('online_demo/RTMV/step-100_scale-6.0'))
                img_id = gr.Dropdown(options, value='00000', label='options')
                text_button = gr.Button("Load Input Image")
                retrieve_input_image_dataset = partial(retrieve_input_image, 'RTMV')
                text_button.click(retrieve_input_image_dataset, inputs=img_id, outputs=input_image)

            with gr.Column(scale=1):
                novel_view = gr.Image(shape=[256, 256])
                inputs = [img_id,
                          gr.Slider(-30, 30, value=0, step=30, label='Polar angle (vertical rotation in degrees)'),
                          gr.Slider(0, 330, value=0, step=30, label='Azimuth angle (horizontal rotation in degrees)'),
                          gr.Slider(-0.5, 0.5, value=0, step=0.5, label='Zoom'),
                          gr.Slider(0, 3, value=1, step=1, label='Random seed')]

                submit_button = gr.Button("Generate Novel View")
                retrieve_novel_view_dataset = partial(retrieve_novel_view, 'RTMV')
                submit_button.click(retrieve_novel_view_dataset, inputs=inputs, outputs=novel_view)


if __name__ == "__main__":
    demo.launch()
- -# History: - -# 2009-03-08 fl Added to PIL. - -# Copyright (C) 2002-2003 Kevin Cazabon -# Copyright (c) 2009 by Fredrik Lundh -# Copyright (c) 2013 by Eric Soroos - -# See the README file for information on usage and redistribution. See -# below for the original description. - -import sys -from enum import IntEnum - -from . import Image - -try: - from . import _imagingcms -except ImportError as ex: - # Allow error import for doc purposes, but error out when accessing - # anything in core. - from ._util import DeferredError - - _imagingcms = DeferredError(ex) - -DESCRIPTION = """ -pyCMS - - a Python / PIL interface to the littleCMS ICC Color Management System - Copyright (C) 2002-2003 Kevin Cazabon - kevin@cazabon.com - https://www.cazabon.com - - pyCMS home page: https://www.cazabon.com/pyCMS - littleCMS home page: https://www.littlecms.com - (littleCMS is Copyright (C) 1998-2001 Marti Maria) - - Originally released under LGPL. Graciously donated to PIL in - March 2009, for distribution under the standard PIL license - - The pyCMS.py module provides a "clean" interface between Python/PIL and - pyCMSdll, taking care of some of the more complex handling of the direct - pyCMSdll functions, as well as error-checking and making sure that all - relevant data is kept together. - - While it is possible to call pyCMSdll functions directly, it's not highly - recommended. - - Version History: - - 1.0.0 pil Oct 2013 Port to LCMS 2. - - 0.1.0 pil mod March 10, 2009 - - Renamed display profile to proof profile. The proof - profile is the profile of the device that is being - simulated, not the profile of the device which is - actually used to display/print the final simulation - (that'd be the output profile) - also see LCMSAPI.txt - input colorspace -> using 'renderingIntent' -> proof - colorspace -> using 'proofRenderingIntent' -> output - colorspace - - Added LCMS FLAGS support. - Added FLAGS["SOFTPROOFING"] as default flag for - buildProofTransform (otherwise the proof profile/intent - would be ignored). - - 0.1.0 pil March 2009 - added to PIL, as PIL.ImageCms - - 0.0.2 alpha Jan 6, 2002 - - Added try/except statements around type() checks of - potential CObjects... Python won't let you use type() - on them, and raises a TypeError (stupid, if you ask - me!) - - Added buildProofTransformFromOpenProfiles() function. - Additional fixes in DLL, see DLL code for details. - - 0.0.1 alpha first public release, Dec. 26, 2002 - - Known to-do list with current version (of Python interface, not pyCMSdll): - - none - -""" - -VERSION = "1.0.0 pil" - -# --------------------------------------------------------------------. 
- -core = _imagingcms - -# -# intent/direction values - - -class Intent(IntEnum): - PERCEPTUAL = 0 - RELATIVE_COLORIMETRIC = 1 - SATURATION = 2 - ABSOLUTE_COLORIMETRIC = 3 - - -class Direction(IntEnum): - INPUT = 0 - OUTPUT = 1 - PROOF = 2 - - -# -# flags - -FLAGS = { - "MATRIXINPUT": 1, - "MATRIXOUTPUT": 2, - "MATRIXONLY": (1 | 2), - "NOWHITEONWHITEFIXUP": 4, # Don't hot fix scum dot - # Don't create prelinearization tables on precalculated transforms - # (internal use): - "NOPRELINEARIZATION": 16, - "GUESSDEVICECLASS": 32, # Guess device class (for transform2devicelink) - "NOTCACHE": 64, # Inhibit 1-pixel cache - "NOTPRECALC": 256, - "NULLTRANSFORM": 512, # Don't transform anyway - "HIGHRESPRECALC": 1024, # Use more memory to give better accuracy - "LOWRESPRECALC": 2048, # Use less memory to minimize resources - "WHITEBLACKCOMPENSATION": 8192, - "BLACKPOINTCOMPENSATION": 8192, - "GAMUTCHECK": 4096, # Out of Gamut alarm - "SOFTPROOFING": 16384, # Do softproofing - "PRESERVEBLACK": 32768, # Black preservation - "NODEFAULTRESOURCEDEF": 16777216, # CRD special - "GRIDPOINTS": lambda n: (n & 0xFF) << 16, # Gridpoints -} - -_MAX_FLAG = 0 -for flag in FLAGS.values(): - if isinstance(flag, int): - _MAX_FLAG = _MAX_FLAG | flag - - -# --------------------------------------------------------------------. -# Experimental PIL-level API -# --------------------------------------------------------------------. - -## -# Profile. - - -class ImageCmsProfile: - def __init__(self, profile): - """ - :param profile: Either a string representing a filename, - a file like object containing a profile or a - low-level profile object - - """ - - if isinstance(profile, str): - if sys.platform == "win32": - profile_bytes_path = profile.encode() - try: - profile_bytes_path.decode("ascii") - except UnicodeDecodeError: - with open(profile, "rb") as f: - self._set(core.profile_frombytes(f.read())) - return - self._set(core.profile_open(profile), profile) - elif hasattr(profile, "read"): - self._set(core.profile_frombytes(profile.read())) - elif isinstance(profile, _imagingcms.CmsProfile): - self._set(profile) - else: - msg = "Invalid type for Profile" - raise TypeError(msg) - - def _set(self, profile, filename=None): - self.profile = profile - self.filename = filename - self.product_name = None # profile.product_name - self.product_info = None # profile.product_info - - def tobytes(self): - """ - Returns the profile in a format suitable for embedding in - saved images. - - :returns: a bytes object containing the ICC profile. - """ - - return core.profile_tobytes(self.profile) - - -class ImageCmsTransform(Image.ImagePointHandler): - - """ - Transform. This can be used with the procedural API, or with the standard - :py:func:`~PIL.Image.Image.point` method. - - Will return the output profile in the ``output.info['icc_profile']``. 
- """ - - def __init__( - self, - input, - output, - input_mode, - output_mode, - intent=Intent.PERCEPTUAL, - proof=None, - proof_intent=Intent.ABSOLUTE_COLORIMETRIC, - flags=0, - ): - if proof is None: - self.transform = core.buildTransform( - input.profile, output.profile, input_mode, output_mode, intent, flags - ) - else: - self.transform = core.buildProofTransform( - input.profile, - output.profile, - proof.profile, - input_mode, - output_mode, - intent, - proof_intent, - flags, - ) - # Note: inputMode and outputMode are for pyCMS compatibility only - self.input_mode = self.inputMode = input_mode - self.output_mode = self.outputMode = output_mode - - self.output_profile = output - - def point(self, im): - return self.apply(im) - - def apply(self, im, imOut=None): - im.load() - if imOut is None: - imOut = Image.new(self.output_mode, im.size, None) - self.transform.apply(im.im.id, imOut.im.id) - imOut.info["icc_profile"] = self.output_profile.tobytes() - return imOut - - def apply_in_place(self, im): - im.load() - if im.mode != self.output_mode: - msg = "mode mismatch" - raise ValueError(msg) # wrong output mode - self.transform.apply(im.im.id, im.im.id) - im.info["icc_profile"] = self.output_profile.tobytes() - return im - - -def get_display_profile(handle=None): - """ - (experimental) Fetches the profile for the current display device. - - :returns: ``None`` if the profile is not known. - """ - - if sys.platform != "win32": - return None - - from . import ImageWin - - if isinstance(handle, ImageWin.HDC): - profile = core.get_display_profile_win32(handle, 1) - else: - profile = core.get_display_profile_win32(handle or 0) - if profile is None: - return None - return ImageCmsProfile(profile) - - -# --------------------------------------------------------------------. -# pyCMS compatible layer -# --------------------------------------------------------------------. - - -class PyCMSError(Exception): - - """(pyCMS) Exception class. - This is used for all errors in the pyCMS API.""" - - pass - - -def profileToProfile( - im, - inputProfile, - outputProfile, - renderingIntent=Intent.PERCEPTUAL, - outputMode=None, - inPlace=False, - flags=0, -): - """ - (pyCMS) Applies an ICC transformation to a given image, mapping from - ``inputProfile`` to ``outputProfile``. - - If the input or output profiles specified are not valid filenames, a - :exc:`PyCMSError` will be raised. If ``inPlace`` is ``True`` and - ``outputMode != im.mode``, a :exc:`PyCMSError` will be raised. - If an error occurs during application of the profiles, - a :exc:`PyCMSError` will be raised. - If ``outputMode`` is not a mode supported by the ``outputProfile`` (or by pyCMS), - a :exc:`PyCMSError` will be raised. - - This function applies an ICC transformation to im from ``inputProfile``'s - color space to ``outputProfile``'s color space using the specified rendering - intent to decide how to handle out-of-gamut colors. - - ``outputMode`` can be used to specify that a color mode conversion is to - be done using these profiles, but the specified profiles must be able - to handle that mode. I.e., if converting im from RGB to CMYK using - profiles, the input profile must handle RGB data, and the output - profile must handle CMYK data. - - :param im: An open :py:class:`~PIL.Image.Image` object (i.e. Image.new(...) - or Image.open(...), etc.) 
- :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this image, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - profile you wish to use for this image, or a profile object - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param outputMode: A valid PIL mode for the output image (i.e. "RGB", - "CMYK", etc.). Note: if rendering the image "inPlace", outputMode - MUST be the same mode as the input, or omitted completely. If - omitted, the outputMode will be the same as the mode of the input - image (im.mode) - :param inPlace: Boolean. If ``True``, the original image is modified in-place, - and ``None`` is returned. If ``False`` (default), a new - :py:class:`~PIL.Image.Image` object is returned with the transform applied. - :param flags: Integer (0-...) specifying additional flags - :returns: Either None or a new :py:class:`~PIL.Image.Image` object, depending on - the value of ``inPlace`` - :exception PyCMSError: - """ - - if outputMode is None: - outputMode = im.mode - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = f"flags must be an integer between 0 and {_MAX_FLAG}" - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - transform = ImageCmsTransform( - inputProfile, - outputProfile, - im.mode, - outputMode, - renderingIntent, - flags=flags, - ) - if inPlace: - transform.apply_in_place(im) - imOut = None - else: - imOut = transform.apply(im) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - return imOut - - -def getOpenProfile(profileFilename): - """ - (pyCMS) Opens an ICC profile file. - - The PyCMSProfile object can be passed back into pyCMS for use in creating - transforms and such (as in ImageCms.buildTransformFromOpenProfiles()). - - If ``profileFilename`` is not a valid filename for an ICC profile, - a :exc:`PyCMSError` will be raised. - - :param profileFilename: String, as a valid filename path to the ICC profile - you wish to open, or a file-like object. - :returns: A CmsProfile class object. - :exception PyCMSError: - """ - - try: - return ImageCmsProfile(profileFilename) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def buildTransform( - inputProfile, - outputProfile, - inMode, - outMode, - renderingIntent=Intent.PERCEPTUAL, - flags=0, -): - """ - (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the - ``outputProfile``. Use applyTransform to apply the transform to a given - image. - - If the input or output profiles specified are not valid filenames, a - :exc:`PyCMSError` will be raised. If an error occurs during creation - of the transform, a :exc:`PyCMSError` will be raised. - - If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile`` - (or by pyCMS), a :exc:`PyCMSError` will be raised. 
-
-    This function builds and returns an ICC transform from the ``inputProfile``
-    to the ``outputProfile`` using the ``renderingIntent`` to determine what to do
-    with out-of-gamut colors. It will ONLY work for converting images that
-    are in ``inMode`` to images that are in ``outMode`` color format (PIL mode,
-    i.e. "RGB", "RGBA", "CMYK", etc.).
-
-    Building the transform is a fair part of the overhead in
-    ImageCms.profileToProfile(), so if you're planning on converting multiple
-    images using the same input/output settings, this can save you time.
-    Once you have a transform object, it can be used with
-    ImageCms.applyProfile() to convert images without the need to re-compute
-    the lookup table for the transform.
-
-    The reason pyCMS returns a class object rather than a handle directly
-    to the transform is that it needs to keep track of the PIL input/output
-    modes that the transform is meant for. These attributes are stored in
-    the ``inMode`` and ``outMode`` attributes of the object (which can be
-    manually overridden if you really want to, but I don't know of any
-    time that would be of use, or would even work).
-
-    :param inputProfile: String, as a valid filename path to the ICC input
-        profile you wish to use for this transform, or a profile object
-    :param outputProfile: String, as a valid filename path to the ICC output
-        profile you wish to use for this transform, or a profile object
-    :param inMode: String, as a valid PIL mode that the appropriate profile
-        also supports (i.e. "RGB", "RGBA", "CMYK", etc.)
-    :param outMode: String, as a valid PIL mode that the appropriate profile
-        also supports (i.e. "RGB", "RGBA", "CMYK", etc.)
-    :param renderingIntent: Integer (0-3) specifying the rendering intent you
-        wish to use for the transform
-
-            ImageCms.Intent.PERCEPTUAL            = 0 (DEFAULT)
-            ImageCms.Intent.RELATIVE_COLORIMETRIC = 1
-            ImageCms.Intent.SATURATION            = 2
-            ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3
-
-        see the pyCMS documentation for details on rendering intents and what
-        they do.
-    :param flags: Integer (0-...) specifying additional flags
-    :returns: A CmsTransform class object.
-    :exception PyCMSError:
-    """
-
-    if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3):
-        msg = "renderingIntent must be an integer between 0 and 3"
-        raise PyCMSError(msg)
-
-    if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG):
-        msg = f"flags must be an integer between 0 and {_MAX_FLAG}"
-        raise PyCMSError(msg)
-
-    try:
-        if not isinstance(inputProfile, ImageCmsProfile):
-            inputProfile = ImageCmsProfile(inputProfile)
-        if not isinstance(outputProfile, ImageCmsProfile):
-            outputProfile = ImageCmsProfile(outputProfile)
-        return ImageCmsTransform(
-            inputProfile, outputProfile, inMode, outMode, renderingIntent, flags=flags
-        )
-    except (OSError, TypeError, ValueError) as v:
-        raise PyCMSError(v) from v
-
-
-def buildProofTransform(
-    inputProfile,
-    outputProfile,
-    proofProfile,
-    inMode,
-    outMode,
-    renderingIntent=Intent.PERCEPTUAL,
-    proofRenderingIntent=Intent.ABSOLUTE_COLORIMETRIC,
-    flags=FLAGS["SOFTPROOFING"],
-):
-    """
-    (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the
-    ``outputProfile``, but tries to simulate the result that would be
-    obtained on the ``proofProfile`` device.
-
-    If the input, output, or proof profiles specified are not valid
-    filenames, a :exc:`PyCMSError` will be raised.
-
-    If an error occurs during creation of the transform,
-    a :exc:`PyCMSError` will be raised.
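# Illustrative sketch for buildTransform() above (not part of the original
# module): build the transform once, then reuse it for many images so the
# lookup table is only computed a single time. Paths are placeholders.
def _example_batch_convert(paths):
    from PIL import Image, ImageCms

    transform = ImageCms.buildTransform("camera_rgb.icc", "srgb.icc", "RGB", "RGB")
    for path in paths:
        im = Image.open(path)
        out = ImageCms.applyTransform(im, transform)  # returns a new image
        out.save(path + ".converted.png")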
- - If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile`` - (or by pyCMS), a :exc:`PyCMSError` will be raised. - - This function builds and returns an ICC transform from the ``inputProfile`` - to the ``outputProfile``, but tries to simulate the result that would be - obtained on the ``proofProfile`` device using ``renderingIntent`` and - ``proofRenderingIntent`` to determine what to do with out-of-gamut - colors. This is known as "soft-proofing". It will ONLY work for - converting images that are in ``inMode`` to images that are in outMode - color format (PIL mode, i.e. "RGB", "RGBA", "CMYK", etc.). - - Usage of the resulting transform object is exactly the same as with - ImageCms.buildTransform(). - - Proof profiling is generally used when using an output device to get a - good idea of what the final printed/displayed image would look like on - the ``proofProfile`` device when it's quicker and easier to use the - output device for judging color. Generally, this means that the - output device is a monitor, or a dye-sub printer (etc.), and the simulated - device is something more expensive, complicated, or time consuming - (making it difficult to make a real print for color judgement purposes). - - Soft-proofing basically functions by adjusting the colors on the - output device to match the colors of the device being simulated. However, - when the simulated device has a much wider gamut than the output - device, you may obtain marginal results. - - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this transform, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - (monitor, usually) profile you wish to use for this transform, or a - profile object - :param proofProfile: String, as a valid filename path to the ICC proof - profile you wish to use for this transform, or a profile object - :param inMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param outMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the input->proof (simulated) transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param proofRenderingIntent: Integer (0-3) specifying the rendering intent - you wish to use for proof->output transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param flags: Integer (0-...) specifying additional flags - :returns: A CmsTransform class object. 
-    :exception PyCMSError:
-    """
-
-    if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3):
-        msg = "renderingIntent must be an integer between 0 and 3"
-        raise PyCMSError(msg)
-
-    if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG):
-        msg = f"flags must be an integer between 0 and {_MAX_FLAG}"
-        raise PyCMSError(msg)
-
-    try:
-        if not isinstance(inputProfile, ImageCmsProfile):
-            inputProfile = ImageCmsProfile(inputProfile)
-        if not isinstance(outputProfile, ImageCmsProfile):
-            outputProfile = ImageCmsProfile(outputProfile)
-        if not isinstance(proofProfile, ImageCmsProfile):
-            proofProfile = ImageCmsProfile(proofProfile)
-        return ImageCmsTransform(
-            inputProfile,
-            outputProfile,
-            inMode,
-            outMode,
-            renderingIntent,
-            proofProfile,
-            proofRenderingIntent,
-            flags,
-        )
-    except (OSError, TypeError, ValueError) as v:
-        raise PyCMSError(v) from v
-
-
-buildTransformFromOpenProfiles = buildTransform
-buildProofTransformFromOpenProfiles = buildProofTransform
-
-
-def applyTransform(im, transform, inPlace=False):
-    """
-    (pyCMS) Applies a transform to a given image.
-
-    If ``im.mode != transform.inMode``, a :exc:`PyCMSError` is raised.
-
-    If ``inPlace`` is ``True`` and ``transform.inMode != transform.outMode``, a
-    :exc:`PyCMSError` is raised.
-
-    If ``im.mode``, ``transform.inMode`` or ``transform.outMode`` is not
-    supported by pyCMSdll or the profiles you used for the transform, a
-    :exc:`PyCMSError` is raised.
-
-    If an error occurs while the transform is being applied,
-    a :exc:`PyCMSError` is raised.
-
-    This function applies a pre-calculated transform (from
-    ImageCms.buildTransform() or ImageCms.buildTransformFromOpenProfiles())
-    to an image. The transform can be used for multiple images, saving
-    considerable calculation time if doing the same conversion multiple times.
-
-    If you want to modify im in-place instead of receiving a new image as
-    the return value, set ``inPlace`` to ``True``. This can only be done if
-    ``transform.inMode`` and ``transform.outMode`` are the same, because we can't
-    change the mode in-place (the buffer sizes for some modes are
-    different). The default behavior is to return a new :py:class:`~PIL.Image.Image`
-    object of the same dimensions in mode ``transform.outMode``.
-
-    :param im: An :py:class:`~PIL.Image.Image` object, and im.mode must be the same
-        as the ``inMode`` supported by the transform.
-    :param transform: A valid CmsTransform class object
-    :param inPlace: Bool. If ``True``, ``im`` is modified in place and ``None`` is
-        returned, if ``False``, a new :py:class:`~PIL.Image.Image` object with the
-        transform applied is returned (and ``im`` is not changed). The default is
-        ``False``.
-    :returns: Either ``None``, or a new :py:class:`~PIL.Image.Image` object,
-        depending on the value of ``inPlace``. The profile will be returned in
-        the image's ``info['icc_profile']``.
-    :exception PyCMSError:
-    """
-
-    try:
-        if inPlace:
-            transform.apply_in_place(im)
-            imOut = None
-        else:
-            imOut = transform.apply(im)
-    except (TypeError, ValueError) as v:
-        raise PyCMSError(v) from v
-
-    return imOut
-
-
-def createProfile(colorSpace, colorTemp=-1):
-    """
-    (pyCMS) Creates a profile.
-
-    If colorSpace not in ``["LAB", "XYZ", "sRGB"]``,
-    a :exc:`PyCMSError` is raised.
-
-    If using LAB and ``colorTemp`` is not a positive integer,
-    a :exc:`PyCMSError` is raised.
-
-    If an error occurs while creating the profile,
-    a :exc:`PyCMSError` is raised.
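# Illustrative sketch for buildProofTransform() above (not part of the
# original module): soft-proof a printer profile on a monitor profile.
# All three profile paths are placeholders.
def _example_soft_proof(im):
    from PIL import ImageCms

    proof = ImageCms.buildProofTransform(
        "srgb.icc",          # profile of the image data
        "monitor.icc",       # device the preview is rendered on
        "printer_cmyk.icc",  # device being simulated
        "RGB",
        "RGB",
    )
    # The result approximates how the print would look on the monitor.
    return ImageCms.applyTransform(im, proof)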
- - Use this function to create common profiles on-the-fly instead of - having to supply a profile on disk and knowing the path to it. It - returns a normal CmsProfile object that can be passed to - ImageCms.buildTransformFromOpenProfiles() to create a transform to apply - to images. - - :param colorSpace: String, the color space of the profile you wish to - create. - Currently only "LAB", "XYZ", and "sRGB" are supported. - :param colorTemp: Positive integer for the white point for the profile, in - degrees Kelvin (i.e. 5000, 6500, 9600, etc.). The default is for D50 - illuminant if omitted (5000k). colorTemp is ONLY applied to LAB - profiles, and is ignored for XYZ and sRGB. - :returns: A CmsProfile class object - :exception PyCMSError: - """ - - if colorSpace not in ["LAB", "XYZ", "sRGB"]: - msg = ( - f"Color space not supported for on-the-fly profile creation ({colorSpace})" - ) - raise PyCMSError(msg) - - if colorSpace == "LAB": - try: - colorTemp = float(colorTemp) - except (TypeError, ValueError) as e: - msg = f'Color temperature must be numeric, "{colorTemp}" not valid' - raise PyCMSError(msg) from e - - try: - return core.createProfile(colorSpace, colorTemp) - except (TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileName(profile): - """ - - (pyCMS) Gets the internal product name for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, - a :exc:`PyCMSError` is raised If an error occurs while trying - to obtain the name tag, a :exc:`PyCMSError` is raised. - - Use this function to obtain the INTERNAL name of the profile (stored - in an ICC tag in the profile itself), usually the one used when the - profile was originally created. Sometimes this tag also contains - additional information supplied by the creator. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal name of the profile as stored - in an ICC tag. - :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # do it in python, not c. - # // name was "%s - %s" (model, manufacturer) || Description , - # // but if the Model and Manufacturer were the same or the model - # // was long, Just the model, in 1.x - model = profile.profile.model - manufacturer = profile.profile.manufacturer - - if not (model or manufacturer): - return (profile.profile.profile_description or "") + "\n" - if not manufacturer or len(model) > 30: - return model + "\n" - return f"{model} - {manufacturer}\n" - - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileInfo(profile): - """ - (pyCMS) Gets the internal product information for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, - a :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the info tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - info tag. This often contains details about the profile, and how it - was created, as supplied by the creator. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. 
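# Illustrative sketch for createProfile() above (not part of the original
# module): build sRGB and D50 LAB profiles in memory, so no profile files
# are needed on disk, then convert an RGB image to LAB.
def _example_rgb_to_lab(im):
    from PIL import ImageCms

    srgb = ImageCms.createProfile("sRGB")
    lab = ImageCms.createProfile("LAB", colorTemp=5000)
    transform = ImageCms.buildTransformFromOpenProfiles(srgb, lab, "RGB", "LAB")
    return ImageCms.applyTransform(im, transform)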
- :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # add an extra newline to preserve pyCMS compatibility - # Python, not C. the white point bits weren't working well, - # so skipping. - # info was description \r\n\r\n copyright \r\n\r\n K007 tag \r\n\r\n whitepoint - description = profile.profile.profile_description - cpright = profile.profile.copyright - arr = [] - for elt in (description, cpright): - if elt: - arr.append(elt) - return "\r\n\r\n".join(arr) + "\r\n\r\n" - - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileCopyright(profile): - """ - (pyCMS) Gets the copyright for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the copyright tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - copyright tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.copyright or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileManufacturer(profile): - """ - (pyCMS) Gets the manufacturer for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the manufacturer tag, a - :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - manufacturer tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.manufacturer or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileModel(profile): - """ - (pyCMS) Gets the model for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the model tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - model tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.model or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileDescription(profile): - """ - (pyCMS) Gets the description for the given profile. 
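# Illustrative sketch for the tag getters above (not part of the original
# module). "some_profile.icc" is a placeholder path; each getter appends a
# trailing newline for pyCMS compatibility.
def _example_dump_profile_tags():
    from PIL import ImageCms

    profile = ImageCms.getOpenProfile("some_profile.icc")
    for getter in (
        ImageCms.getProfileName,
        ImageCms.getProfileInfo,
        ImageCms.getProfileCopyright,
        ImageCms.getProfileManufacturer,
        ImageCms.getProfileModel,
        ImageCms.getProfileDescription,
    ):
        print(getter.__name__, repr(getter(profile)))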
-
-    If ``profile`` isn't a valid CmsProfile object or filename to a profile, a
-    :exc:`PyCMSError` is raised.
-
-    If an error occurs while trying to obtain the description tag,
-    a :exc:`PyCMSError` is raised.
-
-    Use this function to obtain the information stored in the profile's
-    description tag.
-
-    :param profile: EITHER a valid CmsProfile object, OR a string of the
-        filename of an ICC profile.
-    :returns: A string containing the internal profile information stored in an
-        ICC tag.
-    :exception PyCMSError:
-    """
-
-    try:
-        # add an extra newline to preserve pyCMS compatibility
-        if not isinstance(profile, ImageCmsProfile):
-            profile = ImageCmsProfile(profile)
-        return (profile.profile.profile_description or "") + "\n"
-    except (AttributeError, OSError, TypeError, ValueError) as v:
-        raise PyCMSError(v) from v
-
-
-def getDefaultIntent(profile):
-    """
-    (pyCMS) Gets the default intent name for the given profile.
-
-    If ``profile`` isn't a valid CmsProfile object or filename to a profile, a
-    :exc:`PyCMSError` is raised.
-
-    If an error occurs while trying to obtain the default intent, a
-    :exc:`PyCMSError` is raised.
-
-    Use this function to determine the default (and usually best optimized)
-    rendering intent for this profile. Most profiles support multiple
-    rendering intents, but are intended mostly for one type of conversion.
-    If you wish to use a different intent than returned, use
-    ImageCms.isIntentSupported() to verify it will work first.
-
-    :param profile: EITHER a valid CmsProfile object, OR a string of the
-        filename of an ICC profile.
-    :returns: Integer 0-3 specifying the default rendering intent for this
-        profile.
-
-            ImageCms.Intent.PERCEPTUAL            = 0 (DEFAULT)
-            ImageCms.Intent.RELATIVE_COLORIMETRIC = 1
-            ImageCms.Intent.SATURATION            = 2
-            ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3
-
-        see the pyCMS documentation for details on rendering intents and what
-        they do.
-    :exception PyCMSError:
-    """
-
-    try:
-        if not isinstance(profile, ImageCmsProfile):
-            profile = ImageCmsProfile(profile)
-        return profile.profile.rendering_intent
-    except (AttributeError, OSError, TypeError, ValueError) as v:
-        raise PyCMSError(v) from v
-
-
-def isIntentSupported(profile, intent, direction):
-    """
-    (pyCMS) Checks if a given intent is supported.
-
-    Use this function to verify that you can use your desired
-    ``intent`` with ``profile``, and that ``profile`` can be used for the
-    input/output/proof profile as you desire.
-
-    Some profiles are created specifically for one "direction", and cannot
-    be used for others. Some profiles can only be used for certain
-    rendering intents, so it's best to either verify this before trying
-    to create a transform with them (using this function), or catch the
-    potential :exc:`PyCMSError` that will occur if they don't
-    support the modes you select.
-
-    :param profile: EITHER a valid CmsProfile object, OR a string of the
-        filename of an ICC profile.
-    :param intent: Integer (0-3) specifying the rendering intent you wish to
-        use with this profile
-
-            ImageCms.Intent.PERCEPTUAL            = 0 (DEFAULT)
-            ImageCms.Intent.RELATIVE_COLORIMETRIC = 1
-            ImageCms.Intent.SATURATION            = 2
-            ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3
-
-        see the pyCMS documentation for details on rendering intents and what
-        they do.
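# Illustrative sketch (not part of the original module): querying the
# default intent and verifying an alternative intent before building a
# transform. "printer_cmyk.icc" is a placeholder profile path.
def _example_pick_intent():
    from PIL import ImageCms

    profile = ImageCms.getOpenProfile("printer_cmyk.icc")
    intent = ImageCms.getDefaultIntent(profile)
    supported = ImageCms.isIntentSupported(
        profile, ImageCms.Intent.RELATIVE_COLORIMETRIC, ImageCms.Direction.OUTPUT
    )
    if supported == 1:  # isIntentSupported() returns 1 or -1, not a bool
        intent = ImageCms.Intent.RELATIVE_COLORIMETRIC
    return intent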
- :param direction: Integer specifying if the profile is to be used for - input, output, or proof - - INPUT = 0 (or use ImageCms.Direction.INPUT) - OUTPUT = 1 (or use ImageCms.Direction.OUTPUT) - PROOF = 2 (or use ImageCms.Direction.PROOF) - - :returns: 1 if the intent/direction are supported, -1 if they are not. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # FIXME: I get different results for the same data w. different - # compilers. Bug in LittleCMS or in the binding? - if profile.profile.is_intent_supported(intent, direction): - return 1 - else: - return -1 - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def versions(): - """ - (pyCMS) Fetches versions. - """ - - return VERSION, core.littlecms_version, sys.version.split()[0], Image.__version__ diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/__init__.py deleted file mode 100644 index d8abf2103efc4519e2de4e7af7b0b0871c593619..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -"""FastAPI framework, high performance, easy to learn, fast to code, ready for production""" - -__version__ = "0.101.1" - -from starlette import status as status - -from .applications import FastAPI as FastAPI -from .background import BackgroundTasks as BackgroundTasks -from .datastructures import UploadFile as UploadFile -from .exceptions import HTTPException as HTTPException -from .exceptions import WebSocketException as WebSocketException -from .param_functions import Body as Body -from .param_functions import Cookie as Cookie -from .param_functions import Depends as Depends -from .param_functions import File as File -from .param_functions import Form as Form -from .param_functions import Header as Header -from .param_functions import Path as Path -from .param_functions import Query as Query -from .param_functions import Security as Security -from .requests import Request as Request -from .responses import Response as Response -from .routing import APIRouter as APIRouter -from .websockets import WebSocket as WebSocket -from .websockets import WebSocketDisconnect as WebSocketDisconnect diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_backends/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_backends/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_macosx.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_macosx.py deleted file mode 100644 index 2867c13f430b97535abee809c34e7b12a06a64b8..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_macosx.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import matplotlib as mpl -from matplotlib import _api, cbook -from matplotlib._pylab_helpers import Gcf -from . 
import _macosx -from .backend_agg import FigureCanvasAgg -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, NavigationToolbar2, - ResizeEvent, TimerBase) -from matplotlib.figure import Figure -from matplotlib.widgets import SubplotTool - - -class TimerMac(_macosx.Timer, TimerBase): - """Subclass of `.TimerBase` using CFRunLoop timer events.""" - # completely implemented at the C-level (in _macosx.Timer) - - -class FigureCanvasMac(FigureCanvasAgg, _macosx.FigureCanvas, FigureCanvasBase): - # docstring inherited - - # Ideally this class would be `class FCMacAgg(FCAgg, FCMac)` - # (FC=FigureCanvas) where FCMac would be an ObjC-implemented mac-specific - # class also inheriting from FCBase (this is the approach with other GUI - # toolkits). However, writing an extension type inheriting from a Python - # base class is slightly tricky (the extension type must be a heap type), - # and we can just as well lift the FCBase base up one level, keeping it *at - # the end* to have the right method resolution order. - - # Events such as button presses, mouse movements, and key presses are - # handled in C and events (MouseEvent, etc.) are triggered from there. - - required_interactive_framework = "macosx" - _timer_cls = TimerMac - manager_class = _api.classproperty(lambda cls: FigureManagerMac) - - def __init__(self, figure): - super().__init__(figure=figure) - self._draw_pending = False - self._is_drawing = False - - def draw(self): - """Render the figure and update the macosx canvas.""" - # The renderer draw is done here; delaying causes problems with code - # that uses the result of the draw() to update plot elements. - if self._is_drawing: - return - with cbook._setattr_cm(self, _is_drawing=True): - super().draw() - self.update() - - def draw_idle(self): - # docstring inherited - if not (getattr(self, '_draw_pending', False) or - getattr(self, '_is_drawing', False)): - self._draw_pending = True - # Add a singleshot timer to the eventloop that will call back - # into the Python method _draw_idle to take care of the draw - self._single_shot_timer(self._draw_idle) - - def _single_shot_timer(self, callback): - """Add a single shot timer with the given callback""" - # We need to explicitly stop (called from delete) the timer after - # firing, otherwise segfaults will occur when trying to deallocate - # the singleshot timers. - def callback_func(callback, timer): - callback() - del timer - timer = self.new_timer(interval=0) - timer.add_callback(callback_func, callback, timer) - timer.start() - - def _draw_idle(self): - """ - Draw method for singleshot timer - - This draw method can be added to a singleshot timer, which can - accumulate draws while the eventloop is spinning. This method will - then only draw the first time and short-circuit the others. - """ - with self._idle_draw_cntx(): - if not self._draw_pending: - # Short-circuit because our draw request has already been - # taken care of - return - self._draw_pending = False - self.draw() - - def blit(self, bbox=None): - # docstring inherited - super().blit(bbox) - self.update() - - def resize(self, width, height): - # Size from macOS is logical pixels, dpi is physical. 
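        # For example, with figure.dpi == 200 (physical pixels per inch) and
        # device_pixel_ratio == 2, scale == 100 logical pixels per inch, so a
        # resize to 800x600 logical pixels yields an 8x6 inch figure.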
- scale = self.figure.dpi / self.device_pixel_ratio - width /= scale - height /= scale - self.figure.set_size_inches(width, height, forward=False) - ResizeEvent("resize_event", self)._process() - self.draw_idle() - - -class NavigationToolbar2Mac(_macosx.NavigationToolbar2, NavigationToolbar2): - - def __init__(self, canvas): - data_path = cbook._get_data_path('images') - _, tooltips, image_names, _ = zip(*NavigationToolbar2.toolitems) - _macosx.NavigationToolbar2.__init__( - self, canvas, - tuple(str(data_path / image_name) + ".pdf" - for image_name in image_names if image_name is not None), - tuple(tooltip for tooltip in tooltips if tooltip is not None)) - NavigationToolbar2.__init__(self, canvas) - - def draw_rubberband(self, event, x0, y0, x1, y1): - self.canvas.set_rubberband(int(x0), int(y0), int(x1), int(y1)) - - def remove_rubberband(self): - self.canvas.remove_rubberband() - - def save_figure(self, *args): - directory = os.path.expanduser(mpl.rcParams['savefig.directory']) - filename = _macosx.choose_save_file('Save the figure', - directory, - self.canvas.get_default_filename()) - if filename is None: # Cancel - return - # Save dir for next time, unless empty str (which means use cwd). - if mpl.rcParams['savefig.directory']: - mpl.rcParams['savefig.directory'] = os.path.dirname(filename) - self.canvas.figure.savefig(filename) - - @_api.deprecated("3.6", alternative='configure_subplots()') - def prepare_configure_subplots(self): - toolfig = Figure(figsize=(6, 3)) - canvas = FigureCanvasMac(toolfig) - toolfig.subplots_adjust(top=0.9) - # Need to keep a reference to the tool. - _tool = SubplotTool(self.canvas.figure, toolfig) - return canvas - - -class FigureManagerMac(_macosx.FigureManager, FigureManagerBase): - _toolbar2_class = NavigationToolbar2Mac - - def __init__(self, canvas, num): - self._shown = False - _macosx.FigureManager.__init__(self, canvas) - icon_path = str(cbook._get_data_path('images/matplotlib.pdf')) - _macosx.FigureManager.set_icon(icon_path) - FigureManagerBase.__init__(self, canvas, num) - if self.toolbar is not None: - self.toolbar.update() - if mpl.is_interactive(): - self.show() - self.canvas.draw_idle() - - def _close_button_pressed(self): - Gcf.destroy(self) - self.canvas.flush_events() - - @_api.deprecated("3.6") - def close(self): - return self._close_button_pressed() - - @classmethod - def start_main_loop(cls): - _macosx.show() - - def show(self): - if not self._shown: - self._show() - self._shown = True - if mpl.rcParams["figure.raise_window"]: - self._raise() - - -@_Backend.export -class _BackendMac(_Backend): - FigureCanvas = FigureCanvasMac - FigureManager = FigureManagerMac - mainloop = FigureManagerMac.start_main_loop diff --git a/spaces/declare-lab/tango/diffusers/examples/text_to_image/train_text_to_image_flax.py b/spaces/declare-lab/tango/diffusers/examples/text_to_image/train_text_to_image_flax.py deleted file mode 100644 index cbd236c5ea15586f1f826daf12d238c9ac29bb9f..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/text_to_image/train_text_to_image_flax.py +++ /dev/null @@ -1,574 +0,0 @@ -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import jax -import jax.numpy as jnp -import numpy as np -import optax -import torch -import torch.utils.checkpoint -import transformers -from datasets import load_dataset -from flax import jax_utils -from flax.training import train_state -from flax.training.common_utils import shard -from huggingface_hub import 
create_repo, upload_folder -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel, set_seed - -from diffusers import ( - FlaxAutoencoderKL, - FlaxDDPMScheduler, - FlaxPNDMScheduler, - FlaxStableDiffusionPipeline, - FlaxUNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker -from diffusers.utils import check_min_version - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.15.0.dev0") - -logger = logging.getLogger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--caption_column", - type=str, - default="text", - help="The column of the dataset containing a caption or a list of captions.", - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="sd-model-finetuned", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=0, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." 
- ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." 
- ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - return args - - -dataset_name_mapping = { - "lambdalabs/pokemon-blip-captions": ("image", "text"), -} - - -def get_params_to_save(params): - return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params)) - - -def main(): - args = parse_args() - - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. - logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - transformers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if jax.process_index() == 0: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. - dataset_columns = dataset_name_mapping.get(args.dataset_name, None) - if args.image_column is None: - image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - image_column = args.image_column - if image_column not in column_names: - raise ValueError( - f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.caption_column is None: - caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - caption_column = args.caption_column - if caption_column not in column_names: - raise ValueError( - f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. 
- def tokenize_captions(examples, is_train=True): - captions = [] - for caption in examples[caption_column]: - if isinstance(caption, str): - captions.append(caption) - elif isinstance(caption, (list, np.ndarray)): - # take a random caption if there are multiple - captions.append(random.choice(caption) if is_train else caption[0]) - else: - raise ValueError( - f"Caption column `{caption_column}` should contain either strings or lists of strings." - ) - inputs = tokenizer(captions, max_length=tokenizer.model_max_length, padding="do_not_pad", truncation=True) - input_ids = inputs.input_ids - return input_ids - - train_transforms = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["pixel_values"] = [train_transforms(image) for image in images] - examples["input_ids"] = tokenize_captions(examples) - - return examples - - if jax.process_index() == 0: - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - input_ids = [example["input_ids"] for example in examples] - - padded_tokens = tokenizer.pad( - {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt" - ) - batch = { - "pixel_values": pixel_values, - "input_ids": padded_tokens.input_ids, - } - batch = {k: v.numpy() for k, v in batch.items()} - - return batch - - total_train_batch_size = args.train_batch_size * jax.local_device_count() - train_dataloader = torch.utils.data.DataLoader( - train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=total_train_batch_size, drop_last=True - ) - - weight_dtype = jnp.float32 - if args.mixed_precision == "fp16": - weight_dtype = jnp.float16 - elif args.mixed_precision == "bf16": - weight_dtype = jnp.bfloat16 - - # Load models and create wrapper for stable diffusion - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="tokenizer" - ) - text_encoder = FlaxCLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="text_encoder", dtype=weight_dtype - ) - vae, vae_params = FlaxAutoencoderKL.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="vae", dtype=weight_dtype - ) - unet, unet_params = FlaxUNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="unet", dtype=weight_dtype - ) - - # Optimization - if args.scale_lr: - args.learning_rate = args.learning_rate * total_train_batch_size - - constant_scheduler = optax.constant_schedule(args.learning_rate) - - adamw = optax.adamw( - learning_rate=constant_scheduler, - b1=args.adam_beta1, - b2=args.adam_beta2, - eps=args.adam_epsilon, - weight_decay=args.adam_weight_decay, - ) - - optimizer = 
optax.chain( - optax.clip_by_global_norm(args.max_grad_norm), - adamw, - ) - - state = train_state.TrainState.create(apply_fn=unet.__call__, params=unet_params, tx=optimizer) - - noise_scheduler = FlaxDDPMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000 - ) - noise_scheduler_state = noise_scheduler.create_state() - - # Initialize our training - rng = jax.random.PRNGKey(args.seed) - train_rngs = jax.random.split(rng, jax.local_device_count()) - - def train_step(state, text_encoder_params, vae_params, batch, train_rng): - dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3) - - def compute_loss(params): - # Convert images to latent space - vae_outputs = vae.apply( - {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode - ) - latents = vae_outputs.latent_dist.sample(sample_rng) - # (NHWC) -> (NCHW) - latents = jnp.transpose(latents, (0, 3, 1, 2)) - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise_rng, timestep_rng = jax.random.split(sample_rng) - noise = jax.random.normal(noise_rng, latents.shape) - # Sample a random timestep for each image - bsz = latents.shape[0] - timesteps = jax.random.randint( - timestep_rng, - (bsz,), - 0, - noise_scheduler.config.num_train_timesteps, - ) - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder( - batch["input_ids"], - params=text_encoder_params, - train=False, - )[0] - - # Predict the noise residual and compute loss - model_pred = unet.apply( - {"params": params}, noisy_latents, timesteps, encoder_hidden_states, train=True - ).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = (target - model_pred) ** 2 - loss = loss.mean() - - return loss - - grad_fn = jax.value_and_grad(compute_loss) - loss, grad = grad_fn(state.params) - grad = jax.lax.pmean(grad, "batch") - - new_state = state.apply_gradients(grads=grad) - - metrics = {"loss": loss} - metrics = jax.lax.pmean(metrics, axis_name="batch") - - return new_state, metrics, new_train_rng - - # Create parallel version of the train step - p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,)) - - # Replicate the train state on each device - state = jax_utils.replicate(state) - text_encoder_params = jax_utils.replicate(text_encoder.params) - vae_params = jax_utils.replicate(vae_params) - - # Train! - num_update_steps_per_epoch = math.ceil(len(train_dataloader)) - - # Scheduler and math around the number of training steps. 
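    # Illustrative aside (not part of the original script): the same
    # shard / replicate / pmap pattern on a toy computation. It is defined
    # but never called, so it does not affect training.
    def _pmap_toy_example():
        toy_params = jax_utils.replicate({"w": jnp.ones(())})  # one copy per device
        toy_batch = shard(jnp.arange(8.0 * jax.local_device_count()))
        toy_step = jax.pmap(
            lambda p, x: jax.lax.pmean((p["w"] * x).mean(), axis_name="batch"),
            axis_name="batch",
        )
        return toy_step(toy_params, toy_batch)  # identical result on every device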
-
-    if args.max_train_steps is None:
-        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
-
-    args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
-    logger.info("***** Running training *****")
-    logger.info(f"  Num examples = {len(train_dataset)}")
-    logger.info(f"  Num Epochs = {args.num_train_epochs}")
-    logger.info(f"  Instantaneous batch size per device = {args.train_batch_size}")
-    logger.info(f"  Total train batch size (w. parallel & distributed) = {total_train_batch_size}")
-    logger.info(f"  Total optimization steps = {args.max_train_steps}")
-
-    global_step = 0
-
-    epochs = tqdm(range(args.num_train_epochs), desc="Epoch ... ", position=0)
-    for epoch in epochs:
-        # ======================== Training ================================
-
-        train_metrics = []
-
-        steps_per_epoch = len(train_dataset) // total_train_batch_size
-        train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False)
-        # train
-        for batch in train_dataloader:
-            batch = shard(batch)
-            state, train_metric, train_rngs = p_train_step(state, text_encoder_params, vae_params, batch, train_rngs)
-            train_metrics.append(train_metric)
-
-            train_step_progress_bar.update(1)
-
-            global_step += 1
-            if global_step >= args.max_train_steps:
-                break
-
-        train_metric = jax_utils.unreplicate(train_metric)
-
-        train_step_progress_bar.close()
-        epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})")
-
-    # Create the pipeline using the trained modules and save it.
-    if jax.process_index() == 0:
-        scheduler = FlaxPNDMScheduler(
-            beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True
-        )
-        safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained(
-            "CompVis/stable-diffusion-safety-checker", from_pt=True
-        )
-        pipeline = FlaxStableDiffusionPipeline(
-            text_encoder=text_encoder,
-            vae=vae,
-            unet=unet,
-            tokenizer=tokenizer,
-            scheduler=scheduler,
-            safety_checker=safety_checker,
-            feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"),
-        )
-
-        pipeline.save_pretrained(
-            args.output_dir,
-            params={
-                "text_encoder": get_params_to_save(text_encoder_params),
-                "vae": get_params_to_save(vae_params),
-                "unet": get_params_to_save(state.params),
-                "safety_checker": safety_checker.params,
-            },
-        )
-
-        if args.push_to_hub:
-            upload_folder(
-                repo_id=repo_id,
-                folder_path=args.output_dir,
-                commit_message="End of training",
-                ignore_patterns=["step_*", "epoch_*"],
-            )
-
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py
deleted file mode 100644
index 58eb535e7769f402169ddff77ee45c96ba3650d9..0000000000000000000000000000000000000000
--- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import torch
-import numpy as np
-
-
-class AbstractDistribution:
-    def sample(self):
-        raise NotImplementedError()
-
-    def mode(self):
-        raise NotImplementedError()
-
-
-class DiracDistribution(AbstractDistribution):
-    def __init__(self, value):
-        self.value = value
-
-    def sample(self):
-        return self.value
-
-    def mode(self):
-        return self.value
-
-
-class DiagonalGaussianDistribution(object):
-    def __init__(self, parameters, deterministic=False):
-        self.parameters = 
parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to( - device=self.parameters.device - ) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to( - device=self.parameters.device - ) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.0]) - else: - if other is None: - return 0.5 * torch.mean( - torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar, - dim=[1, 2, 3], - ) - else: - return 0.5 * torch.mean( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - - 1.0 - - self.logvar - + other.logvar, - dim=[1, 2, 3], - ) - - def nll(self, sample, dims=[1, 2, 3]): - if self.deterministic: - return torch.Tensor([0.0]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims, - ) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). - logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/deepwisdom/MetaGPT/metagpt/utils/text.py b/spaces/deepwisdom/MetaGPT/metagpt/utils/text.py deleted file mode 100644 index be3c52edd3d399f1fcee2449ada326c12d9e3f07..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/utils/text.py +++ /dev/null @@ -1,124 +0,0 @@ -from typing import Generator, Sequence - -from metagpt.utils.token_counter import TOKEN_MAX, count_string_tokens - - -def reduce_message_length(msgs: Generator[str, None, None], model_name: str, system_text: str, reserved: int = 0,) -> str: - """Reduce the length of concatenated message segments to fit within the maximum token size. - - Args: - msgs: A generator of strings representing progressively shorter valid prompts. - model_name: The name of the encoding to use. (e.g., "gpt-3.5-turbo") - system_text: The system prompts. - reserved: The number of reserved tokens. - - Returns: - The concatenated message segments reduced to fit within the maximum token size. - - Raises: - RuntimeError: If it fails to reduce the concatenated message length. 
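# Illustrative sketch for the DiagonalGaussianDistribution class above (not
# part of the original file): an encoder's output channels are split into
# mean and log-variance halves, and sample() draws a latent with the
# reparameterization trick. The tensor shape below is a made-up example.
def _example_diagonal_gaussian():
    import torch

    moments = torch.randn(2, 8, 16, 16)  # stand-in encoder output: 4 mean + 4 logvar channels
    posterior = DiagonalGaussianDistribution(moments)
    z = posterior.sample()  # mean + std * eps, shape (2, 4, 16, 16)
    kl = posterior.kl()     # KL against the standard normal, one value per batch item
    return z, kl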
- """ - max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved - for msg in msgs: - if count_string_tokens(msg, model_name) < max_token: - return msg - - raise RuntimeError("fail to reduce message length") - - -def generate_prompt_chunk( - text: str, - prompt_template: str, - model_name: str, - system_text: str, - reserved: int = 0, -) -> Generator[str, None, None]: - """Split the text into chunks of a maximum token size. - - Args: - text: The text to split. - prompt_template: The template for the prompt, containing a single `{}` placeholder. For example, "### Reference\n{}". - model_name: The name of the encoding to use. (e.g., "gpt-3.5-turbo") - system_text: The system prompts. - reserved: The number of reserved tokens. - - Yields: - The chunk of text. - """ - paragraphs = text.splitlines(keepends=True) - current_token = 0 - current_lines = [] - - reserved = reserved + count_string_tokens(prompt_template+system_text, model_name) - # 100 is a magic number to ensure the maximum context length is not exceeded - max_token = TOKEN_MAX.get(model_name, 2048) - reserved - 100 - - while paragraphs: - paragraph = paragraphs.pop(0) - token = count_string_tokens(paragraph, model_name) - if current_token + token <= max_token: - current_lines.append(paragraph) - current_token += token - elif token > max_token: - paragraphs = split_paragraph(paragraph) + paragraphs - continue - else: - yield prompt_template.format("".join(current_lines)) - current_lines = [paragraph] - current_token = token - - if current_lines: - yield prompt_template.format("".join(current_lines)) - - -def split_paragraph(paragraph: str, sep: str = ".,", count: int = 2) -> list[str]: - """Split a paragraph into multiple parts. - - Args: - paragraph: The paragraph to split. - sep: The separator character. - count: The number of parts to split the paragraph into. - - Returns: - A list of split parts of the paragraph. - """ - for i in sep: - sentences = list(_split_text_with_ends(paragraph, i)) - if len(sentences) <= 1: - continue - ret = ["".join(j) for j in _split_by_count(sentences, count)] - return ret - return _split_by_count(paragraph, count) - - -def decode_unicode_escape(text: str) -> str: - """Decode a text with unicode escape sequences. - - Args: - text: The text to decode. - - Returns: - The decoded text. 
- """ - return text.encode("utf-8").decode("unicode_escape", "ignore") - - -def _split_by_count(lst: Sequence , count: int): - avg = len(lst) // count - remainder = len(lst) % count - start = 0 - for i in range(count): - end = start + avg + (1 if i < remainder else 0) - yield lst[start:end] - start = end - - -def _split_text_with_ends(text: str, sep: str = "."): - parts = [] - for i in text: - parts.append(i) - if i == sep: - yield "".join(parts) - parts = [] - if parts: - yield "".join(parts) diff --git a/spaces/dentadelta123/grammarly/README.md b/spaces/dentadelta123/grammarly/README.md deleted file mode 100644 index f037a4b90600186df181f42159711134ce59cb2b..0000000000000000000000000000000000000000 --- a/spaces/dentadelta123/grammarly/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Grammarly -emoji: 💻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.8 -app_file: app.py -pinned: false -license: unlicense ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/Cinema 4d R13 Studio Crack.md b/spaces/diacanFperku/AutoGPT/Cinema 4d R13 Studio Crack.md deleted file mode 100644 index fc2acf76afcc59fcd5e4cbc851e3984bd9f64614..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Cinema 4d R13 Studio Crack.md +++ /dev/null @@ -1,14 +0,0 @@ -
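# Illustrative sketch for the chunking utilities in metagpt/utils/text.py
# above (not part of any file in this diff). It assumes the metagpt package
# is importable; the prompt template and model name are example values.
def _example_chunk_prompts(reference_text):
    from metagpt.utils.text import generate_prompt_chunk

    chunks = generate_prompt_chunk(
        text=reference_text,
        prompt_template="### Reference\n{}",
        model_name="gpt-3.5-turbo",
        system_text="You are a summarizer.",
        reserved=512,
    )
    return list(chunks)  # each element fits the model's context window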

    cinema 4d r13 studio crack


    Download Filehttps://gohhs.com/2uFUNr



-
-CINEMA 4D 25.117 Crack makes the most user-friendly professional 3D software even... Character tools in CINEMA 4D Keygen Studio make it easy to create... CINEMA 4D R14.
-It is a program for creating 3D graphics and animation...
-CINEMA 4D R14 + Crack.
-In this pack you will find CINEMA 4D R14 and a keygen.
-CINEMA 4D R14 is a professional ...
-Download CINEMA 4D R14 Keygen + Crack in Russian from our website.
-Download CINEMA 4D R14 + Crack.
-CINEMA 4D R14 Keygen. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Corel Draw X4 Language Pack 44.md b/spaces/diacanFperku/AutoGPT/Corel Draw X4 Language Pack 44.md deleted file mode 100644 index 0593e8c00e523ab073a335d7f2ea2effa7f5758a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Corel Draw X4 Language Pack 44.md +++ /dev/null @@ -1,46 +0,0 @@ -

    Corel Draw X4 Language Pack 44


    Download Zip === https://gohhs.com/2uFU8Q



    -
-The .exe of X4 opens a window in which "Graphic Type" appears; clicking the Graphic Type select button shows "Invalide license code.exe", then nothing happens. But I have the right code. Help please?
-
-Does "Invalide license code.exe" describe the problem?
-
-Can you tell me in more detail where you obtained the code?
-
-How many times did you try to install?
-
-What kind of computer do you have?
-
-How much RAM?
-
-How much HD space?
-
-And a small question: do you use Windows 98, or something older?
-
-CorelDRAW Graphics Suite X4 Pro
-
-"Graphic Type" asks which kind of license you have got.
-
-Thanks a lot.
-
-Yes, "Invalide license code.exe" is the problem.
-
-Thanks.
-
-OK, I had the wrong license. I've downloaded it again, but this time I selected the correct license, which comes with a CD.
-
-I've tried installing 6 times with no success.
-
-It shows an error message saying that "No Graphics Driver is present". So I reinstalled my graphics driver, restarted my computer and went back to the CD, but I get the same error. What's wrong?
-
-You need to download the CD for CorelDRAW Graphics Suite X4.
-
-You need to be logged in as a CD member to download the CD.
-
-Please click on the following link to activate your membership.
-
-Sorry that I can't help you directly.
-
-But you could try to download CorelDRAW Graphics Suite X4 Deluxe 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Nfsmw-2012 Update 1.3 Dlc.exe.l.md b/spaces/diacanFperku/AutoGPT/Nfsmw-2012 Update 1.3 Dlc.exe.l.md deleted file mode 100644 index 35511f336d1ac788ed79b91553059bf71d67e74d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Nfsmw-2012 Update 1.3 Dlc.exe.l.md +++ /dev/null @@ -1,121 +0,0 @@ -
    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l: Everything You Need to Know

    -

    If you are a fan of Need for Speed Most Wanted 2012, you might be interested in downloading and installing Nfsmw-2012 Update 1.3 Dlc.exe.l. This is a patch that adds new features, fixes bugs and improves the performance of the game. But what exactly does Nfsmw-2012 Update 1.3 Dlc.exe.l do and how can you get it? In this article, we will answer these questions and more.

    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l


    DOWNLOADhttps://gohhs.com/2uFTJL



    - -

    What is Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l is a file that contains the latest patch for Need for Speed Most Wanted 2012. The patch was released by Electronic Arts in December 2020 and it includes the following changes:

    -
      -
    • Added support for DirectX 11.
    • -
    • Added new cars and customization options.
    • -
    • Added new multiplayer modes and events.
    • -
    • Improved graphics and sound quality.
    • -
    • Fixed various crashes and glitches.
    • -
    -

    The patch also comes with a DLC pack that adds five new cars to the game: the Aston Martin DB5, the Lamborghini Diablo SV, the Porsche 911 Carrera S, the Nissan Skyline GT-R R34 and the Ford Mustang Boss 302.

    - -

    How to Download and Install Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    There are two ways to download and install Nfsmw-2012 Update 1.3 Dlc.exe.l: through Origin or manually. Here are the steps for each method:

    - -

    Through Origin

    -

    If you have purchased Need for Speed Most Wanted 2012 through Origin, you can easily download and install Nfsmw-2012 Update 1.3 Dlc.exe.l through the Origin client. Here is how:

    -
      -
    1. Launch Origin and log in to your account.
    2. -
    3. Go to My Game Library and find Need for Speed Most Wanted 2012.
    4. -
    5. Right-click on the game icon and select Update Game.
    6. -
    7. Wait for the download and installation to complete.
    8. -
    9. Enjoy the game with the latest patch and DLC.
    10. -
    - -

    Manually

    -

    If you have purchased Need for Speed Most Wanted 2012 from another source or you prefer to download and install Nfsmw-2012 Update 1.3 Dlc.exe.l manually, you can do so by following these steps:

    -
      -
    1. Download Nfsmw-2012 Update 1.3 Dlc.exe.l from a trusted website. You can find some links at the end of this article.
    2. -
    3. Extract the file using a program like WinRAR or 7-Zip.
    4. -
    5. Copy the extracted files to your game directory. The default location is C:\Program Files (x86)\Origin Games\Need for Speed(TM) Most Wanted.
    6. -
    7. Run Nfsmw-2012 Update 1.3 Dlc.exe.l as administrator and follow the instructions on screen.
    8. -
    9. Enjoy the game with the latest patch and DLC.
    10. -
    - -

    Nfsmw-2012 Update 1.3 Dlc.exe.l: The Verdict

    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l is a must-have patch for Need for Speed Most Wanted 2012 fans. It adds new content, improves the gameplay and fixes many issues that plagued the original release. Whether you download it through Origin or manually, you will not regret installing Nfsmw-2012 Update 1.3 Dlc.exe.l on your PC.

    -

    - -

    Here are some links where you can download Nfsmw-2012 Update 1.3 Dlc.exe.l:

    -

    -

    How to Uninstall Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    If you want to uninstall Nfsmw-2012 Update 1.3 Dlc.exe.l for any reason, you can do so by following these steps:

    -
      -
    1. Go to Control Panel and select Programs and Features.
    2. -
    3. Find Need for Speed Most Wanted 2012 in the list and click on Uninstall.
    4. -
    5. Follow the instructions on screen to remove the game and the patch from your PC.
    6. -
    7. Delete any leftover files or folders related to Nfsmw-2012 Update 1.3 Dlc.exe.l from your game directory.
    8. -
    - -

    What are the System Requirements for Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    To run Nfsmw-2012 Update 1.3 Dlc.exe.l smoothly on your PC, you need to meet the following system requirements:

    - - - - - - - - - -
| Minimum | Recommended |
| --- | --- |
| OS: Windows Vista SP2 32-bit | OS: Windows 7 SP1 64-bit |
| CPU: Intel Core 2 Duo 2.4 GHz or AMD Athlon X2 2.7 GHz | CPU: Intel Core i5-750 or AMD Phenom II X4 955 |
| RAM: 2 GB | RAM: 4 GB |
| GPU: NVIDIA GeForce 8800 GT or ATI Radeon HD 3870 | GPU: NVIDIA GeForce GTX 560 or ATI Radeon HD 6950 |
| DirectX: DirectX 10.1 | DirectX: DirectX 11 |
| HDD: 20 GB | HDD: 20 GB |
| Sound: DirectX compatible sound card | Sound: DirectX compatible sound card |
    - -

    Nfsmw-2012 Update 1.3 Dlc.exe.l: FAQs

    -

    Here are some frequently asked questions about Nfsmw-2012 Update 1.3 Dlc.exe.l:

    - -

    Is Nfsmw-2012 Update 1.3 Dlc.exe.l safe to download and install?

    -

    Yes, Nfsmw-2012 Update 1.3 Dlc.exe.l is safe to download and install as long as you get it from a trusted source. However, you should always scan any file you download with an antivirus program before opening it.

    - -

    Do I need to have the original game to install Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    Yes, you need to have Need for Speed Most Wanted 2012 installed on your PC before you can install Nfsmw-2012 Update 1.3 Dlc.exe.l. The patch will not work without the base game.

    - -

    Can I play online with Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    Yes, you can play online with Nfsmw-2012 Update 1.3 Dlc.exe.l as long as you have a valid Origin account and an internet connection. The patch will not affect your online experience or compatibility with other players.

    - -

    Can I use mods or cheats with Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    You can use mods or cheats with Nfsmw-2012 Update 1.3 Dlc.exe.l at your own risk. However, be aware that some mods or cheats may not work properly with the patch or may cause crashes or errors. Also, using mods or cheats online may result in a ban from Origin or other consequences.

    - -

    Where can I get more information or support for Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    If you have any questions or issues with Nfsmw-2012 Update 1.3 Dlc.exe.l, you can visit the official website of Electronic Arts or contact their customer service team. You can also check out online forums or communities of Need for Speed fans for more tips and advice.

    -

    What are the Benefits of Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l is not just a patch that fixes some bugs and glitches. It also brings many benefits to the game that enhance your gaming experience. Here are some of the benefits of Nfsmw-2012 Update 1.3 Dlc.exe.l:

    -
      -
    • It makes the game more compatible with modern hardware and software. By adding support for DirectX 11, Nfsmw-2012 Update 1.3 Dlc.exe.l improves the graphics and performance of the game on newer PCs.
    • -
    • It adds more content and variety to the game. By adding new cars and customization options, Nfsmw-2012 Update 1.3 Dlc.exe.l gives you more choices and possibilities to play with. You can also enjoy new multiplayer modes and events that challenge your skills and strategy.
    • -
    • It enhances the gameplay and realism of the game. By improving the sound quality and fixing various crashes and glitches, Nfsmw-2012 Update 1.3 Dlc.exe.l makes the game more immersive and enjoyable. You can also appreciate the finer details and effects that make the game more realistic.
    • -
    - -

    What are the Drawbacks of Nfsmw-2012 Update 1.3 Dlc.exe.l?

    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l is not a perfect patch that solves all the problems of the game. It also has some drawbacks that you should be aware of before installing it. Here are some of the drawbacks of Nfsmw-2012 Update 1.3 Dlc.exe.l:

    -
      -
    • It may cause compatibility issues with some mods or cheats. If you use mods or cheats with Need for Speed Most Wanted 2012, you may encounter some problems or errors after installing Nfsmw-2012 Update 1.3 Dlc.exe.l. Some mods or cheats may not work properly with the patch or may conflict with it.
    • -
    • It may require more disk space and system resources. By adding new features and content, Nfsmw-2012 Update 1.3 Dlc.exe.l also increases the size and requirements of the game. You may need to free up some disk space or upgrade your PC to run Nfsmw-2012 Update 1.3 Dlc.exe.l smoothly.
    • -
    • It may not be compatible with older versions of the game. If you have an older version of Need for Speed Most Wanted 2012, you may not be able to install Nfsmw-2012 Update 1.3 Dlc.exe.l on it. You may need to update your game to the latest version before installing Nfsmw-2012 Update 1.3 Dlc.exe.l.
    • -
    - -

    Nfsmw-2012 Update 1.3 Dlc.exe.l: Tips and Tricks

    -

    If you want to make the most out of Nfsmw-2012 Update 1.3 Dlc.exe.l, here are some tips and tricks that can help you:

    -
      -
    • Backup your game files before installing Nfsmw-2012 Update 1.3 Dlc.exe.l. This way, you can restore your game to its original state if something goes wrong or if you want to uninstall Nfsmw-2012 Update 1.3 Dlc.exe.l.
    • -
    • Check your system requirements before installing Nfsmw-2012 Update 1.3 Dlc.exe.l. Make sure your PC meets the minimum or recommended system requirements for Nfsmw-2012 Update 1.3 Dlc.exe.l to avoid any performance issues or errors.
    • -
    • Explore the new cars and customization options in Nfsmw-2012 Update 1.3 Dlc.exe.l. Try out the new cars and see how they handle and perform in different situations. Customize them to your liking and show off your style.
    • -
    • Join the new multiplayer modes and events in Nfsmw-2012 Update 1.3 Dlc.exe.l. Challenge yourself and other players in new modes and events that test your speed, skill and strategy. Earn rewards and rank up in the leaderboards.
    • -
    • Enjoy the improved graphics and sound quality in Nfsmw-2012 Update 1.3 Dlc.exe.l. Adjust your settings to optimize your visual and audio experience in the game. Appreciate the finer details and effects that make the game more realistic.
    • -
    -

    Nfsmw-2012 Update 1.3 Dlc.exe.l: The Final Word

    -

Nfsmw-2012 Update 1.3 Dlc.exe.l is a patch that enhances and improves Need for Speed Most Wanted 2012 in many ways. It adds new features and content and improves the game's graphics and sound quality, making it more enjoyable and immersive. It also fixes many bugs and glitches that plagued the original release, making it more stable and reliable. However, it also has some drawbacks, such as possible conflicts with mods and cheats, higher disk space and system requirements, and incompatibility with older versions of the game. Therefore, you should weigh the pros and cons of Nfsmw-2012 Update 1.3 Dlc.exe.l before installing it on your PC.

    - -

    If you decide to install Nfsmw-2012 Update 1.3 Dlc.exe.l, you can follow the steps in this article to download and install it through Origin or manually. You can also use the tips and tricks in this article to make the most out of Nfsmw-2012 Update 1.3 Dlc.exe.l. And if you have any questions or issues with Nfsmw-2012 Update 1.3 Dlc.exe.l, you can visit the official website of Electronic Arts or contact their customer service team for more information or support.

    - -

    Nfsmw-2012 Update 1.3 Dlc.exe.l is a patch that can transform your gaming experience with Need for Speed Most Wanted 2012. Whether you are a casual player or a hardcore fan, you will find something to enjoy and appreciate in Nfsmw-2012 Update 1.3 Dlc.exe.l. So what are you waiting for? Download and install Nfsmw-2012 Update 1.3 Dlc.exe.l today and enjoy the ultimate racing game!

    3cee63e6c2
    -
    -
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/start.bat b/spaces/digitalxingtong/Luzao-Bert-Vits2/start.bat
deleted file mode 100644
index 418d21233dbf720b0dd09821904d9d6a31b123a2..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Luzao-Bert-Vits2/start.bat
+++ /dev/null
@@ -1,2 +0,0 @@
-set PYTHON=venv\python.exe
-start cmd /k "set PYTHON=%PYTHON%"
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/text/cleaner.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/text/cleaner.py
deleted file mode 100644
index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Miiu-Bert-Vits2/text/cleaner.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from text import chinese, cleaned_text_to_sequence
-
-
-language_module_map = {
-    'ZH': chinese
-}
-
-
-def clean_text(text, language):
-    language_module = language_module_map[language]
-    norm_text = language_module.text_normalize(text)
-    phones, tones, word2ph = language_module.g2p(norm_text)
-    return norm_text, phones, tones, word2ph
-
-def clean_text_bert(text, language):
-    language_module = language_module_map[language]
-    norm_text = language_module.text_normalize(text)
-    phones, tones, word2ph = language_module.g2p(norm_text)
-    bert = language_module.get_bert_feature(norm_text, word2ph)
-    return phones, tones, bert
-
-def text_to_sequence(text, language):
-    norm_text, phones, tones, word2ph = clean_text(text, language)
-    return cleaned_text_to_sequence(phones, tones, language)
-
-if __name__ == '__main__':
-    pass
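The three functions above are thin wrappers around one normalize-then-g2p pipeline. A minimal usage sketch, assuming the `text` package and its Chinese module are importable exactly as in this file ('ZH' is the only language the map registers):

```python
# Sketch only: clean_text normalizes the raw string, then converts it to
# phonemes/tones; text_to_sequence additionally maps those to model IDs.
norm_text, phones, tones, word2ph = clean_text("你好,世界。", "ZH")
sequence = text_to_sequence("你好,世界。", "ZH")
```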
diff --git a/spaces/dineshreddy/WALT/mmcv_custom/runner/__init__.py b/spaces/dineshreddy/WALT/mmcv_custom/runner/__init__.py
deleted file mode 100644
index c701cb016abe470611830dc960999970738352bb..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmcv_custom/runner/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Open-MMLab. All rights reserved.
-from .checkpoint import save_checkpoint
-from .epoch_based_runner import EpochBasedRunnerAmp
-
-
-__all__ = [
-    'EpochBasedRunnerAmp', 'save_checkpoint'
-]
diff --git a/spaces/dinhhung1508/VietnamAIHub-Vietnamese_LLama2_13B_8K_SFT_General_Domain_Knowledge/README.md b/spaces/dinhhung1508/VietnamAIHub-Vietnamese_LLama2_13B_8K_SFT_General_Domain_Knowledge/README.md
deleted file mode 100644
index d686435c577d5c0b50e5406718523a9f6496e97e..0000000000000000000000000000000000000000
--- a/spaces/dinhhung1508/VietnamAIHub-Vietnamese_LLama2_13B_8K_SFT_General_Domain_Knowledge/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: VietnamAIHub-Vietnamese LLama2 13B 8K SFT General Domain Knowledge
-emoji: 🚀
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/divilis/chatgpt/chatgpt - windows.bat b/spaces/divilis/chatgpt/chatgpt - windows.bat
deleted file mode 100644
index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000
--- a/spaces/divilis/chatgpt/chatgpt - windows.bat
+++ /dev/null
@@ -1,14 +0,0 @@
-@echo off
-echo Opening ChuanhuChatGPT...
-
-REM Open powershell via bat
-start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
-
-REM The web page can be accessed with a delayed start: http://127.0.0.1:7860/
-ping -n 5 127.0.0.1>nul
-
-REM access ChatGPT via your default browser
-start "" "http://127.0.0.1:7860/"
-
-
-echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/).
\ No newline at end of file
diff --git a/spaces/doevent/blip/data/utils.py b/spaces/doevent/blip/data/utils.py
deleted file mode 100644
index 628894844becd462d444584b8b2b01a84ee4b8f7..0000000000000000000000000000000000000000
--- a/spaces/doevent/blip/data/utils.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import re
-import json
-import os
-
-import torch
-import torch.distributed as dist
-
-import utils
-
-def pre_caption(caption, max_words=50):
-    caption = re.sub(
-        r"([.!\"()*#:;~])",
-        ' ',
-        caption.lower(),
-    )
-    caption = re.sub(
-        r"\s{2,}",
-        ' ',
-        caption,
-    )
-    caption = caption.rstrip('\n')
-    caption = caption.strip(' ')
-
-    # truncate caption
-    caption_words = caption.split(' ')
-    if len(caption_words) > max_words:
-        caption = ' '.join(caption_words[:max_words])
-
-    return caption
-
-def pre_question(question, max_ques_words=50):
-    question = re.sub(
-        r"([.!\"()*#:;~])",
-        '',
-        question.lower(),
-    )
-    question = question.rstrip(' ')
-
-    # truncate question
-    question_words = question.split(' ')
-    if len(question_words) > max_ques_words:
-        question = ' '.join(question_words[:max_ques_words])
-
-    return question
-
-
-def save_result(result, result_dir, filename, remove_duplicate=''):
-    result_file = os.path.join(result_dir, '%s_rank%d.json' % (filename, utils.get_rank()))
-    final_result_file = os.path.join(result_dir, '%s.json' % filename)
-
-    json.dump(result, open(result_file, 'w'))
-
-    dist.barrier()
-
-    if utils.is_main_process():
-        # combine results from all processes
-        result = []
-
-        for rank in range(utils.get_world_size()):
-            result_file = os.path.join(result_dir, '%s_rank%d.json' % (filename, rank))
-            res = json.load(open(result_file, 'r'))
-            result += res
-
-        if remove_duplicate:
-            result_new = []
-            id_list = []
-            for res in result:
-                if res[remove_duplicate] not in id_list:
-                    id_list.append(res[remove_duplicate])
-                    result_new.append(res)
-            result = result_new
-
-        json.dump(result, open(final_result_file, 'w'))
-        print('result file saved to %s' % final_result_file)
-
-    return final_result_file
-
-
-from pycocotools.coco import COCO
-from pycocoevalcap.eval import COCOEvalCap
-from torchvision.datasets.utils import download_url
-
-def coco_caption_eval(coco_gt_root, results_file, split):
-    urls = {'val': 'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val_gt.json',
-            'test': 'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test_gt.json'}
-    filenames = {'val': 'coco_karpathy_val_gt.json', 'test': 'coco_karpathy_test_gt.json'}
-
-    download_url(urls[split], coco_gt_root)
-    annotation_file = os.path.join(coco_gt_root, filenames[split])
-
-    # create coco object and coco_result object
-    coco = COCO(annotation_file)
-    coco_result = coco.loadRes(results_file)
-
-    # create coco_eval object by taking coco and coco_result
-    coco_eval = COCOEvalCap(coco, coco_result)
-
-    # evaluate on a subset of images by setting
-    # coco_eval.params['image_id'] = coco_result.getImgIds()
-    # please remove this line when evaluating the full validation set
-    # coco_eval.params['image_id'] = coco_result.getImgIds()
-
-    # evaluate results
-    # SPICE will take a few minutes the first time, but speeds up due to caching
-    coco_eval.evaluate()
-
-    # print output evaluation scores
-    for metric, score in coco_eval.eval.items():
-        print(f'{metric}: {score:.3f}')
-
-    return coco_eval
\ No newline at end of file
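As a quick illustration of what the two normalizers above actually do, here is a hedged sketch; the expected output in the comments is traced by hand from the regexes:

```python
# pre_caption lowercases, replaces the punctuation in its character class with
# spaces, collapses runs of whitespace, and truncates to max_words words.
print(pre_caption("A DOG!! Running... on the beach.", max_words=4))
# -> "a dog running on"

# pre_question strips the same punctuation outright (note '?' is not in the
# class) and truncates to max_ques_words words.
print(pre_question("What is the dog doing?", max_ques_words=3))
# -> "what is the"
```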
diff --git a/spaces/dolceschokolade/chatbot-mini/components/Chat/Regenerate.tsx b/spaces/dolceschokolade/chatbot-mini/components/Chat/Regenerate.tsx
deleted file mode 100644
index d36c3e48848fc09013c070cecd53fcfe1082f93d..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/components/Chat/Regenerate.tsx
+++ /dev/null
@@ -1,26 +0,0 @@
-import { IconRefresh } from '@tabler/icons-react';
-import { FC } from 'react';
-
-import { useTranslation } from 'next-i18next';
-
-interface Props {
-  onRegenerate: () => void;
-}
-
-export const Regenerate: FC<Props> = ({ onRegenerate }) => {
-  const { t } = useTranslation('chat');
-  return (
    -
    - {t('Sorry, there was an error.')} -
    - -
-  );
-};
diff --git a/spaces/doluvor/faster-whisper-webui/app-network.py b/spaces/doluvor/faster-whisper-webui/app-network.py
deleted file mode 100644
index 4f0e565b9029761d4b995fe32a65c58d1de55f53..0000000000000000000000000000000000000000
--- a/spaces/doluvor/faster-whisper-webui/app-network.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Run the app with no audio file restrictions, and make it available on the network
-from app import create_ui
-from src.config import ApplicationConfig
-
-create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1, server_name="0.0.0.0"))
\ No newline at end of file
diff --git a/spaces/f2api/gpt-academic/docs/README_EN.md b/spaces/f2api/gpt-academic/docs/README_EN.md
deleted file mode 100644
index 65af23d7b2c989107a664d7bd3ef88cf7e55c7f7..0000000000000000000000000000000000000000
--- a/spaces/f2api/gpt-academic/docs/README_EN.md
+++ /dev/null
@@ -1,322 +0,0 @@
-> **Note**
->
-> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
->
-> When installing dependencies, **please strictly select the versions** specified in requirements.txt.
->
-> `pip install -r requirements.txt`
-
-# GPT Academic Optimization (GPT Academic)
-
-**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request.
-To translate this project to an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
-
-> Note:
->
-> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
-> 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
-> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect.
-
-<div align="center">
    - -Function | Description ---- | --- -One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers. -One-click Chinese-English translation | One-click Chinese-English translation. -One-click code interpretation | Displays, explains, generates, and adds comments to code. -[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys. -Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project -[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/... -Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts. -Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers. -Batch annotation generation | [Function plug-in] One-click batch generation of function annotations. -Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the five languages above? -Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running. -[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded) -[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click. -[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) -Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated. -Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting. -Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click. -Start Dark Gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme. 
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right? -More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/) -More new feature displays (image generation, etc.)…… | See the end of this document for more... -
    - -- New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout") -
    - -
    - All buttons are dynamically generated by reading `functional.py`, and you can add custom functions freely to unleash the power of clipboard. -
    - -
    - -- polishing/correction -
    - -
    - -- If the output contains formulas, they will be displayed in both `tex` and render form, making it easy to copy and read. -
    - -
    - -- Tired of reading the project code? ChatGPT can explain it all. -
    - -
    - -- Multiple large language models are mixed, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4. -
    - -
    - ---- -# Installation -## Method 1: Directly running (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure the API_KEY - -Configure the API KEY in `config.py`, [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`) - - -3. Install the dependencies -```sh -# (Option I: If familiar with python) (python version 3.9 or above, the newer the better), note: use official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If not familiar with python) Use anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create anaconda environment -conda activate gptac_venv # activate anaconda environment -python -m pip install -r requirements.txt # this step is the same as pip installation -``` - -
    If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand -

    - -[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to this: 1: The default installation above is torch + cpu version, to use cuda, you need to uninstall torch and reinstall torch + cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code = True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

    -
    - - - -4. Run it -```sh -python main.py -```5. Test Function Plugin -``` -- Test function plugin template function (ask GPT what happened today in history), based on which you can implement more complex functions as a template - Click "[Function Plugin Template Demo] Today in History" -``` - -## Installation - Method 2: Using Docker - -1. ChatGPT Only (Recommended for Most People) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # Download project -cd chatgpt_academic # Enter path -nano config.py # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc. -docker build -t gpt-academic . # Install - -#(Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed. -docker run --rm -it --net=host gpt-academic -#(Last step - option 2) On macOS/windows environment, only -p option can be used to expose the container's port (e.g. 50923) to the port of the main machine. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge) - -``` sh -# Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration. -docker-compose up -``` - -3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge) - -``` sh -# Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration. -docker-compose up -``` - -## Installation - Method 3: Other Deployment Options - -1. How to Use Reverse Proxy URL/Microsoft Cloud Azure API -Configure API_URL_REDIRECT according to the instructions in 'config.py'. - -2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers) -Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL2 (Windows Subsystem for Linux) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to Run Under a Subdomain (e.g. `http://localhost/subpath`) -Please visit [FastAPI Running Instructions](docs/WithFastapi.md) - -5. Using docker-compose to Run -Read the docker-compose.yml and follow the prompts. - ---- -# Advanced Usage -## Custom New Shortcut Buttons / Custom Function Plugins - -1. Custom New Shortcut Buttons (Academic Hotkey) -Open `core_functional.py` with any text editor, add an entry as follows and restart the program. (If the button has been successfully added and is visible, the prefix and suffix can be hot-modified without having to restart the program.) -For example, -``` -"Super English-to-Chinese": { - # Prefix, which will be added before your input. For example, used to describe your requests, such as translation, code explanation, polishing, etc. - "Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n", - - # Suffix, which is added after your input. For example, with the prefix, your input content can be surrounded by quotes. - "Suffix": "", -}, -``` -
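To make the Prefix/Suffix mechanics described above concrete, here is a minimal sketch of the idea; this is an illustration rather than the project's actual dispatch code, and `entry`/`user_input` are hypothetical names:

```python
# Hypothetical illustration: a shortcut button simply wraps whatever is in the
# input box with the entry's Prefix and Suffix before sending it to the LLM.
entry = {
    "Prefix": "Please translate the following content into Chinese and then "
              "use a markdown table to explain the proprietary terms:\n\n",
    "Suffix": "",
}
user_input = "Attention is all you need."
prompt = entry["Prefix"] + user_input + entry["Suffix"]
print(prompt)  # the assembled prompt that would be submitted
```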
    - -
    - -2. Custom Function Plugins - -Write powerful function plugins to perform any task you can think of, even those you cannot think of. -The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide. -For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New Feature Dynamics -1. Conversation saving function. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and recoverable HTML file. In addition, call `Load conversation history archive` in the function plugin area (dropdown menu) to restore previous sessions. Tip: Clicking `Load conversation history archive` without specifying a file will display the cached history of HTML archives, and clicking `Delete all local conversation history` will delete all HTML archive caches. - -
    - -
    - - -2. Report generation. Most plugins will generate work reports after execution. - -
    - - - -
    - - -3. Modular function design with simple interfaces that support powerful functions. - -
    - - -
    - - -4. This is an open-source project that can "self-translate". - -
    - -
    - -5. Translating other open-source projects is a piece of cake. - -
    - -
    - -
    - -
    - -6. A small feature decorated with [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default, need to modify `config.py`). - -
    - -
    - -7. Added MOSS large language model support. -
    - -
    - -8. OpenAI image generation. -
    - -
    - -9. OpenAI audio parsing and summarization. -
    - -
    - -10. Full-text proofreading and error correction of LaTeX. -
    - -
    - - -## Versions: -- version 3.5(Todo): Use natural language to call all function plugins of this project (high priority). -- version 3.4(Todo): Improve multi-threading support for chatglm local large models. -- version 3.3: +Internet information integration function. -- version 3.2: Function plugin supports more parameter interfaces (save conversation function, interpretation of any language code + simultaneous inquiry of any LLM combination). -- version 3.1: Support simultaneous inquiry of multiple GPT models! Support api2d, and support load balancing of multiple apikeys. -- version 3.0: Support chatglm and other small LLM models. -- version 2.6: Refactored plugin structure, improved interactivity, and added more plugins. -- version 2.5: Self-updating, solving the problem of text overflow and token overflow when summarizing large engineering source codes. -- version 2.4: (1) Added PDF full-text translation function; (2) Added the function of switching the position of the input area; (3) Added vertical layout option; (4) Optimized multi-threading function plugins. -- version 2.3: Enhanced multi-threading interactivity. -- version 2.2: Function plugin supports hot reloading. -- version 2.1: Collapsible layout. -- version 2.0: Introduction of modular function plugins. -- version 1.0: Basic functions. - -gpt_academic Developer QQ Group-2: 610599535 - -- Known Issues - - Some browser translation plugins interfere with the front-end operation of this software. - - Both high and low versions of gradio can lead to various exceptions. - -## Reference and Learning - -``` -Many other excellent designs have been referenced in the code, mainly including: - -# Project 1: THU ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# Project 2: THU JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# Project 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Project 4: ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Project 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# More: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/fabiod20/italian-legal-ner/README.md b/spaces/fabiod20/italian-legal-ner/README.md deleted file mode 100644 index 3a4411625ec7eb108db090e67391977286a68998..0000000000000000000000000000000000000000 --- a/spaces/fabiod20/italian-legal-ner/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Italian Legal Ner -emoji: ⚖️ -colorFrom: purple -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. 
- -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/facebook/StyleNeRF/torch_utils/ops/upfirdn2d.h b/spaces/facebook/StyleNeRF/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index 2793daf874492af01e8634a7863c036e17b6731f..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/failfast/2D-GameCreator/.github/PULL_REQUEST_TEMPLATE.md b/spaces/failfast/2D-GameCreator/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index 4a5a3f61f0d2f54c30f63c56430e3b6f5b5bad59..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,7 +0,0 @@ -## Motivation - - - -## Issues closed - - diff --git a/spaces/falterWliame/Face_Mask_Detection/Best Of Retro Disco 2012 Torrent Download __EXCLUSIVE__.md b/spaces/falterWliame/Face_Mask_Detection/Best Of Retro Disco 2012 Torrent Download __EXCLUSIVE__.md deleted file mode 100644 index 823600fd0c7bbb4f63242786230277c15227b65a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Best Of Retro Disco 2012 Torrent Download __EXCLUSIVE__.md +++ /dev/null @@ -1,23 +0,0 @@ - -```html -

    Best Of Retro Disco 2012 Torrent Download: How to Enjoy the Classic Hits of the 80s and 90s

    -

    If you are a fan of retro disco music, you might be interested in downloading the Best Of Retro Disco 2012 Torrent. This torrent contains a collection of the most popular and catchy songs from the golden era of disco, spanning from the late 70s to the early 90s. You will find hits from artists such as ABBA, Bee Gees, Donna Summer, Gloria Gaynor, Kool & The Gang, Michael Jackson, and many more.

    -

    Best Of Retro Disco 2012 Torrent Download


    Download File 🔗 https://urlca.com/2uDc1Y



    -

    In this article, we will show you how to download and enjoy the Best Of Retro Disco 2012 Torrent, as well as some tips and tricks to make your disco experience more fun and authentic. So put on your dancing shoes and get ready to groove to the rhythm of the night!

    -

    How to Download the Best Of Retro Disco 2012 Torrent

    -

    The first step to download the Best Of Retro Disco 2012 Torrent is to find a reliable and safe torrent site that hosts it. There are many torrent sites on the internet, but not all of them are trustworthy or legal. Some may contain malware, viruses, or fake files that can harm your computer or compromise your privacy.

    -

    One of the best torrent sites that we recommend is The Pirate Bay. This site has been around for a long time and has a large and active community of users who upload and share torrents of various genres and categories. You can easily find the Best Of Retro Disco 2012 Torrent by typing the keyword in the search box and filtering the results by audio files.

    -

    Once you have found the torrent file that you want to download, you will need a torrent client to open it and start the download process. A torrent client is a software that allows you to connect to other peers who have the same file and download it from them in small pieces. There are many torrent clients available for free online, but one of the most popular and user-friendly ones is uTorrent.

    -

    After you have installed uTorrent on your computer, you can simply double-click on the torrent file that you downloaded from The Pirate Bay and it will automatically open in uTorrent. You can then choose where to save the file on your computer and start the download. Depending on your internet speed and the number of seeders (peers who have the complete file and are sharing it), the download may take from a few minutes to a few hours.

    -

    -

    How to Enjoy the Best Of Retro Disco 2012 Torrent

    -

    Once you have downloaded the Best Of Retro Disco 2012 Torrent, you can start enjoying the classic hits of retro disco music. You can play them on your computer using any media player that supports MP3 files, such as VLC Media Player. You can also transfer them to your smartphone, tablet, or MP3 player and listen to them on the go.

    -

    If you want to make your disco experience more fun and authentic, here are some tips and tricks that you can try:

    -
      -
    • Create a playlist of your favorite songs from the torrent and shuffle them randomly. This will give you a feeling of listening to a real disco radio station or DJ.
    • -
    • Use a disco ball or some colorful lights to create a disco atmosphere in your room. You can also use some candles or incense to add some aroma and mood.
    • -
    • Invite some friends over and have a disco party. You can dress up in retro outfits, such as bell-bottoms, platform shoes, sequins, or leather jackets. You can also play some disco games, such as limbo, musical chairs, or charades.
    • -
    • Learn some disco dance moves and practice them with your friends. You can watch some videos online or take some classes to learn how to do the hustle, the bump, the boogie, or the electric slide.
    • -
• Explore some other genres of retro music, such as funk, soul, or synth-pop, once you have worked through the disco hits.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Fast And Furious 5 Dubbed In Hindi Mp4 Moviegolkes.md b/spaces/falterWliame/Face_Mask_Detection/Download Fast And Furious 5 Dubbed In Hindi Mp4 Moviegolkes.md deleted file mode 100644 index 53fffbe1ba1ec7126c618273f55d87a8f6bbd918..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download Fast And Furious 5 Dubbed In Hindi Mp4 Moviegolkes.md +++ /dev/null @@ -1,111 +0,0 @@ - -

      Download Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes

      - -

      If you are a fan of action-packed movies, then you must have watched the Fast and Furious series. This is one of the most popular and successful franchises in Hollywood, with 10 movies released so far and more to come. The fifth installment, Fast Five, was released in 2011 and it was a huge hit at the box office. It featured some of the most thrilling and spectacular stunts ever seen on screen, as well as a star-studded cast that included Vin Diesel, Paul Walker, Dwayne Johnson, Tyrese Gibson, Ludacris, Gal Gadot, and more.

      -

      download fast and furious 5 dubbed in hindi mp4 moviegolkes


      DOWNLOADhttps://urlca.com/2uDdJp



      - -

      But what if you want to watch Fast Five in Hindi? Maybe you prefer to enjoy the movie in your native language, or maybe you want to share it with your friends and family who don't understand English. Well, you are in luck because there are many ways to download Fast and Furious 5 dubbed in Hindi mp4 moviegolkes. Moviegolkes are high-quality video files that are compressed to save space and bandwidth. They are perfect for downloading and streaming movies online.

      - -

      How to Download Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes

      - -

      There are many websites that offer Fast and Furious 5 dubbed in Hindi mp4 moviegolkes for free or for a small fee. However, not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also have broken links, low-quality videos, or annoying ads that ruin your viewing experience.

      - -

      That's why we have done the research for you and found some of the best and most trusted websites that offer Fast and Furious 5 dubbed in Hindi mp4 moviegolkes. Here are some of them:

      -

      - -
        -
      • Moviehax.me: This is a great website that offers a wide range of Hollywood movies dubbed in Hindi. You can watch online or download Fast Five (2011) Hindi dubbed movie HD print for free. The video quality is excellent and the download speed is fast.
      • -
      • 1kmovies.lat: This is another website that offers Fast Five (2011) Hindi dubbed moviegolkes in 1080p quality. You can also download other parts of the Fast and Furious series in Hindi dubbed from this website. The website is easy to use and has a simple interface.
      • -
      • vkhindiworld.com: This is a website that provides information about Fast X movie download in Hindi filmyzilla 720p full HD. You can find out the release date, star cast, story, trailer, and review of the upcoming movie Fast X, which is the tenth part of the Fast and Furious saga. You can also find links to download Fast Five (2011) Hindi dubbed moviegolkes from this website.
      • -
      - -

      Tips for Downloading Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes

      - -

      Before you download Fast and Furious 5 dubbed in Hindi mp4 moviegolkes from any website, here are some tips to keep in mind:

      - -
        -
      • Make sure you have a good internet connection and enough storage space on your device.
      • -
      • Use a VPN service to protect your privacy and security online.
      • -
      • Use an antivirus software to scan the downloaded files for any potential threats.
      • -
      • Check the ratings, reviews, and comments of other users before downloading from any website.
      • -
      • Avoid clicking on any pop-ups, banners, or redirects that may appear on the website.
      • -
      • Enjoy watching Fast Five (2011) Hindi dubbed moviegolkes with your friends and family.
      • -
      - -

      Conclusion

      - -

      Fast Five (2011) is one of the best movies in the Fast and Furious series. It has an amazing plot, stunning visuals, thrilling action scenes, and a great cast. If you want to watch it in Hindi, you can download Fast and Furious 5 dubbed in Hindi mp4 moviegolkes from any of the websites mentioned above. Just follow the tips we have given you and you will have a smooth and safe downloading experience.

      - -

      We hope you found this article helpful and informative. If you have any questions or suggestions, feel free to leave a comment below. Thank you for reading!

      -

      Why You Should Download Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes

      - -

      Fast Five (2011) is not just a movie, it is an experience. It is a movie that will keep you on the edge of your seat from start to finish. It is a movie that will make you feel the adrenaline rush of the high-speed chases, the explosions, the fights, and the heists. It is a movie that will make you laugh, cheer, and cry with the characters. It is a movie that will make you want to be part of Dom's family.

      - -

      But if you want to enjoy this movie to the fullest, you should download Fast and Furious 5 dubbed in Hindi mp4 moviegolkes. Why? Because there are many benefits of watching this movie in Hindi. Here are some of them:

      - -
        -
      • You will understand the dialogues better. Sometimes, the subtitles are not enough to convey the emotions, the humor, or the sarcasm of the characters. By watching the movie in Hindi, you will be able to catch every nuance and every detail of the conversations.
      • -
      • You will appreciate the culture more. Fast Five (2011) is set in Brazil, a country that has a rich and diverse culture. By watching the movie in Hindi, you will be able to relate to the customs, the traditions, and the values of the Brazilian people.
      • -
      • You will enjoy the music more. Fast Five (2011) has an amazing soundtrack that features songs from various genres and artists. By watching the movie in Hindi, you will be able to sing along and groove to the tunes.
      • -
      • You will have more fun with your friends and family. Fast Five (2011) is a movie that is best watched with your loved ones. By watching the movie in Hindi, you will be able to share your opinions, your reactions, and your jokes with them.
      • -
      - -

      How to Watch Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes Online

      - -

      If you are convinced that you should download Fast and Furious 5 dubbed in Hindi mp4 moviegolkes, then you must be wondering how to do it. Well, it is very easy and simple. All you need is a device that can access the internet and a website that offers Fast and Furious 5 dubbed in Hindi mp4 moviegolkes for free or for a nominal fee.

      - -

      There are many websites that offer Fast and Furious 5 dubbed in Hindi mp4 moviegolkes online, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also have broken links, low-quality videos, or annoying ads that ruin your viewing experience.

      - -

      That's why we have done the research for you and found some of the best and most trusted websites that offer Fast and Furious 5 dubbed in Hindi mp4 moviegolkes online. Here are some of them:

      • Moviehax.me: This is a great website that offers a wide range of Hollywood movies dubbed in Hindi. You can watch online or download Fast Five (2011) Hindi dubbed movie HD print for free. The video quality is excellent and the download speed is fast.
      • Archive.org: This is a website that provides free access to millions of books, movies, music, and more. You can watch online or download Fast And Furious 5 for free from this website. The video quality is good and the download speed is decent.
      • Peatix.com: This is a website that allows you to create and join events online. You can watch online or download Fast Five (2011) 1080p BluRay x264 Dual Audio [English + Hindi] for free from this website. The video quality is superb and the download speed is fast.
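
      Whichever site you pick, it is worth verifying the file before you open it. Here is a minimal Python sketch that computes a download's SHA-256 checksum; the file name and the expected hash are placeholders, and the check is only meaningful when the site actually publishes a checksum to compare against.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a file's SHA-256 hash without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the real file name and the checksum
# published by the download site, if one is provided.
expected = "<hash published by the download site>"
actual = sha256_of("fast_five_hindi.mp4")
print("OK" if actual == expected else "Checksum mismatch - do not open this file")
```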


      What to Expect from Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes


      Fast Five (2011) is a movie that will not disappoint you. It is a movie that has everything you want from an action movie: fast cars, furious races, daring heists, epic fights, and a lot of fun. It is a movie that will make you feel the thrill of being part of Dom's crew.


      But what can you expect from Fast and Furious 5 dubbed in Hindi mp4 moviegolkes? Here are some of the highlights of the movie:

      • You will see Dom and Brian reunite with their old friends and recruit new ones to pull off a $100 million heist from a corrupt businessman in Rio de Janeiro.
      • You will see Dom and Brian face off against Luke Hobbs, a relentless federal agent who is determined to catch them and bring them to justice.
      • You will see Dom and Brian race against time and their enemies through the streets of Rio, across the rooftops of the favelas, and along the railway bridges.
      • You will see Dom and Brian use some of the most amazing vehicles ever seen on screen, such as a modified Dodge Charger, a Ford GT40, a Nissan GT-R, and a Subaru Impreza WRX STI.
      • You will see Dom and Brian perform some of the most incredible stunts ever seen on screen, such as jumping from a moving train to a car, dragging a massive bank vault through the city, and flying off a cliff.
      • You will see Dom and Brian bond with their family and friends over barbecue, beer, and music.

      How to Enjoy Fast and Furious 5 Dubbed in Hindi Mp4 Moviegolkes More


      Fast Five (2011) is a movie that will make you enjoy every minute of it. It is a movie that will make you forget your worries and problems for a while. It is a movie that will make you feel alive and happy.


      But how can you enjoy Fast and Furious 5 dubbed in Hindi mp4 moviegolkes more? Here are some tips to make your viewing experience more enjoyable:

      • Watch it with friends or family who love action movies, so you can share your excitement, your emotions, and your opinions with them.
      • Watch it on a big screen with good sound quality, so you can immerse yourself in the visuals, the sounds, and the atmosphere of the movie.
      • Watch it with snacks and drinks: munch on popcorn, chips, or candy, and sip on soda, juice, or beer while the movie plays.
      • Watch it with an open mind and a positive attitude: appreciate the movie for what it is, a fun and entertaining action movie, and suspend your disbelief so you can enjoy the fantasy.


      Conclusion


      Fast Five (2011) is one of the best movies in the Fast and Furious series. It has an amazing plot, stunning visuals, thrilling action scenes, and a great cast. If you want to watch it in Hindi, you can download Fast and Furious 5 dubbed in Hindi mp4 moviegolkes from any of the websites mentioned above. Just follow the tips we have given you and you will have a smooth and safe downloading experience.


      We hope you found this article helpful and informative. If you have any questions or suggestions, feel free to leave a comment below. Thank you for reading!

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/1945 Air Force Airplane games - the ultimate combat flight action game for Android.md b/spaces/fatiXbelha/sd/1945 Air Force Airplane games - the ultimate combat flight action game for Android.md deleted file mode 100644 index 5c844477dfa48daeee0d396b1e0d8106078bafe6..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/1945 Air Force Airplane games - the ultimate combat flight action game for Android.md +++ /dev/null @@ -1,92 +0,0 @@ -

      1945 Air Force Game Download: A Guide for Fans of Classic Airplane Shooting Games


      If you are a fan of classic airplane shooting games, you might want to check out 1945 Air Force Game. This is a thrilling combat flight action game that lets you take control of a warplane and jump on the battlefield of World War II. You can choose from over 60 historical planes from different countries and engage in solo or team missions across various scenarios. In this guide, we will tell you everything you need to know about 1945 Air Force Game, including what it is, why you should play it, and how to download and install it on your device. You can also download the game from Google Play or App Store right now.


      What is 1945 Air Force Game?


      1945 Air Force Game is a shooting arcade game developed by OneSoft Global PTE. LTD. It is inspired by classic arcade shooters such as Raiden, 1942, and 1943. The game features stunning graphics, realistic sound effects, and smooth controls that will make you feel like you are flying a real warplane.


      1945 air force game download


      Download >> https://urllie.com/2uNyxZ




      The game has several modes to choose from, such as Bombarding, Bosses, Protect, Stealth, and Assault. You can also play online with your friends or other players around the world. The game has over 30 legendary WWII battle zones and more than 500 challenging campaigns to complete. You can also customize, upgrade, and merge your planes to create your own super planes.


      One of the best things about 1945 Air Force Game is how closely it draws on the history of WWII. You will encounter famous planes, battleships, tanks, and enemies from different countries and factions, and you will pick up facts and trivia about the war as you play.


      Why Should You Play 1945 Air Force Game?


      There are many reasons why you should play 1945 Air Force Game. Here are some of them:

      • It is fun and addictive. You will never get bored with the variety of missions, planes, enemies, and challenges that await you in each level.
      • It is challenging and rewarding. You will have to use your skills, strategy, and reflexes to survive the enemy attacks and complete the objectives. You will also earn coins, gems, medals, and trophies as you progress.
      • It is varied and diverse. You will experience different scenarios, landscapes, weather conditions, and time periods in the game. You will also encounter different types of enemies, such as fighters, bombers, submarines, carriers, and bosses.
      • It is nostalgic and educational. You will relive the history of WWII and see how the warplanes looked and sounded. You will also learn some interesting facts and stories about the war and the planes.

      1945 Air Force Game is one of the best shooting arcade games in the market. It has received many positive reviews and ratings from users and critics alike. For example, it has a 4.6-star rating on Google Play and a 4.7-star rating on App Store. Here are some of the comments from satisfied players:


      "This game is awesome! It brings back memories of playing arcade games in the 90s. The graphics are amazing and the gameplay is smooth and addictive. I love the variety of planes and missions. Highly recommended!"


      "This game is very fun and challenging. It has a lot of levels and modes to keep you entertained. The planes are realistic and the sound effects are great. It is also very educational and informative. I learned a lot about WWII and the planes."


      "This game is a masterpiece. It is the best airplane shooting game I have ever played. It has everything you need: action, adventure, history, and fun. The game is very well designed and developed. It is a must-have for any fan of shooting games."


      How to Download and Install 1945 Air Force Game?


      If you are ready to play 1945 Air Force Game, you can download and install it on your Android or iOS device easily. Just follow these simple steps:



      -
      1. Go to Google Play or App Store on your device and search for 1945 Air Force Game.
      2. Select the game from the list of results and tap on Install or Get.
      3. Wait for the download and installation to complete.
      4. Once the game is installed, tap on Open or find the game icon on your home screen.
      5. Enjoy playing 1945 Air Force Game!
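
      If you prefer to kick off the install from a PC, the sketch below opens the game's Play Store listing on a USB-connected Android device. It assumes adb (from the Android platform-tools) is on your PATH and USB debugging is enabled; the package id used here is a placeholder, so look up the real value in the game's Play Store URL first.

```python
import subprocess

# Placeholder package id -- copy the 'id=' value from the game's
# Play Store page URL before running this.
package_id = "com.example.airforce1945"

# Sends a VIEW intent to the device, which opens the Play Store listing;
# the market:// URI is quoted so the device shell does not mangle the '?'.
subprocess.run(
    ["adb", "shell",
     f"am start -a android.intent.action.VIEW -d 'market://details?id={package_id}'"],
    check=True,
)
```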

      You can also download the game from the official website or scan the QR code below:

      [QR code for 1945 Air Force Game download]

      Conclusion


      1945 Air Force Game is a fantastic shooting arcade game that will take you back to the glory days of WWII. You will have a blast flying over 60 historical planes and fighting against various enemies in over 30 battle zones. You will also have fun customizing, upgrading, and merging your planes to create your own super planes.


      1945 Air Force Game is more than just a game. It is also a history lesson that will teach you some facts and stories about WWII and the planes that shaped it. You will appreciate the accuracy and detail that went into making this game.


      If you are looking for a fun, challenging, varied, and nostalgic shooting arcade game, you should definitely try out 1945 Air Force Game. You will not regret it!


      Download 1945 Air Force Game now from Google Play or the App Store, or visit the official website and social media pages for more information.


      FAQs

      Here are some frequently asked questions about 1945 Air Force Game:

      How much space does the game require?

      The game requires about 150 MB of space on your device.
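
      If you want to double-check that figure against your own device, a quick adb query shows the free space on shared storage. This assumes adb is installed and USB debugging is enabled:

```python
import subprocess

# Prints free space on the device's shared storage so you can confirm
# there is room for the roughly 150 MB install.
result = subprocess.run(
    ["adb", "shell", "df", "-h", "/sdcard"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```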

      Is the game free or paid?

      The game is free to download and play, but it contains ads and in-app purchases.

      How often is the game updated?

      The game is updated regularly with new features, planes, levels, events, and bug fixes.

      Can I play the game offline?

      You can play the game offline without an internet connection, but some features may not be available.

      How can I contact the developer or get support?

      You can contact the developer or get support by emailing them at support@onesoft.com.vn or visiting their Facebook page at https://www.facebook.com/1945AirForce/.

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Clash of Clans versi 13.576 7 - The Best Strategy Game for Your Mobile.md b/spaces/fatiXbelha/sd/Download Clash of Clans versi 13.576 7 - The Best Strategy Game for Your Mobile.md deleted file mode 100644 index f6e0baed4352522a5a4affc784f85ca94ba2ebb1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Clash of Clans versi 13.576 7 - The Best Strategy Game for Your Mobile.md +++ /dev/null @@ -1,140 +0,0 @@ - -

      How to Download Clash of Clans Versi 13.576.7


      Clash of Clans is one of the most popular and addictive strategy games in the world, with millions of players competing and cooperating in epic battles and quests. If you are a fan of this game, you might be wondering how to download Clash of Clans Versi 13.576.7, the latest version of the game that was released in December 2022.


      In this article, we will tell you everything you need to know about Clash of Clans Versi 13.576.7, including what it is, why you should download it, and how to download it from different sources. We will also give you some tips on how to install and run the game smoothly on your device.


      download clash of clans versi 13.576 7


      DOWNLOAD >>> https://urllie.com/2uNB8I




      What is Clash of Clans?


      A brief introduction to the game and its features


      Clash of Clans is a strategy game that was developed by Supercell, a Finnish company that also created other popular games like Hay Day, Boom Beach, and Brawl Stars. The game was first released in August 2012 for iOS devices, and later in October 2013 for Android devices.


      In Clash of Clans, you are the chief of a village that you have to build and defend from other players' attacks. You can also attack other players' villages to loot their resources and trophies. You can join or create a clan with other players to cooperate in clan wars, clan games, and clan chat.


      -

      The game has various elements that make it fun and challenging, such as:

      • Buildings: You can construct different types of buildings in your village, such as town hall, barracks, army camp, laboratory, spell factory, gold mine, elixir collector, dark elixir drill, gold storage, elixir storage, dark elixir storage, cannon, archer tower, mortar, air defense, wizard tower, hidden tesla, bomb tower, x-bow, inferno tower, eagle artillery, scattershot, giga tesla, giga inferno, walls, traps, decorations, etc.
      • Troops: You can train different types of troops in your barracks and dark barracks, such as barbarian, archer, giant, goblin, wall breaker, balloon, wizard, healer, dragon, pekka, baby dragon, miner, electro dragon, yeti, super barbarian, super archer, super giant, super wall breaker, super goblin, super wizard, super dragon, sneaky goblin, ice hound, inferno dragon, valkyrie, hog rider, golem, witch, lava hound, bowler, ice golem, headhunter, minion, hog glider, vampire, super minion, super valkyrie, super witch, rocket balloon, etc.
      • Spells: You can create different types of spells in your spell factory and dark spell factory, such as lightning spell, healing spell, rage spell, jump spell, freeze spell, clone spell, invisibility spell, poison spell, earthquake spell, haste spell, skeleton spell, bat spell, etc.
      • Heroes: You can unlock and upgrade different types of heroes in your village, such as barbarian king, archer queen, grand warden, royal champion, battle machine, etc.
      • Pets: You can unlock and upgrade different types of pets in your pet house, such as lava, unicorn, electro owl, mighty yak, etc.
      • Seasons: You can participate in monthly seasons that offer various rewards and challenges for completing tasks and reaching milestones.
      • Events: You can enjoy special events that offer discounts and bonuses for using certain troops, spells, or buildings.
      • Clan Wars: You can join or create a clan with other players and compete in clan wars against other clans. Clan wars are two-day events where each clan has one day to prepare and one day to attack. The clan with the most stars at the end of the war wins.
      • Clan War Leagues: You can join or create a clan with other players and compete in clan war leagues against other clans. Clan war leagues are seven-day events where each clan has one attack per day. The clans are ranked based on their performance and rewarded with league medals.
      • Clan Games: You can join or create a clan with other players and participate in clan games. Clan games are seven-day events where each clan member can complete various tasks to earn points. The clans are rewarded with clan game points that can be used to claim rewards from the clan game shop.
      • Builder Base: You can travel to the builder base, a separate village that has its own buildings, troops, spells, and heroes. You can attack other players' builder bases to earn versus trophies and loot. You can also upgrade your builder base to unlock new features and abilities.

      The latest version of the game and what's new in it


      The latest version of Clash of Clans is Versi 13.576.7, which was released on December 7th 2022. This version is also known as the Winter Update 2022 and it brings a lot of new features and improvements to the game. Some of the highlights of this version are:

      • New Town Hall 15: You can upgrade your town hall to level 15 and unlock new buildings, troops, spells, heroes, pets, and more. The town hall 15 also has a new weapon called the Giga Inferno that shoots powerful blasts of fire at the enemies.
      • New Hero: The Vampire: You can unlock a new hero called the Vampire at town hall 15. The Vampire is a dark elixir hero that can transform into a bat and fly over walls and obstacles. The Vampire also has a special ability called Bloodlust that allows him to heal himself by attacking enemies.
      • New Troop: The Rocket Balloon: You can unlock a new troop called the Rocket Balloon at town hall 15. The Rocket Balloon is an elixir troop that flies over walls and targets defenses with explosive rockets. The Rocket Balloon also has a special ability called Kamikaze that allows it to deal extra damage when it dies.
      • New Spell: The Invisibility Spell: You can unlock a new spell called the Invisibility Spell at town hall 15. The Invisibility Spell is an elixir spell that makes your troops invisible for a short duration. The Invisibility Spell also has a special effect that makes your troops ignore traps and enemy heroes while invisible.
      • New Pet: The Mighty Yak: You can unlock a new pet called the Mighty Yak at town hall 15. The Mighty Yak is an elixir pet that follows your troops and helps them by breaking walls and obstacles with its horns.
      • New Super Troops: The Super Minion and the Super Valkyrie: You can unlock two new super troops at town hall 15. The Super Minion is an upgraded version of the Minion that has more health and damage and shoots long-range projectiles. The Super Valkyrie is an upgraded version of the Valkyrie that has more health and damage and spins around with her axe to deal splash damage.
      • New Siege Machine: The Siege Barracks: You can unlock a new siege machine at town hall 15. The Siege Barracks is a special type of siege machine that deploys troops instead of buildings when it reaches the enemy base. The Siege Barracks also has a special ability called Reinforcements that allows it to deploy more troops over time.
      • New Scenery: The Winter Wonderland: You can unlock a new scenery for your village at town hall 15. The Winter Wonderland is a festive and snowy scenery that features a giant snowman, a frozen lake, a candy cane forest, and more.
      • New Quality of Life Improvements: You can enjoy various quality of life improvements in the game, such as:
        - A new option to switch between day and night mode in the settings
        - A new option to filter and sort your troops, spells, heroes, and pets in the army overview
        - A new option to customize your clan badge with different shapes, patterns, and colors
        - A new option to view the stats and abilities of your troops, spells, heroes, and pets in the laboratory
        - A new option to donate troops, spells, siege machines, and super troops to your clan mates in the chat
        - A new option to request specific troops, spells, siege machines, and super troops from your clan mates in the chat
        - A new option to use gems to finish the cooldown of your super troops
        - A new option to use gems to skip the waiting time of your clan games tasks
        - A new option to use gems to reroll your clan games tasks
        - A new option to use gems to boost your builder base clock tower for longer durations
        - A new option to use gems to reset your builder base versus battle timer
        - A new option to use gems to buy more builder base loot from the shop
        - A new option to use gems to buy more league medals from the shop

      Why Download Clash of Clans Versi 13.576.7?


      The benefits of downloading the latest version of the game


      Downloading Clash of Clans Versi 13.576.7 has many benefits for you as a player. Some of the benefits are:

      • You can enjoy all the new features and improvements that we mentioned above.
      • You can access all the content and events that are exclusive to the latest version of the game.
      • You can play with other players who have also downloaded the latest version of the game.
      • You can avoid any bugs or glitches that might occur in older versions of the game.
      • You can ensure that your game is secure and compatible with your device.

      The drawbacks of not downloading the latest version of the game


      Not downloading Clash of Clans Versi 13.576.7 has many drawbacks for you as a player. Some of the drawbacks are:

      • You will miss out on all the new features and improvements that we mentioned above.
      • You will not be able to access all the content and events that are exclusive to the latest version of the game.
      • You will not be able to play with other players who have downloaded the latest version of the game.
      • You might encounter some bugs or glitches that have been fixed in newer versions of the game.
      • You might risk your game being insecure or incompatible with your device.

      How to Download Clash of Clans Versi 13.576.7?


      The steps to download the game from different sources


      There are different ways to download Clash of Clans Versi 13.576.7, depending on your device and preference. Here are some of the most common sources and the steps to download the game from them:


      Download from Google Play Store


      If you have an Android device, the easiest way to download Clash of Clans Versi 13.576.7 is from the Google Play Store. Here are the steps to do so:

      1. Open the Google Play Store app on your device.
      2. Search for "Clash of Clans" in the search bar.
      3. Tap on the game icon that appears in the results.
      4. Tap on the "Update" button if you have an older version of the game installed, or tap on the "Install" button if you don't have the game installed.
      5. Wait for the download and installation to complete.
      6. Open the game and enjoy!

      Download from Softpedia


      If you have an Android device, another way to download Clash of Clans Versi 13.576.7 is from Softpedia, a website that offers free software downloads. Here are the steps to do so:

      1. Open your web browser and go to https://www.softpedia.com/get/Mobile-Phone-Tools/Android/Clash-of-Clans.shtml.
      2. Scroll down and click on the "Download Now" button under the "Softpedia Secure Download (APK)" section.
      3. Wait for the download to finish and locate the APK file on your device.
      4. Tap on the APK file and allow it to install on your device. You might need to enable "Unknown Sources" in your settings to do so.
      5. Open the game and enjoy!
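
      As an alternative to tapping the APK on the device itself, you can sideload it from a PC with adb. This is a minimal sketch assuming adb is on your PATH, USB debugging is enabled, and the APK file name (a placeholder here) matches what you actually downloaded:

```python
import subprocess

# Hypothetical file name -- point it at the APK you saved from Softpedia.
apk_path = "clash-of-clans-13.576.7.apk"

# 'adb install -r' installs the APK over USB, reinstalling and keeping
# the app's data if an older version is already present.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
print("Installed", apk_path)
```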

      Download from Archive.org


      If you have an Android device, yet another way to download Clash of Clans Versi 13.576.7 is from Archive.org, a website that preserves digital content. Here are the steps to do so:

      1. Open your web browser and go to https://archive.org/details/clash-of-clans_20211207.
      2. Click on the "DOWNLOAD OPTIONS" section and select "APK File".
      3. Wait for the download to finish and locate the APK file on your device.
      4. Tap on the APK file and allow it to install on your device. You might need to enable "Unknown Sources" in your settings to do so.
      5. Open the game and enjoy!
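
      If you would rather fetch the file from a PC, a few lines of Python can download it directly. The file URL below is a placeholder; copy the real link from the "DOWNLOAD OPTIONS" panel on the Archive.org page:

```python
import urllib.request

# Placeholder URL -- copy the actual file link from the 'DOWNLOAD OPTIONS'
# panel on https://archive.org/details/clash-of-clans_20211207.
url = "https://archive.org/download/clash-of-clans_20211207/clash-of-clans.apk"
destination = "clash-of-clans.apk"

urllib.request.urlretrieve(url, destination)
print("Saved", destination)
```

      urlretrieve is fine for a one-off grab like this; for anything bigger you would want proper error handling and progress reporting.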

      The tips to install and run the game smoothly


      To ensure that you can install and run Clash of Clans Versi 13.576.7 smoothly on your device, here are some tips that you should follow:

      • Make sure that your device meets the minimum requirements for the game, which are:
        - Android version 4.4 or higher
        - At least 1 GB of RAM
        - At least 200 MB of free storage space
        - A stable internet connection
      • Make sure that you have enough battery power or plug in your device while playing the game.
      • Make sure that you close any unnecessary apps or background processes that might slow down your device or interfere with the game.
      • Make sure that you update your device's software and drivers regularly to avoid any compatibility issues or bugs.
      • Make sure that you clear your device's cache and data regularly to free up some space and improve performance.
      • If you encounter any problems or errors while playing the game, try these solutions:
        - Restart your device and try again.
        - Reinstall the game and try again.
        - Contact Supercell's support team via email or in-game chat.
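
      To check the first of those requirements without digging through menus, you can query a connected device's Android version over adb. This sketch assumes adb is installed, and it maps the Android 4.4 minimum to its SDK level (19):

```python
import subprocess

def prop(name: str) -> str:
    """Read a system property from a USB-connected Android device via adb."""
    out = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

release = prop("ro.build.version.release")  # e.g. "13"
sdk = int(prop("ro.build.version.sdk"))     # Android 4.4 corresponds to SDK 19

print(f"Android {release} (SDK {sdk}):",
      "meets the Android 4.4 minimum" if sdk >= 19 else "below the Android 4.4 minimum")
```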

      Conclusion


      A summary of the main points and a call to action


      In conclusion, Clash of Clans Versi 13.576.7 is the latest version of one of the most popular and addictive strategy games in the world. It offers a lot of new features and improvements that make the game more fun and challenging. You can download it from different sources, such as Google Play Store, Softpedia, or Archive.org, depending on your preference. You should also follow some tips to install and run the game smoothly on your device. If you are a fan of this game, you should not miss this opportunity to download Clash of Clans Versi 13.576.7 and enjoy the best gaming experience ever. So, what are you waiting for? Download it now and join the millions of players who are already playing it!


      FAQs


      Q1: Is Clash of Clans Versi 13.576.7 compatible with my device?


      A1: Clash of Clans Versi 13.576.7 is compatible with most Android devices that have Android version 4.4 or higher, at least 1 GB of RAM, and at least 200 MB of free storage space. You can check your device's specifications in the settings or online.


      Q2: How much space does Clash of Clans Versi 13.576.7 take on my device?


      A2: Clash of Clans Versi 13.576.7 takes about 200 MB of storage space on your device, but this may vary depending on your device and the updates that you download. You can check the storage space used by the game in the settings or in the app manager.


      Q3: How can I update Clash of Clans Versi 13.576.7 to the next version?


      A3: You can update Clash of Clans Versi 13.576.7 to the next version by following the same steps that you used to download it, depending on the source that you used. You can also enable automatic updates in the settings or in the app store to get the latest version as soon as it is available.


      Q4: How can I contact the developers of Clash of Clans Versi 13.576.7 if I have any issues or feedback?


      A4: You can contact the developers of Clash of Clans Versi 13.576.7 by sending them an email at clashofclans.feedback@supercell.com or by using the in-game chat feature that allows you to report bugs, suggest improvements, or ask questions.


      Q5: How can I join a clan or create my own clan in Clash of Clans Versi 13.576.7?


      A5: You can join a clan or create your own clan in Clash of Clans Versi 13.576.7 by tapping on the clan icon on the bottom left corner of the screen and then choosing one of the options: join a clan, create a clan, or search for a clan. You can also invite your friends to join your clan or accept invitations from other clans.

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download FNF Music Battle Beat Shooter Mod APK and Rock the Night with Unlimited Money and Funky Beats.md b/spaces/fatiXbelha/sd/Download FNF Music Battle Beat Shooter Mod APK and Rock the Night with Unlimited Money and Funky Beats.md deleted file mode 100644 index 18b5c337b4e652baad77aaec59a9c7ff3cbb1d10..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download FNF Music Battle Beat Shooter Mod APK and Rock the Night with Unlimited Money and Funky Beats.md +++ /dev/null @@ -1,87 +0,0 @@ - -

      FNF Music Battle: Beat Shooter Mod APK (Unlimited Money) - A Fun and Challenging Rhythm Game

      -

      If you love rhythm games and Friday Night Funkin', then you should definitely check out FNF Music Battle: Beat Shooter Mod APK (Unlimited Money). This is a fun and challenging game where you have to tap on the screen in sync with the music and shoot your enemies. You can choose from a variety of songs and characters, each with their own style and personality. You can also play with unlimited money, which means you can unlock all the songs and characters without any hassle. In this article, we will tell you more about this game and how to download and install it on your Android device.

      -

      What is FNF Music Battle: Beat Shooter?

      -

      FNF Music Battle: Beat Shooter is a rhythm game developed by HyperCat Games. It is inspired by Friday Night Funkin', a popular web game where you have to rap battle against various opponents. However, FNF Music Battle: Beat Shooter adds a twist to the gameplay by making you shoot your enemies with your gun while tapping on the screen. The game has colorful graphics, catchy music, and smooth animations that will keep you entertained for hours.

      -

      fnf music battle beat shooter mod apk (unlimited money)


      DOWNLOAD ⇒⇒⇒ https://urllie.com/2uNxJO



      -

      To download and install FNF Music Battle: Beat Shooter Mod APK (Unlimited Money), you need to follow these simple steps:

      -
      1. Click on the download button below to get the mod apk file.
      2. Allow unknown sources in your device settings if you haven't done so already.
      3. Locate the downloaded file in your file manager and tap on it to install it.
      4. Launch the game and enjoy playing with unlimited money.
      Download FNF Music Battle: Beat Shooter Mod APK (Unlimited Money)

      -

      fnf music battle beat shooter hack apk download
      -fnf music battle beat shooter mod apk latest version
      -fnf music battle beat shooter unlimited money and gems
      -fnf music battle beat shooter mod menu apk
      -fnf music battle beat shooter apk mod free shopping
      -fnf music battle beat shooter cheats android
      -fnf music battle beat shooter mod apk revdl
      -fnf music battle beat shooter mod apk happymod
      -fnf music battle beat shooter premium apk unlocked
      -fnf music battle beat shooter mod apk no ads
      -fnf music battle beat shooter mod apk rexdl
      -fnf music battle beat shooter unlimited coins and diamonds
      -fnf music battle beat shooter hack version download
      -fnf music battle beat shooter mod apk android 1
      -fnf music battle beat shooter mod apk offline
      -fnf music battle beat shooter unlimited everything
      -fnf music battle beat shooter mod apk unlimited lives
      -fnf music battle beat shooter hack tool online
      -fnf music battle beat shooter mod apk obb
      -fnf music battle beat shooter mod apk all songs unlocked
      -fnf music battle beat shooter hack apk 2023
      -fnf music battle beat shooter mod apk unlimited ammo
      -fnf music battle beat shooter mod apk vip
      -fnf music battle beat shooter hack online generator
      -fnf music battle beat shooter mod apk god mode
      -fnf music battle beat shooter mod apk unlimited energy
      -fnf music battle beat shooter hack no human verification
      -fnf music battle beat shooter mod apk pro
      -fnf music battle beat shooter mod apk unlimited keys
      -fnf music battle beat shooter hack ios download
      -fnf music battle beat shooter mod apk unlimited stars
      -fnf music battle beat shooter mod apk mega mod
      -fnf music battle beat shooter hack no survey no password
      -fnf music battle beat shooter mod apk full version
      -fnf music battle beat shooter mod apk unlimited gold
      -fnf music battle beat shooter hack without root
      -fnf music battle beat shooter mod apk unlocked all levels
      -fnf music battle beat shooter mod apk unlimited bullets
      -fnf music battle beat shooter hack game guardian
      -fnf music battle beat shooter mod apk all characters unlocked

      -

      Why play FNF Music Battle: Beat Shooter Mod APK?

      -

      There are many reasons why you should play FNF Music Battle: Beat Shooter Mod APK (Unlimited Money). Here are some of them:

      -
      • You can play with unlimited money, which means you can unlock all the songs and characters without any hassle. You can also buy more ammo and health packs to help you survive longer.
      • You can choose from a variety of songs and characters, each with their own style and personality. You can play as Boyfriend, Girlfriend, Daddy Dearest, Mommy Mearest, Skid and Pump, Pico, Tankman, and many more. You can also play songs from different genres like rock, pop, hip hop, electro, and more.
      • You can play different modes and levels of difficulty to challenge yourself. You can play in normal mode, hard mode, or endless mode. You can also adjust the speed and accuracy of the game to suit your preference.
      How to play FNF Music Battle: Beat Shooter Mod APK?

      -

      The basic gameplay mechanics and controls of FNF Music Battle: Beat Shooter Mod APK (Unlimited Money) are simple and easy to learn. Here are some tips and tricks to help you improve your skills and score:

      -
      • The game is divided into rounds, where you have to tap on the screen in sync with the music and shoot your enemies. You have to match the color of the bullets with the color of the arrows on the screen. If you miss or hit the wrong color, you will lose health and ammo.
      • -
      • You have to pay attention to the rhythm and timing of the music and the arrows. The faster and more accurate you are, the higher your score will be. You can also earn combo bonuses by hitting multiple arrows in a row.
      • -
      • You have to watch out for your enemies' attacks and dodge them by swiping left or right on the screen. Some enemies will shoot back at you, while others will throw bombs or other objects at you. You have to avoid getting hit by them or you will lose health and ammo.
      • -
      • You have to use your money wisely to buy more ammo and health packs before each round. You can also upgrade your gun to make it more powerful and effective. You can also unlock new songs and characters by spending your money.
      • -
      • You have to try different songs and characters to find out which ones suit your style and preference. Some songs are faster or slower than others, while some characters have different abilities or weapons. You can also customize your character's appearance by changing their clothes, hair, accessories, etc.
      • -
      -

      Conclusion

      -

      FNF Music Battle: Beat Shooter Mod APK (Unlimited Money) is a fun and challenging rhythm game that will test your reflexes and musical skills. You can enjoy playing with unlimited money, a variety of songs and characters, different modes and levels of difficulty, and a colorful graphics and sound design. If you are a fan of rhythm games and Friday Night Funkin', then you should definitely download and install this game on your Android device today.

      -

      FAQs

      1. Is FNF Music Battle: Beat Shooter Mod APK safe to use?

        Yes, it is safe to use as long as you download it from a trusted source. The mod apk file has been scanned for viruses and malware and has no harmful effects on your device.

      2. Do I need an internet connection to play FNF Music Battle: Beat Shooter Mod APK?

        No, you can play it offline without any problems. However, you may need an internet connection to update the game or access some online features like leaderboards or social media integration.

      3. Can I play FNF Music Battle: Beat Shooter Mod APK on PC?

        Yes, you can play it on PC using an Android emulator like Bluestacks or NoxPlayer. You just need to download and install the emulator on your PC, then download and install the mod apk file on the emulator. You can then launch the game and play it with your keyboard and mouse.

      4. How can I update FNF Music Battle: Beat Shooter Mod APK?

        You can update it by downloading the latest version and installing it over the old one. You don't need to uninstall the previous version, just overwrite it with the new one. You can also check for updates within the game settings.

      5. How can I contact the developers of FNF Music Battle: Beat Shooter Mod APK?

        You can contact them by sending an email to hypercatgames@gmail.com or visiting their Facebook page at https://www.facebook.com/hypercatgames/. You can also leave a comment or a rating on the Google Play Store or the mod apk website.


      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Gong The Best Call Recording Software for Sales Teams.md b/spaces/fatiXbelha/sd/Download Gong The Best Call Recording Software for Sales Teams.md deleted file mode 100644 index 6a6192f6a70bad6ed76f9c72e1598cd315d113df..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Gong The Best Call Recording Software for Sales Teams.md +++ /dev/null @@ -1,113 +0,0 @@ -

      How to Download Gong Sounds for Meditation and Relaxation


      If you are looking for a way to enhance your meditation and relaxation practice, you might want to try using gong sounds. Gongs are ancient percussion instruments that produce rich and complex tones that can help you achieve a deeper state of awareness and calmness. In this article, we will explain what gongs are, why they are beneficial for meditation, how to find and download gong sounds online, and how to use them for meditation and relaxation.


      download gong


      Download Zip https://urllie.com/2uNuXj




      What is a Gong and Why Use It for Meditation?


      A gong is a flat, circular metal disc that is typically struck with a mallet. Gongs originated in East Asia and Southeast Asia, where they were used for ceremonial, musical, and spiritual purposes. Gongs can vary in size, shape, design, and tuning, depending on the culture and tradition they belong to. Some of the most common types of gongs are:

      • The Chinese tam-tam, which has a smooth surface and produces a loud crash sound.
      • The Indonesian gamelan gong, which has a raised center and produces a melodic sound.
      • The Tibetan singing bowl, which has a curved shape and produces a sustained sound when rubbed with a wooden stick.

      Gongs are ideal for meditation because they create harmonic vibrations that resonate with the human body and mind. According to some studies, gong sounds can:

      • Reduce stress and anxiety.
      • Enhance focus and concentration.
      • Balance the left and right brain hemispheres.
      • Stimulate the immune system and the nervous system.
      • Clear negative energy and emotions.

      How to Find and Download Gong Sounds Online


      If you don't have access to a physical gong, you can still enjoy its benefits by downloading gong sounds online. There are many websites and apps that offer free or paid gong sound effects that you can use for meditation and relaxation. Here are some of the best ones:



      -

      The Best Websites for Free Gong Sound Effects

      -

      If you want to download gong sound effects as mp3 or wav files, you can visit these websites:

      - - - - - -
      | Website | Description |
      | --- | --- |
      | Pixabay | A website that offers 100 royalty-free gong sound effects that you can download and use in your projects. |
      | Orange Free Sounds | A website that offers free gong sound effects that you can download as mp3 files. |
      | [Freesound](https://freesound.org/) | A website that offers a large collection of free gong sound effects uploaded by users. |

      The Best Apps for Gong Meditation


      If you want to use your smartphone or tablet as a gong meditation device, you can download these apps:

      - - - - -
      AppDescription
      [Gong.io](^2^)An app that records your sales calls on web conferencing and phone and analyzes them with AI. You can use it to improve your sales skills and performance.
      [Gong Master](https://play.google.com/store/apps/details?id=com.gongmaster&hl=en_US&gl=US)An app that lets you play different types of gongs with realistic sounds and effects.
      The table's last entry, Gong Bath Meditation, is truncated in the source (the link fragment is https://apps.apple.com/us/app/gong-bath).

      How to Use Gong Sounds for Meditation and Relaxation

      Once you have downloaded your gong sounds, you can bring them into your practice with a few simple techniques:

      • Breathing: You can breathe slowly and deeply in sync with the gong sound, inhaling through your nose and exhaling through your mouth. Breathing can help you relax your body and mind and connect with the gong sound.
    • Listening: You can listen to the gong sound with an open and attentive mind, without judging or analyzing it. You can also focus on different aspects of the gong sound, such as its pitch, volume, duration, or timbre. Listening can help you enhance your awareness and concentration and immerse yourself in the gong sound.
    • Feeling: You can feel the gong sound with your whole body, noticing how it vibrates and resonates with your cells and organs. You can also feel the emotions and sensations that the gong sound evokes in you, such as joy, peace, gratitude, or love. Feeling can help you balance your energy and emotions and align yourself with the gong sound.
    • Visualizing: You can visualize the gong sound with your imagination, creating images or scenes that match the gong sound. You can also visualize your intentions or goals that you want to achieve with the gong sound, such as healing, clarity, or wisdom. Visualizing can help you stimulate your creativity and intuition and manifest your desires with the gong sound.
      Conclusion


      Gong sounds are powerful tools that can help you improve your meditation and relaxation practice. By downloading gong sounds online, you can access the benefits of gong meditation anytime and anywhere. You can also customize your gong meditation session according to your mood and purpose, using different techniques such as breathing, listening, feeling, and visualizing. Gong meditation can help you reduce stress, enhance focus, balance energy, clear negativity, and achieve inner harmony.


      FAQs


      Here are some frequently asked questions about gong meditation:

      1. How long should I practice gong meditation?

        There is no fixed rule on how long you should practice gong meditation. It depends on your personal preference and availability. However, a general guideline is to start with 10 to 15 minutes per session and gradually increase the duration as you become more comfortable and experienced.

      2. What kind of gong sound should I use for sleep?

        If you want to use gong sounds for sleep, you should choose a gong sound that has a low pitch and a slow rhythm. This can help you relax your body and mind and induce a state of deep sleep. You can also use a timer or a loop function to play the gong sound for a certain period of time or until you fall asleep.

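
        For example, a few lines of Python with the pygame library can loop a downloaded gong sound and stop it on a simple timer; the file name and the 30-minute duration are placeholders you can adjust:

```python
import time
import pygame

pygame.mixer.init()
pygame.mixer.music.load("gong.mp3")   # any downloaded gong sound; the name is a placeholder
pygame.mixer.music.play(loops=-1)     # loops=-1 repeats the track indefinitely

time.sleep(30 * 60)                   # simple sleep timer: let it play for 30 minutes
pygame.mixer.music.stop()
```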
      3. Can I use gong sounds with other meditation methods?

        Yes, you can use gong sounds with other meditation methods, such as mindfulness, mantra, or guided meditation. Gong sounds can complement and enhance any meditation method by creating a supportive and harmonious environment.

      4. Are there any risks or side effects of gong meditation?

        Gong meditation is generally safe and beneficial for most people. However, some people may experience some discomfort or sensitivity to loud or high-pitched sounds. If this happens, you should lower the volume of the gong sound or stop the session. You should also consult your doctor before practicing gong meditation if you have any medical conditions or concerns.

      5. Where can I learn more about gong meditation?

        If you want to learn more about gong meditation, you can visit these resources:

        • Gongs Unlimited: A website that sells various types of gongs and accessories.
        • [Gongs for Meditation](https://gongsformeditation.com/): A website that offers online courses and workshops on gong meditation.
        • [The Gong Space](https://thegongspace.co.uk/): A website that provides information and articles on gong meditation.

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Inazuma Eleven GO Strikers 2013 and Join the Battle for the Future of Football.md b/spaces/fatiXbelha/sd/Download Inazuma Eleven GO Strikers 2013 and Join the Battle for the Future of Football.md deleted file mode 100644 index 9b52fb467be764adf8df0e2cdc8520bf327d7adc..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Inazuma Eleven GO Strikers 2013 and Join the Battle for the Future of Football.md +++ /dev/null @@ -1,104 +0,0 @@ -

      Inazuma Eleven GO Strikers 2013 Download: How to Play the Ultimate Soccer Game on Your PC


      If you are a fan of soccer games with superpowers, you might have heard of Crows Zero... no, of Inazuma Eleven GO Strikers 2013. This is a spin-off game from the popular Inazuma Eleven franchise that features characters from the original DS games and the anime series. It was released in Japan in 2012 for the Nintendo Wii, but it never made it to other regions. However, thanks to some dedicated fans and emulator software, you can now play this amazing game on your PC. In this article, we will show you how to download and install Inazuma Eleven GO Strikers 2013 on your PC, how to play it, and what are the benefits and challenges of doing so.


      What is Inazuma Eleven GO Strikers 2013?


      Inazuma Eleven GO Strikers 2013 is a soccer game with a twist. Instead of realistic physics and rules, it features spectacular super techniques and fast-paced action that create an unprecedented football experience. You can take on the role of Arion Sherwind, Mark Evans, and other well-known characters from the Inazuma Eleven universe and use their incredible skills to outsmart your rivals. You can also relive the events of the Inazuma Eleven franchise from the beginnings of the Raimon soccer club to the battle for the future against El Dorado in a whole new style.


      inazuma eleven go strikers 2013 download


      DOWNLOAD ✯✯✯ https://urllie.com/2uNxcW




      The game has four main modes: Story Mode, Exhibition Mode, Tournament Mode, and Online Mode. In Story Mode, you can follow the plot of the anime series and face different teams in various scenarios. In Exhibition Mode, you can create your own matches with customized settings and teams. In Tournament Mode, you can compete in different cups with different difficulty levels and rewards. And in Online Mode, you can play with or against other players from around the world in a battle of 4 vs 4.


      How to Download and Install Inazuma Eleven GO Strikers 2013 on Your PC

      Since Inazuma Eleven GO Strikers 2013 is a Wii game, you will need emulator software that can run Wii games on your PC. The most widely used emulator for this purpose is Dolphin, which is free and easy to use. You will also need the game files or ISO file of Inazuma Eleven GO Strikers 2013, which you can find online from various sources. However, be careful when downloading files from unknown websites, as they might contain viruses or malware. Here are the steps to download and install Inazuma Eleven GO Strikers 2013 on your PC:
      1. Download Dolphin Emulator from its official website or from a trusted source. Choose the version that matches your operating system and your PC specifications.
      2. Install Dolphin Emulator on your PC by following the instructions on the screen. You might need to update some drivers or install some additional software to make it work properly.
      3. Download the Inazuma Eleven GO Strikers 2013 ISO file from a reliable source, and verify it before use (see the checksum sketch after this list). The file will be in Japanese, as there is no official English version of the game. You can also use a fan-made English patch to translate some of the text and menus, but it is not complete or perfect; the patch can be found online from various sources.
      4. Extract the ISO file and the patch file (if you have it) to a folder on your PC. You can use any software that can handle ZIP or RAR files, such as WinRAR or 7-Zip.
      5. Open Dolphin Emulator and click on the "Open" button in the top left corner. Browse to the folder where you extracted the ISO file and select it. The game should appear on the main screen of the emulator.
      6. If you want to apply the English patch, click on the game and then click on "Properties". Go to the "Patches" tab and click on "Add". Browse to the folder where you extracted the patch file and select it. Make sure the patch is enabled and click on "Close".
      7. Click on the game and then click on "Play" to start playing Inazuma Eleven GO Strikers 2013 on your PC.
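      Because these ISOs circulate through unofficial mirrors, it is worth verifying what you downloaded before booting it. Below is a minimal Python sketch that streams a file through SHA-256 so you can compare the result against a checksum published by a source you trust; the file name and the expected value are placeholders, not real data.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so a large ISO never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values below are placeholders, not real data.
expected = "paste-the-published-checksum-here"
actual = sha256_of("Inazuma_Eleven_GO_Strikers_2013.iso")
print("checksum OK" if actual == expected else f"MISMATCH: {actual}")
```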

      How to Play Inazuma Eleven GO Strikers 2013 on Your PC

      Playing Inazuma Eleven GO Strikers 2013 on your PC is similar to playing it on a Wii console, except that you will use your keyboard and mouse or a gamepad instead of a Wii Remote and Nunchuk. You can customize the controls to your preferences by going to "Options" and then "Controller Settings" in Dolphin Emulator. You can also adjust the graphics, sound, and other settings to improve your gaming experience.
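      Once the controls are set up, you can also skip the GUI and boot the game directly from a script. A minimal sketch, assuming Dolphin supports the command-line `-e <file>` option on your build (check `Dolphin.exe --help` to confirm) and using example paths:

```python
import subprocess

# Paths are examples; adjust to your install. The -e flag tells Dolphin
# which disc image to boot (flags can differ between Dolphin builds).
DOLPHIN = r"C:\Program Files\Dolphin-x64\Dolphin.exe"
ISO = r"C:\games\Inazuma_Eleven_GO_Strikers_2013.iso"

subprocess.run([DOLPHIN, "-e", ISO], check=True)
```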

      The game has a simple and intuitive interface that allows you to navigate through different menus and options. You can use the arrow keys or the D-pad to move the cursor, and the A button or Enter key to confirm your selection. You can also use the B button or Backspace key to go back or cancel. You can access more options by pressing the + button or Tab key, such as saving, loading, quitting, or changing the game settings.


      The gameplay of Inazuma Eleven GO Strikers 2013 is fast-paced and exciting, as you control your team of soccer players with superpowers and try to score more goals than your opponents. You can use different techniques and strategies to gain an advantage over your rivals, such as passing, dribbling, shooting, tackling, blocking, stealing, or using special moves. You can also mixi max your players to combine their abilities and create new ones.


      What are the Benefits of Playing Inazuma Eleven GO Strikers 2013 on Your PC?

      Playing Inazuma Eleven GO Strikers 2013 on your PC has many benefits compared to playing it on a Wii console, such as:

      • You can enjoy better graphics and sound quality, as Dolphin Emulator can enhance the resolution, framerate, anti-aliasing, texture filtering, and audio output of the game.
      • You can play online with other players from around the world, as Dolphin Emulator supports netplay and Wi-Fi connection.
      • You can customize your controls and settings to your preferences, as Dolphin Emulator allows you to use any input device and adjust many parameters of the game.
      • You can save your progress anytime and anywhere, as Dolphin Emulator has a save-state feature that lets you create multiple save files for different situations.
      • You can use cheats and hacks to modify the game as you wish, as Dolphin Emulator has a cheat manager that lets you enter codes or download them from online databases.

      What are the Challenges of Playing Inazuma Eleven GO Strikers 2013 on Your PC?

      Playing Inazuma Eleven GO Strikers 2013 on your PC also has some challenges that you should be aware of, such as:

      • You might encounter compatibility issues, as Dolphin Emulator might not run smoothly on some PCs or with some games. You might need to tweak some settings or update some drivers to fix them.
      • You might face language barriers, as Inazuma Eleven GO Strikers 2013 is only available in Japanese. You might need to use a fan-made English patch or a translation tool to understand some of the text and menus.
      • You might have legal concerns, as downloading and playing Inazuma Eleven GO Strikers 2013 on your PC might violate some copyright laws or terms of service. You should only do so if you own a legitimate copy of the game and a Wii console.

      What are the Best Tips and Tricks for Playing Inazuma Eleven GO Strikers 2013 on Your PC?

      If you want to master Inazuma Eleven GO Strikers 2013 on your PC, here are some tips and tricks that might help you:

      • Learn the basics of the game, such as the controls, the menus, and the game modes. You can find tutorials and guides online from various sources.
      • Unlock more characters, teams, and techniques by playing Story Mode and Tournament Mode. You can also use cheats or hacks to unlock them faster.
      • Use mixi max to combine the abilities of your players and create new ones. You can mixi max up to three players at a time, but you can only use one mixi max per match.
      • Use special moves wisely, as they consume TP (technical points) and can be blocked or countered by your opponents. You can also use chain shots or co-op shots to increase their power or accuracy.
      • Play online with other players to test your skills and have fun. You can join or create rooms with different settings and rules, or you can use matchmaking to find suitable opponents.

      Conclusion

      Inazuma Eleven GO Strikers 2013 is a soccer game with superpowers that offers a unique and thrilling football experience. It features characters from the Inazuma Eleven franchise and lets you relive their adventures in a new style. It was released in Japan for the Nintendo Wii, but you can play it on your PC with Dolphin Emulator and an ISO file. Playing on your PC has many benefits, such as better graphics, online features, and customization options. However, it also has some challenges, such as compatibility issues, language barriers, and legal concerns. If you want to play Inazuma Eleven GO Strikers 2013 on your PC, you should follow the steps in this article and use the tips and tricks we provided. We hope you enjoy this game as much as we do!

      FAQs

      Here are some frequently asked questions and answers about Inazuma Eleven GO Strikers 2013 and its download process:

      Q: Is Inazuma Eleven GO Strikers 2013 available in English?

      A: No, there is no official English version of the game. However, you can use a fan-made English patch to translate some of the text and menus, but it is not complete or perfect.

      Q: Is Inazuma Eleven GO Strikers 2013 compatible with Windows 10?

      A: Yes, Dolphin Emulator can run on Windows 10 and other operating systems. However, you might need to update some drivers or install some additional software to make it work properly.

      Q: Is Inazuma Eleven GO Strikers 2013 safe to download?

      A: It depends on where you download it from. You should only download files from trusted sources that have positive reviews and feedback. You should also scan the files with antivirus software before opening them.

      Q: Is Inazuma Eleven GO Strikers 2013 legal to play?

      A: It depends on your location and situation. You should only play Inazuma Eleven GO Strikers 2013 on your PC if you own a legitimate copy of the game and a Wii console. Otherwise, you might violate copyright laws or terms of service.

      Q: Is Inazuma Eleven GO Strikers 2013 fun to play?

      A: Yes, it is very fun to play! If you like soccer games with superpowers, you will love Inazuma Eleven GO Strikers 2013. It has amazing graphics, sound, gameplay, and online features that will keep you entertained for hours.
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download YouTube APK 6.0.1 for Android - Latest Version.md b/spaces/fatiXbelha/sd/Download YouTube APK 6.0.1 for Android - Latest Version.md deleted file mode 100644 index 21e74b0174e6c2a8b7936c22823b0c6159073fe4..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download YouTube APK 6.0.1 for Android - Latest Version.md +++ /dev/null @@ -1,73 +0,0 @@ -
      YouTube APK 6.0.1: What You Need to Know

      YouTube is one of the most popular and widely used video-sharing platforms in the world. It allows you to watch, upload, share, comment, and like millions of videos on various topics, from music and entertainment to education and news. You can also create your own channel, upload your own videos, and interact with other users.


      However, if you want to enjoy YouTube on your Android device, you may encounter some limitations or issues with the official app. For example, you may not be able to watch some videos due to regional restrictions, or you may have to deal with annoying ads and interruptions. You may also want to have more control over the video quality, playback speed, and other settings.


      That's why many users opt for downloading YouTube APK, which is an alternative version of the official app that offers more features and flexibility. In this article, we will explain what YouTube APK is, why you may want to download it, how to do it safely, and what are the features and benefits of YouTube APK 6.0.1, the latest version available as of June 2023.


      What is YouTube APK?


      YouTube APK is a file that contains the installation package of YouTube for Android devices. APK stands for Android Package Kit, and it is the format used by Android to distribute and install apps. You can download YouTube APK from various sources online, such as Uptodown or APKCombo, and install it on your device manually.
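      One way to see that an APK really is just a packaged archive: APKs use the ZIP container format, so you can list the contents of one with Python's standard library. A small sketch, with the file name as a placeholder:

```python
import zipfile

# An APK is an ordinary ZIP archive; the file name here is an example.
with zipfile.ZipFile("youtube-6.0.1.apk") as apk:
    for name in apk.namelist()[:10]:  # print the first few entries only
        print(name)
```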


      Why download YouTube APK?

      There are several reasons why you may want to download YouTube APK instead of using the official app from Google Play Store. Some of them are:

      • You want to access videos that are not available in your region or country due to geo-blocking or censorship.
      • You want to avoid ads and interruptions that may ruin your viewing experience.
      • You want to have more control over the video quality, playback speed, resolution, orientation, captions, and other settings.
      • You want to try out new features and updates before they are released officially.
      • You want to use an older or modified version of YouTube that suits your preferences or device specifications.

      How to download and install YouTube APK?

      Downloading and installing YouTube APK is not difficult, but you need to follow some steps carefully to avoid any problems or risks. Here are the steps you need to take (a sideloading sketch for installing from a PC follows this list):

      1. First, enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
      2. Next, find a reliable source for downloading YouTube APK. You can use sites like Uptodown or APKCombo, which offer safe and verified versions of YouTube APK. You can also use a VPN service if you want to bypass any regional restrictions or blocks.
      3. Then, download the YouTube APK file to your device. You can do this by clicking on the download button on the site or scanning the QR code if available.
      4. After that, locate the downloaded file on your device using a file manager app. You can usually find it in the Downloads folder.
      5. Finally, tap on the file and follow the instructions on the screen to install YouTube APK on your device. You may need to grant some permissions or accept some terms and conditions before proceeding.
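      As an alternative to steps 4 and 5, you can sideload the file from a PC over USB with Android's `adb` tool. A minimal sketch, assuming `adb` is installed and on your PATH, the device has USB debugging enabled, and the APK file name is an example:

```python
import subprocess

apk = "youtube-6.0.1.apk"  # example file name

# `adb install -r` reinstalls over an existing app while keeping its data.
result = subprocess.run(["adb", "install", "-r", apk],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```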

      What are the features of YouTube APK 6.0.1?

      • What is the difference between YouTube APK and YouTube Vanced?

        YouTube Vanced is a modified version of YouTube APK that offers more features and customization options, such as dark mode, ad-blocking, background play, and more. However, YouTube Vanced requires additional steps and apps to install and use, such as MicroG and Vanced Manager.

      • How can I update YouTube APK?

        You can update YouTube APK by downloading the latest version from the same source you used before, or by checking for updates within the app. However, you may need to uninstall the previous version before installing the new one.

      • How can I uninstall YouTube APK?

        You can uninstall YouTube APK by going to Settings > Apps > YouTube > Uninstall. You may also need to clear the cache and data of the app before uninstalling it.

      \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/generate_list.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/generate_list.py deleted file mode 100644 index 943d906781063c3584a7e5b5c784f8aac0694985..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/generate_list.py +++ /dev/null @@ -1,34 +0,0 @@ -"""This script is to generate training list files for Deep3DFaceRecon_pytorch -""" - -import os - -# save path to training data -def write_list(lms_list, imgs_list, msks_list, mode='train',save_folder='datalist', save_name=''): - save_path = os.path.join(save_folder, mode) - if not os.path.isdir(save_path): - os.makedirs(save_path) - with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in lms_list]) - - with open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in imgs_list]) - - with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in msks_list]) - -# check if the path is valid -def check_list(rlms_list, rimgs_list, rmsks_list): - lms_list, imgs_list, msks_list = [], [], [] - for i in range(len(rlms_list)): - flag = 'false' - lm_path = rlms_list[i] - im_path = rimgs_list[i] - msk_path = rmsks_list[i] - if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path): - flag = 'true' - lms_list.append(rlms_list[i]) - imgs_list.append(rimgs_list[i]) - msks_list.append(rmsks_list[i]) - print(i, rlms_list[i], flag) - return lms_list, imgs_list, msks_list diff --git a/spaces/fclong/summary/fengshen/models/deepVAE/deep_vae.py b/spaces/fclong/summary/fengshen/models/deepVAE/deep_vae.py deleted file mode 100644 index 08f03849469375d6f45eb26321b257b674250e77..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/models/deepVAE/deep_vae.py +++ /dev/null @@ -1,258 +0,0 @@ -# coding=utf-8 -# Copyright 2022 IDEA-CCNL The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch Della model. 
""" - -import torch -import torch.nn as nn -import torch.nn.functional as F -from dataclasses import dataclass -from typing import Optional, Tuple -from transformers.modeling_outputs import ModelOutput -from transformers.modeling_utils import PreTrainedModel -from fengshen.models.deepVAE.configuration_della import DellaModelConfig -from fengshen.models.deepVAE.latent_connector import GPT2ForDecoderLatentConnector, GPT2ForEncoderLatentConnector -from fengshen.models.deepVAE.utils import connect, compute_kl_loss, top_k_top_p_filtering, enforce_repetition_penalty - - -_CHECKPOINT_FOR_DOC = "della-226M-base" -_CONFIG_FOR_DOC = "DellaModelConfig" -_TOKENIZER_FOR_DOC = "BertTokenizer" -Della_model_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "della-226M-base" -] - - -@dataclass -class DellaModelOutput(ModelOutput): - logits: torch.FloatTensor = None - posterior_latents: Optional[Tuple[torch.FloatTensor]] = None - prior_latent: Optional[Tuple[torch.FloatTensor]] = None - - -class latent_layer(nn.Module): - def __init__(self, input_dim) -> None: - super().__init__() - self.W_hh = nn.Linear(input_dim, input_dim, bias=False) - self.W_ih = nn.Linear(input_dim, input_dim, bias=False) - self.tanh = nn.Tanh() - - def forward(self, z_lt_lm1, z_lm1): - # inputs are z_