diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Don 2 UPDATED Full Hindi Movie Hd With English Subtitles.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Don 2 UPDATED Full Hindi Movie Hd With English Subtitles.md
deleted file mode 100644
index 9aae26b29708aee7f71918fbb5756c29786182b9..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Don 2 UPDATED Full Hindi Movie Hd With English Subtitles.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
Don 2: A Thrilling Sequel to the 2006 Action Hit
-
If you are looking for a fast-paced and exciting movie to watch, you might want to check out Don 2, a sequel to the 2006 Indian action thriller Don. The movie stars Shah Rukh Khan as the international gangster Don, who has conquered the Asian underworld and now sets his sights on Europe. Along the way, he faces challenges from the Interpol, the mob bosses of each nation, and his own former allies.
-
The movie is directed by Farhan Akhtar, who also co-wrote the screenplay with Ameet Mehta and Amrish Shah. The movie also features Priyanka Chopra Jonas as Roma, an Interpol officer who is obsessed with catching Don; Boman Irani as Vardhan, Don's former enemy who joins forces with him; Kunal Kapoor as Sameer, Don's trusted friend; and Lara Dutta as Ayesha, Don's girlfriend.
The movie was released in 2011 and was a huge commercial and critical success. It was praised for its stylish cinematography, stunning action sequences, and charismatic performances by the lead actors. The movie also features a catchy soundtrack composed by Shankar-Ehsaan-Loy, with lyrics by Javed Akhtar.
-
If you want to watch Don 2, you can find it on various streaming platforms such as Netflix and Prime Video. The movie is available in Hindi with English subtitles, as well as in other languages such as German, Spanish, French, Italian, Korean, Chinese, and more. You can also rent or buy the movie on Amazon or other online platforms.
-
So what are you waiting for? Grab some popcorn and enjoy this thrilling ride with Don and his gang!
-
-
Don 2: The Plot
-
The movie begins with Don (Shah Rukh Khan) narrating his rise to power in the Asian underworld, after killing his lookalike Vijay and escaping from the Interpol. He reveals that he has a master plan to rob the currency printing plates from a bank in Berlin, Germany. To do this, he needs the help of Vardhan (Boman Irani), who is imprisoned in Malaysia.
-
Don surrenders himself to the Interpol in Malaysia, hoping to get close to Vardhan and break him out of jail. However, he is confronted by Roma (Priyanka Chopra Jonas), who has not forgotten her personal vendetta against him. She tries to stop him from escaping, but Don manages to outsmart her and frees Vardhan. They then fly to Zurich, Switzerland, where they meet Sameer (Kunal Kapoor), Don's friend and partner in crime.
-
-
In Zurich, Don also meets Ayesha (Lara Dutta), his girlfriend and accomplice. She helps him get in touch with Diwan (Alyy Khan), a hacker who can access the bank's security system. Don also recruits Jabbar (Nawab Shah), an assassin who can eliminate any obstacles in his way. With his team ready, Don sets his plan in motion.
-
However, things are not as easy as they seem. Don has to deal with the ruthless mob boss of Europe, Arjun Khanna (Om Puri), who does not want anyone to interfere with his business. He also has to face Malik (Florian Lukas), a German police officer who is determined to catch him. And most importantly, he has to watch out for Roma and her team, who are hot on his trail.
-
Will Don succeed in his daring heist? Will Roma finally get her revenge? Will Don's allies remain loyal to him? Watch Don 2 to find out!
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKPure How to Get Marvel Contest of Champions APK for Free.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKPure How to Get Marvel Contest of Champions APK for Free.md
deleted file mode 100644
index 4d118e2f66b757355836c14f02efca5dd7ef4dc0..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKPure How to Get Marvel Contest of Champions APK for Free.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Marvel Contest of Champions Apkpure: A Superhero Fighting Game for Your Mobile Device
-
Do you love Marvel comics and movies? Do you enjoy fighting games with simple controls and stunning graphics? If you answered yes to both questions, then you should check out Marvel Contest of Champions apkpure, a free-to-play mobile game that lets you collect and battle with your favorite Marvel characters. In this article, we will tell you everything you need to know about Marvel Contest of Champions apkpure, including how to play it, who are the characters, what are the tips, and what are the reviews.
-
How to Play Marvel Contest of Champions Apkpure
-
Marvel Contest of Champions apkpure is a fighting game that pits Marvel heroes and villains against each other in epic duels. You can download the game from ApkCombo, a website that provides free APK files for Android devices. The game requires an internet connection and about 1.5 GB of storage space.
The game has a simple touchscreen interface that allows you to control your character's movements and attacks. You can tap to perform light attacks, swipe to perform medium attacks, press and hold to perform heavy attacks, and swipe back to dodge or block. You can also unleash powerful special attacks when your power meter is full, which is indicated by the blue bars at the bottom of the screen.
-
The game has several features and modes that make it fun and engaging. You can play through a story mode that follows a comic book-inspired plot, where you have to fight against the Collector, Thanos, Kang, and other villains who want to destroy the Marvel universe. You can also join an alliance with other players and participate in alliance events, quests, and wars, where you can cooperate or compete with other alliances for rewards and glory. You can also enter various arenas and tournaments, where you can test your skills against other players from around the world.
-
Who Are the Characters in Marvel Contest of Champions Apkpure
-
Marvel Contest of Champions apkpure features over 250 playable characters from the Marvel universe, including Spider-Man, Iron Man, Wolverine, Captain America, Black Widow, Thor, Hulk, Deadpool, Doctor Strange, Captain Marvel, Black Panther, Thanos, Ultron, Venom, and many more. You can obtain new characters by opening crystals that you earn or buy with in-game currency or real money.
-
The characters belong to different classes that have advantages and disadvantages against each other. The classes are Mutant, Skill, Science, Mystic, Cosmic, and Tech. For example, Mutants are strong against Skill but weak against Tech, while Techs are strong against Mutants but weak against Cosmic. You can see the class relationships by tapping on the class icons at the top of the screen.
-
Each character has a unique set of stats, abilities, and special moves that reflect their comic book counterparts. For example, Spider-Man can web-sling, evade attacks, and stun enemies with his spider-sense; Iron Man can fire repulsor blasts, boost his armor, and unleash a unibeam; Wolverine can heal himself, slash enemies with his claws, and go berserk; and so on. You can upgrade your characters by leveling them up with ISO-8 crystals or ranking them up with catalysts. You can also unlock their signature abilities by obtaining duplicate copies of them from crystals.
-
How to Improve Your Skills and Strategies in Marvel Contest of Champions Apkpure
-
If you want to become a better player in Marvel Contest of Champions apkpure, here are some tips that you should follow:
-
-
-
Build a balanced team of characters with different classes and synergies. Synergies are bonuses that you get when you pair up certain characters based on their comic book relationships or affiliations. For example, pairing up Spider-Man and Venom gives you a bonus to critical rate; pairing up Iron Man and Captain America gives you a bonus to armor and block proficiency. You can see the synergies by tapping on the team icon at the bottom of the screen.
-
Learn the strengths and weaknesses of each character and use them to your advantage. For example, if you are facing a Mystic character, you can use a Cosmic character to deal more damage and avoid their debuffs; if you are facing a Tech character, you can use a Mutant character to bypass their armor and power drain.
-
Master the basic combat mechanics and practice your timing and reflexes. You should know when to attack, when to block, when to dodge, and when to use your special attacks. You should also learn how to parry, which is a technique that allows you to stun your opponent by blocking right before they hit you. Parrying is very useful for creating openings and preventing damage.
-
Use your special attacks wisely and strategically. You should not waste your power meter on weak or ineffective special attacks, but save it for the ones that can deal more damage, inflict debuffs, or trigger effects. You should also be aware of your opponent's power meter and avoid getting hit by their special attacks, especially the third one, which is usually the most powerful and cannot be blocked.
-
Explore the different game modes and quests and complete the objectives and challenges. You can earn rewards such as gold, units, crystals, ISO-8, catalysts, and more by playing the game regularly and completing various tasks. You can also unlock new characters, arenas, and stories by progressing through the game.
-
-
What Are the Pros and Cons of Marvel Contest of Champions Apkpure
-
Marvel Contest of Champions apkpure is a popular and well-received game that has many positive aspects, but also some negative ones. Here are some of the pros and cons of Marvel Contest of Champions apkpure:
-
| Pros | Cons |
| --- | --- |
| The game has amazing graphics and animations that make the characters look realistic and lifelike. | The game can be repetitive and grindy at times, especially when you have to farm for resources or fight the same opponents over and over. |
| The game has a large and diverse roster of characters that appeal to Marvel fans of all ages and preferences. | The game can be frustrating and unfair at times, especially when you face opponents that are much stronger or have annoying abilities or buffs. |
| The game has a simple and intuitive control system that makes it easy to play for anyone. | The game can be expensive and pay-to-win at times, especially when you have to buy crystals or units to get better characters or items. |
| The game has a fun and engaging story mode that follows an original plot with twists and surprises. | The game can be buggy and glitchy at times, especially when it crashes or freezes during gameplay or loading screens. |
| The game has a social and competitive aspect that allows you to interact with other players and join alliances. | The game can be addictive and time-consuming at times, especially when you have to keep up with the events and quests or maintain your alliance status. |
-
-
-
Conclusion: Is Marvel Contest of Champions Apkpure Worth Playing?
-
In conclusion, Marvel Contest of Champions apkpure is a great game for Marvel fans and fighting game enthusiasts who want to enjoy a thrilling and immersive experience on their mobile devices. The game has many advantages such as stunning graphics, diverse characters, simple controls, engaging story, and social features. However, the game also has some drawbacks such as repetitiveness, frustration, expense, bugs, and addiction. Therefore, we recommend that you play Marvel Contest of Champions apkpure with moderation and caution, and only if you are willing to accept its flaws. If you are looking for a superhero fighting game that is fun, easy, and free to play, then Marvel Contest of Champions apkpure is definitely worth trying.
-
FAQs: Frequently Asked Questions About Marvel Contest of Champions Apkpure
-
Here are some of the most common questions that people ask about Marvel Contest of Champions apkpure:
-
Q: What is apkpure?
-
A: Apkpure is a website that provides free APK files for Android devices. APK files are application packages that contain all the files needed to install an app on your device. Apkpure allows you to download APK files from various sources without any restrictions or limitations.
-
Q: Is Marvel Contest of Champions apkpure safe?
-
A: Marvel Contest of Champions apkpure is generally safe to download and play, as long as you get it from a trusted source like ApkCombo. However, you should always be careful when downloading APK files from unknown or unverified sources, as they may contain malware or viruses that can harm your device or compromise your privacy. You should also make sure that your device meets the minimum requirements and has enough storage space to run the game smoothly.
-
Q: How do I update Marvel Contest of Champions apkpure?
-
A: Marvel Contest of Champions apkpure is updated regularly with new features, characters, events, and bug fixes. You can update the game by downloading the latest APK file from ApkCombo and installing it over the existing one. You can also enable the auto-update option in the settings of your device or the ApkCombo app to get notified and download the updates automatically.
-
Q: How do I get more crystals in Marvel Contest of Champions apkpure?
-
A: Crystals are items that you can use to obtain new characters, items, or resources in Marvel Contest of Champions apkpure. You can get crystals by completing quests, participating in events, opening chests, spinning wheels, watching ads, or buying them with real money. You can also get free crystals every day by logging in to the game and claiming your daily rewards.
-
Q: How do I contact the support team of Marvel Contest of Champions apkpure?
-
A: If you have any issues, questions, or feedback regarding Marvel Contest of Champions apkpure, you can contact the support team by tapping on the gear icon at the top left corner of the screen, then tapping on "Support". You can also visit the official website of Marvel Contest of Champions or follow their social media accounts for more information and updates.
-
Q: What are some similar games to Marvel Contest of Champions apkpure?
-
A: If you like Marvel Contest of Champions apkpure, you might also enjoy some other games that are similar in genre or theme. Some examples are:
-
-
Marvel Future Fight: A role-playing game that lets you create and customize your own team of Marvel heroes and villains and fight against various enemies and bosses.
-
Injustice 2: A fighting game that features characters from DC comics and movies and allows you to upgrade and customize them with gear and abilities.
-
Mortal Kombat X: A fighting game that features characters from the Mortal Kombat franchise and allows you to perform brutal fatalities and x-ray moves.
-
Marvel Strike Force: A turn-based strategy game that lets you assemble and command a squad of Marvel characters and fight against various threats.
-
Marvel Puzzle Quest: A match-3 puzzle game that lets you collect and use Marvel characters in battles and events.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Anime Kamen Rider W The Legendary Tokusatsu Series.md b/spaces/1phancelerku/anime-remove-background/Download Anime Kamen Rider W The Legendary Tokusatsu Series.md
deleted file mode 100644
index 4872ec46cb235de265cf927f0fc2ce7fa2fba132..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Anime Kamen Rider W The Legendary Tokusatsu Series.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Download Anime Kamen Rider W: A Guide for Fans
-
If you are a fan of tokusatsu, superhero, action, or detective genres, you might have heard of anime kamen rider w. This is a Japanese live-action TV series that aired from 2009 to 2010, as part of the long-running Kamen Rider franchise. It is also known as Kamen Rider Double, because it features two protagonists who can combine into one Kamen Rider. Anime kamen rider w is widely regarded as one of the best Kamen Rider series in the Heisei era, and has spawned a manga sequel, an anime adaptation, and various merchandise and games. In this article, we will give you an overview of anime kamen rider w, its plot and characters, its reception and popularity, its merchandise and games, and the best sites to download it.
-
Plot and Characters
-
Anime kamen rider w is set in the ecologically-minded city of Futo (the "Windy City"), where windmills power almost everything. However, the city is also plagued by crimes committed by Dopants, monsters created by using Gaia Memories, mysterious USB-like devices that contain the essence of the Earth. The Gaia Memories are sold by the Sonozaki Family, a powerful crime syndicate that also controls the Museum, a secret organization that researches the Gaia Memories.
The main protagonists of anime kamen rider w are Shotaro Hidari and Philip. Shotaro is a private detective who runs the Narumi Detective Agency, which specializes in Dopant cases. He is also a self-proclaimed "hard-boiled" detective who likes to wear a fedora and a trench coat. Philip is a mysterious young man who has no memories of his past, but possesses a vast knowledge of the Gaia Memories. He lives in a secret room in the agency, where he accesses a library-like database called the Gaia Library. Together, they can transform into Kamen Rider W (or Double), using two Gaia Memories and a belt called the Double Driver. By combining different Gaia Memories, they can access various forms with different powers and weapons.
-
Some of their allies include Akiko Narumi, Shotaro's boss and the daughter of his mentor Sokichi Narumi, who was killed by a Dopant; Ryu Terui, a police officer who becomes Kamen Rider Accel to avenge his family; Shun Makura, a journalist who helps them with information; Watcherman, a blogger who reports on Dopant incidents; Santa-chan, a former thief who runs a souvenir shop; Queen and Elizabeth, two teenage girls who are fans of Kamen Rider W; and Jinno and Makura, two police officers who often assist Shotaro.
-
Some of their enemies include Ryubee Sonozaki, the head of the Sonozaki Family and the Museum; Saeko Sonozaki, his eldest daughter who becomes the Taboo Dopant; Wakana Sonozaki, his youngest daughter who becomes the Clay Doll Dopant; Kirihiko Sudo, Saeko's husband who becomes the Nasca Dopant; Shinkuro Isaka, a doctor who becomes the Weather Dopant; Jun Kazu, a politician who becomes the Utopia Dopant; Katsumi Daido, the leader of NEVER, a group of undead soldiers who becomes the Eternal Dopant; and Foundation X, a mysterious organization that funds the Museum.
-
Reception and Popularity
-
Anime kamen rider w was well-received by both critics and fans when it aired. It was praised for its engaging plot, likable characters, creative designs, catchy music, humorous moments, emotional scenes, and thrilling action. It also won several awards, such as the Tokyo Anime Award for Best Domestic Feature.
Merchandise and Games
-
Anime kamen rider w has a lot of merchandise and games for fans to enjoy. Some of the most popular products include the Gaia Memories, the Double Driver, the Accel Driver, the Lost Driver, and the various weapons and gadgets used by the Kamen Riders. These are sold as toys that can be used to recreate the transformations and attacks from the show. Some of them also have sounds and lights that match the ones in the show.
-
There are also several video games based on anime kamen rider w, such as Kamen Rider: Climax Heroes W, Kamen Rider: Climax Heroes OOO, Kamen Rider: Super Climax Heroes, Kamen Rider: Battride War, Kamen Rider: Battride War II, Kamen Rider: Battride War Genesis, Kamen Rider: Memory of Heroez, and Kamen Rider Battle: Ganbarizing. These games allow players to control various Kamen Riders from anime kamen rider w and other series, and fight against enemies and bosses in different stages. Some of them also have story modes that follow the plot of the show or original scenarios.
-
For fans who prefer more casual games, there are also some mobile games and web games related to anime kamen rider w, such as Kamen Rider City Wars, Kamen Rider Battle Rush, Kamen Rider Transcend Heroes, Kamen Rider Break Joker, and Futo Detectives. These games feature anime kamen rider w characters and elements in various genres, such as city-building, card battle, action RPG, puzzle, and adventure.
-
-
If you are looking for anime kamen rider w gifts and merchandise, you can check out some online stores that sell them, such as Redbubble, Amazon, eBay, Mandarake, and AmiAmi. These sites offer a wide range of products, such as T-shirts, posters, stickers, mugs, keychains, figures, cosplay items, and more. You can also find some fan-made items that are unique and creative.
-
Best Sites to Download Anime Kamen Rider W
-
If you want to watch or rewatch anime kamen rider w on your devices, you might be wondering where to download it. There are many sites that offer anime kamen rider w for download, but not all of them are reliable and safe. Some of them might have low-quality videos, broken links, malware, or illegal content. To avoid these problems, you should only use trusted and reputable sites that have good reviews and ratings from other users.
-
Here are some of the best sites to download anime kamen rider w:
-
| Site | Pros | Cons |
| --- | --- | --- |
| Internet Archive | Free and legal; high-quality videos; all episodes and movies available; no ads or pop-ups | Slow download speed; limited formats and subtitles |
| Nyaa | Free and fast; high-quality videos; various formats and subtitles; multiple sources and seeds | Not legal; requires torrent client; may contain malware or viruses; may be blocked by some ISPs |
| KissAsian | Free and easy; high-quality videos; various formats and subtitles; streaming option available | Not legal; contains ads and pop-ups; may redirect to other sites; may require registration or verification |
| Over-Time | Free and legal; high-quality videos; various formats and subtitles; official fansub group | Slow download speed; requires torrent client or file hosting service; only episodes available; no streaming option |
| OZC-Live | Free and legal; high-quality videos; various formats and subtitles; official fansub group | Slow download speed; requires torrent client or file hosting service; only episodes available; no streaming option |
-
-
-
Conclusion
-
Anime kamen rider w is a great series that deserves to be watched by anyone who likes tokusatsu, superhero, action, or detective genres. It has a captivating plot, charming characters, creative designs, catchy music, humorous moments, emotional scenes, and thrilling action. It also has a lot of merchandise and games for fans to enjoy. If you want to download anime kamen rider w, you can use one of the sites we recommended, or find other ones that suit your preferences. Just make sure to be careful and responsible when downloading, and respect the rights of the creators and owners of the content.
-
We hope this article has helped you learn more about anime kamen rider w, and why it is such a popular and beloved series. If you have not watched it yet, we highly recommend you to give it a try. You will not regret it. Anime kamen rider w is a series that will make you laugh, cry, cheer, and feel inspired. It is a series that will stay with you for a long time.
-
FAQs
-
Here are some frequently asked questions and answers about anime kamen rider w:
-
Q: How many episodes and movies are there in anime kamen rider w?
-
A: Anime kamen rider w has 49 episodes and 3 movies. The episodes are divided into 26 two-part cases, each with a different title that follows the W theme (e.g. The W Search/Two Detectives in One). The movies are Kamen Rider × Kamen Rider W & Decade: Movie War 2010, Kamen Rider W Forever: A to Z/The Gaia Memories of Fate, and Kamen Rider W Returns.
-
Q: What is the difference between the live-action and the anime versions of anime kamen rider w?
-
A: The live-action version of anime kamen rider w is the original TV series that aired from 2009 to 2010. The anime version of anime kamen rider w is an adaptation that was released in 2018 as part of the Toei Animation's 60th anniversary project. The anime version follows the same plot and characters as the live-action version, but with some changes and additions, such as new scenes, new forms, new enemies, and new voice actors.
-
Q: What is the meaning of the W in anime kamen rider w?
-
A: The W in anime kamen rider w has multiple meanings. It stands for Double, because it represents the two protagonists who can combine into one Kamen Rider. It also stands for Windy City, because it is the nickname of Futo, where the series takes place. It also stands for Words, because it relates to the names of the Gaia Memories and the titles of the cases. It also stands for Wonders, because it reflects the mysterious and amazing nature of the series.
-
Q: Who are the voice actors of anime kamen rider w?
-
A: The voice actors of anime kamen rider w are as follows:
-
-
Shotaro Hidari: Renn Kiriyama (live-action), Mamoru Miyano (anime)
-
Philip: Masaki Suda (live-action), Ryo Yoshizawa (anime)
Jinno: Takeshi Nadagi (live-action), Daisuke Ono (anime)
-
Makura: Akira Date (live-action), Yuichi Nakamura (anime)
-
-
Q: Where can I read the manga sequel of anime kamen rider w?
-
A: The manga sequel of anime kamen rider w is called Futo Detectives, and it is written by Riku Sanjo and drawn by Masaki Sato. It continues the story of Shotaro and Philip after the events of the TV series, as they face new cases and enemies in Futo. You can read it online on some manga sites, such as MangaDex, MangaRock, or MangaFox. You can also buy the physical volumes on some online stores, such as Amazon, CDJapan, or YesAsia.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py
deleted file mode 100644
index dcbf8e18d3397271d166a11e2297b4b5ab0bb192..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py
+++ /dev/null
@@ -1,460 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import time
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import paddle
-
-from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTokenizer
-
-from ...fastdeploy_utils import FastDeployRuntimeModel
-from ...pipeline_utils import DiffusionPipeline
-from ...schedulers import (
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
-)
-from ...schedulers.preconfig import (
- PreconfigEulerAncestralDiscreteScheduler,
- PreconfigLMSDiscreteScheduler,
-)
-from ...utils import logging
-from . import StableDiffusionPipelineOutput
-
-logger = logging.get_logger(__name__)
-
-
-class FastDeployStableDiffusionPipeline(DiffusionPipeline):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving etc.)
-
- Args:
- vae_encoder ([`FastDeployRuntimeModel`]):
- Variational Auto-Encoder (VAE) Model to encode images to latent representations.
- vae_decoder ([`FastDeployRuntimeModel`]):
- Variational Auto-Encoder (VAE) Model to decode images from latent representations.
- text_encoder ([`FastDeployRuntimeModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`FastDeployRuntimeModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`]
- or [`DPMSolverMultistepScheduler`].
- safety_checker ([`FastDeployRuntimeModel`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- _optional_components = ["vae_encoder", "safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae_encoder: FastDeployRuntimeModel,
- vae_decoder: FastDeployRuntimeModel,
- text_encoder: FastDeployRuntimeModel,
- tokenizer: CLIPTokenizer,
- unet: FastDeployRuntimeModel,
- scheduler: Union[
- DDIMScheduler,
- PNDMScheduler,
- LMSDiscreteScheduler,
- PreconfigLMSDiscreteScheduler,
- EulerDiscreteScheduler,
- EulerAncestralDiscreteScheduler,
- PreconfigEulerAncestralDiscreteScheduler,
- DPMSolverMultistepScheduler,
- ],
- safety_checker: FastDeployRuntimeModel,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
-                " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
-                f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- self.register_modules(
- vae_encoder=vae_encoder,
- vae_decoder=vae_decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="np",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids
-
- if not np.array_equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
- text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0)
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt] * batch_size
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="np",
- )
- uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int64))[0]
- uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
-
- return text_embeddings
-
- def run_safety_checker(self, image, dtype):
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(
- self.numpy_to_pil(image), return_tensors="np"
- ).pixel_values.astype(dtype)
-            # The safety_checker raises an error when called with batch size > 1, so run it one image at a time
- images, has_nsfw_concept = [], []
- for i in range(image.shape[0]):
- image_i, has_nsfw_concept_i = self.safety_checker(
- clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
- )
- images.append(image_i)
- has_nsfw_concept.append(has_nsfw_concept_i[0])
- image = np.concatenate(images)
- else:
- has_nsfw_concept = None
- return image, has_nsfw_concept
-
- def decode_latents(self, latents):
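-        # 0.18215 is the latent scaling factor of the Stable Diffusion v1 VAE; dividing undoes the scaling applied when the image was encoded.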
- latents = 1 / 0.18215 * latents
- latents_shape = latents.shape
- vae_output_shape = [latents_shape[0], 3, latents_shape[2] * 8, latents_shape[3] * 8]
- images_vae = paddle.zeros(vae_output_shape, dtype="float32")
-
- vae_input_name = self.vae_decoder.model.get_input_info(0).name
- vae_output_name = self.vae_decoder.model.get_output_info(0).name
-
- self.vae_decoder.zero_copy_infer(
- prebinded_inputs={vae_input_name: latents},
- prebinded_outputs={vae_output_name: images_vae},
- share_with_raw_ptr=True,
- )
-
- images_vae = paddle.clip(images_vae / 2 + 0.5, 0, 1)
- images = images_vae.transpose([0, 2, 3, 1])
- return images.numpy()
-
- def prepare_extra_step_kwargs(self, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
- return extra_step_kwargs
-
- def check_var_kwargs_of_scheduler_func(self, scheduler_func):
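-        # True when the given scheduler method accepts arbitrary keyword arguments (**kwargs).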
- sig = inspect.signature(scheduler_func)
- params = sig.parameters.values()
-        has_kwargs = any(p.kind == p.VAR_KEYWORD for p in params)
- return has_kwargs
-
- def check_inputs(self, prompt, height, width, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, generator, latents=None):
- if generator is None:
- generator = np.random
-
- latents_shape = (batch_size, num_channels_latents, height // 8, width // 8)
- if latents is None:
- latents = generator.randn(*latents_shape).astype(dtype)
- elif latents.shape != latents_shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * float(self.scheduler.init_noise_sigma)
- return latents
-
- def __call__(
- self,
- prompt: Union[str, List[str]],
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: Optional[float] = 0.0,
- generator: Optional[np.random.RandomState] = None,
- latents: Optional[np.ndarray] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
- callback_steps: Optional[int] = 1,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- height (`int`, *optional*, 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`np.random.RandomState`, *optional*):
- A np.random.RandomState to make generation deterministic.
- latents (`np.ndarray`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(prompt, height, width, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- start_time_encode_prompt = time.perf_counter()
- text_embeddings = self._encode_prompt(
- prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
- print("_encode_prompt latency:", time.perf_counter() - start_time_encode_prompt)
- # 4. Prepare timesteps
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = 4
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- text_embeddings.dtype,
- generator,
- latents,
- )
- if isinstance(latents, np.ndarray):
- latents = paddle.to_tensor(latents)
- # 6. Prepare extra step kwargs.
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
- # 7. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
-        scheduler_support_kwargs_scale_input = self.check_var_kwargs_of_scheduler_func(
- self.scheduler.scale_model_input
- )
-        scheduler_support_kwargs_step = self.check_var_kwargs_of_scheduler_func(self.scheduler.step)
-
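-        # Look up the UNet's FastDeploy input/output tensor names so its buffers can be pre-bound for zero-copy inference.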
- unet_output_name = self.unet.model.get_output_info(0).name
- unet_input_names = [self.unet.model.get_input_info(i).name for i in range(self.unet.model.num_inputs())]
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- text_embeddings = paddle.to_tensor(text_embeddings, dtype="float32")
- for i, t in enumerate(timesteps):
- noise_pred_unet = paddle.zeros(
- [2 * batch_size * num_images_per_prompt, 4, height // 8, width // 8], dtype="float32"
- )
- # expand the latents if we are doing classifier free guidance
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
-                if scheduler_support_kwargs_scale_input:
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t, step_index=i)
- else:
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- self.unet.zero_copy_infer(
- prebinded_inputs={
- unet_input_names[0]: latent_model_input,
- unet_input_names[1]: t,
- unet_input_names[2]: text_embeddings,
- },
- prebinded_outputs={unet_output_name: noise_pred_unet},
- share_with_raw_ptr=True,
- )
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred_unet.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
- # compute the previous noisy sample x_t -> x_t-1
-                if scheduler_support_kwargs_step:
- scheduler_output = self.scheduler.step(
- noise_pred, t, latents, step_index=i, return_pred_original_sample=False, **extra_step_kwargs
- )
- else:
- scheduler_output = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)
- latents = scheduler_output.prev_sample
- if i == num_inference_steps - 1:
- # sync for accuracy it/s measure
- paddle.device.cuda.synchronize()
- # call the callback, if provided
- if i == num_inference_steps - 1 or (
- (i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0
- ):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 8. Post-processing
- time_start_decoder = time.perf_counter()
- image = self.decode_latents(latents)
- print("decoder latency:", time.perf_counter() - time_start_decoder)
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
-
- # 10. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diff --git a/spaces/2ndelement/voicevox/test/test_core_version_utility.py b/spaces/2ndelement/voicevox/test/test_core_version_utility.py
deleted file mode 100644
index e96ba8009e1614788e1e2b7ea9a11ae6d77dfe5c..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/test/test_core_version_utility.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from unittest import TestCase
-
-from voicevox_engine.utility import get_latest_core_version, parse_core_version
-
-
-class TestCoreVersion(TestCase):
- def test_parse_core_version(self):
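-        # Should accept both plain releases and "-preview.N" pre-release version strings without raising.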
- parse_core_version("0.0.0")
- parse_core_version("0.1.0")
- parse_core_version("0.10.0")
- parse_core_version("0.10.0-preview.1")
- parse_core_version("0.14.0")
- parse_core_version("0.14.0-preview.1")
- parse_core_version("0.14.0-preview.10")
-
- def test_get_latest_core_version(self):
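-        # A final release outranks its own previews, while a newer preview outranks an older final release.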
- self.assertEqual(
- get_latest_core_version(
- versions=[
- "0.0.0",
- "0.1.0",
- "0.10.0",
- "0.10.0-preview.1",
- "0.14.0",
- "0.14.0-preview.1",
- "0.14.0-preview.10",
- ]
- ),
- "0.14.0",
- )
-
- self.assertEqual(
- get_latest_core_version(
- versions=[
- "0.14.0",
- "0.15.0-preview.1",
- ]
- ),
- "0.15.0-preview.1",
- )
diff --git a/spaces/801artistry/RVC801/infer/modules/vc/utils.py b/spaces/801artistry/RVC801/infer/modules/vc/utils.py
deleted file mode 100644
index a1cb0ff84097d1c7eb82373ccf19db061f595096..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/modules/vc/utils.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import re
-from fairseq import checkpoint_utils
-
-
-def get_index_path_from_model(sid):
-    sid0strip = re.sub(r'\.pth$|\.onnx$', '', sid)
- sid0name = os.path.split(sid0strip)[-1] # Extract only the name, not the directory
-
- # Check if the sid0strip has the specific ending format _eXXX_sXXX
- if re.match(r'.+_e\d+_s\d+$', sid0name):
- base_model_name = sid0name.rsplit('_', 2)[0]
- else:
- base_model_name = sid0name
-
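-    # Walk index_root and return the first .index file (skipping "trained" indexes) whose path contains the base model name, or "" if none is found.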
- return next(
- (
- f
- for f in [
- os.path.join(root, name)
- for root, _, files in os.walk(os.getenv("index_root"), topdown=False)
- for name in files
- if name.endswith(".index") and "trained" not in name
- ]
- if base_model_name in f
- ),
- "",
- )
-
-
-def load_hubert(config):
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["assets/hubert/hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- return hubert_model.eval()
diff --git a/spaces/A666sxr/Genshin_TTS/text/japanese.py b/spaces/A666sxr/Genshin_TTS/text/japanese.py
deleted file mode 100644
index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/text/japanese.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import re
-from unidecode import unidecode
-import pyopenjtalk
-
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-# List of (romaji, ipa) pairs for marks:
-_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ts', 'ʦ'),
- ('u', 'ɯ'),
- ('j', 'ʥ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (romaji, ipa2) pairs for marks:
-_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('u', 'ɯ'),
- ('ʧ', 'tʃ'),
- ('j', 'dʑ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text != '':
- text += ' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil', 'pau']:
- text += phoneme.replace('ch', 'ʧ').replace('sh',
- 'ʃ').replace('cl', 'Q')
- else:
- continue
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
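-                # /A: fields of the full-context label: a1 = offset of this mora from the accent nucleus (0 at the nucleus), a2 = mora position counted from the start of the accent phrase, a3 = position counted from the end.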
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
- a2_next = -1
- else:
- a2_next = int(
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i < len(marks):
- text += unidecode(marks[i]).replace(' ', '')
- return text
-
-
-def get_real_sokuon(text):
- for regex, replacement in _real_sokuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def get_real_hatsuon(text):
- for regex, replacement in _real_hatsuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
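-    # Collapse runs of identical vowels into one vowel followed by the length mark "ː".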
- text = re.sub(
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa2(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa3(text):
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
- text = re.sub(
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
- return text
diff --git a/spaces/AI-Dashboards/AI.Dashboard.Streamlit.Index.For.Assessments/app.py b/spaces/AI-Dashboards/AI.Dashboard.Streamlit.Index.For.Assessments/app.py
deleted file mode 100644
index 6a97b4b79e2a86d6ed1fcf4c87e3a16fe582ea6d..0000000000000000000000000000000000000000
--- a/spaces/AI-Dashboards/AI.Dashboard.Streamlit.Index.For.Assessments/app.py
+++ /dev/null
@@ -1,453 +0,0 @@
-import streamlit as st
-
-
-st.markdown("""
-
-## FHIR - CT - Graph
-
-# FHIR:
-https://huggingface.co/spaces/awacke1/Clinical-Terminology-FHIR-Assessment
-https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Observation-SDKs
-https://huggingface.co/spaces/awacke1/SMART-FHIR-Kits-SDC-HL7
-https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Blood-Pressure
-https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Exercise
-
-# Clinical Terminology:
-https://huggingface.co/spaces/awacke1/Ontology-Gradio
-https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology
-https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored
-https://huggingface.co/spaces/awacke1/ClinicalTerminologyAISearch
-https://huggingface.co/spaces/awacke1/ClinicalTerminologyAISearch1215
-
-# Graph, Clinical Terminology, FHIR Apps and Services:
-https://huggingface.co/spaces/awacke1/Git-GPG-Git-Actions-01-GraphViz
-https://huggingface.co/spaces/awacke1/Dice-Roll-Treemap-Plotly
-https://huggingface.co/spaces/awacke1/GraphVis3
-https://huggingface.co/spaces/awacke1/GraphViz-Demo
-https://huggingface.co/spaces/awacke1/StreamlitGraphViz
-https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz
-
-# CP Matplotlib, NetworkX, Streamlit, PyVis, st-click0detector, graphviz:
-https://huggingface.co/spaces/awacke1/CPVisGraph
-
-# OMS and LOCUS:
-https://huggingface.co/spaces/awacke1/NLPGraphOMSandLOCUS
-
-# Technical Architecture - Open Source Graph ML Libraries:
-NetworkX: https://networkx.org/
-PyTorch GNN: https://github.com/microsoft/ptgnn
-Jraph: https://github.com/deepmind/jraph
-Spektral: https://graphneural.network/
-Graph Nets: https://github.com/deepmind/graph_nets
-Deep Graph Library (DGL): https://github.com/dmlc
-PyTorch Geometric: https://github.com/pyg-team/pytorch_geometric
-
-# Provider Graph - Maps of Hospitals
-
-https://huggingface.co/spaces/awacke1/MN.Map.Hospitals.Top.Five
-
-
-
-
-
-# Saturday Evening:
-https://huggingface.co/spaces/awacke1/MN.Map.Hospitals.Top.Five
-
-
-
-# Iceland Myths - Places to See - https://huggingface.co/spaces/awacke1/Maps.Markers.Honor.Iceland
-
-
-Ásbyrgi: Thor, trying to prove his strength, challenged Sleipnir to a race. Odin agreed, but secretly fed Sleipnir his favorite snack, lightning bolts. With each step, Sleipnir left a massive print, and thus, Ásbyrgi was formed.
-
-
-
-
-
-# Saturday
-Write a Streamlit Python program that uses functions and UI elements (a text box, a dial, and a four-direction up/down/left/right button array) and displays a Folium map. The data should be a Python list of dictionaries covering aurora spottings, Northern Lights notifications, and map locations (cities and countries), with latitude and longitude for the top ten places in Iceland to view the Northern Lights. Cite references as URLs. A minimal sketch follows.
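-
-A minimal sketch of such a program, assuming the `streamlit`, `folium`, and `streamlit-folium` packages; the place names and coordinates below are illustrative only, not verified aurora-viewing spots:
-
-```python
-import streamlit as st
-import folium
-from streamlit_folium import st_folium  # community component for rendering folium maps
-
-# Illustrative data only - replace with referenced coordinates and citation URLs.
-PLACES = [
-    {"name": "Reykjavik", "country": "Iceland", "lat": 64.1466, "lon": -21.9426},
-    {"name": "Akureyri", "country": "Iceland", "lat": 65.6885, "lon": -18.1262},
-    {"name": "Vik", "country": "Iceland", "lat": 63.4187, "lon": -19.0060},
-]
-
-def build_map(places, zoom):
-    m = folium.Map(location=[64.9631, -19.0208], zoom_start=zoom)
-    for p in places:
-        folium.Marker([p["lat"], p["lon"]], popup=f'{p["name"]}, {p["country"]}').add_to(m)
-    return m
-
-st.title("Northern Lights Viewing Spots - Iceland")
-query = st.text_input("Filter places by name", "")
-zoom = st.slider("Zoom level (dial stand-in)", 4, 12, 6)
-for col, label in zip(st.columns(4), ["Up", "Down", "Left", "Right"]):
-    col.button(label)
-
-filtered = [p for p in PLACES if query.lower() in p["name"].lower()]
-st_folium(build_map(filtered, zoom), width=700)
-```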
-
-# Maps
-
-Space | URL
------|-----
-awacke1/VizLib-TopLargeHospitalsNewJersey-03-09-2023 | https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey-03-09-2023
-awacke1/Bird-Species-Migration-Month-Map | https://huggingface.co/spaces/awacke1/Bird-Species-Migration-Month-Map
-⚗️🧠🔬🧬 Clinical Terminology Auto Mapper AI 👩⚕️🩺⚕️🙋 | https://huggingface.co/spaces/awacke1/SNOMED-LOINC-eCQM
-awacke1/Visualization-Plotly-Sunbursts-Treemaps-and-WebGL | https://huggingface.co/spaces/awacke1/Visualization-Plotly-Sunbursts-Treemaps-and-WebGL
-awacke1/HTML5-Aframe-3D-Maps | https://huggingface.co/spaces/awacke1/HTML5-Aframe-3D-Maps
-awacke1/HTML5-Aframe-3dMap-Flight | https://huggingface.co/spaces/awacke1/HTML5-Aframe-3dMap-Flight
-
-Figures:
-
-
-
-
-# Top Ten Board Games
-## Map-Making-Strategy
-https://huggingface.co/spaces/awacke1/Top-Ten-Board-Games-Map-Making-Strategy
-
-
-
-# MediaPipe
-### A cross-language SDK for real-time, 3D, camera-responsive AI on nearly any device, in nearly any language
-#### Vision
-#### Natural Language
-#### Audio
-
-MediaPipe provides fast and flexible AI/ML pipelines.
-Examples with JavaScript links:
-
-1. Image Classifier: https://mediapipe-studio.webapps.google.com/demo/image_classifier
-2. Object Detector: https://mediapipe-studio.webapps.google.com/demo/object_detector
-3. Text Classification: https://mediapipe-studio.webapps.google.com/demo/text_classifier
-4. Gesture Recognizer: https://mediapipe-studio.webapps.google.com/demo/gesture_recognizer
-5. Hand Landmark Detection: https://mediapipe-studio.webapps.google.com/demo/hand_landmarker
-6. Audio Classifier: https://mediapipe-studio.webapps.google.com/demo/audio_classifier
-
-
-Get started with just JavaScript!
-Getting Started: https://google.github.io/mediapipe/getting_started/javascript.html
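-
-The same pipelines are also available from Python; below is a minimal hand-landmark sketch (assumes the `mediapipe` and `opencv-python` packages and a local image file, here named `hand.jpg`):
-
-```python
-import cv2
-import mediapipe as mp
-
-# Illustrative input path - replace with your own image.
-image = cv2.imread("hand.jpg")
-
-with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
-    # MediaPipe expects RGB input; OpenCV loads images as BGR.
-    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
-
-if results.multi_hand_landmarks:
-    for hand in results.multi_hand_landmarks:
-        # Each detected hand has 21 normalized (x, y, z) landmarks.
-        wrist = hand.landmark[0]
-        print(f"wrist at x={wrist.x:.2f}, y={wrist.y:.2f}")
-```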
-
-Javascript Solutions - Ready to Demo:
-1. Face Mesh: https://codepen.io/mediapipe/full/KKgVaPJ
-2. Face Detection: https://codepen.io/mediapipe/full/dyOzvZM
-3. Hands: https://codepen.io/mediapipe/full/RwGWYJw
-4. Face, Hands, Body: https://codepen.io/mediapipe/full/LYRRYEw
-5. Objectron: https://codepen.io/mediapipe/full/BaWvzdY
-6. Full Skeletal Pose: https://codepen.io/mediapipe/full/jOMbvxw
-7. Self Segmentation From Background: https://codepen.io/mediapipe/full/wvJyQpq
-
-Demonstration in Action with Screenshots:
-
-Self Segmentation From Background:
-
-
-Full Skeletal Pose:
-
-
-Hands - both in 3D projection, including hidden-surface vertices - Mahalo:
-
-
-Holistic - Face, Hands, Body:
-
-
-Face Detection:
-
-
-Face Mesh Real Time - 30 Frames per second!
-
-
-
-
-# ASR Voice and Virtual Assistants With Avatars
-1. https://huggingface.co/spaces/awacke1/ASR-openai-whisper-large
-2. https://huggingface.co/spaces/awacke1/ASR-voidful-wav2vec2-xlsr-multilingual-56
-3. https://huggingface.co/spaces/awacke1/ASR-nvidia-stt_en_conformer_ctc_large
-4. https://huggingface.co/spaces/awacke1/ASR-facebook-hubert-large-ls960-ft
-5. https://huggingface.co/spaces/awacke1/ASR-openai-whisper-tiny.en
-6. https://huggingface.co/spaces/awacke1/ASR-openai-whisper-tiny
-7. https://huggingface.co/spaces/awacke1/ASR-openai-whisper-medium
-8. https://huggingface.co/spaces/awacke1/ASR-nvidia-stt_en_conformer_transducer_xlarge
-9. https://huggingface.co/spaces/awacke1/ASR-openai-whisper-base
-10. https://huggingface.co/spaces/awacke1/ASR-facebook-wav2vec2-large-960h-lv60-self
-11. https://huggingface.co/spaces/awacke1/ASR-facebook-wav2vec2-base-960h
-12. https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test
-13. https://huggingface.co/spaces/awacke1/ASRGenerateStory
-14. https://huggingface.co/spaces/awacke1/TTS-STT-Blocks
-15. https://huggingface.co/spaces/awacke1/2-LiveASR
-16. https://huggingface.co/spaces/awacke1/CloneAnyVoice
-17. https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla
-18. https://huggingface.co/spaces/awacke1/ASRSpeechRecognition1
-19. https://huggingface.co/spaces/awacke1/1110-ASRLiveExample
-20. https://huggingface.co/spaces/awacke1/Z1-ASRLiveSpeechRecognition-GR
-21. https://huggingface.co/spaces/awacke1/PrivateASRWithMemory
-22. https://huggingface.co/spaces/awacke1/TimerASRLive
-
-# Best Voice Apps - HF:
-1. https://huggingface.co/spaces/BilalSardar/Voice-Cloning
-2. https://huggingface.co/spaces/RamAnanth1/chatGPT_voice
-3. https://huggingface.co/spaces/Voicemod/speech-synthesis-demo
-4. https://huggingface.co/spaces/ysharma/Voice-to-Youtube
-5. https://huggingface.co/spaces/ramkamal2000/voice-conversion-yourtts
-6. https://huggingface.co/spaces/RamAnanth1/co_chat_voice
-7. https://huggingface.co/spaces/ysharma/Voice-to-jokes
-8. https://huggingface.co/spaces/jayesh95/Voice-QA
-
-
-
-# Supervised Learning (SL) for ML and Reinforcement Learning with Human Feedback (RLHF):
-
-For human imitation we use reinforcement learning for fine-tuning, since reward-based feedback shapes output quality: an agent completes a task and then observes the result. SL works on ranks rather than responses, so it is well suited to modifying elements at the token level, whereas RLHF is trained to estimate the quality of a response, with cumulative rewards encouraging coherent conversation. RLHF considers the context and coherence of the entire conversation. Supervised learning teaches the model initially, where it learns basic structure and content; in the RLHF stage the model is refined toward responses of improved accuracy. A minimal policy-gradient sketch of this idea follows.
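-
-The sketch below is illustrative only; `policy` and `reward_model` are hypothetical stand-ins rather than a specific library API:
-
-```python
-import torch
-
-def rlhf_step(policy, reward_model, prompt_ids, optimizer):
-    # Hypothetical interfaces: the policy samples a response and returns per-token
-    # log-probs; the reward model scores the whole (prompt, response) conversation.
-    response_ids, log_probs = policy.generate_with_logprobs(prompt_ids)
-    reward = reward_model(prompt_ids, response_ids)  # shape: (batch,)
-    # REINFORCE: raise the likelihood of responses that earned a high reward.
-    loss = -(reward.detach() * log_probs.sum(dim=-1)).mean()
-    optimizer.zero_grad()
-    loss.backward()
-    optimizer.step()
-    return loss.item()
-```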
-
-
-
-
-
-# Mermaid Model for Core NLP Tasks:
-
-```mermaid
-graph LR;
- A[Reader]-->B[Classifier];
- A-->C[Retriever];
- A-->D[Summarizer];
- B-->E[Ranker];
- B-->F[Query Classifier];
- D-->G[Generator];
- F-->H[Question Generator];
- H-->G;
- I[File Converter]-->J[Preprocessor];
- J-->A;
- I-->C;
- K[Snowflake]-->B;
- L[Oracle]-->B;
- M[Pandas CSV]-->A;
- N[Index]-->C;
- N-->E;
- O[Query with Filters]-->F;
- P[Evaluation]-->E;
- P-->F;
- Q[Retraining]-->B;
- Q-->E;
- R[Annotation]-->B;
-```
-
-# Core NLP Task Model for QA
-
-Tasks:
-1. Reader
-2. Summarizer
-3. Classifier
-4. Retriever
-5. Ranker
-6. Query Classifier
-7. Question Generator
-8. Generator
-
-Connectors:
-1. File Converter
-2. Preprocessor
-3. Snowflake
-4. Oracle
-5. Pandas CSV
-
-Supported Workflow:
-1. Index
-2. Query with Filters
-3. Evaluation
-4. Retraining
-5. Annotation
-
-# QA Model Spaces:
-
-QA use cases include question answering, semantic document search, and FAQ search.
-
-1. Streamlit Question Answering w Hugging Face: https://huggingface.co/spaces/awacke1/Question-answering
-2. Seq2Seq:
- - https://huggingface.co/spaces/awacke1/4-Seq2SeqQAT5
- - https://huggingface.co/spaces/awacke1/AW-04-GR-Seq-2-Seq-QA-Auto-Gen
- -
-3. BioGPT: https://huggingface.co/spaces/awacke1/microsoft-BioGPT-Large-PubMedQA
-4. NLP QA Context: https://huggingface.co/spaces/awacke1/NLPContextQATransformersRobertaBaseSquad2
- - https://huggingface.co/spaces/awacke1/SOTA-Plan
-5. https://huggingface.co/spaces/awacke1/Question-answering
-6. QA MLM: https://huggingface.co/spaces/awacke1/SOTA-MedEntity
-
-# 🤖 QA Models and Datasets:
-
-- A reader model extracts answer spans from text using question/answer pairs; SQuAD is the primary dataset (a minimal reader sketch follows).
-- Transformers (Hugging Face) has strong research momentum and is used to solve real business problems.
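-
-A minimal reader sketch with the Hugging Face `pipeline` API (assumes the `transformers` package; the default checkpoint is a SQuAD-tuned extractive QA model):
-
-```python
-from transformers import pipeline
-
-# Reader model: extracts an answer span from the supplied context.
-qa = pipeline("question-answering")
-
-result = qa(
-    question="Which dataset is primarily used to train reader models?",
-    context="Reader models extract answers from text and are usually trained on SQuAD.",
-)
-print(result["answer"], result["score"])
-```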
-
-## 💻 Process:
-
-1. Best practices for QA systems: https://www.youtube.com/playlist?list=PLHgX2IExbFotW6WgDZ-cMzpDBUNKCMBbF
-2. Optimize question/answer heads for SQuAD.
-3. Use QA search to ask questions of a textual knowledge base.
-4. Return text sections as answers.
-5. Organize the text collection.
-6. Find documents similar to a given input.
-7. Perform semantic and comprehensive word matching.
-8. Match incoming questions to an FAQ KB dataset (a minimal matching sketch follows).
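-
-A minimal FAQ-matching sketch using TF-IDF cosine similarity (assumes `scikit-learn`; a production system would typically use a semantic encoder instead):
-
-```python
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.metrics.pairwise import cosine_similarity
-
-# Toy FAQ knowledge base - illustrative entries only.
-faq_questions = [
-    "How do I reset my password?",
-    "Where can I view my lab results?",
-    "How do I schedule an appointment?",
-]
-
-vectorizer = TfidfVectorizer()
-faq_matrix = vectorizer.fit_transform(faq_questions)
-
-def match_faq(incoming_question):
-    # Vectorize the incoming question and rank FAQ entries by cosine similarity.
-    query_vec = vectorizer.transform([incoming_question])
-    scores = cosine_similarity(query_vec, faq_matrix)[0]
-    best = scores.argmax()
-    return faq_questions[best], float(scores[best])
-
-print(match_faq("I forgot my password, how can I reset it?"))
-```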
-
-## 📋 Tasks:
-
-1. Visual,
-2. Document, and
-3. Table QA.
-4. Zero Shot Classification.
-5. Translation.
-6. Conversational/Chat.
-7. Text2Text Generation.
-8. ASR/TTS.
-
-# Mermaid model
-
-```mermaid
-graph LR;
- A[Reader model]-->B[SQuAD];
- C[Transformers from Huggingface]-->D[Real Business Problems];
- E[Best practices for QA systems]-->F[Optimize Question/Answer Heads for SQuAD];
- G[QA search]-->H[Textual KB];
- H-->I[Return text sections as answers];
- J[Organize text collection]-->K[Find similar documents to given input];
- K-->I;
- L[Perform semantic and comprehensive word matching]-->I;
- M[Match incoming questions to FAQ KB dataset]-->I;
- N[Visual QA]-->O[Document QA];
- N-->P[Table QA];
- Q[Zero Shot Classification]-->I;
- R[Translation]-->I;
- S[Conversational/Chat]-->I;
- T[Text2Text Generation]-->I;
- U[ASR/TTS]-->I;
-
-```
-
-# Top 50 Assessments in Physical and Mental Health
-
-Below are the top 50 mental and physical health assessments.
-1. **Patient Health Questionnaire (PHQ-9)** 🧠 - Major depressive disorder (ICD-10: F32)
-2. **Generalized Anxiety Disorder 7-item Scale (GAD-7)** 😰 - Generalized anxiety disorder (ICD-10: F41.1)
-3. **Hamilton Rating Scale for Depression (HRSD)** 🧠 - Major depressive disorder (ICD-10: F32)
-4. **World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0)** 🧠💪 - Physical and mental disability (ICD-10: Z73.1)
-5. **Short Form-36 Health Survey (SF-36)** 💪🧠 - Health-related quality of life (CPT: 99499)
-6. **Health Assessment Questionnaire (HAQ)** 💪 - Functional status assessment (CPT: 97750)
-7. **EuroQol-5D (EQ-5D)** 💪🧠 - Health-related quality of life (LOINC: 83792-6)
-8. **Geriatric Depression Scale (GDS)** 🧑🦳🧠 - Depression in older adults (ICD-10: F32.1)
-9. **Mini-Mental State Examination (MMSE)** 🧑🦳💭 - Cognitive impairment (ICD-10: F06.7)
-10. **Pain Catastrophizing Scale (PCS)** 💔 - Chronic pain (LOINC: 86351-6)
-11. **Oswestry Disability Index (ODI)** 💪💔 - Back pain (CPT: 97750)
-12. **Fibromyalgia Impact Questionnaire (FIQ)** 💔😩 - Fibromyalgia (SNOMED: 316962002)
-13. **Beck Depression Inventory (BDI)** 🧠 - Depression (ICD-10: F32)
-14. **Posttraumatic Stress Disorder Checklist (PCL)** 😰😞 - Posttraumatic stress disorder (ICD-10: F43.1)
-15. **Alcohol Use Disorders Identification Test (AUDIT)** 🍻 - Alcohol use disorder (ICD-10: F10)
-16. **Drug Abuse Screening Test (DAST)** 💊 - Substance use disorder (ICD-10: F19)
-17. **Eating Attitudes Test (EAT)** 🍴 - Eating disorders (ICD-10: F50)
-18. **Adolescent Eating Disorder Examination (ADE)** 🍴👩🦰 - Eating disorders in adolescents (ICD-10: F50)
-19. **Child Behavior Checklist (CBCL)** 👧🧒 - Child behavior problems (ICD-10: F90)
-20. **Autism Spectrum Quotient (AQ)** 🧑🦱 - Autism spectrum disorder (ICD-10: F84.0)
-21. **Columbia-Suicide Severity Rating Scale (C-SSRS)** 🩸 - Suicide risk (ICD-10: Z65.8)
-22. **Perceived Stress Scale (PSS)** 😩 - Stress (LOINC: 75217-3)
-23. **Satisfaction with Life Scale (SWLS)** 😊 - Life satisfaction (LOINC: 69406-9)
-24. **Health Belief Model Scale (HBM)** 💊💉 - Health beliefs (LOINC: 88018)
-25. **Multidimensional Health Locus of Control Scale (MHLC)** 💊💉 - Health locus of control (LOINC: 87561-7)
-26. **Life Orientation Test-Revised (LOT-R)** 😃 - Optimism (LOINC: 75315-5)
-27. **State-Trait Anxiety Inventory (STAI)** 😰 - Anxiety (LOINC: 71092-3)
-28. **Multidimensional Scale of Perceived Social Support (MSPSS)** 👥 - Social support (LOINC: 86649-4)
-29. **Job Content Questionnaire (JCQ)** 💼 - Job stress (LOINC: 76554-9)
-30. **Burnout Measure (BO)** 🔥 - Burnout (LOINC: 89049-8)
-31. **Family Assessment Device (FAD)** 👨👩👧 - Family functioning (LOINC: 84113-2)
-32. **Perceived Control Scale (PCS)** 💪 - Perceived control (LOINC: 86447-0)
-33. **General Self-Efficacy Scale (GSES)** 💪 - Self-efficacy (LOINC: 76563-0)
-34. **Coping Strategies Inventory (CSI)** 😓 - Coping strategies (LOINC: 89057-1)
-35. **Acceptance and Action Questionnaire (AAQ-II)** 🧘 - Acceptance and commitment therapy (LOINC: 88027-2)
-36. **Attention Deficit Hyperactivity Disorder Self-Report Scale (ASRS)** 👧🧒 - ADHD (ICD-10: F90)
-37. **Impact of Event Scale-Revised (IES-R)** 😔😞 - Trauma (LOINC: 86237-7)
-38. **Insomnia Severity Index (ISI)** 💤 - Insomnia (LOINC: 82451-5)
-39. **Social Phobia Inventory (SPIN)** 😰 - Social anxiety disorder (ICD-10: F40.1)
-40. **Panic Disorder Severity Scale (PDSS)** 😰 - Panic disorder (ICD-10: F41.0)
-41. **Yale-Brown Obsessive Compulsive Scale (Y-BOCS)** 🤔 - Obsessive-compulsive disorder (ICD-10: F42)
-42. **Social Interaction Anxiety Scale (SIAS)** 😰 - Social anxiety disorder (ICD-10: F40.1)
-43. **Generalized Anxiety Disorder Scale (GADS)** 😰 - Generalized anxiety disorder (ICD-10: F41.1)
-44. **Postpartum Depression Screening Scale (PDSS)** 🤱🧠 - Postpartum depression (ICD-10: F53.0)
-45. **Child and Adolescent Symptom Inventory (CASI)** 👧🧒🧠 - Child and adolescent mental health (ICD-10: F90)
-46. **Strengths and Difficulties Questionnaire (SDQ)** 👧🧒🧠 - Child and adolescent mental health (ICD-10: F90)
-47. **Kessler Psychological Distress Scale (K10)** 🧠 - Psychological distress (LOINC: 76550-6)
-48. **World Health Organization Quality of Life Scale (WHOQOL)** 💪🧠 - Quality of life (LOINC: 88055-2)
-49. **Multidimensional Pain Inventory (MPI)** 💔 - Chronic pain (LOINC: 71808-8)
-50. **Cornell Scale for Depression in Dementia (CSDD)** 👴👵🧠 - Depression in dementia patients (ICD-10: F03.90)
-
-
-# SMART/FHIR/SDC Survey-Assess-Plan
-
-These SMART/FHIR/SDC-compatible surveys demonstrate how to build and conduct surveys that meet EMR/EHR compliance standards.
-
-1. Smart FHIR Connect and Test BMI Calculator: https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-BMI
-2. Smart FHIR Kits SDC HL7: https://huggingface.co/spaces/awacke1/SMART-FHIR-Kits-SDC-HL7
-3. Smart FHIR Assessment Exercise: https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Exercise
-4. Smart FHIR Assessment Blood Pressure: https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Blood-Pressure
-5. Smart FHIR - Observations-Assessments-Rules-Referrals-Providers-Programs-Fulfillment-Alerts-Notes-SDOH: https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Observation-SDKs
-
-
-# Graphs Survey-Assess-Plan-Goals
-
-These top five graph examples introduce visual approaches for surveying, assessing, planning, and reaching goals.
-
-1. Graph OMS and LOCUS Standards and Quality Metrics: https://huggingface.co/spaces/awacke1/NLPGraphOMSandLOCUS
-2. Graph Pain and High Medium Low Confidence: https://huggingface.co/spaces/awacke1/VISNLP-Graph
-3. Graph Action Mechanics: https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz
-4. Graph - OMS, MH, Charts, Maps, DOT lang for Pyvis VisJS: https://huggingface.co/spaces/awacke1/CPVisGraph
-5. Graph - Plan and Assess: https://huggingface.co/spaces/awacke1/Git-GPG-Git-Actions-01-GraphViz
-
-# ICD10, CPT, LOINC, SNOMED, HCPCS, OMS Codes for Top Health Conditions and Treatment Preferences Assessment
-
-Assess Topic| Assess Metric | Code Emoji | Code Topic | Code Type | Code
-------------|---------------|------------|------------|------------|-----------
-Childhood Immunization| % of children immunized by age two |🧒💉 | Clinical Code| ICD10 | Z28.2
-Breast Cancer Screening| % of women with mammogram in past 2 yrs |🩺🎀 | Clinical Code| CPT| 77067
-Colorectal Cancer Screening| % of adults screened for colorectal cancer| 🩺💩 | Clinical Code| CPT| 82274
-Comprehensive Diabetes Care| % of diabetic patients who had all recommended tests| 🩺🩹 | Clinical Code| LOINC| 4548-4
-Controlling High Blood Pressure| % of patients with controlled blood pressure| 🩺💊 | Clinical Code| ICD10|I10
-Medication Management for Asthma| % of asthma patients with proper meds| 💊🌬️ | Clinical Code| SNOMED|195967001
-Follow-up After Mental Illness Hospitalization| % of patients with follow-up care| 🩺🏥 | Clinical Code| HCPCS|G0181
-Prenatal & Postpartum Care| % of pregnant women with proper care |🤰🩺 | Clinical Code| ICD10|Z34
-Comprehensive Eye Exam| % of diabetic patients with eye exam |🩺👀 | Clinical Code| CPT| 92014
-Childhood Weight Assessment| % of children with BMI assessment |🧒📏 | Clinical Code| ICD10| Z00.121
-Chlamydia Screening in Women| % of sexually active women screened| 🩺👩 | Clinical Code| CPT|87491
-Avoidance of Antibiotic Treatment for Acute Bronchitis| % of patients without antibiotics |🩺💊 | Clinical Code| ICD10|J20.9
-Osteoporosis Management in Women|% of women with bone density test |🩺💪 | Clinical Code| CPT|77080
-Use of High-Risk Medications in the Elderly| % of elderly with safe meds |💊👴👵 | Clinical Code| HCPCS |G9612
-Diabetes Screening for Schizophrenia or Bipolar Disorder| % of patients with mental illness screened |🧠🩺 | Clinical Code| SNOMED| 169609005
-All-Cause Readmissions| % of patients readmitted within 30 days |🩺🏥 | Clinical Code| ICD10| Z51.5
-Antidepressant Medication Management| % of depressed patients with proper meds & follow-up |🩺🧠 | Clinical Code| CPT|96127
-Follow-up Care for Children Prescribed ADHD Medication|% of children with follow-up care |🩺🧒 | Clinical Code| ICD10|F90
-Imaging Studies for Low Back Pain| % of patients without imaging studies|🩺📊 | Clinical Code| ICD10|M54.5
-Spirometry Testing for COPD|% of COPD patients with spirometry testing |🩺🫁 | Clinical Code|CPT|94010
-
-
-""")
\ No newline at end of file
diff --git a/spaces/AIConsultant/MusicGen/README.md b/spaces/AIConsultant/MusicGen/README.md
deleted file mode 100644
index 215eb424f4d2efd9d3295c0b6763b9f205b45c7d..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AudioCraft Plus v2.0.0a (MusicGen + AudioGen)
-emoji: 🎶
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/htsat.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/htsat.py
deleted file mode 100644
index db96116286d307a73943886f947450215e061ba2..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/htsat.py
+++ /dev/null
@@ -1,1022 +0,0 @@
-# Ke Chen
-# knutchen@ucsd.edu
-# HTS-AT: A HIERARCHICAL TOKEN-SEMANTIC AUDIO TRANSFORMER FOR SOUND CLASSIFICATION AND DETECTION
-# Some layers are designed for this model
-# The code below is based on and adapted from https://github.com/microsoft/Swin-Transformer
-# Swin Transformer for Computer Vision: https://arxiv.org/pdf/2103.14030.pdf
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from itertools import repeat
-import collections.abc
-import math
-import warnings
-
-from torch.nn.init import _calculate_fan_in_and_fan_out
-import torch.utils.checkpoint as checkpoint
-
-import random
-
-from torchlibrosa.stft import Spectrogram, LogmelFilterBank
-from torchlibrosa.augmentation import SpecAugmentation
-
-from itertools import repeat
-from .utils import do_mixup, interpolate
-
-from .feature_fusion import iAFF, AFF, DAF
-
-# from PyTorch internals
-def _ntuple(n):
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
- return parse
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
-
-def drop_path(x, drop_prob: float = 0., training: bool = False):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
- the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
- See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
- changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
- 'survival rate' as the argument.
- """
- if drop_prob == 0. or not training:
- return x
- keep_prob = 1 - drop_prob
- shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets
- random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
- random_tensor.floor_() # binarize
- output = x.div(keep_prob) * random_tensor
- return output
-
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
-class PatchEmbed(nn.Module):
- """ 2D Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768, norm_layer=None, flatten=True, patch_stride = 16,
- enable_fusion=False, fusion_type='None'):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patch_stride = to_2tuple(patch_stride)
- self.img_size = img_size
- self.patch_size = patch_size
- self.patch_stride = patch_stride
- self.grid_size = (img_size[0] // patch_stride[0], img_size[1] // patch_stride[1])
- self.num_patches = self.grid_size[0] * self.grid_size[1]
- self.flatten = flatten
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- padding = ((patch_size[0] - patch_stride[0]) // 2, (patch_size[1] - patch_stride[1]) // 2)
-
- if (self.enable_fusion) and (self.fusion_type == 'channel_map'):
- self.proj = nn.Conv2d(in_chans*4, embed_dim, kernel_size=patch_size, stride=patch_stride, padding=padding)
- else:
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_stride, padding=padding)
- self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
-
- if (self.enable_fusion) and (self.fusion_type in ['daf_2d','aff_2d','iaff_2d']):
- self.mel_conv2d = nn.Conv2d(in_chans, embed_dim, kernel_size=(patch_size[0], patch_size[1]*3), stride=(patch_stride[0], patch_stride[1] * 3), padding=padding)
- if self.fusion_type == 'daf_2d':
- self.fusion_model = DAF()
- elif self.fusion_type == 'aff_2d':
- self.fusion_model = AFF(channels=embed_dim, type='2D')
- elif self.fusion_type == 'iaff_2d':
- self.fusion_model = iAFF(channels=embed_dim, type='2D')
- def forward(self, x, longer_idx = None):
- if (self.enable_fusion) and (self.fusion_type in ['daf_2d','aff_2d','iaff_2d']):
- global_x = x[:,0:1,:,:]
-
-
- # global processing
- B, C, H, W = global_x.shape
- assert H == self.img_size[0] and W == self.img_size[1], \
- f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
- global_x = self.proj(global_x)
- TW = global_x.size(-1)
- if len(longer_idx) > 0:
- # local processing
- local_x = x[longer_idx,1:,:,:].contiguous()
- B, C, H, W = local_x.shape
- local_x = local_x.view(B*C,1,H,W)
- local_x = self.mel_conv2d(local_x)
- local_x = local_x.view(B,C,local_x.size(1),local_x.size(2),local_x.size(3))
- local_x = local_x.permute((0,2,3,1,4)).contiguous().flatten(3)
- TB,TC,TH,_ = local_x.size()
- if local_x.size(-1) < TW:
- local_x = torch.cat([local_x, torch.zeros((TB,TC,TH,TW-local_x.size(-1)), device=global_x.device)], dim=-1)
- else:
- local_x = local_x[:,:,:,:TW]
-
- global_x[longer_idx] = self.fusion_model(global_x[longer_idx],local_x)
- x = global_x
- else:
- B, C, H, W = x.shape
- assert H == self.img_size[0] and W == self.img_size[1], \
- f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
- x = self.proj(x)
-
- if self.flatten:
- x = x.flatten(2).transpose(1, 2) # BCHW -> BNC
- x = self.norm(x)
- return x
-
-class Mlp(nn.Module):
- """ MLP as used in Vision Transformer, MLP-Mixer and related networks
- """
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- # Cut & paste from PyTorch official master until it's in a few official releases - RW
- # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
- "The distribution of values may be incorrect.",
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- l = norm_cdf((a - mean) / std)
- u = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [l, u], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * l - 1, 2 * u - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- # type: (Tensor, float, float, float, float) -> Tensor
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution. The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
- Args:
- tensor: an n-dimensional `torch.Tensor`
- mean: the mean of the normal distribution
- std: the standard deviation of the normal distribution
- a: the minimum cutoff value
- b: the maximum cutoff value
- Examples:
- >>> w = torch.empty(3, 5)
- >>> nn.init.trunc_normal_(w)
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
-
-
-def variance_scaling_(tensor, scale=1.0, mode='fan_in', distribution='normal'):
- fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
- if mode == 'fan_in':
- denom = fan_in
- elif mode == 'fan_out':
- denom = fan_out
- elif mode == 'fan_avg':
- denom = (fan_in + fan_out) / 2
-
- variance = scale / denom
-
- if distribution == "truncated_normal":
- # constant is stddev of standard normal truncated to (-2, 2)
- trunc_normal_(tensor, std=math.sqrt(variance) / .87962566103423978)
- elif distribution == "normal":
- tensor.normal_(std=math.sqrt(variance))
- elif distribution == "uniform":
- bound = math.sqrt(3 * variance)
- tensor.uniform_(-bound, bound)
- else:
- raise ValueError(f"invalid distribution {distribution}")
-
-
-def lecun_normal_(tensor):
- variance_scaling_(tensor, mode='fan_in', distribution='truncated_normal')
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both shifted and non-shifted windows.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x, attn
-
- def extra_repr(self):
- return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
-
-
-# We use the model based on Swintransformer Block, therefore we can use the swin-transformer pretrained model
-class SwinTransformerBlock(nn.Module):
- r""" Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm, norm_before_mlp='ln'):
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- self.norm_before_mlp = norm_before_mlp
- if min(self.input_resolution) <= self.window_size:
- # if window size is larger than input resolution, we don't partition windows
- self.shift_size = 0
- self.window_size = min(self.input_resolution)
- assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- if self.norm_before_mlp == 'ln':
- self.norm2 = nn.LayerNorm(dim)
- elif self.norm_before_mlp == 'bn':
- self.norm2 = lambda x: nn.BatchNorm1d(dim)(x.transpose(1, 2)).transpose(1, 2)
- else:
- raise NotImplementedError
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if self.shift_size > 0:
- # calculate attention mask for SW-MSA
- H, W = self.input_resolution
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
- else:
- attn_mask = None
-
- self.register_buffer("attn_mask", attn_mask)
-
- def forward(self, x):
- # pdb.set_trace()
- H, W = self.input_resolution
- # print("H: ", H)
- # print("W: ", W)
- # pdb.set_trace()
- B, L, C = x.shape
- # assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows, attn = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x, attn
-
- def extra_repr(self):
- return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
- f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
-
-
-
-class PatchMerging(nn.Module):
- r""" Patch Merging Layer.
- Args:
- input_resolution (tuple[int]): Resolution of input feature.
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.input_resolution = input_resolution
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x):
- """
- x: B, H*W, C
- """
- H, W = self.input_resolution
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
- assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
-
- x = x.view(B, H, W, C)
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
- def extra_repr(self):
- return f"input_resolution={self.input_resolution}, dim={self.dim}"
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
- norm_before_mlp='ln'):
-
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
- num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer, norm_before_mlp=norm_before_mlp)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x):
- attns = []
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x)
- else:
- x, attn = blk(x)
- if not self.training:
- attns.append(attn.unsqueeze(0))
- if self.downsample is not None:
- x = self.downsample(x)
- if not self.training:
- attn = torch.cat(attns, dim = 0)
- attn = torch.mean(attn, dim = 0)
- return x, attn
-
- def extra_repr(self):
- return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
-
-
-# The Core of HTSAT
-class HTSAT_Swin_Transformer(nn.Module):
- r"""HTSAT based on the Swin Transformer
- Args:
- spec_size (int | tuple(int)): Input Spectrogram size. Default 256
- patch_size (int | tuple(int)): Patch size. Default: 4
- patch_stride (int | tuple(int)): Patch stride for the frequency and time axes. Default: 4
- in_chans (int): Number of input image channels. Default: 1 (mono)
- num_classes (int): Number of classes for classification head. Default: 527
- embed_dim (int): Patch embedding dimension. Default: 96
- depths (tuple(int)): Depth of each HTSAT-Swin Transformer layer.
- num_heads (tuple(int)): Number of attention heads in different layers.
- window_size (int): Window size. Default: 8
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
- drop_rate (float): Dropout rate. Default: 0
- attn_drop_rate (float): Attention dropout rate. Default: 0
- drop_path_rate (float): Stochastic depth rate. Default: 0.1
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
- patch_norm (bool): If True, add normalization after patch embedding. Default: True
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
- config (module): The configuration Module from config.py
- """
-
- def __init__(self, spec_size=256, patch_size=4, patch_stride=(4,4),
- in_chans=1, num_classes=527,
- embed_dim=96, depths=[2, 2, 6, 2], num_heads=[4, 8, 16, 32],
- window_size=8, mlp_ratio=4., qkv_bias=True, qk_scale=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
- norm_layer=nn.LayerNorm,
- ape=False, patch_norm=True,
- use_checkpoint=False, norm_before_mlp='ln', config = None,
- enable_fusion = False, fusion_type = 'None', **kwargs):
- super(HTSAT_Swin_Transformer, self).__init__()
-
- self.config = config
- self.spec_size = spec_size
- self.patch_stride = patch_stride
- self.patch_size = patch_size
- self.window_size = window_size
- self.embed_dim = embed_dim
- self.depths = depths
- self.ape = ape
- self.in_chans = in_chans
- self.num_classes = num_classes
- self.num_heads = num_heads
- self.num_layers = len(self.depths)
- self.num_features = int(self.embed_dim * 2 ** (self.num_layers - 1))
-
- self.drop_rate = drop_rate
- self.attn_drop_rate = attn_drop_rate
- self.drop_path_rate = drop_path_rate
-
- self.qkv_bias = qkv_bias
- self.qk_scale = None
-
- self.patch_norm = patch_norm
- self.norm_layer = norm_layer if self.patch_norm else None
- self.norm_before_mlp = norm_before_mlp
- self.mlp_ratio = mlp_ratio
-
- self.use_checkpoint = use_checkpoint
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # process mel-spectrogram; used only once
- self.freq_ratio = self.spec_size // self.config.mel_bins
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
- self.interpolate_ratio = 32 # Downsampled ratio
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=config.window_size, hop_length=config.hop_size,
- win_length=config.window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=config.sample_rate, n_fft=config.window_size,
- n_mels=config.mel_bins, fmin=config.fmin, fmax=config.fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2) # 2 2
- self.bn0 = nn.BatchNorm2d(self.config.mel_bins)
-
-
- # split spectrogram into non-overlapping patches
- self.patch_embed = PatchEmbed(
- img_size=self.spec_size, patch_size=self.patch_size, in_chans=self.in_chans,
- embed_dim=self.embed_dim, norm_layer=self.norm_layer, patch_stride = patch_stride,
- enable_fusion=self.enable_fusion, fusion_type=self.fusion_type
- )
-
- num_patches = self.patch_embed.num_patches
- patches_resolution = self.patch_embed.grid_size
- self.patches_resolution = patches_resolution
-
- # absolute position embedding
- if self.ape:
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, self.embed_dim))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=self.drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate, sum(self.depths))] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(dim=int(self.embed_dim * 2 ** i_layer),
- input_resolution=(patches_resolution[0] // (2 ** i_layer),
- patches_resolution[1] // (2 ** i_layer)),
- depth=self.depths[i_layer],
- num_heads=self.num_heads[i_layer],
- window_size=self.window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=self.qkv_bias, qk_scale=self.qk_scale,
- drop=self.drop_rate, attn_drop=self.attn_drop_rate,
- drop_path=dpr[sum(self.depths[:i_layer]):sum(self.depths[:i_layer + 1])],
- norm_layer=self.norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint,
- norm_before_mlp=self.norm_before_mlp)
- self.layers.append(layer)
-
- self.norm = self.norm_layer(self.num_features)
- self.avgpool = nn.AdaptiveAvgPool1d(1)
- self.maxpool = nn.AdaptiveMaxPool1d(1)
-
- SF = self.spec_size // (2 ** (len(self.depths) - 1)) // self.patch_stride[0] // self.freq_ratio
- self.tscam_conv = nn.Conv2d(
- in_channels = self.num_features,
- out_channels = self.num_classes,
- kernel_size = (SF,3),
- padding = (0,1)
- )
- self.head = nn.Linear(num_classes, num_classes)
-
- if (self.enable_fusion) and (self.fusion_type in ['daf_1d','aff_1d','iaff_1d']):
- self.mel_conv1d = nn.Sequential(
- nn.Conv1d(64, 64, kernel_size=5, stride=3, padding=2),
- nn.BatchNorm1d(64)
- )
- if self.fusion_type == 'daf_1d':
- self.fusion_model = DAF()
- elif self.fusion_type == 'aff_1d':
- self.fusion_model = AFF(channels=64, type='1D')
- elif self.fusion_type == 'iaff_1d':
- self.fusion_model = iAFF(channels=64, type='1D')
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table'}
-
-
- def forward_features(self, x, longer_idx = None):
- # A deprecated optimization for using a hierarchical output from different blocks
-
- frames_num = x.shape[2]
- x = self.patch_embed(x, longer_idx = longer_idx)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
- for i, layer in enumerate(self.layers):
- x, attn = layer(x)
- # for x
- x = self.norm(x)
- B, N, C = x.shape
- SF = frames_num // (2 ** (len(self.depths) - 1)) // self.patch_stride[0]
- ST = frames_num // (2 ** (len(self.depths) - 1)) // self.patch_stride[1]
- x = x.permute(0,2,1).contiguous().reshape(B, C, SF, ST)
- B, C, F, T = x.shape
- # group 2D CNN
- c_freq_bin = F // self.freq_ratio
- x = x.reshape(B, C, F // c_freq_bin, c_freq_bin, T)
- x = x.permute(0,1,3,2,4).contiguous().reshape(B, C, c_freq_bin, -1)
- # get latent_output
- fine_grained_latent_output = torch.mean(x, dim = 2)
- fine_grained_latent_output = interpolate(fine_grained_latent_output.permute(0,2,1).contiguous(), 8 * self.patch_stride[1])
-
- latent_output = self.avgpool(torch.flatten(x,2))
- latent_output = torch.flatten(latent_output, 1)
-
- # display the attention map, if needed
-
- x = self.tscam_conv(x)
- x = torch.flatten(x, 2) # B, C, T
-
- fpx = interpolate(torch.sigmoid(x).permute(0,2,1).contiguous(), 8 * self.patch_stride[1])
-
- x = self.avgpool(x)
- x = torch.flatten(x, 1)
-
- output_dict = {
- 'framewise_output': fpx, # already sigmoided
- 'clipwise_output': torch.sigmoid(x),
- 'fine_grained_embedding': fine_grained_latent_output,
- 'embedding': latent_output
- }
-
- return output_dict
-
- def crop_wav(self, x, crop_size, spe_pos = None):
- time_steps = x.shape[2]
- tx = torch.zeros(x.shape[0], x.shape[1], crop_size, x.shape[3]).to(x.device)
- for i in range(len(x)):
- if spe_pos is None:
- crop_pos = random.randint(0, time_steps - crop_size - 1)
- else:
- crop_pos = spe_pos
- tx[i][0] = x[i, 0, crop_pos:crop_pos + crop_size,:]
- return tx
-
- # Reshape the waveform to an image size if you want to use the pretrained Swin Transformer model
- def reshape_wav2img(self, x):
- B, C, T, F = x.shape
- target_T = int(self.spec_size * self.freq_ratio)
- target_F = self.spec_size // self.freq_ratio
- assert T <= target_T and F <= target_F, "the wav size should be less than or equal to the swin input size"
- # to avoid bicubic zero error
- if T < target_T:
- x = nn.functional.interpolate(x, (target_T, x.shape[3]), mode="bicubic", align_corners=True)
- if F < target_F:
- x = nn.functional.interpolate(x, (x.shape[2], target_F), mode="bicubic", align_corners=True)
- x = x.permute(0,1,3,2).contiguous()
- x = x.reshape(x.shape[0], x.shape[1], x.shape[2], self.freq_ratio, x.shape[3] // self.freq_ratio)
- # print(x.shape)
- x = x.permute(0,1,3,2,4).contiguous()
- x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3], x.shape[4])
- return x
-
- # Repeat the waveform to an image size if you want to use the pretrained Swin Transformer model
- def repeat_wat2img(self, x, cur_pos):
- B, C, T, F = x.shape
- target_T = int(self.spec_size * self.freq_ratio)
- target_F = self.spec_size // self.freq_ratio
- assert T <= target_T and F <= target_F, "the wav size should be less than or equal to the swin input size"
- # to avoid bicubic zero error
- if T < target_T:
- x = nn.functional.interpolate(x, (target_T, x.shape[3]), mode="bicubic", align_corners=True)
- if F < target_F:
- x = nn.functional.interpolate(x, (x.shape[2], target_F), mode="bicubic", align_corners=True)
- x = x.permute(0,1,3,2).contiguous() # B C F T
- x = x[:,:,:,cur_pos:cur_pos + self.spec_size]
- x = x.repeat(repeats = (1,1,4,1))
- return x
-
- def forward(self, x: torch.Tensor, mixup_lambda = None, infer_mode = False, device=None):# out_feat_keys: List[str] = None):
-
- if self.enable_fusion and x["longer"].sum() == 0:
- # if no audio is longer than 10s, then randomly select one audio to be longer
- x["longer"][torch.randint(0, x["longer"].shape[0], (1,))] = True
-
- if not self.enable_fusion:
- x = x["waveform"].to(device=device, non_blocking=True)
- x = self.spectrogram_extractor(x) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
- if self.training:
- x = self.spec_augmenter(x)
-
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
-
- x = self.reshape_wav2img(x)
- output_dict = self.forward_features(x)
- else:
- longer_list = x["longer"].to(device=device, non_blocking=True)
- x = x["mel_fusion"].to(device=device, non_blocking=True)
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
- longer_list_idx = torch.where(longer_list)[0]
- if self.fusion_type in ['daf_1d','aff_1d','iaff_1d']:
- new_x = x[:,0:1,:,:].clone().contiguous()
- if len(longer_list_idx) > 0:
- # local processing
- fusion_x_local = x[longer_list_idx,1:,:,:].clone().contiguous()
- FB,FC,FT,FF = fusion_x_local.size()
- fusion_x_local = fusion_x_local.view(FB * FC, FT, FF)
- fusion_x_local = torch.permute(fusion_x_local, (0,2,1)).contiguous()
- fusion_x_local = self.mel_conv1d(fusion_x_local)
- fusion_x_local = fusion_x_local.view(FB,FC,FF,fusion_x_local.size(-1))
- fusion_x_local = torch.permute(fusion_x_local, (0,2,1,3)).contiguous().flatten(2)
- if fusion_x_local.size(-1) < FT:
- fusion_x_local = torch.cat([fusion_x_local, torch.zeros((FB,FF,FT- fusion_x_local.size(-1)), device=device)], dim=-1)
- else:
- fusion_x_local = fusion_x_local[:,:,:FT]
- # 1D fusion
- new_x = new_x.squeeze(1).permute((0,2,1)).contiguous()
- new_x[longer_list_idx] = self.fusion_model(new_x[longer_list_idx], fusion_x_local)
- x = new_x.permute((0,2,1)).contiguous()[:,None,:,:]
- else:
- x = new_x
-
- elif self.fusion_type in ['daf_2d','aff_2d','iaff_2d','channel_map']:
- x = x # no change
-
- if self.training:
- x = self.spec_augmenter(x)
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
-
- x = self.reshape_wav2img(x)
- output_dict = self.forward_features(x, longer_idx = longer_list_idx)
-
- # if infer_mode:
- # # in infer mode. we need to handle different length audio input
- # frame_num = x.shape[2]
- # target_T = int(self.spec_size * self.freq_ratio)
- # repeat_ratio = math.floor(target_T / frame_num)
- # x = x.repeat(repeats=(1,1,repeat_ratio,1))
- # x = self.reshape_wav2img(x)
- # output_dict = self.forward_features(x)
- # else:
- # if x.shape[2] > self.freq_ratio * self.spec_size:
- # if self.training:
- # x = self.crop_wav(x, crop_size=self.freq_ratio * self.spec_size)
- # x = self.reshape_wav2img(x)
- # output_dict = self.forward_features(x)
- # else:
- # # Change: Hard code here
- # overlap_size = (x.shape[2] - 1) // 4
- # output_dicts = []
- # crop_size = (x.shape[2] - 1) // 2
- # for cur_pos in range(0, x.shape[2] - crop_size - 1, overlap_size):
- # tx = self.crop_wav(x, crop_size = crop_size, spe_pos = cur_pos)
- # tx = self.reshape_wav2img(tx)
- # output_dicts.append(self.forward_features(tx))
- # clipwise_output = torch.zeros_like(output_dicts[0]["clipwise_output"]).float().to(x.device)
- # framewise_output = torch.zeros_like(output_dicts[0]["framewise_output"]).float().to(x.device)
- # for d in output_dicts:
- # clipwise_output += d["clipwise_output"]
- # framewise_output += d["framewise_output"]
- # clipwise_output = clipwise_output / len(output_dicts)
- # framewise_output = framewise_output / len(output_dicts)
- # output_dict = {
- # 'framewise_output': framewise_output,
- # 'clipwise_output': clipwise_output
- # }
- # else: # this part is typically used, and most easy one
- # x = self.reshape_wav2img(x)
- # output_dict = self.forward_features(x)
- # x = self.head(x)
-
- # We process the data in the dataloader part, in that here we only consider the input_T < fixed_T
-
-
-
- return output_dict
-
-def create_htsat_model(audio_cfg, enable_fusion=False, fusion_type='None'):
- try:
-
- assert audio_cfg.model_name in ["tiny", "base", "large"], "model name for HTS-AT is wrong!"
- if audio_cfg.model_name == "tiny":
- model = HTSAT_Swin_Transformer(
- spec_size=256,
- patch_size=4,
- patch_stride=(4,4),
- num_classes=audio_cfg.class_num,
- embed_dim=96,
- depths=[2,2,6,2],
- num_heads=[4,8,16,32],
- window_size=8,
- config = audio_cfg,
- enable_fusion = enable_fusion,
- fusion_type = fusion_type
- )
- elif audio_cfg.model_name == "base":
- model = HTSAT_Swin_Transformer(
- spec_size=256,
- patch_size=4,
- patch_stride=(4,4),
- num_classes=audio_cfg.class_num,
- embed_dim=128,
- depths=[2,2,12,2],
- num_heads=[4,8,16,32],
- window_size=8,
- config = audio_cfg,
- enable_fusion = enable_fusion,
- fusion_type = fusion_type
- )
- elif audio_cfg.model_name == "large":
- model = HTSAT_Swin_Transformer(
- spec_size=256,
- patch_size=4,
- patch_stride=(4,4),
- num_classes=audio_cfg.class_num,
- embed_dim=256,
- depths=[2,2,12,2],
- num_heads=[4,8,16,32],
- window_size=8,
- config = audio_cfg,
- enable_fusion = enable_fusion,
- fusion_type = fusion_type
- )
-
- return model
- except:
- raise RuntimeError(f'Import Model for {audio_cfg.model_name} not found, or the audio cfg parameters are not enough.')
-
\ No newline at end of file
diff --git a/spaces/ALSv/midjourney-v4-1/app.py b/spaces/ALSv/midjourney-v4-1/app.py
deleted file mode 100644
index 262436d8b50f87b0953c645576cc3184b3b27b43..0000000000000000000000000000000000000000
--- a/spaces/ALSv/midjourney-v4-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Joeythemonster/anything-midjourney-v-4-1").launch()
\ No newline at end of file
diff --git a/spaces/Ababababababbababa/Ashaar/app.py b/spaces/Ababababababbababa/Ashaar/app.py
deleted file mode 100644
index 580d3b353dfe066a53293417f4380121aaa5827b..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/app.py
+++ /dev/null
@@ -1,151 +0,0 @@
-import os
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
-import gradio as gr
-from transformers import pipeline
-from transformers import AutoTokenizer, AutoModelForCausalLM
-from Ashaar.utils import get_output_df, get_highlighted_patterns_html
-from Ashaar.bait_analysis import BaitAnalysis
-from langs import *
-import sys
-import json
-import argparse
-
-arg_parser = argparse.ArgumentParser()
-arg_parser.add_argument('--lang', type = str, default = 'ar')
-args = arg_parser.parse_args()
-lang = args.lang
-
-if lang == 'ar':
- TITLE = TITLE_ar
- DESCRIPTION = DESCRIPTION_ar
- textbox_trg_text = textbox_trg_text_ar
- textbox_inp_text = textbox_inp_text_ar
- btn_trg_text = btn_trg_text_ar
- btn_inp_text = btn_inp_text_ar
- css = """ #textbox{ direction: RTL;}"""
-
-else:
- TITLE = TITLE_en
- DESCRIPTION = DESCRIPTION_en
- textbox_trg_text = textbox_trg_text_en
- textbox_inp_text = textbox_inp_text_en
- btn_trg_text = btn_trg_text_en
- btn_inp_text = btn_inp_text_en
- css = ""
-
-gpt_tokenizer = AutoTokenizer.from_pretrained('arbml/ashaar_tokenizer')
-model = AutoModelForCausalLM.from_pretrained('arbml/Ashaar_model')
-
-theme_to_token = json.load(open("extra/theme_tokens.json", "r"))
-token_to_theme = {t:m for m,t in theme_to_token.items()}
-meter_to_token = json.load(open("extra/meter_tokens.json", "r"))
-token_to_meter = {t:m for m,t in meter_to_token.items()}
-
-analysis = BaitAnalysis()
-meter, theme, qafiyah = "", "", ""
-
-def analyze(poem):
- global meter,theme,qafiyah, generate_btn
- shatrs = poem.split("\n")
- baits = [' # '.join(shatrs[2*i:2*i+2]) for i in range(len(shatrs)//2)]
- output = analysis.analyze(baits,override_tashkeel=True)
- meter = output['meter']
- qafiyah = output['qafiyah'][0]
- theme = output['theme'][-1]
- df = get_output_df(output)
- return get_highlighted_patterns_html(df), gr.Button.update(interactive=True)
-
-def generate(inputs, top_p = 3):
- baits = inputs.split('\n')
- if len(baits) % 2 !=0:
- baits = baits[:-1]
- poem = ' '.join(['<|bsep|> '+baits[i]+' <|vsep|> '+baits[i+1]+' |bsep|>' for i in range(0, len(baits), 2)])
- prompt = f"""
- {meter_to_token[meter]} {qafiyah} {theme_to_token[theme]}
- <|psep|>
- {poem}
- """.strip()
- print(prompt)
- encoded_input = gpt_tokenizer(prompt, return_tensors='pt')
- output = model.generate(**encoded_input, max_length = 512, top_p = 3, do_sample=True)
-
- result = ""
- prev_token = ""
- line_cnts = 0
- for i, beam in enumerate(output[:, len(encoded_input.input_ids[0]):]):
- if line_cnts >= 10:
- break
- for token in beam:
- if line_cnts >= 10:
- break
- decoded = gpt_tokenizer.decode(token)
- if 'meter' in decoded or 'theme' in decoded:
- break
- if decoded in ["<|vsep|>", "|bsep|>"]:
- result += "\n"
- line_cnts+=1
- elif decoded in ['<|bsep|>', '<|psep|>', '|psep|>']:
- pass
- else:
- result += decoded
- prev_token = decoded
- else:
- break
- # return theme+" "+ f"من بحر {meter} مع قافية بحر ({qafiyah})" + "\n" +result
- return result, gr.Button.update(interactive=False)
-
-examples = [
- [
-"""القلب أعلم يا عذول بدائه
-وأحق منك بجفنه وبمائه"""
- ],
- [
-"""رمتِ الفؤادَ مليحة عذراءُ
- بسهامِ لحظٍ ما لهنَّ دواءُ"""
- ],
- [
-"""أذَلَّ الحِرْصُ والطَّمَعُ الرِّقابَا
-وقَد يَعفو الكَريمُ، إذا استَرَابَا"""
- ]
-]
-
-with gr.Blocks(theme=gr.themes.Soft(), css=css) as demo:
- with gr.Row():
- with gr.Column():
- gr.HTML(TITLE)
- gr.HTML(DESCRIPTION)
-
- with gr.Row():
- with gr.Column():
- textbox_output = gr.Textbox(lines=10, label=textbox_trg_text, elem_id="textbox")
- with gr.Column():
- inputs = gr.Textbox(lines=10, label=textbox_inp_text, elem_id="textbox")
-
-
- with gr.Row():
- with gr.Column():
- if lang == 'ar':
- trg_btn = gr.Button(btn_trg_text, interactive=False)
- else:
- trg_btn = gr.Button(btn_trg_text)
-
- with gr.Column():
- if lang == 'ar':
- inp_btn = gr.Button(btn_inp_text)
- else:
- inp_btn = gr.Button(btn_inp_text, interactive = False)
-
- with gr.Row():
- html_output = gr.HTML()
-
- if lang == 'en':
- gr.Examples(examples, textbox_output)
- inp_btn.click(generate, inputs = textbox_output, outputs=[inputs, inp_btn])
- trg_btn.click(analyze, inputs = textbox_output, outputs=[html_output,inp_btn])
- else:
- gr.Examples(examples, inputs)
- trg_btn.click(generate, inputs = inputs, outputs=[textbox_output, trg_btn])
- inp_btn.click(analyze, inputs = inputs, outputs=[html_output,trg_btn] )
-
-# demo.launch(server_name = '0.0.0.0', share=True)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Ababababababbababa/Sha3bor_Aragpt2_Base/README.md b/spaces/Ababababababbababa/Sha3bor_Aragpt2_Base/README.md
deleted file mode 100644
index 8bc209a4444457e39e800d2be1c2cb5afbcbdd7b..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Sha3bor_Aragpt2_Base/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sha3bor Aragpt2 Base
-emoji: 🏆
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Abhaykoul/BardCookies-AI_Query/app.py b/spaces/Abhaykoul/BardCookies-AI_Query/app.py
deleted file mode 100644
index 27c89cefe27a0a83fee4ec75a4cdbf95bb32d924..0000000000000000000000000000000000000000
--- a/spaces/Abhaykoul/BardCookies-AI_Query/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from bardapi import BardCookies
-import requests
-from requests.exceptions import ReadTimeout
-import gradio as gr
-
-def get_bard_response(Secure_1PSID, Secure_1PSIDTS, Secure_1PSIDCC, Query):
- cookie_dict = {
- "__Secure-1PSID": Secure_1PSID,
- "__Secure-1PSIDTS": Secure_1PSIDTS,
- "__Secure-1PSIDCC": Secure_1PSIDCC
- }
-
- bard = BardCookies(cookie_dict=cookie_dict)
- retries = 3 # Number of retries
- for _ in range(retries):
- try:
- Reply = bard.get_answer(Query)['content']
- return Reply
- except ReadTimeout:
- continue
- return "Failed to fetch data after multiple retries."
-
-iface = gr.Interface(
- fn=get_bard_response,
- inputs=[
- gr.components.Textbox(label="__Secure-1PSID"),
- gr.components.Textbox(label="__Secure-1PSIDTS"),
- gr.components.Textbox(label="__Secure-1PSIDCC"),
- gr.components.Textbox(label="Query")
- ],
- outputs="text",
- title="BardCookies - AI Query",
- description = "Enter your cookies and a query to get a response from BardCookies. If you need help with cookies, check out the Chrome extension for managing cookies. Go to bard.google.com and then use EditThisCookie extension and copy Secure_1PSID, Secure_1PSIDTS, Secure_1PSIDCC from it. Bard Chat."
-)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/agentverse/registry.py b/spaces/AgentVerse/agentVerse/agentverse/registry.py
deleted file mode 100644
index b53b571416736fe4e7d83e23bd0dad71950b43fa..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/registry.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from typing import Dict
-
-from pydantic import BaseModel
-
-
-class Registry(BaseModel):
- """Registry for storing and building classes."""
-
- name: str
- entries: Dict = {}
-
- def register(self, key: str):
- def decorator(class_builder):
- self.entries[key] = class_builder
- return class_builder
-
- return decorator
-
- def build(self, type: str, **kwargs):
- if type not in self.entries:
- raise ValueError(
- f'{type} is not registered. Please register with the .register("{type}") method provided in {self.name} registry'
- )
- return self.entries[type](**kwargs)
-
- def get_all_entries(self):
- return self.entries
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/oval/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/oval/Factory.js
deleted file mode 100644
index 9d68b3357604dcb84d81b7e54a065823a630d51e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/oval/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Oval from './Oval.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('oval', function (config) {
- var gameObject = new Oval(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.Spinner.Oval', Oval);
-
-export default Oval;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/easemove/EaseMove.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/easemove/EaseMove.ts
deleted file mode 100644
index 80fe9fa41b42426b2c71beb6fdf6ff3b2cd00762..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/easemove/EaseMove.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import { EaseMove, EaseMoveTo, EaseMoveFrom } from '../../../plugins/easemove';
-export { EaseMove, EaseMoveTo, EaseMoveFrom };
\ No newline at end of file
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/monotonic_align/__init__.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/Ameaou/academic-chatgpt3.1/docs/README_EN.md b/spaces/Ameaou/academic-chatgpt3.1/docs/README_EN.md
deleted file mode 100644
index db214f5327b8cdcd84ed1c57390c3b24ba83d78f..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/docs/README_EN.md
+++ /dev/null
@@ -1,291 +0,0 @@
-> **Note**
->
-> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
->
-
-# ChatGPT Academic Optimization
-
-**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a [README in English](docs/README_EN.md) translated by this project itself.**
-
-> **Note**
->
-> 1. Please note that only **functions with red color** supports reading files, some functions are located in the **dropdown menu** of plugins. Additionally, we welcome and prioritize any new plugin PRs with **highest priority**!
->
-> 2. The functionality of each file in this project is detailed in the self-translation report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the project. With the iteration of the version, you can also click on the relevant function plugins at any time to call GPT to regenerate the self-analysis report of the project. The FAQ summary is in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98) section.
->
-
-
-
-
-Function | Description
---- | ---
-One-Click Polish | Supports one-click polishing and finding grammar errors in academic papers.
-One-Key Translation Between Chinese and English | One-click translation between Chinese and English.
-One-Key Code Interpretation | Can correctly display and interpret code.
-[Custom Shortcut Keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
-[Configure Proxy Server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy servers.
-Modular Design | Supports custom high-order function plugins and [function plugins], and plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-[Self-programming Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] [One-Key Read] (https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) The source code of this project is analyzed.
-[Program Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] One-click can analyze the project tree of other Python/C/C++/Java/Lua/... projects
-Read the Paper | [Function Plugin] One-click interpretation of the full text of latex paper and generation of abstracts
-Latex Full Text Translation, Proofreading | [Function Plugin] One-click translation or proofreading of latex papers.
-Batch Comment Generation | [Function Plugin] One-click batch generation of function comments
-Chat Analysis Report Generation | [Function Plugin] After running, an automatic summary report will be generated
-[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plugin] Enter the arxiv article url to translate the abstract and download the PDF with one click
-[Full-text Translation Function of PDF Paper](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plugin] Extract the title & abstract of the PDF paper + translate the full text (multithreading)
-[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function Plugin] Given any Google Scholar search page URL, let gpt help you choose interesting articles.
-Formula / Picture / Table Display | Can display both the tex form and the rendering form of formulas at the same time, support formula and code highlighting
-Multithreaded Function Plugin Support | Supports multi-threaded calling chatgpt, one-click processing of massive text or programs
-Start Dark Gradio [Theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__dark-theme=true``` at the end of the browser url to switch to dark theme
-[Multiple LLM Models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | It must feel nice to be served by both GPT3.5, GPT4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B)!
-Huggingface non-Science Net [Online Experience](https://huggingface.co/spaces/qingxu98/gpt-academic) | After logging in to huggingface, copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
-... | ...
-
-
-
-
-- New interface (switch between "left-right layout" and "up-down layout" by modifying the LAYOUT option in config.py)
-
-
-
-
-
-- All buttons are dynamically generated by reading functional.py and can add custom functionality at will, freeing up clipboard
-
-
-
-
-- Proofreading / correcting
-
-
-
-
-- If the output contains formulas, it will be displayed in both the tex form and the rendering form at the same time, which is convenient for copying and reading
-
-
-
-
-- Don't want to read the project code? Just take the whole project to chatgpt
-
-
-
-
-- Multiple major language model mixing calls (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-
-Multiple major language model mixing call [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm)
-
-
----
-
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure API_KEY and proxy settings
-
-
-In `config.py`, configure the overseas Proxy and OpenAI API KEY as follows:
-```
-1. If you are in China, you need to set up an overseas proxy to use the OpenAI API smoothly. Please read config.py carefully for setup details (1. Modify USE_PROXY to True; 2. Modify proxies according to the instructions).
-2. Configure the OpenAI API KEY. You need to register and obtain an API KEY on the OpenAI website. Once you get the API KEY, you can configure it in the config.py file.
-3. Issues related to proxy networks (network timeouts, proxy failures) are summarized at https://github.com/binary-husky/chatgpt_academic/issues/1
-```
-(P.S. When the program runs, it will first check whether there is a private configuration file named `config_private.py` and use the same-name configuration in `config.py` to overwrite it. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configuration in `config.py` to` config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure.))
-
-
-3. Install dependencies
-```sh
-# (Option One) Recommended
-python -m pip install -r requirements.txt
-
-# (Option Two) If you use anaconda, the steps are similar:
-# (Option Two.1) conda create -n gptac_venv python=3.11
-# (Option Two.2) conda activate gptac_venv
-# (Option Two.3) python -m pip install -r requirements.txt
-
-# Note: Use official pip source or Ali pip source. Other pip sources (such as some university pips) may have problems, and temporary replacement methods are as follows:
-# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-```
-
-If you need to support Tsinghua ChatGLM, you need to install more dependencies (if you are not familiar with python or your computer configuration is not good, we recommend not to try):
-```sh
-python -m pip install -r request_llm/requirements_chatglm.txt
-```
-
-4. Run
-```sh
-python main.py
-```
-
-5. Test function plugins
-```
-- Test Python project analysis
- In the input area, enter `./crazy_functions/test_project/python/dqn`, and then click "Analyze the entire Python project"
-- Test self-code interpretation
- Click "[Multithreading Demo] Interpretation of This Project Itself (Source Code Interpretation)"
-- Test experimental function template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
- Click "[Function Plugin Template Demo] Today in History"
-- There are more functions to choose from in the function plugin area drop-down menu.
-```
-
-## Installation-Method 2: Use Docker (Linux)
-
-1. ChatGPT only (recommended for most people)
-``` sh
-# download project
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-# configure overseas Proxy and OpenAI API KEY
-Edit config.py with any text editor
-# Install
-docker build -t gpt-academic .
-# Run
-docker run --rm -it --net=host gpt-academic
-
-# Test function plug-in
-## Test function plugin template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
-Click "[Function Plugin Template Demo] Today in History"
-## Test Abstract Writing for Latex Projects
-Enter ./crazy_functions/test_project/latex/attention in the input area, and then click "Read Tex Paper and Write Abstract"
-## Test Python Project Analysis
-Enter ./crazy_functions/test_project/python/dqn in the input area and click "Analyze the entire Python project."
-
-More functions are available in the function plugin area drop-down menu.
-```
-
-2. ChatGPT+ChatGLM (requires strong familiarity with docker + strong computer configuration)
-
-``` sh
-# Modify dockerfile
-cd docs && nano Dockerfile+ChatGLM
-# How to build | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs)
-docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
-# How to run | 如何运行 (1) 直接运行:
-docker run --rm -it --net=host --gpus=all gpt-academic
-# How to run | 如何运行 (2) 我想运行之前进容器做一些调整:
-docker run --rm -it --net=host --gpus=all gpt-academic bash
-```
-
-
-## Installation-Method 3: Other Deployment Methods
-
-1. Remote Cloud Server Deployment
-Please visit [Deployment Wiki-1] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-2. Use WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-
-## Installation-Proxy Configuration
-### Method 1: Conventional method
-[Configure Proxy](https://github.com/binary-husky/chatgpt_academic/issues/1)
-
-### Method Two: Step-by-step tutorial for newcomers
-[Step-by-step tutorial for newcomers](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
----
-
-## Customizing Convenient Buttons (Customizing Academic Shortcuts)
-Open `core_functional.py` with any text editor and add an item as follows, then restart the program (if the button has been successfully added and visible, both the prefix and suffix support hot modification without the need to restart the program to take effect). For example:
-```
-"Super English to Chinese translation": {
- # Prefix, which will be added before your input. For example, to describe your requirements, such as translation, code interpretation, polishing, etc.
- "Prefix": "Please translate the following content into Chinese and use a markdown table to interpret the proprietary terms in the text one by one:\n\n",
-
- # Suffix, which will be added after your input. For example, combined with the prefix, you can put your input content in quotes.
- "Suffix": "",
-},
-```
-
-
-
-
----
-
-
-## Some Function Displays
-
-### Image Display:
-
-
-You are a professional academic paper translator.
-
-
-
-
-
-### If a program can understand and analyze itself:
-
-
-
-
-
-
-
-
-
-### Analysis of any Python/Cpp project:
-
-
-
-
-
-
-
-
-### One-click reading comprehension and summary generation of Latex papers
-
-
-
-
-### Automatic report generation
-
-
-
-
-
-
-### Modular functional design
-
-
-
-
-
-### Source code translation to English
-
-
-
-
-
-## Todo and version planning:
-- version 3.2+ (todo): Function plugin supports more parameter interfaces
-- version 3.1: Support for inquiring multiple GPT models at the same time! Support for api2d, support for multiple apikeys load balancing
-- version 3.0: Support for chatglm and other small llms
-- version 2.6: Refactored the plugin structure, improved interactivity, added more plugins
-- version 2.5: Self-updating, solves the problem of text being too long and token overflowing when summarizing large project source code
-- version 2.4: (1) Added PDF full text translation function; (2) Added function to switch input area position; (3) Added vertical layout option; (4) Multi-threaded function plugin optimization.
-- version 2.3: Enhanced multi-threaded interactivity
-- version 2.2: Function plugin supports hot reloading
-- version 2.1: Foldable layout
-- version 2.0: Introduction of modular function plugins
-- version 1.0: Basic functions
-
-## Reference and learning
-
-```
-The code design of this project has referenced many other excellent projects, including:
-
-# Reference project 1: Borrowed many tips from ChuanhuChatGPT
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Reference project 2: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-```
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/README.md
deleted file mode 100644
index 6b25679efbe90d556244e7aa6bee3e863c28b069..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-## Diffusers examples with Intel optimizations
-
-**This research project is not actively maintained by the diffusers team. For any questions or comments, please make sure to tag @hshen14 .**
-
-This aims to provide diffusers examples with Intel optimizations such as Bfloat16 for training/fine-tuning acceleration and 8-bit integer (INT8) for inference acceleration on Intel platforms.
-
-## Accelerating the fine-tuning for textual inversion
-
-We accelereate the fine-tuning for textual inversion with Intel Extension for PyTorch. The [examples](textual_inversion) enable both single node and multi-node distributed training with Bfloat16 support on Intel Xeon Scalable Processor.
-
-## Accelerating the inference for Stable Diffusion using Bfloat16
-
-We start the inference acceleration with Bfloat16 using Intel Extension for PyTorch. The [script](inference_bf16.py) is generally designed to support standard Stable Diffusion models with Bfloat16 support.
-```bash
-pip install diffusers transformers accelerate scipy safetensors
-
-export KMP_BLOCKTIME=1
-export KMP_SETTINGS=1
-export KMP_AFFINITY=granularity=fine,compact,1,0
-
-# Intel OpenMP
-export OMP_NUM_THREADS=< Cores to use >
-export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libiomp5.so
-# Jemalloc is a recommended malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
-export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libjemalloc.so
-export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:-1,muzzy_decay_ms:9000000000"
-
-# Launch with default DDIM
-numactl --membind -C python python inference_bf16.py
-# Launch with DPMSolverMultistepScheduler
-numactl --membind -C python python inference_bf16.py --dpm
-
-```
-
-## Accelerating the inference for Stable Diffusion using INT8
-
-Coming soon ...
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py
deleted file mode 100644
index e7265bcdbef2a7ab5e8ba6b3fe13f02cb718b40a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = './fovea_r50_fpn_4x4_1x_coco.py'
-model = dict(
- bbox_head=dict(
- with_deform=True,
- norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
-optimizer_config = dict(
- _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index ef7b06dd3806c1d93be41943ab4d7d49f68ac830..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './nonlocal_r50-d8_512x1024_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index 145cadb24016eeea87fccff8171c5b0dfb78f7ab..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py',
- '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(align_corners=True),
- auxiliary_head=dict(align_corners=True),
- test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
diff --git a/spaces/AndySAnker/DeepStruc/models/README.md b/spaces/AndySAnker/DeepStruc/models/README.md
deleted file mode 100644
index e4afa9439921f934d7ffdd5445eed1c5f75571ac..0000000000000000000000000000000000000000
--- a/spaces/AndySAnker/DeepStruc/models/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-[ChemRxiv](https://chemrxiv.org/engage/chemrxiv/article-details/6221f17357a9d20c9a729ecb) | [Paper](https://pubs.rsc.org/en/content/articlelanding/2023/dd/d2dd00086e)
-
-# Models
-This folder contain the DeepStruc model and all other trained models will be save here with the folder name:
-DeepStruc-year-month-day-time.
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/cc_attention.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/cc_attention.py
deleted file mode 100644
index 9207aa95e6730bd9b3362dee612059a5f0ce1c5e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/cc_attention.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from annotator.uniformer.mmcv.cnn import PLUGIN_LAYERS, Scale
-
-
-def NEG_INF_DIAG(n, device):
- """Returns a diagonal matrix of size [n, n].
-
- The diagonal are all "-inf". This is for avoiding calculating the
- overlapped element in the Criss-Cross twice.
- """
- return torch.diag(torch.tensor(float('-inf')).to(device).repeat(n), 0)
-
-
-@PLUGIN_LAYERS.register_module()
-class CrissCrossAttention(nn.Module):
- """Criss-Cross Attention Module.
-
- .. note::
- Before v1.3.13, we use a CUDA op. Since v1.3.13, we switch
- to a pure PyTorch and equivalent implementation. For more
- details, please refer to https://github.com/open-mmlab/mmcv/pull/1201.
-
- Speed comparison for one forward pass
-
- - Input size: [2,512,97,97]
- - Device: 1 NVIDIA GeForce RTX 2080 Ti
-
- +-----------------------+---------------+------------+---------------+
- | |PyTorch version|CUDA version|Relative speed |
- +=======================+===============+============+===============+
- |with torch.no_grad() |0.00554402 s |0.0299619 s |5.4x |
- +-----------------------+---------------+------------+---------------+
- |no with torch.no_grad()|0.00562803 s |0.0301349 s |5.4x |
- +-----------------------+---------------+------------+---------------+
-
- Args:
- in_channels (int): Channels of the input feature map.
- """
-
- def __init__(self, in_channels):
- super().__init__()
- self.query_conv = nn.Conv2d(in_channels, in_channels // 8, 1)
- self.key_conv = nn.Conv2d(in_channels, in_channels // 8, 1)
- self.value_conv = nn.Conv2d(in_channels, in_channels, 1)
- self.gamma = Scale(0.)
- self.in_channels = in_channels
-
- def forward(self, x):
- """forward function of Criss-Cross Attention.
-
- Args:
- x (Tensor): Input feature. \
- shape (batch_size, in_channels, height, width)
- Returns:
- Tensor: Output of the layer, with shape of \
- (batch_size, in_channels, height, width)
- """
- B, C, H, W = x.size()
- query = self.query_conv(x)
- key = self.key_conv(x)
- value = self.value_conv(x)
- energy_H = torch.einsum('bchw,bciw->bwhi', query, key) + NEG_INF_DIAG(
- H, query.device)
- energy_H = energy_H.transpose(1, 2)
- energy_W = torch.einsum('bchw,bchj->bhwj', query, key)
- attn = F.softmax(
- torch.cat([energy_H, energy_W], dim=-1), dim=-1) # [B,H,W,(H+W)]
- out = torch.einsum('bciw,bhwi->bchw', value, attn[..., :H])
- out += torch.einsum('bchj,bhwj->bchw', value, attn[..., H:])
-
- out = self.gamma(out) + x
- out = out.contiguous()
-
- return out
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(in_channels={self.in_channels})'
- return s
diff --git a/spaces/Ariharasudhan/YoloV5/utils/__init__.py b/spaces/Ariharasudhan/YoloV5/utils/__init__.py
deleted file mode 100644
index 3b1a2c87329a3333e8ea1998e1507dcf0d2a554b..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/__init__.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-utils/initialization
-"""
-
-import contextlib
-import platform
-import threading
-
-
-def emojis(str=''):
- # Return platform-dependent emoji-safe version of string
- return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-class TryExcept(contextlib.ContextDecorator):
- # YOLOv5 TryExcept class. Usage: @TryExcept() decorator or 'with TryExcept():' context manager
- def __init__(self, msg=''):
- self.msg = msg
-
- def __enter__(self):
- pass
-
- def __exit__(self, exc_type, value, traceback):
- if value:
- print(emojis(f"{self.msg}{': ' if self.msg else ''}{value}"))
- return True
-
-
-def threaded(func):
- # Multi-threads a target function and returns thread. Usage: @threaded decorator
- def wrapper(*args, **kwargs):
- thread = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True)
- thread.start()
- return thread
-
- return wrapper
-
-
-def join_threads(verbose=False):
- # Join all daemon threads, i.e. atexit.register(lambda: join_threads())
- main_thread = threading.current_thread()
- for t in threading.enumerate():
- if t is not main_thread:
- if verbose:
- print(f'Joining thread {t.name}')
- t.join()
-
-
-def notebook_init(verbose=True):
- # Check system software and hardware
- print('Checking setup...')
-
- import os
- import shutil
-
- from utils.general import check_font, check_requirements, is_colab
- from utils.torch_utils import select_device # imports
-
- check_font()
-
- import psutil
- from IPython import display # to display images and clear console output
-
- if is_colab():
- shutil.rmtree('/content/sample_data', ignore_errors=True) # remove colab /sample_data directory
-
- # System info
- if verbose:
- gb = 1 << 30 # bytes to GiB (1024 ** 3)
- ram = psutil.virtual_memory().total
- total, used, free = shutil.disk_usage("/")
- display.clear_output()
- s = f'({os.cpu_count()} CPUs, {ram / gb:.1f} GB RAM, {(total - free) / gb:.1f}/{total / gb:.1f} GB disk)'
- else:
- s = ''
-
- select_device(newline=False)
- print(emojis(f'Setup complete ✅ {s}'))
- return display
diff --git a/spaces/Arnaudding001/FrenchTranslationAI/README.md b/spaces/Arnaudding001/FrenchTranslationAI/README.md
deleted file mode 100644
index 178225e19402cab24d8aff04fc6f74e27895fc2b..0000000000000000000000000000000000000000
--- a/spaces/Arnaudding001/FrenchTranslationAI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: FrenchTranslationAI
-emoji: 🔥
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Arnx/MusicGenXvAKN/tests/modules/test_seanet.py b/spaces/Arnx/MusicGenXvAKN/tests/modules/test_seanet.py
deleted file mode 100644
index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/tests/modules/test_seanet.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock
-from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d
-
-
-class TestSEANetModel:
-
- def test_base(self):
- encoder = SEANetEncoder()
- decoder = SEANetDecoder()
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_causal(self):
- encoder = SEANetEncoder(causal=True)
- decoder = SEANetDecoder(causal=True)
- x = torch.randn(1, 1, 24000)
-
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_conv_skip_connection(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False)
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_seanet_encoder_decoder_final_act(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False, final_activation='Tanh')
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in encoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == 'none' if n_blocks <= n_disable_blocks else norm
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- # here we add + 1 to n_blocks as we increment n_blocks just after the block
- assert resnet_layer.conv.norm_type == 'none' if (n_blocks + 1) <= n_disable_blocks else norm
-
- def test_encoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_encoder_blocks_norm(encoder, disable_blocks, norm)
-
- def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in decoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm
- elif isinstance(layer, StreamableConvTranspose1d):
- n_blocks += 1
- assert layer.convtr.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- assert resnet_layer.conv.norm_type == 'none' \
- if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm
-
- def test_decoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_decoder_blocks_norm(decoder, disable_blocks, norm)
-
- def test_disable_norm_raises_exception(self):
- # Invalid disable_norm_outer_blocks values raise exceptions
- with pytest.raises(AssertionError):
- SEANetEncoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/file_proxy.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/file_proxy.py
deleted file mode 100644
index 4b0b0da6c2a62b2b1468c35ddd69f1bbb9b91aa8..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/file_proxy.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import io
-from typing import IO, TYPE_CHECKING, Any, List
-
-from .ansi import AnsiDecoder
-from .text import Text
-
-if TYPE_CHECKING:
- from .console import Console
-
-
-class FileProxy(io.TextIOBase):
- """Wraps a file (e.g. sys.stdout) and redirects writes to a console."""
-
- def __init__(self, console: "Console", file: IO[str]) -> None:
- self.__console = console
- self.__file = file
- self.__buffer: List[str] = []
- self.__ansi_decoder = AnsiDecoder()
-
- @property
- def rich_proxied_file(self) -> IO[str]:
- """Get proxied file."""
- return self.__file
-
- def __getattr__(self, name: str) -> Any:
- return getattr(self.__file, name)
-
- def write(self, text: str) -> int:
- if not isinstance(text, str):
- raise TypeError(f"write() argument must be str, not {type(text).__name__}")
- buffer = self.__buffer
- lines: List[str] = []
- while text:
- line, new_line, text = text.partition("\n")
- if new_line:
- lines.append("".join(buffer) + line)
- buffer.clear()
- else:
- buffer.append(line)
- break
- if lines:
- console = self.__console
- with console:
- output = Text("\n").join(
- self.__ansi_decoder.decode_line(line) for line in lines
- )
- console.print(output)
- return len(text)
-
- def flush(self) -> None:
- output = "".join(self.__buffer)
- if output:
- self.__console.print(output)
- del self.__buffer[:]
-
- def fileno(self) -> int:
- return self.__file.fileno()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_text.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_text.py
deleted file mode 100644
index c88cfbb2349c6401336bc5ba6623f51afd1eb59d..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_text.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import re
-
-from ._functools import method_cache
-
-
-# from jaraco.text 3.5
-class FoldedCase(str):
- """
- A case insensitive string class; behaves just like str
- except compares equal when the only variation is case.
-
- >>> s = FoldedCase('hello world')
-
- >>> s == 'Hello World'
- True
-
- >>> 'Hello World' == s
- True
-
- >>> s != 'Hello World'
- False
-
- >>> s.index('O')
- 4
-
- >>> s.split('O')
- ['hell', ' w', 'rld']
-
- >>> sorted(map(FoldedCase, ['GAMMA', 'alpha', 'Beta']))
- ['alpha', 'Beta', 'GAMMA']
-
- Sequence membership is straightforward.
-
- >>> "Hello World" in [s]
- True
- >>> s in ["Hello World"]
- True
-
- You may test for set inclusion, but candidate and elements
- must both be folded.
-
- >>> FoldedCase("Hello World") in {s}
- True
- >>> s in {FoldedCase("Hello World")}
- True
-
- String inclusion works as long as the FoldedCase object
- is on the right.
-
- >>> "hello" in FoldedCase("Hello World")
- True
-
- But not if the FoldedCase object is on the left:
-
- >>> FoldedCase('hello') in 'Hello World'
- False
-
- In that case, use in_:
-
- >>> FoldedCase('hello').in_('Hello World')
- True
-
- >>> FoldedCase('hello') > FoldedCase('Hello')
- False
- """
-
- def __lt__(self, other):
- return self.lower() < other.lower()
-
- def __gt__(self, other):
- return self.lower() > other.lower()
-
- def __eq__(self, other):
- return self.lower() == other.lower()
-
- def __ne__(self, other):
- return self.lower() != other.lower()
-
- def __hash__(self):
- return hash(self.lower())
-
- def __contains__(self, other):
- return super().lower().__contains__(other.lower())
-
- def in_(self, other):
- "Does self appear in other?"
- return self in FoldedCase(other)
-
- # cache lower since it's likely to be called frequently.
- @method_cache
- def lower(self):
- return super().lower()
-
- def index(self, sub):
- return self.lower().index(sub.lower())
-
- def split(self, splitter=' ', maxsplit=0):
- pattern = re.compile(re.escape(splitter), re.I)
- return pattern.split(self, maxsplit)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_clib.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_clib.py
deleted file mode 100644
index 67ce2444ea69a0bbdfab0bda8c2aa14951187096..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_clib.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import distutils.command.build_clib as orig
-from distutils.errors import DistutilsSetupError
-from distutils import log
-from setuptools.dep_util import newer_pairwise_group
-
-
-class build_clib(orig.build_clib):
- """
- Override the default build_clib behaviour to do the following:
-
- 1. Implement a rudimentary timestamp-based dependency system
- so 'compile()' doesn't run every time.
- 2. Add more keys to the 'build_info' dictionary:
- * obj_deps - specify dependencies for each object compiled.
- this should be a dictionary mapping a key
- with the source filename to a list of
- dependencies. Use an empty string for global
- dependencies.
- * cflags - specify a list of additional flags to pass to
- the compiler.
- """
-
- def build_libraries(self, libraries):
- for (lib_name, build_info) in libraries:
- sources = build_info.get('sources')
- if sources is None or not isinstance(sources, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'sources' must be present and must be "
- "a list of source filenames" % lib_name)
- sources = list(sources)
-
- log.info("building '%s' library", lib_name)
-
- # Make sure everything is the correct type.
- # obj_deps should be a dictionary of keys as sources
- # and a list/tuple of files that are its dependencies.
- obj_deps = build_info.get('obj_deps', dict())
- if not isinstance(obj_deps, dict):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
- dependencies = []
-
- # Get the global dependencies that are specified by the '' key.
- # These will go into every source's dependency list.
- global_deps = obj_deps.get('', list())
- if not isinstance(global_deps, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
-
- # Build the list to be used by newer_pairwise_group
- # each source will be auto-added to its dependencies.
- for source in sources:
- src_deps = [source]
- src_deps.extend(global_deps)
- extra_deps = obj_deps.get(source, list())
- if not isinstance(extra_deps, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
- src_deps.extend(extra_deps)
- dependencies.append(src_deps)
-
- expected_objects = self.compiler.object_filenames(
- sources,
- output_dir=self.build_temp,
- )
-
- if (
- newer_pairwise_group(dependencies, expected_objects)
- != ([], [])
- ):
- # First, compile the source code to object files in the library
- # directory. (This should probably change to putting object
- # files in a temporary build directory.)
- macros = build_info.get('macros')
- include_dirs = build_info.get('include_dirs')
- cflags = build_info.get('cflags')
- self.compiler.compile(
- sources,
- output_dir=self.build_temp,
- macros=macros,
- include_dirs=include_dirs,
- extra_postargs=cflags,
- debug=self.debug
- )
-
- # Now "link" the object files together into a static library.
- # (On Unix at least, this isn't really linking -- it just
- # builds an archive. Whatever.)
- self.compiler.create_static_lib(
- expected_objects,
- lib_name,
- output_dir=self.build_clib,
- debug=self.debug
- )
diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_gmem_atomics.h b/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_gmem_atomics.h
deleted file mode 100644
index 3308a2851bec88a0b04c17413a92861a74298b89..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_gmem_atomics.h
+++ /dev/null
@@ -1,185 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-
-#include
-
-namespace histogram_gmem_atomics
-{
- // Decode float4 pixel into bins
- template
- __device__ __forceinline__ void DecodePixel(float4 pixel, unsigned int (&bins)[ACTIVE_CHANNELS])
- {
- float* samples = reinterpret_cast(&pixel);
-
- #pragma unroll
- for (int CHANNEL = 0; CHANNEL < ACTIVE_CHANNELS; ++CHANNEL)
- bins[CHANNEL] = (unsigned int) (samples[CHANNEL] * float(NUM_BINS));
- }
-
- // Decode uchar4 pixel into bins
- template
- __device__ __forceinline__ void DecodePixel(uchar4 pixel, unsigned int (&bins)[ACTIVE_CHANNELS])
- {
- unsigned char* samples = reinterpret_cast(&pixel);
-
- #pragma unroll
- for (int CHANNEL = 0; CHANNEL < ACTIVE_CHANNELS; ++CHANNEL)
- bins[CHANNEL] = (unsigned int) (samples[CHANNEL]);
- }
-
- // Decode uchar1 pixel into bins
- template
- __device__ __forceinline__ void DecodePixel(uchar1 pixel, unsigned int (&bins)[ACTIVE_CHANNELS])
- {
- bins[0] = (unsigned int) pixel.x;
- }
-
- // First-pass histogram kernel (binning into privatized counters)
- template <
- int NUM_PARTS,
- int ACTIVE_CHANNELS,
- int NUM_BINS,
- typename PixelType>
- __global__ void histogram_gmem_atomics(
- const PixelType *in,
- int width,
- int height,
- unsigned int *out)
- {
- // global position and size
- int x = blockIdx.x * blockDim.x + threadIdx.x;
- int y = blockIdx.y * blockDim.y + threadIdx.y;
- int nx = blockDim.x * gridDim.x;
- int ny = blockDim.y * gridDim.y;
-
- // threads in workgroup
- int t = threadIdx.x + threadIdx.y * blockDim.x; // thread index in workgroup, linear in 0..nt-1
- int nt = blockDim.x * blockDim.y; // total threads in workgroup
-
- // group index in 0..ngroups-1
- int g = blockIdx.x + blockIdx.y * gridDim.x;
-
- // initialize this block's partial histogram in global memory
- unsigned int *gmem = out + g * NUM_PARTS;
- for (int i = t; i < ACTIVE_CHANNELS * NUM_BINS; i += nt)
- gmem[i] = 0;
- __syncthreads();
-
- // process pixels (updates our group's partial histogram in gmem)
- for (int col = x; col < width; col += nx)
- {
- for (int row = y; row < height; row += ny)
- {
- PixelType pixel = in[row * width + col];
-
- unsigned int bins[ACTIVE_CHANNELS];
- DecodePixel<NUM_BINS>(pixel, bins);
-
- #pragma unroll
- for (int CHANNEL = 0; CHANNEL < ACTIVE_CHANNELS; ++CHANNEL)
- atomicAdd(&gmem[(NUM_BINS * CHANNEL) + bins[CHANNEL]], 1);
- }
- }
- }
-
- // Second pass histogram kernel (accumulation)
- template <
- int NUM_PARTS,
- int ACTIVE_CHANNELS,
- int NUM_BINS>
- __global__ void histogram_gmem_accum(
- const unsigned int *in,
- int n,
- unsigned int *out)
- {
- int i = blockIdx.x * blockDim.x + threadIdx.x;
- if (i >= ACTIVE_CHANNELS * NUM_BINS)
- return; // out of range
-
- unsigned int total = 0;
- for (int j = 0; j < n; j++)
- total += in[i + NUM_PARTS * j];
-
- out[i] = total;
- }
-
-
-} // namespace histogram_gmem_atomics
-
-
-template <
- int ACTIVE_CHANNELS,
- int NUM_BINS,
- typename PixelType>
-double run_gmem_atomics(
- PixelType *d_image,
- int width,
- int height,
- unsigned int *d_hist,
- bool warmup)
-{
- enum
- {
- NUM_PARTS = 1024
- };
-
- cudaDeviceProp props;
- cudaGetDeviceProperties(&props, 0);
-
- dim3 block(32, 4);
- dim3 grid(16, 16);
- int total_blocks = grid.x * grid.y;
-
- // allocate partial histogram
- unsigned int *d_part_hist;
- cudaMalloc(&d_part_hist, total_blocks * NUM_PARTS * sizeof(unsigned int));
-
- dim3 block2(128);
- dim3 grid2((ACTIVE_CHANNELS * NUM_BINS + block2.x - 1) / block2.x);
-
- GpuTimer gpu_timer;
- gpu_timer.Start();
-
- histogram_gmem_atomics::histogram_gmem_atomics<NUM_PARTS, ACTIVE_CHANNELS, NUM_BINS><<<grid, block>>>(
- d_image,
- width,
- height,
- d_part_hist);
-
- histogram_gmem_atomics::histogram_gmem_accum<NUM_PARTS, ACTIVE_CHANNELS, NUM_BINS><<<grid2, block2>>>(
- d_part_hist,
- total_blocks,
- d_hist);
-
- gpu_timer.Stop();
- float elapsed_millis = gpu_timer.ElapsedMillis();
-
- cudaFree(d_part_hist);
-
- return elapsed_millis;
-}
-
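The deleted header above implements a two-pass histogram: each thread block accumulates a privatized partial histogram in global memory with `atomicAdd`, and `histogram_gmem_accum` then sums bin `i` across all per-block partials. Below is a minimal NumPy sketch of the same two-pass pattern, assuming a single-channel 8-bit image and illustrative `NUM_BINS`/`NUM_PARTS` values; it is not the CUDA code itself.

```python
# Minimal NumPy sketch of the two-pass pattern used above: each "block"
# builds a privatized partial histogram, then a second pass sums them.
# NUM_BINS, NUM_PARTS and the synthetic image are illustrative assumptions.
import numpy as np

NUM_BINS = 256
NUM_PARTS = 1024          # stride between per-block partials, must be >= NUM_BINS
NUM_BLOCKS = 16 * 16      # mirrors the 16x16 grid launched above

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
flat = image.ravel()

# First pass: every "block" handles an interleaved slice of pixels and
# accumulates into its own region of the partial-histogram buffer.
partials = np.zeros(NUM_BLOCKS * NUM_PARTS, dtype=np.uint32)
for g in range(NUM_BLOCKS):
    counts = np.bincount(flat[g::NUM_BLOCKS], minlength=NUM_BINS)
    partials[g * NUM_PARTS : g * NUM_PARTS + NUM_BINS] = counts

# Second pass: bin i of the final histogram is the sum of bin i across
# all per-block partials (the role of histogram_gmem_accum).
hist = np.zeros(NUM_BINS, dtype=np.uint64)
for i in range(NUM_BINS):
    hist[i] = partials[i::NUM_PARTS].sum()

assert hist.sum() == flat.size  # every pixel lands in exactly one bin
```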
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/find.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/find.h
deleted file mode 100644
index 00e11e53c61d8916d51d044eba11f34092cf597c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/find.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy, typename InputIterator, typename T>
-__host__ __device__
-InputIterator find(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- const T& value);
-
-
-template<typename DerivedPolicy, typename InputIterator, typename Predicate>
-__host__ __device__
-InputIterator find_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- Predicate pred);
-
-
-template<typename DerivedPolicy, typename InputIterator, typename Predicate>
-__host__ __device__
-InputIterator find_if_not(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- Predicate pred);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/find.inl>
-
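The header above only forward-declares the generic `find`, `find_if`, and `find_if_not` entry points; their definitions live in `find.inl`. As a plain-Python illustration of the semantics these entry points expose (return the first matching position, or the end position when nothing matches):

```python
# Illustrative Python analogue of the find / find_if / find_if_not semantics
# declared above: return the index of the first match, or len(seq)
# (the "last" iterator) when nothing matches.
def find(seq, value):
    return find_if(seq, lambda x: x == value)

def find_if(seq, pred):
    for i, x in enumerate(seq):
        if pred(x):
            return i
    return len(seq)

def find_if_not(seq, pred):
    return find_if(seq, lambda x: not pred(x))

data = [3, 1, 4, 1, 5, 9]
assert find(data, 4) == 2
assert find_if(data, lambda x: x > 4) == 4
assert find_if_not(data, lambda x: x < 5) == 4
assert find(data, 7) == len(data)   # "not found" maps to the end position
```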
diff --git a/spaces/CYSD/AI-image-detector/app.py b/spaces/CYSD/AI-image-detector/app.py
deleted file mode 100644
index 10f4a7ef433ca6d5ac688e4b07bb8dd6548d163e..0000000000000000000000000000000000000000
--- a/spaces/CYSD/AI-image-detector/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-pipe = pipeline("image-classification", "umm-maybe/AI-image-detector")
-
-def image_classifier(image):
- outputs = pipe(image)
- results = {}
- for result in outputs:
- results[result['label']] = result['score']
- return results
-
-demo = gr.Interface(fn=image_classifier, inputs=gr.Image(type="pil"), outputs="label")
-demo.launch()
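The Space above wraps a `transformers` image-classification pipeline in a Gradio label output. A hedged sketch of calling the same pipeline from a plain script, assuming `transformers`, a backend such as `torch`, and Pillow are installed; `photo.jpg` is a placeholder path:

```python
# Sketch of using the same classifier outside Gradio; "photo.jpg" is a
# placeholder path and the model download requires network access.
from PIL import Image
from transformers import pipeline

pipe = pipeline("image-classification", "umm-maybe/AI-image-detector")

image = Image.open("photo.jpg")
for result in pipe(image):
    # Each entry is a dict with a human-readable label and a confidence score.
    print(f"{result['label']}: {result['score']:.3f}")
```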
diff --git a/spaces/CarperAI/pile-v2-eda/README.md b/spaces/CarperAI/pile-v2-eda/README.md
deleted file mode 100644
index 1fdd9a1d56e93868ae98e3025f67a63c7693011e..0000000000000000000000000000000000000000
--- a/spaces/CarperAI/pile-v2-eda/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Pile V2 EDA
-emoji: 🎄
-colorFrom: indigo
-colorTo: grey
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/app.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/app.py
deleted file mode 100644
index 58d9f7164ddfbb5019b072d789dc2fa6205dc9d3..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/app.py
+++ /dev/null
@@ -1,330 +0,0 @@
-""" Command and Control """
-import json
-from typing import Dict, List, NoReturn, Union
-
-from autogpt.agent.agent_manager import AgentManager
-from autogpt.commands.analyze_code import analyze_code
-from autogpt.commands.audio_text import read_audio_from_file
-from autogpt.commands.execute_code import (
- execute_python_file,
- execute_shell,
- execute_shell_popen,
-)
-from autogpt.commands.file_operations import (
- append_to_file,
- delete_file,
- download_file,
- read_file,
- search_files,
- write_to_file,
-)
-from autogpt.commands.git_operations import clone_repository
-from autogpt.commands.google_search import google_official_search, google_search
-from autogpt.commands.image_gen import generate_image
-from autogpt.commands.improve_code import improve_code
-from autogpt.commands.twitter import send_tweet
-from autogpt.commands.web_requests import scrape_links, scrape_text
-from autogpt.commands.web_selenium import browse_website
-from autogpt.commands.write_tests import write_tests
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_llm import fix_and_parse_json
-from autogpt.memory import get_memory
-from autogpt.processing.text import summarize_text
-from autogpt.speech import say_text
-
-CFG = Config()
-AGENT_MANAGER = AgentManager()
-
-
-def is_valid_int(value: str) -> bool:
- """Check if the value is a valid integer
-
- Args:
- value (str): The value to check
-
- Returns:
- bool: True if the value is a valid integer, False otherwise
- """
- try:
- int(value)
- return True
- except ValueError:
- return False
-
-
-def get_command(response_json: Dict):
- """Parse the response and return the command name and arguments
-
- Args:
- response_json (json): The response from the AI
-
- Returns:
- tuple: The command name and arguments
-
- Raises:
- json.decoder.JSONDecodeError: If the response is not valid JSON
-
- Exception: If any other error occurs
- """
- try:
- if "command" not in response_json:
- return "Error:", "Missing 'command' object in JSON"
-
- if not isinstance(response_json, dict):
- return "Error:", f"'response_json' object is not dictionary {response_json}"
-
- command = response_json["command"]
- if not isinstance(command, dict):
- return "Error:", "'command' object is not a dictionary"
-
- if "name" not in command:
- return "Error:", "Missing 'name' field in 'command' object"
-
- command_name = command["name"]
-
- # Use an empty dictionary if 'args' field is not present in 'command' object
- arguments = command.get("args", {})
-
- return command_name, arguments
- except json.decoder.JSONDecodeError:
- return "Error:", "Invalid JSON"
- # All other errors, return "Error: + error message"
- except Exception as e:
- return "Error:", str(e)
-
-
-def map_command_synonyms(command_name: str):
- """Takes the original command name given by the AI, and checks if the
- string matches a list of common/known hallucinations
- """
- synonyms = [
- ("write_file", "write_to_file"),
- ("create_file", "write_to_file"),
- ("search", "google"),
- ]
- for seen_command, actual_command_name in synonyms:
- if command_name == seen_command:
- return actual_command_name
- return command_name
-
-
-def execute_command(command_name: str, arguments):
- """Execute the command and return the result
-
- Args:
- command_name (str): The name of the command to execute
- arguments (dict): The arguments for the command
-
- Returns:
- str: The result of the command
- """
- try:
- command_name = map_command_synonyms(command_name.lower())
- if command_name == "google":
- # Check if the Google API key is set and use the official search method
- # If the API key is not set or has only whitespaces, use the unofficial
- # search method
- key = CFG.google_api_key
- if key and key.strip() and key != "your-google-api-key":
- google_result = google_official_search(arguments["input"])
- return google_result
- else:
- google_result = google_search(arguments["input"])
-
- # google_result can be a list or a string depending on the search results
- if isinstance(google_result, list):
- safe_message = "\n".join(
- google_result_single.encode("utf-8", "ignore").decode("utf-8")
- for google_result_single in google_result
- )
- else:
- safe_message = google_result.encode("utf-8", "ignore").decode("utf-8")
-
- return safe_message
- elif command_name == "memory_add":
- memory = get_memory(CFG)
- return memory.add(arguments["string"])
- elif command_name == "start_agent":
- return start_agent(
- arguments["name"], arguments["task"], arguments["prompt"]
- )
- elif command_name == "message_agent":
- return message_agent(arguments["key"], arguments["message"])
- elif command_name == "list_agents":
- return list_agents()
- elif command_name == "delete_agent":
- return delete_agent(arguments["key"])
- elif command_name == "get_text_summary":
- return get_text_summary(arguments["url"], arguments["question"])
- elif command_name == "get_hyperlinks":
- return get_hyperlinks(arguments["url"])
- elif command_name == "clone_repository":
- return clone_repository(
- arguments["repository_url"], arguments["clone_path"]
- )
- elif command_name == "read_file":
- return read_file(arguments["file"])
- elif command_name == "write_to_file":
- return write_to_file(arguments["file"], arguments["text"])
- elif command_name == "append_to_file":
- return append_to_file(arguments["file"], arguments["text"])
- elif command_name == "delete_file":
- return delete_file(arguments["file"])
- elif command_name == "search_files":
- return search_files(arguments["directory"])
- elif command_name == "download_file":
- if not CFG.allow_downloads:
- return "Error: You do not have user authorization to download files locally."
- return download_file(arguments["url"], arguments["file"])
- elif command_name == "browse_website":
- return browse_website(arguments["url"], arguments["question"])
- # TODO: Change these to take in a file rather than pasted code, if
- # non-file is given, return instructions "Input should be a python
- # filepath, write your code to file and try again"
- elif command_name == "analyze_code":
- return analyze_code(arguments["code"])
- elif command_name == "improve_code":
- return improve_code(arguments["suggestions"], arguments["code"])
- elif command_name == "write_tests":
- return write_tests(arguments["code"], arguments.get("focus"))
- elif command_name == "execute_python_file": # Add this command
- return execute_python_file(arguments["file"])
- elif command_name == "execute_shell":
- if CFG.execute_local_commands:
- return execute_shell(arguments["command_line"])
- else:
- return (
- "You are not allowed to run local shell commands. To execute"
- " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
- "in your config. Do not attempt to bypass the restriction."
- )
- elif command_name == "execute_shell_popen":
- if CFG.execute_local_commands:
- return execute_shell_popen(arguments["command_line"])
- else:
- return (
- "You are not allowed to run local shell commands. To execute"
- " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
- "in your config. Do not attempt to bypass the restriction."
- )
- elif command_name == "read_audio_from_file":
- return read_audio_from_file(arguments["file"])
- elif command_name == "generate_image":
- return generate_image(arguments["prompt"])
- elif command_name == "send_tweet":
- return send_tweet(arguments["text"])
- elif command_name == "do_nothing":
- return "No action performed."
- elif command_name == "task_complete":
- shutdown()
- else:
- return (
- f"Unknown command '{command_name}'. Please refer to the 'COMMANDS'"
- " list for available commands and only respond in the specified JSON"
- " format."
- )
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def get_text_summary(url: str, question: str) -> str:
- """Return the results of a Google search
-
- Args:
- url (str): The url to scrape
- question (str): The question to summarize the text for
-
- Returns:
- str: The summary of the text
- """
- text = scrape_text(url)
- summary = summarize_text(url, text, question)
- return f""" "Result" : {summary}"""
-
-
-def get_hyperlinks(url: str) -> Union[str, List[str]]:
- """Return the results of a Google search
-
- Args:
- url (str): The url to scrape
-
- Returns:
- str or list: The hyperlinks on the page
- """
- return scrape_links(url)
-
-
-def shutdown() -> NoReturn:
- """Shut down the program"""
- print("Shutting down...")
- quit()
-
-
-def start_agent(name: str, task: str, prompt: str, model=CFG.fast_llm_model) -> str:
- """Start an agent with a given name, task, and prompt
-
- Args:
- name (str): The name of the agent
- task (str): The task of the agent
- prompt (str): The prompt for the agent
- model (str): The model to use for the agent
-
- Returns:
- str: The response of the agent
- """
- # Remove underscores from name
- voice_name = name.replace("_", " ")
-
- first_message = f"""You are {name}. Respond with: "Acknowledged"."""
- agent_intro = f"{voice_name} here, Reporting for duty!"
-
- # Create agent
- if CFG.speak_mode:
- say_text(agent_intro, 1)
- key, ack = AGENT_MANAGER.create_agent(task, first_message, model)
-
- if CFG.speak_mode:
- say_text(f"Hello {voice_name}. Your task is as follows. {task}.")
-
- # Assign task (prompt), get response
- agent_response = AGENT_MANAGER.message_agent(key, prompt)
-
- return f"Agent {name} created with key {key}. First response: {agent_response}"
-
-
-def message_agent(key: str, message: str) -> str:
- """Message an agent with a given key and message"""
- # Check if the key is a valid integer
- if is_valid_int(key):
- agent_response = AGENT_MANAGER.message_agent(int(key), message)
- else:
- return "Invalid key, must be an integer."
-
- # Speak response
- if CFG.speak_mode:
- say_text(agent_response, 1)
- return agent_response
-
-
-def list_agents():
- """List all agents
-
- Returns:
- str: A list of all agents
- """
- return "List of agents:\n" + "\n".join(
- [str(x[0]) + ": " + x[1] for x in AGENT_MANAGER.list_agents()]
- )
-
-
-def delete_agent(key: str) -> str:
- """Delete an agent with a given key
-
- Args:
- key (str): The key of the agent to delete
-
- Returns:
- str: A message indicating whether the agent was deleted or not
- """
- result = AGENT_MANAGER.delete_agent(key)
- return f"Agent {key} deleted." if result else f"Agent {key} does not exist."
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/workspace.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/workspace.py
deleted file mode 100644
index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/workspace.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from __future__ import annotations
-
-import os
-from pathlib import Path
-
-from autogpt.config import Config
-
-CFG = Config()
-
-# Set a dedicated folder for file I/O
-WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
-
-# Create the directory if it doesn't exist
-if not os.path.exists(WORKSPACE_PATH):
- os.makedirs(WORKSPACE_PATH)
-
-
-def path_in_workspace(relative_path: str | Path) -> Path:
- """Get full path for item in workspace
-
- Parameters:
- relative_path (str | Path): Path to translate into the workspace
-
- Returns:
- Path: Absolute path for the given path in the workspace
- """
- return safe_path_join(WORKSPACE_PATH, relative_path)
-
-
-def safe_path_join(base: Path, *paths: str | Path) -> Path:
- """Join one or more path components, asserting the resulting path is within the workspace.
-
- Args:
- base (Path): The base path
- *paths (str): The paths to join to the base path
-
- Returns:
- Path: The joined path
- """
- joined_path = base.joinpath(*paths).resolve()
-
- if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
- raise ValueError(
- f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
- )
-
- return joined_path
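`safe_path_join` above resolves the joined path and rejects anything that escapes the workspace root when the restriction flag is on. A standalone pathlib sketch of that containment check follows; the directory name and flag are illustrative, and `Path.is_relative_to` needs Python 3.9+:

```python
# Standalone sketch of the containment check performed by safe_path_join;
# the workspace directory name and the restrict flag are illustrative.
from pathlib import Path

WORKSPACE = Path.cwd() / "auto_gpt_workspace"
RESTRICT_TO_WORKSPACE = True

def safe_join(base: Path, *parts: str) -> Path:
    base = base.resolve()
    joined = base.joinpath(*parts).resolve()
    if RESTRICT_TO_WORKSPACE and not joined.is_relative_to(base):
        raise ValueError(f"'{joined}' escapes workspace '{base}'")
    return joined

print(safe_join(WORKSPACE, "notes", "todo.txt"))       # inside -> allowed
try:
    safe_join(WORKSPACE, "..", "..", "etc", "passwd")  # traversal -> rejected
except ValueError as err:
    print("blocked:", err)
```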
diff --git a/spaces/Chris4K/llms_compare/Aloo Chaat Hd Movie Download 1080p __TOP__.md b/spaces/Chris4K/llms_compare/Aloo Chaat Hd Movie Download 1080p __TOP__.md
deleted file mode 100644
index 4a750fc39891032759870e66a61c649654a5964a..0000000000000000000000000000000000000000
--- a/spaces/Chris4K/llms_compare/Aloo Chaat Hd Movie Download 1080p __TOP__.md
+++ /dev/null
@@ -1,56 +0,0 @@
-## Aloo Chaat hd movie download 1080p
-
-
-
-
-
-
-
-
-
-**DOWNLOAD === [https://www.google.com/url?q=https%3A%2F%2Furlgoal.com%2F2txP38&sa=D&sntz=1&usg=AOvVaw1R1ga3x5jvhXx0u0qjRBzQ](https://www.google.com/url?q=https%3A%2F%2Furlgoal.com%2F2txP38&sa=D&sntz=1&usg=AOvVaw1R1ga3x5jvhXx0u0qjRBzQ)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Aloo Chaat: A Delicious Comedy of Love and Culture
-
-
-
-Aloo Chaat is a 2009 Hindi romantic comedy film that revolves around the love story of Nikhil, a Hindu boy who falls in love with Aamna, a Muslim girl. Nikhil returns to his traditional family in India after completing his education in the US and faces the challenge of convincing them to accept his interfaith relationship. He enlists the help of his uncle Hakeem, a sexologist, and Nikki, an American girl, to create a fake marriage drama that would make Aamna look like a better choice for him.
-
-
-
-The film is directed by Robbie Grewal and stars Aftab Shivdasani, Aamna Sharif, Linda Arsenio, Kulbhushan Kharbanda, Sanjai Mishra, and Manoj Pahwa. The film is full of hilarious situations, witty dialogues, and catchy songs that will make you laugh and enjoy the cultural differences and similarities between the characters. The film also explores the themes of family values, social norms, and personal choices in a light-hearted manner.
-
-
-
-If you are looking for a fun and entertaining movie to watch with your family or friends, you can download Aloo Chaat in high definition quality from various online platforms. The film has received mixed reviews from critics but has been appreciated by the audience for its humor and charm. Aloo Chaat is a film that will make you crave for some spicy and tangy street food as well as some sweet and romantic moments.
-
-
-
-The film has a simple plot but is executed with flair and creativity. The film uses the metaphor of aloo chaat, a spicy and tangy dish made of potatoes and various chutneys, to represent the mix of cultures and emotions that the characters go through. The film also has some catchy songs composed by RDB, Xulfi, Vipin Mishra and Mehfuz Maruf that add to the fun and flavor of the film. The film has some memorable scenes such as the one where Nikhil introduces Nikki to his family as his fiancee, the one where Aamna teaches Nikki how to cook Punjabi food, and the one where Nikhil and Aamna confess their love to each other.
-
-
-
-The film also has some brilliant performances by the actors, especially Sanjai Mishra as Chhadami Mama, Nikhil's suspicious uncle who is always on the lookout for clues to expose Nikhil's plan. He delivers some hilarious dialogues and expressions that will make you laugh out loud. Manoj Pahwa as Hakeem Tarachand, Nikhil's uncle and confidant who helps him in his scheme, is also very funny and convincing. Kulbhushan Kharbanda as Purshottam, Nikhil's father who is a staunch believer in Hindu traditions and values, is also very impressive and shows his versatility as an actor. Aftab Shivdasani and Aamna Sharif have a good chemistry and look good together as the lead pair. Linda Arsenio as Nikki, the American girl who pretends to be Nikhil's fiancee, is also very charming and does a good job of playing a spoiled but sweet girl.
-
-
-
-Aloo Chaat is a film that will appeal to anyone who likes comedy, romance, and culture. It is a film that will make you laugh, smile, and feel good. It is a film that will make you appreciate the diversity and richness of Indian culture and society. It is a film that will make you want to try some aloo chaat yourself.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/infer_tools/f0_static.py b/spaces/ChrisPreston/diff-svc_minato_aqua/infer_tools/f0_static.py
deleted file mode 100644
index f57d34819e4042244d5338393f43134f4e27aa22..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/infer_tools/f0_static.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import json
-import os
-import shutil
-from functools import reduce
-from pathlib import Path
-
-import matplotlib
-import matplotlib.pyplot as plt
-import yaml
-from pylab import xticks, np
-from tqdm import tqdm
-
-from modules.vocoders.nsf_hifigan import NsfHifiGAN
-from preprocessing.process_pipeline import get_pitch_parselmouth, get_pitch_crepe
-from utils.hparams import set_hparams, hparams
-
-head_list = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
-
-
-def compare_pitch(f0_static_dict, pitch_time_temp, trans_key=0):
- return sum({k: v * f0_static_dict[str(k + trans_key)] for k, v in pitch_time_temp.items() if
- str(k + trans_key) in f0_static_dict}.values())
-
-
-def f0_to_pitch(ff):
- f0_pitch = 69 + 12 * np.log2(ff / 440)
- return round(f0_pitch, 0)
-
-
-def pitch_to_name(pitch):
- return f"{head_list[int(pitch % 12)]}{int(pitch / 12) - 1}"
-
-
-def get_f0(audio_path, crepe=False):
- wav, mel = NsfHifiGAN.wav2spec(audio_path)
- if crepe:
- f0, pitch_coarse = get_pitch_crepe(wav, mel, hparams)
- else:
- f0, pitch_coarse = get_pitch_parselmouth(wav, mel, hparams)
- return f0
-
-
-def merge_f0_dict(dict_list):
- def sum_dict(a, b):
- temp = dict()
- for key in a.keys() | b.keys():
- temp[key] = sum([d.get(key, 0) for d in (a, b)])
- return temp
-
- return reduce(sum_dict, dict_list)
-
-
-def collect_f0(f0):
- pitch_num = {}
- pitch_list = [f0_to_pitch(x) for x in f0[f0 > 0]]
- for key in pitch_list:
- pitch_num[key] = pitch_num.get(key, 0) + 1
- return pitch_num
-
-
-def static_f0_time(f0):
- if isinstance(f0, dict):
- pitch_num = merge_f0_dict({k: collect_f0(v) for k, v in f0.items()}.values())
- else:
- pitch_num = collect_f0(f0)
- static_pitch_time = {}
- sort_key = sorted(pitch_num.keys())
- for key in sort_key:
- static_pitch_time[key] = round(pitch_num[key] * hparams['hop_size'] / hparams['audio_sample_rate'], 2)
- return static_pitch_time
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-if __name__ == "__main__":
- # Add an f0_static vocal-range statistic to the config file
- config_path = "F:/sovits/diff-svc-main/checkpoints/aquapre/config.yaml"
- hparams = set_hparams(config=config_path, exp_name='', infer=True, reset=True, hparams_str='', print_hparams=False)
- f0_dict = {}
- # Collect all wav files under the batch folder
- wav_paths = get_end_file("F:/sovits/diff-svc-main/batch/aquapre", "wav")
- # Extract f0 with parselmouth
- with tqdm(total=len(wav_paths)) as p_bar:
- p_bar.set_description('Processing')
- for wav_path in wav_paths:
- f0_dict[wav_path] = get_f0(wav_path, crepe=False)
- p_bar.update(1)
- pitch_time = static_f0_time(f0_dict)
- total_time = round(sum(pitch_time.values()), 2)
- pitch_time["total_time"] = total_time
- print(f"total time: {total_time}s")
- shutil.copy(config_path, f"{Path(config_path).parent}\\back_{Path(config_path).name}")
- with open(config_path, encoding='utf-8') as f:
- _hparams = yaml.safe_load(f)
- _hparams['f0_static'] = json.dumps(pitch_time)
- with open(config_path, 'w', encoding='utf-8') as f:
- yaml.safe_dump(_hparams, f)
- print("原config文件已在原目录建立备份:back_config.yaml")
- print("音域统计已保存至config文件,此模型可使用自动变调功能")
- matplotlib.use('TkAgg')
- plt.title("数据集音域统计", fontproperties='SimHei')
- plt.xlabel("音高", fontproperties='SimHei')
- plt.ylabel("时长(s)", fontproperties='SimHei')
- xticks_labels = [pitch_to_name(i) for i in range(36, 96)]
- xticks(np.linspace(36, 96, 60, endpoint=True), xticks_labels)
- plt.plot(pitch_time.keys(), pitch_time.values(), color='dodgerblue')
- plt.show()
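`f0_to_pitch` above converts a fundamental frequency to a MIDI-style note number (69 corresponds to A4 at 440 Hz, one unit per semitone) and `pitch_to_name` maps that number to a note name. A standalone sketch of the arithmetic, without the project's dependencies:

```python
# Sketch of the frequency -> MIDI-note -> name mapping used above.
import math

HEAD_LIST = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def f0_to_pitch(f0_hz: float) -> int:
    # 69 is A4 (440 Hz); each semitone multiplies frequency by 2**(1/12).
    return round(69 + 12 * math.log2(f0_hz / 440.0))

def pitch_to_name(pitch: int) -> str:
    return f"{HEAD_LIST[pitch % 12]}{pitch // 12 - 1}"

print(pitch_to_name(f0_to_pitch(440.0)))   # A4
print(pitch_to_name(f0_to_pitch(261.63)))  # C4 (middle C)
```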
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/index.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/index.js
deleted file mode 100644
index 0218fd0a8e10c1eb49303ed7b5c731c00b4d34ce..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/index.js
+++ /dev/null
@@ -1,103 +0,0 @@
-import fs from 'node:fs'
-import { initWebSocket, Config, Version } from './components/index.js'
-import { TMP_DIR, mimeTypes } from './model/index.js'
-import { join, extname } from 'path'
-const files = fs.readdirSync('./plugins/ws-plugin/apps').filter(file => file.endsWith('.js'))
-
-let ret = []
-
-logger.info('-----------------')
-logger.info(`ws-plugin ${Version.version} plugin initializing~`)
-
-
-files.forEach((file) => {
- ret.push(import(`./apps/${file}`))
-})
-
-ret = await Promise.allSettled(ret)
-
-let apps = {}
-for (let i in files) {
- let name = files[i].replace('.js', '')
-
- if (ret[i].status != 'fulfilled') {
- logger.error(`Failed to load plugin: ${logger.red(name)}`)
- logger.error(ret[i].reason)
- continue
- }
- apps[name] = ret[i].value[Object.keys(ret[i].value)[0]]
-}
-let path = ['./apps/message/message.js', './apps/notice/notice.js', './apps/request/request.js']
-for (const item of path) {
- try {
- await import(`${item}`)
- } catch (e) {
- logger.error(`Failed to load event handler: ${item}`)
- logger.error(e)
- }
-}
-
-initWebSocket()
-if (Version.isTrss) {
- Bot.express.get('/ws-plugin*', async (req, res) => {
- const file = req.query.file
- if (file) {
- const ext = extname(file)
- const contentType = mimeTypes[ext]
- fs.readFile(join(TMP_DIR, file), (err, content) => {
- if (err) {
- res.writeHead(404)
- res.end('File not found')
- } else {
- const name = file.split('-')
- const filename = encodeURIComponent(name[1]) || encodeURIComponent(name[0]) || encodeURIComponent(file)
- res.writeHead(200, {
- 'Content-Type': contentType,
- 'Content-Disposition': `attachment; filename=${filename}`
- })
- res.end(content)
- }
- })
- return
- }
- res.writeHead(404);
- res.end('Page not found')
- })
-} else {
- const getGroupMemberInfo = Bot.getGroupMemberInfo
- /** Hijack and override the getGroupMemberInfo method */
- Bot.getGroupMemberInfo = async function (group_id, user_id) {
- let result
- try {
- result = await getGroupMemberInfo(group_id, user_id)
- } catch (error) {
- let nickname
- if (error.stack.includes('ws-plugin')) {
- nickname = 'chronocat'
- } else {
- nickname = String(group_id).includes("qg_") ? "QQGuild-Bot" : "WeChat-Bot"
- }
- result = {
- group_id,
- user_id,
- nickname,
- card: "",
- sex: "female",
- age: 6,
- join_time: "",
- last_sent_time: "",
- level: 1,
- role: "member",
- title: "",
- title_expire_time: "",
- shutup_time: 0,
- update_time: "",
- area: "南极洲",
- rank: "潜水",
- }
- }
- return result
- }
-}
-
-export { apps }
diff --git a/spaces/CodingBillionaire/bark-voice-cloning/README.md b/spaces/CodingBillionaire/bark-voice-cloning/README.md
deleted file mode 100644
index 0201ebf6de813acfb8bfd4997583bc5f5c0d036e..0000000000000000000000000000000000000000
--- a/spaces/CodingBillionaire/bark-voice-cloning/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Bark Voice Cloning
-emoji: 🐶
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-python_version: 3.10.11
-app_file: app.py
-models:
-- facebook/hubert-base-ls960
-- GitMylo/bark-voice-cloning
-pinned: false
-license: mit
-duplicated_from: GitMylo/bark-voice-cloning
----
diff --git a/spaces/CofAI/chat.b4/client/js/highlight.min.js b/spaces/CofAI/chat.b4/client/js/highlight.min.js
deleted file mode 100644
index d410b45b38119606525a0a7c0c60c428c5ee6eb7..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/client/js/highlight.min.js
+++ /dev/null
@@ -1 +0,0 @@
-var hljs=function(){"use strict";var e={exports:{}};function n(e){return e instanceof Map?e.clear=e.delete=e.set=()=>{throw Error("map is read-only")}:e instanceof Set&&(e.add=e.clear=e.delete=()=>{throw Error("set is read-only")}),Object.freeze(e),Object.getOwnPropertyNames(e).forEach(t=>{var a=e[t];"object"!=typeof a||Object.isFrozen(a)||n(a)}),e}e.exports=n,e.exports.default=n;class t{constructor(e){void 0===e.data&&(e.data={}),this.data=e.data,this.isMatchIgnored=!1}ignoreMatch(){this.isMatchIgnored=!0}}function a(e){return e.replace(/&/g,"&").replace(//g,">").replace(/"/g,""").replace(/'/g,"'")}function i(e,...n){let t=Object.create(null);for(let a in e)t[a]=e[a];return n.forEach(e=>{for(let n in e)t[n]=e[n]}),t}let r=e=>!!e.scope||e.sublanguage&&e.language;class s{constructor(e,n){this.buffer="",this.classPrefix=n.classPrefix,e.walk(this)}addText(e){this.buffer+=a(e)}openNode(e){if(!r(e))return;let n="";n=e.sublanguage?"language-"+e.language:((e,{prefix:n})=>{if(e.includes(".")){let t=e.split(".");return[`${n}${t.shift()}`,...t.map((e,n)=>`${e}${"_".repeat(n+1)}`),].join(" ")}return`${n}${e}`})(e.scope,{prefix:this.classPrefix}),this.span(n)}closeNode(e){r(e)&&(this.buffer+="")}value(){return this.buffer}span(e){this.buffer+=``}}let l=(e={})=>{let n={children:[]};return Object.assign(n,e),n};class o{constructor(){this.rootNode=l(),this.stack=[this.rootNode]}get top(){return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){this.top.children.push(e)}openNode(e){let n=l({scope:e});this.add(n),this.stack.push(n)}closeNode(){if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)}walk(e){return this.constructor._walk(e,this.rootNode)}static _walk(e,n){return"string"==typeof n?e.addText(n):n.children&&(e.openNode(n),n.children.forEach(n=>this._walk(e,n)),e.closeNode(n)),e}static _collapse(e){"string"!=typeof e&&e.children&&(e.children.every(e=>"string"==typeof e)?e.children=[e.children.join("")]:e.children.forEach(e=>{o._collapse(e)}))}}class c extends o{constructor(e){super(),this.options=e}addKeyword(e,n){""!==e&&(this.openNode(n),this.addText(e),this.closeNode())}addText(e){""!==e&&this.add(e)}addSublanguage(e,n){let t=e.root;t.sublanguage=!0,t.language=n,this.add(t)}toHTML(){return new s(this,this.options).value()}finalize(){return!0}}function d(e){return e?"string"==typeof e?e:e.source:null}function g(e){return m("(?=",e,")")}function u(e){return m("(?:",e,")*")}function b(e){return m("(?:",e,")?")}function m(...e){return e.map(e=>d(e)).join("")}function p(...e){let n=(e=>{let n=e[e.length-1];return"object"==typeof n&&n.constructor===Object?(e.splice(e.length-1,1),n):{}})(e);return"("+(n.capture?"":"?:")+e.map(e=>d(e)).join("|")+")"}function h(e){return RegExp(e.toString()+"|").exec("").length-1}let f=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function E(e,{joinWith:n}){let t=0;return e.map(e=>{t+=1;let n=t,a=d(e),i="";for(;a.length>0;){let r=f.exec(a);if(!r){i+=a;break}i+=a.substring(0,r.index),a=a.substring(r.index+r[0].length),"\\"===r[0][0]&&r[1]?i+="\\"+(Number(r[1])+n):(i+=r[0],"("===r[0]&&t++)}return i}).map(e=>`(${e})`).join(n)}let $="[a-zA-Z]\\w*",y="[a-zA-Z_]\\w*",N="\\b\\d+(\\.\\d+)?",w="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",v="\\b(0b[01]+)",x={begin:"\\\\[\\s\\S]",relevance:0},k=(e,n,t={})=>{let a=i({scope:"comment",begin:e,end:n,contains:[]},t);a.contains.push({scope:"doctag",begin:"[ 
]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)",end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0});let r=p("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return a.contains.push({begin:m(/[ ]+/,"(",r,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),a},M=k("//","$"),O=k("/\\*","\\*/"),S=k("#","$");var A=Object.freeze({__proto__:null,MATCH_NOTHING_RE:/\b\B/,IDENT_RE:$,UNDERSCORE_IDENT_RE:y,NUMBER_RE:N,C_NUMBER_RE:w,BINARY_NUMBER_RE:v,RE_STARTERS_RE:"!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",SHEBANG(e={}){let n=/^#![ ]*\//;return e.binary&&(e.begin=m(n,/.*\b/,e.binary,/\b.*/)),i({scope:"meta",begin:n,end:/$/,relevance:0,"on:begin"(e,n){0!==e.index&&n.ignoreMatch()}},e)},BACKSLASH_ESCAPE:x,APOS_STRING_MODE:{scope:"string",begin:"'",end:"'",illegal:"\\n",contains:[x]},QUOTE_STRING_MODE:{scope:"string",begin:'"',end:'"',illegal:"\\n",contains:[x]},PHRASAL_WORDS_MODE:{begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},COMMENT:k,C_LINE_COMMENT_MODE:M,C_BLOCK_COMMENT_MODE:O,HASH_COMMENT_MODE:S,NUMBER_MODE:{scope:"number",begin:N,relevance:0},C_NUMBER_MODE:{scope:"number",begin:w,relevance:0},BINARY_NUMBER_MODE:{scope:"number",begin:v,relevance:0},REGEXP_MODE:{begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//,end:/\/[gimuy]*/,illegal:/\n/,contains:[x,{begin:/\[/,end:/\]/,relevance:0,contains:[x]},]},]},TITLE_MODE:{scope:"title",begin:$,relevance:0},UNDERSCORE_TITLE_MODE:{scope:"title",begin:y,relevance:0},METHOD_GUARD:{begin:"\\.\\s*[a-zA-Z_]\\w*",relevance:0},END_SAME_AS_BEGIN:e=>Object.assign(e,{"on:begin"(e,n){n.data._beginMatch=e[1]},"on:end"(e,n){n.data._beginMatch!==e[1]&&n.ignoreMatch()}})});function C(e,n){"."===e.input[e.index-1]&&n.ignoreMatch()}function T(e,n){void 0!==e.className&&(e.scope=e.className,delete e.className)}function R(e,n){n&&e.beginKeywords&&(e.begin="\\b("+e.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)",e.__beforeBegin=C,e.keywords=e.keywords||e.beginKeywords,delete e.beginKeywords,void 0===e.relevance&&(e.relevance=0))}function D(e,n){Array.isArray(e.illegal)&&(e.illegal=p(...e.illegal))}function I(e,n){if(e.match){if(e.begin||e.end)throw Error("begin & end are not supported with match");e.begin=e.match,delete e.match}}function L(e,n){void 0===e.relevance&&(e.relevance=1)}let B=(e,n)=>{if(!e.beforeMatch)return;if(e.starts)throw Error("beforeMatch cannot be used with starts");let t=Object.assign({},e);Object.keys(e).forEach(n=>{delete e[n]}),e.keywords=t.keywords,e.begin=m(t.beforeMatch,g(t.begin)),e.starts={relevance:0,contains:[Object.assign(t,{endsParent:!0})]},e.relevance=0,delete t.beforeMatch},_=["of","and","for","in","not","or","if","then","parent","list","value",],z={},F=e=>{console.error(e)},U=(e,...n)=>{},P=(e,n)=>{z[`${e}/${n}`]||(console.log(`Deprecated as of ${e}. 
${n}`),z[`${e}/${n}`]=!0)},j=Error();function K(e,n,{key:t}){let a=0,i=e[t],r={},s={};for(let l=1;l<=n.length;l++)s[l+a]=i[l],r[l+a]=!0,a+=h(n[l-1]);e[t]=s,e[t]._emit=r,e[t]._multi=!0}function q(e){var n;(n=e).scope&&"object"==typeof n.scope&&null!==n.scope&&(n.beginScope=n.scope,delete n.scope),"string"==typeof e.beginScope&&(e.beginScope={_wrap:e.beginScope}),"string"==typeof e.endScope&&(e.endScope={_wrap:e.endScope}),(e=>{if(Array.isArray(e.begin)){if(e.skip||e.excludeBegin||e.returnBegin)throw F("skip, excludeBegin, returnBegin not compatible with beginScope: {}"),j;if("object"!=typeof e.beginScope||null===e.beginScope)throw F("beginScope must be object"),j;K(e,e.begin,{key:"beginScope"}),e.begin=E(e.begin,{joinWith:""})}})(e),(e=>{if(Array.isArray(e.end)){if(e.skip||e.excludeEnd||e.returnEnd)throw F("skip, excludeEnd, returnEnd not compatible with endScope: {}"),j;if("object"!=typeof e.endScope||null===e.endScope)throw F("endScope must be object"),j;K(e,e.end,{key:"endScope"}),e.end=E(e.end,{joinWith:""})}})(e)}class H extends Error{constructor(e,n){super(e),this.name="HTMLInjectionError",this.html=n}}let Z=a,G=i,W=Symbol("nomatch");var Q=(n=>{let a=Object.create(null),r=Object.create(null),s=[],l=!0,o="Could not find the language '{}', did you forget to load/include a language module?",f={disableAutodetect:!0,name:"Plain text",contains:[]},$={ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i,languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-",cssSelector:"pre code",languages:null,__emitter:c};function y(e){return $.noHighlightRe.test(e)}function N(e,n,t){let a="",i="";"object"==typeof n?(a=e,t=n.ignoreIllegals,i=n.language):(P("10.7.0","highlight(lang, code, ...args) has been deprecated."),P("10.7.0","Please use highlight(code, options) instead.\nhttps://github.com/highlightjs/highlight.js/issues/2277"),i=e,a=n),void 0===t&&(t=!0);let r={code:a,language:i};z("before:highlight",r);let s=r.result?r.result:w(r.language,r.code,t);return s.code=r.code,z("after:highlight",s),s}function w(e,n,r,s){let c=Object.create(null);function g(){var e;if(!M.keywords)return void A.addText(C);let n=0;M.keywordPatternRe.lastIndex=0;let t=M.keywordPatternRe.exec(C),a="";for(;t;){a+=C.substring(n,t.index);let i=N.case_insensitive?t[0].toLowerCase():t[0],r=(e=i,M.keywords[e]);if(r){let[s,l]=r;if(A.addText(a),a="",c[i]=(c[i]||0)+1,c[i]<=7&&(z+=l),s.startsWith("_"))a+=t[0];else{let o=N.classNameAliases[s]||s;A.addKeyword(t[0],o)}}else a+=t[0];n=M.keywordPatternRe.lastIndex,t=M.keywordPatternRe.exec(C)}a+=C.substring(n),A.addText(a)}function u(){null!=M.subLanguage?(()=>{if(""===C)return;let e=null;if("string"==typeof M.subLanguage){if(!a[M.subLanguage])return void A.addText(C);e=w(M.subLanguage,C,!0,S[M.subLanguage]),S[M.subLanguage]=e._top}else e=v(C,M.subLanguage.length?M.subLanguage:null);M.relevance>0&&(z+=e.relevance),A.addSublanguage(e._emitter,e.language)})():g(),C=""}function b(e,n){let t=1,a=n.length-1;for(;t<=a;){if(!e._emit[t]){t++;continue}let i=N.classNameAliases[e[t]]||e[t],r=n[t];i?A.addKeyword(r,i):(C=r,g(),C=""),t++}}function m(e,n){return e.scope&&"string"==typeof e.scope&&A.openNode(N.classNameAliases[e.scope]||e.scope),e.beginScope&&(e.beginScope._wrap?(A.addKeyword(C,N.classNameAliases[e.beginScope._wrap]||e.beginScope._wrap),C=""):e.beginScope._multi&&(b(e.beginScope,n),C="")),M=Object.create(e,{parent:{value:M}})}function p(e){return 0===M.matcher.regexIndex?(C+=e[0],1):(j=!0,0)}let f={};function y(a,i){let s=i&&i[0];if(C+=a,null==s)return 
u(),0;if("begin"===f.type&&"end"===i.type&&f.index===i.index&&""===s){if(C+=n.slice(i.index,i.index+1),!l){let o=Error(`0 width match regex (${e})`);throw o.languageName=e,o.badRule=f.rule,o}return 1}if(f=i,"begin"===i.type)return(e=>{let n=e[0],a=e.rule,i=new t(a),r=[a.__beforeBegin,a["on:begin"]];for(let s of r)if(s&&(s(e,i),i.isMatchIgnored))return p(n);return a.skip?C+=n:(a.excludeBegin&&(C+=n),u(),a.returnBegin||a.excludeBegin||(C=n)),m(a,e),a.returnBegin?0:n.length})(i);if("illegal"===i.type&&!r){let c=Error('Illegal lexeme "'+s+'" for mode "'+(M.scope||"")+'"');throw c.mode=M,c}if("end"===i.type){let d=function e(a){let i=a[0],r=n.substring(a.index),s=function e(n,a,i){let r=((e,n)=>{let t=e&&e.exec(n);return t&&0===t.index})(n.endRe,i);if(r){if(n["on:end"]){let s=new t(n);n["on:end"](a,s),s.isMatchIgnored&&(r=!1)}if(r){for(;n.endsParent&&n.parent;)n=n.parent;return n}}if(n.endsWithParent)return e(n.parent,a,i)}(M,a,r);if(!s)return W;let l=M;M.endScope&&M.endScope._wrap?(u(),A.addKeyword(i,M.endScope._wrap)):M.endScope&&M.endScope._multi?(u(),b(M.endScope,a)):l.skip?C+=i:(l.returnEnd||l.excludeEnd||(C+=i),u(),l.excludeEnd&&(C=i));do M.scope&&A.closeNode(),M.skip||M.subLanguage||(z+=M.relevance),M=M.parent;while(M!==s.parent);return s.starts&&m(s.starts,a),l.returnEnd?0:i.length}(i);if(d!==W)return d}if("illegal"===i.type&&""===s)return 1;if(P>1e5&&P>3*i.index)throw Error("potential infinite loop, way more iterations than matches");return C+=s,s.length}let N=O(e);if(!N)throw F(o.replace("{}",e)),Error('Unknown language: "'+e+'"');let x=function e(n){function t(e,t){return RegExp(d(e),"m"+(n.case_insensitive?"i":"")+(n.unicodeRegex?"u":"")+(t?"g":""))}class a{constructor(){this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0}addRule(e,n){n.position=this.position++,this.matchIndexes[this.matchAt]=n,this.regexes.push([n,e]),this.matchAt+=h(e)+1}compile(){0===this.regexes.length&&(this.exec=()=>null);let e=this.regexes.map(e=>e[1]);this.matcherRe=t(E(e,{joinWith:"|"}),!0),this.lastIndex=0}exec(e){this.matcherRe.lastIndex=this.lastIndex;let n=this.matcherRe.exec(e);if(!n)return null;let t=n.findIndex((e,n)=>n>0&&void 0!==e),a=this.matchIndexes[t];return n.splice(0,t),Object.assign(n,a)}}class r{constructor(){this.rules=[],this.multiRegexes=[],this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(e){if(this.multiRegexes[e])return this.multiRegexes[e];let n=new a;return this.rules.slice(e).forEach(([e,t])=>n.addRule(e,t)),n.compile(),this.multiRegexes[e]=n,n}resumingScanAtSamePosition(){return 0!==this.regexIndex}considerAll(){this.regexIndex=0}addRule(e,n){this.rules.push([e,n]),"begin"===n.type&&this.count++}exec(e){let n=this.getMatcher(this.regexIndex);n.lastIndex=this.lastIndex;let t=n.exec(e);if(this.resumingScanAtSamePosition()){if(t&&t.index===this.lastIndex);else{let a=this.getMatcher(0);a.lastIndex=this.lastIndex+1,t=a.exec(e)}}return t&&(this.regexIndex+=t.position+1,this.regexIndex===this.count&&this.considerAll()),t}}if(n.compilerExtensions||(n.compilerExtensions=[]),n.contains&&n.contains.includes("self"))throw Error("ERR: contains `self` is not supported at the top-level of a language. 
See documentation.");return n.classNameAliases=i(n.classNameAliases||{}),function e(a,s){let l=a;if(a.isCompiled)return l;[T,I,q,B].forEach(e=>e(a,s)),n.compilerExtensions.forEach(e=>e(a,s)),a.__beforeBegin=null,[R,D,L].forEach(e=>e(a,s)),a.isCompiled=!0;let o=null;return"object"==typeof a.keywords&&a.keywords.$pattern&&(a.keywords=Object.assign({},a.keywords),o=a.keywords.$pattern,delete a.keywords.$pattern),o=o||/\w+/,a.keywords&&(a.keywords=function e(n,t,a="keyword"){let i=Object.create(null);return"string"==typeof n?r(a,n.split(" ")):Array.isArray(n)?r(a,n):Object.keys(n).forEach(a=>{Object.assign(i,e(n[a],t,a))}),i;function r(e,n){t&&(n=n.map(e=>e.toLowerCase())),n.forEach(n=>{var t,a,r;let s=n.split("|");i[s[0]]=[e,(t=s[0],a=s[1],a?Number(a):(r=t,_.includes(r.toLowerCase()))?0:1)]})}}(a.keywords,n.case_insensitive)),l.keywordPatternRe=t(o,!0),s&&(a.begin||(a.begin=/\B|\b/),l.beginRe=t(l.begin),a.end||a.endsWithParent||(a.end=/\B|\b/),a.end&&(l.endRe=t(l.end)),l.terminatorEnd=d(l.end)||"",a.endsWithParent&&s.terminatorEnd&&(l.terminatorEnd+=(a.end?"|":"")+s.terminatorEnd)),a.illegal&&(l.illegalRe=t(a.illegal)),a.contains||(a.contains=[]),a.contains=[].concat(...a.contains.map(e=>{var n;return(n="self"===e?a:e).variants&&!n.cachedVariants&&(n.cachedVariants=n.variants.map(e=>i(n,{variants:null},e))),n.cachedVariants?n.cachedVariants:!function e(n){return!!n&&(n.endsWithParent||e(n.starts))}(n)?Object.isFrozen(n)?i(n):n:i(n,{starts:n.starts?i(n.starts):null})})),a.contains.forEach(n=>{e(n,l)}),a.starts&&e(a.starts,s),l.matcher=(e=>{let n=new r;return e.contains.forEach(e=>n.addRule(e.begin,{rule:e,type:"begin"})),e.terminatorEnd&&n.addRule(e.terminatorEnd,{type:"end"}),e.illegal&&n.addRule(e.illegal,{type:"illegal"}),n})(l),l}(n)}(N),k="",M=s||x,S={},A=new $.__emitter($);(()=>{let e=[];for(let n=M;n!==N;n=n.parent)n.scope&&e.unshift(n.scope);e.forEach(e=>A.openNode(e))})();let C="",z=0,U=0,P=0,j=!1;try{for(M.matcher.considerAll();;){P++,j?j=!1:M.matcher.considerAll(),M.matcher.lastIndex=U;let K=M.matcher.exec(n);if(!K)break;let H=y(n.substring(U,K.index),K);U=K.index+H}return y(n.substring(U)),A.closeAllNodes(),A.finalize(),k=A.toHTML(),{language:e,value:k,relevance:z,illegal:!1,_emitter:A,_top:M}}catch(G){if(G.message&&G.message.includes("Illegal"))return{language:e,value:Z(n),illegal:!0,relevance:0,_illegalBy:{message:G.message,index:U,context:n.slice(U-100,U+100),mode:G.mode,resultSoFar:k},_emitter:A};if(l)return{language:e,value:Z(n),illegal:!1,relevance:0,errorRaised:G,_emitter:A,_top:M};throw G}}function v(e,n){n=n||$.languages||Object.keys(a);let t=(e=>{let n={value:Z(e),illegal:!1,relevance:0,_top:f,_emitter:new $.__emitter($)};return n._emitter.addText(e),n})(e),i=n.filter(O).filter(C).map(n=>w(n,e,!1));i.unshift(t);let r=i.sort((e,n)=>{if(e.relevance!==n.relevance)return n.relevance-e.relevance;if(e.language&&n.language){if(O(e.language).supersetOf===n.language)return 1;if(O(n.language).supersetOf===e.language)return -1}return 0}),[s,l]=r,o=s;return o.secondBest=l,o}function x(e){let n=null,t=(e=>{let n=e.className+" ";n+=e.parentNode?e.parentNode.className:"";let t=$.languageDetectRe.exec(n);if(t){let a=O(t[1]);return a||(U(o.replace("{}",t[1])),U("Falling back to no-highlight mode for this block.",e)),a?t[1]:"no-highlight"}return n.split(/\s+/).find(e=>y(e)||O(e))})(e);if(y(t))return;if(z("before:highlightElement",{el:e,language:t}),e.children.length>0&&($.ignoreUnescapedHTML||$.throwUnescapedHTML))throw new H("One of your code blocks includes unescaped 
HTML.",e.innerHTML);n=e;let a=n.textContent,i=t?N(a,{language:t,ignoreIllegals:!0}):v(a);e.innerHTML=i.value,((e,n,t)=>{let a=n&&r[n]||t;e.classList.add("hljs"),e.classList.add("language-"+a)})(e,t,i.language),e.result={language:i.language,re:i.relevance,relevance:i.relevance},i.secondBest&&(e.secondBest={language:i.secondBest.language,relevance:i.secondBest.relevance}),z("after:highlightElement",{el:e,result:i,text:a})}let k=!1;function M(){"loading"!==document.readyState?document.querySelectorAll($.cssSelector).forEach(x):k=!0}function O(e){return a[e=(e||"").toLowerCase()]||a[r[e]]}function S(e,{languageName:n}){"string"==typeof e&&(e=[e]),e.forEach(e=>{r[e.toLowerCase()]=n})}function C(e){let n=O(e);return n&&!n.disableAutodetect}function z(e,n){let t=e;s.forEach(e=>{e[t]&&e[t](n)})}for(let j in"undefined"!=typeof window&&window.addEventListener&&window.addEventListener("DOMContentLoaded",()=>{k&&M()},!1),Object.assign(n,{highlight:N,highlightAuto:v,highlightAll:M,highlightElement:x,highlightBlock:e=>(P("10.7.0","highlightBlock will be removed entirely in v12.0"),P("10.7.0","Please use highlightElement now."),x(e)),configure(e){$=G($,e)},initHighlighting(){M(),P("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")},initHighlightingOnLoad(){M(),P("10.6.0","initHighlightingOnLoad() deprecated. Use highlightAll() now.")},registerLanguage(e,t){let i=null;try{i=t(n)}catch(r){if(F("Language definition for '{}' could not be registered.".replace("{}",e)),!l)throw r;F(r),i=f}i.name||(i.name=e),a[e]=i,i.rawDefinition=t.bind(null,n),i.aliases&&S(i.aliases,{languageName:e})},unregisterLanguage(e){for(let n of(delete a[e],Object.keys(r)))r[n]===e&&delete r[n]},listLanguages:()=>Object.keys(a),getLanguage:O,registerAliases:S,autoDetection:C,inherit:G,addPlugin(e){var n;(n=e)["before:highlightBlock"]&&!n["before:highlightElement"]&&(n["before:highlightElement"]=e=>{n["before:highlightBlock"](Object.assign({block:e.el},e))}),n["after:highlightBlock"]&&!n["after:highlightElement"]&&(n["after:highlightElement"]=e=>{n["after:highlightBlock"](Object.assign({block:e.el},e))}),s.push(e)}}),n.debugMode=()=>{l=!1},n.safeMode=()=>{l=!0},n.versionString="11.7.0",n.regex={concat:m,lookahead:g,either:p,optional:b,anyNumberOfTimes:u},A)"object"==typeof A[j]&&e.exports(A[j]);return Object.assign(n,A),n})({});let 
X=e=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:e.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:e.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),V=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video",],J=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height",],Y=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where",],ee=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error",],en=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-in
line-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","
scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index",].reverse(),et=Y.concat(ee);var ea="\\.([0-9](_*[0-9])*)",ei="[0-9a-fA-F](_*[0-9a-fA-F])*",er={className:"number",variants:[{begin:`(\\b([0-9](_*[0-9])*)((${ea})|\\.)?|(${ea}))[eE][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:`\\b([0-9](_*[0-9])*)((${ea})[fFdD]?\\b|\\.([fFdD]\\b)?)`},{begin:`(${ea})[fFdD]?\\b`},{begin:"\\b([0-9](_*[0-9])*)[fFdD]\\b"},{begin:`\\b0[xX]((${ei})\\.?|(${ei})?\\.(${ei}))[pP][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:"\\b(0|[1-9](_*[0-9])*)[lL]?\\b"},{begin:`\\b0[xX](${ei})[lL]?\\b`},{begin:"\\b0(_*[0-7])*[lL]?\\b"},{begin:"\\b0[bB][01](_*[01])*[lL]?\\b"},],relevance:0};let es="[A-Za-z$_][0-9A-Za-z$_]*",el=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends",],eo=["true","false","null","undefined","NaN","Infinity"],ec=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly",],ed=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError",],eg=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape",],eu=["arguments","this","super","console","window","document","localStorage","module","global",],eb=[].concat(eg,ec,ed);function em(e){var n;let t=e.regex,a=es,i={begin:/<[A-Za-z0-9\\._:-]+/,end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag(e,n){let t=e[0].length+e.index,a=e.input[t];if("<"===a||","===a)return void n.ignoreMatch();let i;">"===a&&(((e,{after:n})=>{let t=""+e[0].slice(1);return 
-1!==e.input.indexOf(t,n)})(e,{after:t})||n.ignoreMatch());let r=e.input.substring(t);((i=r.match(/^\s*=/))||(i=r.match(/^\s+extends\s+/))&&0===i.index)&&n.ignoreMatch()}},r={$pattern:es,keyword:el,literal:eo,built_in:eb,"variable.language":eu},s="\\.([0-9](_?[0-9])*)",l="0|[1-9](_?[0-9])*|0[0-7]*[89][0-9]*",o={className:"number",variants:[{begin:`(\\b(${l})((${s})|\\.)?|(${s}))[eE][+-]?([0-9](_?[0-9])*)\\b`},{begin:`\\b(${l})\\b((${s})\\b|\\.)?|(${s})\\b`},{begin:"\\b(0|[1-9](_?[0-9])*)n\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*n?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*n?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*n?\\b"},{begin:"\\b0[0-7]+n?\\b"},],relevance:0},c={className:"subst",begin:"\\$\\{",end:"\\}",keywords:r,contains:[]},d={begin:"html`",end:"",starts:{end:"`",returnEnd:!1,contains:[e.BACKSLASH_ESCAPE,c],subLanguage:"xml"}},g={begin:"css`",end:"",starts:{end:"`",returnEnd:!1,contains:[e.BACKSLASH_ESCAPE,c],subLanguage:"css"}},u={className:"string",begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE,c]},b={className:"comment",variants:[e.COMMENT(/\/\*\*(?!\/)/,"\\*/",{relevance:0,contains:[{begin:"(?=@[A-Za-z]+)",relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"},{className:"type",begin:"\\{",end:"\\}",excludeEnd:!0,excludeBegin:!0,relevance:0},{className:"variable",begin:a+"(?=\\s*(-)|$)",endsParent:!0,relevance:0},{begin:/(?=[^\n])\s/,relevance:0},]},]}),e.C_BLOCK_COMMENT_MODE,e.C_LINE_COMMENT_MODE,]},m=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,d,g,u,{match:/\$\d+/},o,];c.contains=m.concat({begin:/\{/,end:/\}/,keywords:r,contains:["self"].concat(m)});let p=[].concat(b,c.contains),h=p.concat([{begin:/\(/,end:/\)/,keywords:r,contains:["self"].concat(p)},]),f={className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:r,contains:h},E={variants:[{match:[/class/,/\s+/,a,/\s+/,/extends/,/\s+/,t.concat(a,"(",t.concat(/\./,a),")*"),],scope:{1:"keyword",3:"title.class",5:"keyword",7:"title.class.inherited"}},{match:[/class/,/\s+/,a],scope:{1:"keyword",3:"title.class"}},]},$={relevance:0,match:t.either(/\bJSON/,/\b[A-Z][a-z]+([A-Z][a-z]*|\d)*/,/\b[A-Z]{2,}([A-Z][a-z]+|\d)+([A-Z][a-z]*)*/,/\b[A-Z]{2,}[a-z]+([A-Z][a-z]+|\d)*([A-Z][a-z]*)*/),className:"title.class",keywords:{_:[...ec,...ed]}},y={match:t.concat(/\b/,(n=[...eg,"super","import"],t.concat("(?!",n.join("|"),")")),a,t.lookahead(/\(/)),className:"title.function",relevance:0},N={begin:t.concat(/\./,t.lookahead(t.concat(a,/(?![0-9A-Za-z$_(])/))),end:a,excludeBegin:!0,keywords:"prototype",className:"property",relevance:0},w="(\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)|"+e.UNDERSCORE_IDENT_RE+")\\s*=>",v={match:[/const|var|let/,/\s+/,a,/\s*/,/=\s*/,/(async\s*)?/,t.lookahead(w),],keywords:"async",className:{1:"keyword",3:"title.function"},contains:[f]};return{name:"Javascript",aliases:["js","jsx","mjs","cjs"],keywords:r,exports:{PARAMS_CONTAINS:h,CLASS_REFERENCE:$},illegal:/#(?![$_A-z])/,contains:[e.SHEBANG({label:"shebang",binary:"node",relevance:5}),{label:"use_strict",className:"meta",relevance:10,begin:/^\s*['"]use (strict|asm)['"]/},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,d,g,u,b,{match:/\$\d+/},o,$,{className:"attr",begin:a+t.lookahead(":"),relevance:0},v,{begin:"("+e.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw 
case",relevance:0,contains:[b,e.REGEXP_MODE,{className:"function",begin:w,returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:e.UNDERSCORE_IDENT_RE,relevance:0},{className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:r,contains:h},]},]},{begin:/,/,relevance:0},{match:/\s+/,relevance:0},{variants:[{begin:"<>",end:">"},{match:/<[A-Za-z0-9\\._:-]+\s*\/>/},{begin:i.begin,"on:begin":i.isTrulyOpeningTag,end:i.end},],subLanguage:"xml",contains:[{begin:i.begin,end:i.end,skip:!0,contains:["self"]},]},]},{variants:[{match:[/function/,/\s+/,a,/(?=\s*\()/]},{match:[/function/,/\s*(?=\()/]},],className:{1:"keyword",3:"title.function"},label:"func.def",contains:[f],illegal:/%/},{beginKeywords:"while if switch catch for"},{begin:"\\b(?!function)"+e.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{",returnBegin:!0,label:"func.def",contains:[f,e.inherit(e.TITLE_MODE,{begin:a,className:"title.function"}),]},{match:/\.\.\./,relevance:0},N,{match:"\\$"+a,relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"},contains:[f]},y,{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},E,{match:[/get|set/,/\s+/,a,/(?=\()/],className:{1:"keyword",3:"title.function"},contains:[{begin:/\(\)/},f]},{match:/\$[(.]/},]}}let ep=e=>m(/\b/,e,/\w$/.test(e)?/\b/:/\B/),e8=["Protocol","Type"].map(ep),eh=["init","self"].map(ep),ef=["Any","Self"],eE=["actor","any","associatedtype","async","await",/as\?/,/as!/,"as","break","case","catch","class","continue","convenience","default","defer","deinit","didSet","distributed","do","dynamic","else","enum","extension","fallthrough",/fileprivate\(set\)/,"fileprivate","final","for","func","get","guard","if","import","indirect","infix",/init\?/,/init!/,"inout",/internal\(set\)/,"internal","in","is","isolated","nonisolated","lazy","let","mutating","nonmutating",/open\(set\)/,"open","operator","optional","override","postfix","precedencegroup","prefix",/private\(set\)/,"private","protocol",/public\(set\)/,"public","repeat","required","rethrows","return","set","some","static","struct","subscript","super","switch","throws","throw",/try\?/,/try!/,"try","typealias",/unowned\(safe\)/,/unowned\(unsafe\)/,"unowned","var","weak","where","while","willSet",],e$=["false","nil","true"],ey=["assignment","associativity","higherThan","left","lowerThan","none","right",],eN=["#colorLiteral","#column","#dsohandle","#else","#elseif","#endif","#error","#file","#fileID","#fileLiteral","#filePath","#function","#if","#imageLiteral","#keyPath","#line","#selector","#sourceLocation","#warn_unqualified_access","#warning",],ew=["abs","all","any","assert","assertionFailure","debugPrint","dump","fatalError","getVaList","isKnownUniquelyReferenced","max","min","numericCast","pointwiseMax","pointwiseMin","precondition","preconditionFailure","print","readLine","repeatElement","sequence","stride","swap","swift_unboxFromSwiftValueWithType","transcode","type","unsafeBitCast","unsafeDowncast","withExtendedLifetime","withUnsafeMutablePointer","withUnsafePointer","withVaList","withoutActuallyEscaping","zip",],ev=p(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),ex=p(ev,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF
]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),ek=m(ev,ex,"*"),eM=p(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),eO=p(eM,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]/),eS=m(eM,eO,"*"),eA=m(/[A-Z]/,eO,"*"),eC=["autoclosure",m(/convention\(/,p("swift","block","c"),/\)/),"discardableResult","dynamicCallable","dynamicMemberLookup","escaping","frozen","GKInspectable","IBAction","IBDesignable","IBInspectable","IBOutlet","IBSegueAction","inlinable","main","nonobjc","NSApplicationMain","NSCopying","NSManaged",m(/objc\(/,eS,/\)/),"objc","objcMembers","propertyWrapper","requires_stored_property_inits","resultBuilder","testable","UIApplicationMain","unknown","usableFromInline",],eT=["iOS","iOSApplicationExtension","macOS","macOSApplicationExtension","macCatalyst","macCatalystApplicationExtension","watchOS","watchOSApplicationExtension","tvOS","tvOSApplicationExtension","swift",];var eR=Object.freeze({__proto__:null,grmr_bash(e){let n=e.regex,t={};Object.assign(t,{className:"variable",variants:[{begin:n.concat(/\$[\w\d#@][\w\d_]*/,"(?![\\w\\d])(?![$])")},{begin:/\$\{/,end:/\}/,contains:["self",{begin:/:-/,contains:[t]}]},]});let a={className:"subst",begin:/\$\(/,end:/\)/,contains:[e.BACKSLASH_ESCAPE]},i={begin:/<<-?\s*(?=\w+)/,starts:{contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,className:"string"}),]}},r={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,t,a]};a.contains.push(r);let 
s={begin:/\$?\(\(/,end:/\)\)/,contains:[{begin:/\d+#[0-9a-f]+/,className:"number"},e.NUMBER_MODE,t,]},l=e.SHEBANG({binary:"(fish|bash|zsh|sh|csh|ksh|tcsh|dash|scsh)",relevance:10}),o={className:"function",begin:/\w[\w\d_]*\s*\(\s*\)\s*\{/,returnBegin:!0,contains:[e.inherit(e.TITLE_MODE,{begin:/\w[\w\d_]*/})],relevance:0};return{name:"Bash",aliases:["sh"],keywords:{$pattern:/\b[a-z][a-z0-9._-]+\b/,keyword:["if","then","else","elif","fi","for","while","in","do","done","case","esac","function",],literal:["true","false"],built_in:["break","cd","continue","eval","exec","exit","export","getopts","hash","pwd","readonly","return","shift","test","times","trap","umask","unset","alias","bind","builtin","caller","command","declare","echo","enable","help","let","local","logout","mapfile","printf","read","readarray","source","type","typeset","ulimit","unalias","set","shopt","autoload","bg","bindkey","bye","cap","chdir","clone","comparguments","compcall","compctl","compdescribe","compfiles","compgroups","compquote","comptags","comptry","compvalues","dirs","disable","disown","echotc","echoti","emulate","fc","fg","float","functions","getcap","getln","history","integer","jobs","kill","limit","log","noglob","popd","print","pushd","pushln","rehash","sched","setcap","setopt","stat","suspend","ttyctl","unfunction","unhash","unlimit","unsetopt","vared","wait","whence","where","which","zcompile","zformat","zftp","zle","zmodload","zparseopts","zprof","zpty","zregexparse","zsocket","zstyle","ztcp","chcon","chgrp","chown","chmod","cp","dd","df","dir","dircolors","ln","ls","mkdir","mkfifo","mknod","mktemp","mv","realpath","rm","rmdir","shred","sync","touch","truncate","vdir","b2sum","base32","base64","cat","cksum","comm","csplit","cut","expand","fmt","fold","head","join","md5sum","nl","numfmt","od","paste","ptx","pr","sha1sum","sha224sum","sha256sum","sha384sum","sha512sum","shuf","sort","split","sum","tac","tail","tr","tsort","unexpand","uniq","wc","arch","basename","chroot","date","dirname","du","echo","env","expr","factor","groups","hostid","id","link","logname","nice","nohup","nproc","pathchk","pinky","printenv","printf","pwd","readlink","runcon","seq","sleep","stat","stdbuf","stty","tee","test","timeout","tty","uname","unlink","uptime","users","who","whoami","yes",]},contains:[l,e.SHEBANG(),o,s,e.HASH_COMMENT_MODE,i,{match:/(\/[a-z._-]+)+/},r,{className:"",begin:/\\"/},{className:"string",begin:/'/,end:/'/},t,]}},grmr_c(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",variants:[{begin:"\\b[a-z\\d_]*_t\\b"},{match:/\batomic_[a-z]{3,6}\b/},]},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef 
include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={keyword:["asm","auto","break","case","continue","default","do","else","enum","extern","for","fortran","goto","if","inline","register","restrict","return","sizeof","struct","switch","typedef","union","volatile","while","_Alignas","_Alignof","_Atomic","_Generic","_Noreturn","_Static_assert","_Thread_local","alignas","alignof","noreturn","static_assert","thread_local","_Pragma",],type:["float","double","signed","unsigned","int","short","long","char","void","_Bool","_Complex","_Imaginary","_Decimal32","_Decimal64","_Decimal128","const","static","complex","bool","imaginary",],literal:"true false NULL",built_in:"std string wstring cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set pair bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap priority_queue make_pair array shared_ptr abort terminate abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf future isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr"},u=[o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],b={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/},],keywords:g,contains:u.concat([{begin:/\(/,end:/\)/,keywords:g,contains:u.concat(["self"]),relevance:0},]),relevance:0},m={begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[e.inherit(c,{className:"title.function"}),],relevance:0},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C",aliases:["h"],keywords:g,disableAutodetect:!0,illegal:"",contains:[].concat(b,m,u,[o,{begin:e.IDENT_RE+"::",keywords:g},{className:"class",beginKeywords:"enum class struct union",end:/[{;:<>=]/,contains:[{beginKeywords:"final class struct"},e.TITLE_MODE,]},]),exports:{preprocessor:o,strings:s,keywords:g}}},grmr_cpp(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(?!struct)(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",begin:"\\b[a-z\\d_]*_t\\b"},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ 
]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={type:["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static",],keyword:["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq",],literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"],_type_hints:["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","unordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view",]},u={className:"function.dispatch",relevance:0,keywords:{_hint:["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite","make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf",]},begin:n.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!f
or)/,/(?!switch)/,/(?!while)/,e.IDENT_RE,n.lookahead(/(<[^<>]+>|)\s*\(/))},b=[u,o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],m={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/},],keywords:g,contains:b.concat([{begin:/\(/,end:/\)/,keywords:g,contains:b.concat(["self"]),relevance:0},]),relevance:0},p={className:"function",begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[c],relevance:0},{begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[s,l]},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C++",aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:g,illegal:"",classNameAliases:{"function.dispatch":"built_in"},contains:[].concat(m,p,u,b,[o,{begin:"\\b(deque|list|queue|priority_queue|pair|stack|vector|map|set|bitset|multiset|multimap|unordered_map|unordered_set|unordered_multiset|unordered_multimap|array|tuple|optional|variant|function)\\s*<(?!<)",end:">",keywords:g,contains:["self",r]},{begin:e.IDENT_RE+"::",keywords:g},{match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/,],className:{1:"keyword",3:"title.class"}},])}},grmr_csharp(e){let n={keyword:["abstract","as","base","break","case","catch","class","const","continue","do","else","event","explicit","extern","finally","fixed","for","foreach","goto","if","implicit","in","interface","internal","is","lock","namespace","new","operator","out","override","params","private","protected","public","readonly","record","ref","return","scoped","sealed","sizeof","stackalloc","static","struct","switch","this","throw","try","typeof","unchecked","unsafe","using","virtual","void","volatile","while",].concat(["add","alias","and","ascending","async","await","by","descending","equals","from","get","global","group","init","into","join","let","nameof","not","notnull","on","or","orderby","partial","remove","select","set","unmanaged","value|0","var","when","where","with","yield",]),built_in:["bool","byte","char","decimal","delegate","double","dynamic","enum","float","int","long","nint","nuint","object","sbyte","short","string","ulong","uint","ushort",],literal:["default","false","null","true"]},t=e.inherit(e.TITLE_MODE,{begin:"[a-zA-Z](\\.?\\w)*"}),a={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)(u|U|l|L|ul|UL|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},i={className:"string",begin:'@"',end:'"',contains:[{begin:'""'}]},r=e.inherit(i,{illegal:/\n/}),s={className:"subst",begin:/\{/,end:/\}/,keywords:n},l=e.inherit(s,{illegal:/\n/}),o={className:"string",begin:/\$"/,end:'"',illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},e.BACKSLASH_ESCAPE,l,]},c={className:"string",begin:/\$@"/,end:'"',contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},s,]},d=e.inherit(c,{illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},l]});s.contains=[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.C_BLOCK_COMMENT_MODE,],l.contains=[d,o,r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.inherit(e.C_BLOCK_COMMENT_MODE,{illegal:/\n/}),];let 
g={variants:[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},u={begin:"<",end:">",contains:[{beginKeywords:"in out"},t]},b=e.IDENT_RE+"(<"+e.IDENT_RE+"(\\s*,\\s*"+e.IDENT_RE+")*>)?(\\[\\])?",m={begin:"@"+e.IDENT_RE,relevance:0};return{name:"C#",aliases:["cs","c#"],keywords:n,illegal:/::/,contains:[e.COMMENT("///","$",{returnBegin:!0,contains:[{className:"doctag",variants:[{begin:"///",relevance:0},{begin:""},{begin:"?",end:">"},]},]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"meta",begin:"#",end:"$",keywords:{keyword:"if else elif endif define undef warning error line region endregion pragma checksum"}},g,a,{beginKeywords:"class interface",relevance:0,end:/[{;=]/,illegal:/[^\s:,]/,contains:[{beginKeywords:"where class"},t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},{beginKeywords:"namespace",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"record",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"meta",begin:"^\\s*\\[(?=[\\w])",excludeBegin:!0,end:"\\]",excludeEnd:!0,contains:[{className:"string",begin:/"/,end:/"/},]},{beginKeywords:"new return throw await else",relevance:0},{className:"function",begin:"("+b+"\\s+)+"+e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,end:/\s*[{;=]/,excludeEnd:!0,keywords:n,contains:[{beginKeywords:"public private protected static internal protected abstract async extern override unsafe virtual new sealed partial",relevance:0},{begin:e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,contains:[e.TITLE_MODE,u],relevance:0},{match:/\(\)/},{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:n,relevance:0,contains:[g,a,e.C_BLOCK_COMMENT_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},m,]}},grmr_css(e){let n=e.regex,t=X(e),a=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE];return{name:"CSS",case_insensitive:!0,illegal:/[=|'\$]/,keywords:{keyframePosition:"from to"},classNameAliases:{keyframePosition:"selector-tag"},contains:[t.BLOCK_COMMENT,{begin:/-(webkit|moz|ms|o)-(?=[a-z])/},t.CSS_NUMBER_MODE,{className:"selector-id",begin:/#[A-Za-z0-9_-]+/,relevance:0},{className:"selector-class",begin:"\\.[a-zA-Z-][a-zA-Z0-9_-]*",relevance:0},t.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",variants:[{begin:":("+Y.join("|")+")"},{begin:":(:)?("+ee.join("|")+")"},]},t.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b"},{begin:/:/,end:/[;}{]/,contains:[t.BLOCK_COMMENT,t.HEXCOLOR,t.IMPORTANT,t.CSS_NUMBER_MODE,...a,{begin:/(url|data-uri)\(/,end:/\)/,relevance:0,keywords:{built_in:"url data-uri"},contains:[...a,{className:"string",begin:/[^)]/,endsWithParent:!0,excludeEnd:!0},]},t.FUNCTION_DISPATCH,]},{begin:n.lookahead(/@/),end:"[{;]",relevance:0,illegal:/:/,contains:[{className:"keyword",begin:/@-?\w[\w]*(-\w+)*/},{begin:/\s/,endsWithParent:!0,excludeEnd:!0,relevance:0,keywords:{$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")},contains:[{begin:/[a-z-]+(?=:)/,className:"attribute"},...a,t.CSS_NUMBER_MODE,]},]},{className:"selector-tag",begin:"\\b("+V.join("|")+")\\b"},]}},grmr_diff(e){let n=e.regex;return{name:"Diff",aliases:["patch"],contains:[{className:"meta",relevance:10,match:n.either(/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/)},{className:"comment",variants:[{begin:n.either(/Index: /,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff 
--git/),end:/$/},{match:/^\*{15}$/},]},{className:"addition",begin:/^\+/,end:/$/},{className:"deletion",begin:/^-/,end:/$/},{className:"addition",begin:/^!/,end:/$/},]}},grmr_go(e){let n={keyword:["break","case","chan","const","continue","default","defer","else","fallthrough","for","func","go","goto","if","import","interface","map","package","range","return","select","struct","switch","type","var",],type:["bool","byte","complex64","complex128","error","float32","float64","int8","int16","int32","int64","string","uint8","uint16","uint32","uint64","int","uint","uintptr","rune",],literal:["true","false","iota","nil"],built_in:["append","cap","close","complex","copy","imag","len","make","new","panic","print","println","real","recover","delete",]};return{name:"Go",aliases:["golang"],keywords:n,illegal:"",contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"string",variants:[e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,{begin:"`",end:"`"},]},{className:"number",variants:[{begin:e.C_NUMBER_RE+"[i]",relevance:1},e.C_NUMBER_MODE,]},{begin:/:=/},{className:"function",beginKeywords:"func",end:"\\s*(\\{|$)",excludeEnd:!0,contains:[e.TITLE_MODE,{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:n,illegal:/["']/},]},]}},grmr_graphql(e){let n=e.regex;return{name:"GraphQL",aliases:["gql"],case_insensitive:!0,disableAutodetect:!1,keywords:{keyword:["query","mutation","subscription","type","input","schema","directive","interface","union","scalar","fragment","enum","on",],literal:["true","false","null"]},contains:[e.HASH_COMMENT_MODE,e.QUOTE_STRING_MODE,e.NUMBER_MODE,{scope:"punctuation",match:/[.]{3}/,relevance:0},{scope:"punctuation",begin:/[\!\(\)\:\=\[\]\{\|\}]{1}/,relevance:0},{scope:"variable",begin:/\$/,end:/\W/,excludeEnd:!0,relevance:0},{scope:"meta",match:/@\w+/,excludeEnd:!0},{scope:"symbol",begin:n.concat(/[_A-Za-z][_0-9A-Za-z]*/,n.lookahead(/\s*:/)),relevance:0},],illegal:[/[;<']/,/BEGIN/]}},grmr_ini(e){let n=e.regex,t={className:"number",relevance:0,variants:[{begin:/([+-]+)?[\d]+_[\d_]+/},{begin:e.NUMBER_RE},]},a=e.COMMENT();a.variants=[{begin:/;/,end:/$/},{begin:/#/,end:/$/},];let i={className:"variable",variants:[{begin:/\$[\w\d"][\w\d_]*/},{begin:/\$\{(.*?)\}/},]},r={className:"literal",begin:/\bon|off|true|false|yes|no\b/},s={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:"'''",end:"'''",relevance:10},{begin:'"""',end:'"""',relevance:10},{begin:'"',end:'"'},{begin:"'",end:"'"},]},l=n.either(/[A-Za-z0-9_-]+/,/"(\\"|[^"])*"/,/'[^']*'/);return{name:"TOML, also INI",aliases:["toml"],case_insensitive:!0,illegal:/\S/,contains:[a,{className:"section",begin:/\[+/,end:/\]+/},{begin:n.concat(l,"(\\s*\\.\\s*",l,")*",n.lookahead(/\s*=\s*[^#\s]/)),className:"attr",starts:{end:/$/,contains:[a,{begin:/\[/,end:/\]/,contains:[a,r,i,s,t,"self"],relevance:0},r,i,s,t]}},]}},grmr_java(e){let n=e.regex,t="[\xc0-ʸa-zA-Z_$][\xc0-ʸa-zA-Z_$0-9]*",a=t+function e(n,t,a){return -1===a?"":n.replace(t,i=>e(n,t,a-1))}("(?:<"+t+"~~~(?:\\s*,\\s*"+t+"~~~)*>)?",/~~~/g,2),i={keyword:["synchronized","abstract","private","var","static","if","const 
","for","while","strictfp","finally","protected","import","native","final","void","enum","else","break","transient","catch","instanceof","volatile","case","assert","package","default","public","try","switch","continue","throws","protected","public","private","module","requires","exports","do","sealed","yield","permits",],literal:["false","true","null"],type:["char","boolean","long","float","int","byte","short","double",],built_in:["super","this"]},r={className:"meta",begin:"@"+t,contains:[{begin:/\(/,end:/\)/,contains:["self"]},]},s={className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[e.C_BLOCK_COMMENT_MODE],endsParent:!0};return{name:"Java",aliases:["jsp"],keywords:i,illegal:/<\/|#/,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{begin:/\w+@/,relevance:0},{className:"doctag",begin:"@[A-Za-z]+"},]}),{begin:/import java\.[a-z]+\./,keywords:"import",relevance:2},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{begin:/"""/,end:/"""/,className:"string",contains:[e.BACKSLASH_ESCAPE]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{match:[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,t,],className:{1:"keyword",3:"title.class"}},{match:/non-sealed/,scope:"keyword"},{begin:[n.concat(/(?!else)/,t),/\s+/,t,/\s+/,/=(?!=)/],className:{1:"type",3:"variable",5:"operator"}},{begin:[/record/,/\s+/,t],className:{1:"keyword",3:"title.class"},contains:[s,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"new throw return else",relevance:0},{begin:["(?:"+a+"\\s+)",e.UNDERSCORE_IDENT_RE,/\s*(?=\()/],className:{2:"title.function"},keywords:i,contains:[{className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,er,e.C_BLOCK_COMMENT_MODE,]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},er,r,]}},grmr_javascript:em,grmr_json(e){let n=["true","false","null"],t={scope:"literal",beginKeywords:n.join(" ")};return{name:"JSON",keywords:{literal:n},contains:[{className:"attr",begin:/"(\\.|[^\\"\r\n])*"(?=\s*:)/,relevance:1.01},{match:/[{}[\],:]/,className:"punctuation",relevance:0},e.QUOTE_STRING_MODE,t,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,],illegal:"\\S"}},grmr_kotlin(e){let n={keyword:"abstract as val var vararg get set class object open private protected public noinline crossinline dynamic final enum if else do while for when throw try catch finally import package is in fun override companion reified inline lateinit init interface annotation data sealed internal infix operator out by constructor super tailrec where const inner suspend typealias external expect actual",built_in:"Byte Short Char Int Long Boolean Float Double Void Unit Nothing",literal:"true false null"},t={className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"@"},a={className:"subst",begin:/\$\{/,end:/\}/,contains:[e.C_NUMBER_MODE]},i={className:"variable",begin:"\\$"+e.UNDERSCORE_IDENT_RE},r={className:"string",variants:[{begin:'"""',end:'"""(?=[^"])',contains:[i,a]},{begin:"'",end:"'",illegal:/\n/,contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"',illegal:/\n/,contains:[e.BACKSLASH_ESCAPE,i,a]},]};a.contains.push(r);let 
s={className:"meta",begin:"@(?:file|property|field|get|set|receiver|param|setparam|delegate)\\s*:(?:\\s*"+e.UNDERSCORE_IDENT_RE+")?"},l={className:"meta",begin:"@"+e.UNDERSCORE_IDENT_RE,contains:[{begin:/\(/,end:/\)/,contains:[e.inherit(r,{className:"string"}),"self"]},]},o=e.COMMENT("/\\*","\\*/",{contains:[e.C_BLOCK_COMMENT_MODE]}),c={variants:[{className:"type",begin:e.UNDERSCORE_IDENT_RE},{begin:/\(/,end:/\)/,contains:[]},]},d=c;return d.variants[1].contains=[c],c.variants[1].contains=[d],{name:"Kotlin",aliases:["kt","kts"],keywords:n,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"}]}),e.C_LINE_COMMENT_MODE,o,{className:"keyword",begin:/\b(break|continue|return|this)\b/,starts:{contains:[{className:"symbol",begin:/@\w+/}]}},t,s,l,{className:"function",beginKeywords:"fun",end:"[(]|$",returnBegin:!0,excludeEnd:!0,keywords:n,relevance:5,contains:[{begin:e.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,relevance:0,contains:[e.UNDERSCORE_TITLE_MODE]},{className:"type",begin:/,end:/>/,keywords:"reified",relevance:0},{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:n,relevance:0,contains:[{begin:/:/,end:/[=,\/]/,endsWithParent:!0,contains:[c,e.C_LINE_COMMENT_MODE,o],relevance:0},e.C_LINE_COMMENT_MODE,o,s,l,r,e.C_NUMBER_MODE,]},o,]},{begin:[/class|interface|trait/,/\s+/,e.UNDERSCORE_IDENT_RE],beginScope:{3:"title.class"},keywords:"class interface trait",end:/[:\{(]|$/,excludeEnd:!0,illegal:"extends implements",contains:[{beginKeywords:"public protected internal private constructor"},e.UNDERSCORE_TITLE_MODE,{className:"type",begin:/,end:/>/,excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:/[,:]\s*/,end:/[<\(,){\s]|$/,excludeBegin:!0,returnEnd:!0},s,l,]},r,{className:"meta",begin:"^#!/usr/bin/env",end:"$",illegal:"\n"},er,]}},grmr_less(e){let n=X(e),t="([\\w-]+|@\\{[\\w-]+\\})",a=[],i=[],r=e=>({className:"string",begin:"~?"+e+".*?"+e}),s=(e,n,t)=>({className:e,begin:n,relevance:t}),l={$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")};i.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,r("'"),r('"'),n.CSS_NUMBER_MODE,{begin:"(url|data-uri)\\(",starts:{className:"string",end:"[\\)\\n]",excludeEnd:!0}},n.HEXCOLOR,{begin:"\\(",end:"\\)",contains:i,keywords:l,relevance:0},s("variable","@@?[\\w-]+",10),s("variable","@\\{[\\w-]+\\}"),s("built_in","~?`[^`]*?`"),{className:"attribute",begin:"[\\w-]+\\s*:",end:":",returnBegin:!0,excludeEnd:!0},n.IMPORTANT,{beginKeywords:"and not"},n.FUNCTION_DISPATCH);let o=i.concat({begin:/\{/,end:/\}/,contains:a}),c={beginKeywords:"when",endsWithParent:!0,contains:[{beginKeywords:"and 
not"}].concat(i)},d={begin:t+"\\s*:",returnBegin:!0,end:/[;}]/,relevance:0,contains:[{begin:/-(webkit|moz|ms|o)-/},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b",end:/(?=:)/,starts:{endsWithParent:!0,illegal:"[<=$]",relevance:0,contains:i}},]},g={variants:[{begin:"[\\.#:&\\[>]",end:"[;{}]"},{begin:t,end:/\{/},],returnBegin:!0,returnEnd:!0,illegal:"[<='$\"]",relevance:0,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,c,s("keyword","all\\b"),s("variable","@\\{[\\w-]+\\}"),{begin:"\\b("+V.join("|")+")\\b",className:"selector-tag"},n.CSS_NUMBER_MODE,s("selector-tag",t,0),s("selector-id","#"+t),s("selector-class","\\."+t,0),s("selector-tag","&",0),n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",begin:":("+Y.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+ee.join("|")+")"},{begin:/\(/,end:/\)/,relevance:0,contains:o},{begin:"!important"},n.FUNCTION_DISPATCH,]},u={begin:`[\\w-]+:(:)?(${et.join("|")})`,returnBegin:!0,contains:[g]};return a.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"keyword",begin:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b",starts:{end:"[;{}]",keywords:l,returnEnd:!0,contains:i,relevance:0}},{className:"variable",variants:[{begin:"@[\\w-]+\\s*:",relevance:15},{begin:"@[\\w-]+"},],starts:{end:"[;}]",returnEnd:!0,contains:o}},u,d,g,c,n.FUNCTION_DISPATCH),{name:"Less",case_insensitive:!0,illegal:"[=>'/<($\"]",contains:a}},grmr_lua(e){let n="\\[=*\\[",t="\\]=*\\]",a={begin:n,end:t,contains:["self"]},i=[e.COMMENT("--(?!\\[=*\\[)","$"),e.COMMENT("--\\[=*\\[",t,{contains:[a],relevance:10}),];return{name:"Lua",keywords:{$pattern:e.UNDERSCORE_IDENT_RE,literal:"true false nil",keyword:"and break do else elseif end for goto if in local not or repeat return then until while",built_in:"_G _ENV _VERSION __index __newindex __mode __call __metatable __tostring __len __gc __add __sub __mul __div __mod __pow __concat __unm __eq __lt __le assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall arg self coroutine resume yield status wrap create running debug getupvalue debug sethook getmetatable gethook setmetatable setlocal traceback setfenv getinfo setupvalue getlocal getregistry getfenv io lines write close flush open output type read stderr stdin input stdout popen tmpfile math log max acos huge ldexp pi cos tanh pow deg tan cosh sinh random randomseed frexp ceil floor rad abs sqrt modf asin min mod fmod log10 atan2 exp sin atan os exit setlocale date getenv difftime remove time clock tmpname rename execute package preload loadlib loaded loaders cpath config path seeall string sub upper len gfind rep find match char dump gmatch reverse byte format gsub lower table setn insert getn foreachi maxn foreach concat sort remove"},contains:i.concat([{className:"function",beginKeywords:"function",end:"\\)",contains:[e.inherit(e.TITLE_MODE,{begin:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{className:"params",begin:"\\(",endsWithParent:!0,contains:i},].concat(i)},e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:n,end:t,contains:[a],relevance:5},])}},grmr_makefile(e){let 
n={className:"variable",variants:[{begin:"\\$\\("+e.UNDERSCORE_IDENT_RE+"\\)",contains:[e.BACKSLASH_ESCAPE]},{begin:/\$[@%\^\+\*]/},]},t={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,n]},a={begin:"^"+e.UNDERSCORE_IDENT_RE+"\\s*(?=[:+?]?=)"};return{name:"Makefile",aliases:["mk","mak","make"],keywords:{$pattern:/[\w-]+/,keyword:"define endef undefine ifdef ifndef ifeq ifneq else endif include -include sinclude override export unexport private vpath"},contains:[e.HASH_COMMENT_MODE,n,t,{className:"variable",begin:/\$\([\w-]+\s/,end:/\)/,keywords:{built_in:"subst patsubst strip findstring filter filter-out sort word wordlist firstword lastword dir notdir suffix basename addsuffix addprefix join wildcard realpath abspath error warning shell origin flavor foreach if or and call eval file value"},contains:[n]},a,{className:"meta",begin:/^\.PHONY:/,end:/$/,keywords:{$pattern:/[\.\w]+/,keyword:".PHONY"}},{className:"section",begin:/^[^\s]+:/,end:/$/,contains:[n]},]}},grmr_xml(e){let n=e.regex,t=n.concat(/[\p{L}_]/u,n.optional(/[\p{L}0-9_.-]*:/u),/[\p{L}0-9_.-]*/u),a={className:"symbol",begin:/&[a-z]+;|[0-9]+;|[a-f0-9]+;/},i={begin:/\s/,contains:[{className:"keyword",begin:/#?[a-z_][a-z1-9_-]+/,illegal:/\n/},]},r=e.inherit(i,{begin:/\(/,end:/\)/}),s=e.inherit(e.APOS_STRING_MODE,{className:"string"}),l=e.inherit(e.QUOTE_STRING_MODE,{className:"string"}),o={endsWithParent:!0,illegal:/,relevance:0,contains:[{className:"attr",begin:/[\p{L}0-9._:-]+/u,relevance:0},{begin:/=\s*/,relevance:0,contains:[{className:"string",endsParent:!0,variants:[{begin:/"/,end:/"/,contains:[a]},{begin:/'/,end:/'/,contains:[a]},{begin:/[^\s"'=<>`]+/},]},]},]};return{name:"HTML, XML",aliases:["html","xhtml","rss","atom","xjb","xsd","xsl","plist","wsf","svg",],case_insensitive:!0,unicodeRegex:!0,contains:[{className:"meta",begin://,relevance:10,contains:[i,l,s,r,{begin:/\[/,end:/\]/,contains:[{className:"meta",begin://,contains:[i,r,l,s]},]},]},e.COMMENT(//,{relevance:10}),{begin://,relevance:10},a,{className:"meta",end:/\?>/,variants:[{begin:/<\?xml/,relevance:10,contains:[l]},{begin:/<\?[a-z][a-z0-9]+/},]},{className:"tag",begin:/
- ''', unsafe_allow_html=True)
-
-
-# st.markdown(
- # st.footer(
- # """
- # Configuration Check page
- # """,
- # unsafe_allow_html=True,
- # )
-
- cssFooter="""
-
-
- """
- st.markdown(cssFooter, unsafe_allow_html=True)
\ No newline at end of file
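The deleted app tail above pushes a cssFooter string through st.markdown with unsafe_allow_html=True, but the actual CSS/HTML content was stripped in this dump. For context, a minimal sketch of that pattern; the selectors and footer text below are illustrative assumptions, not the original cssFooter content:

```python
# Minimal sketch of the CSS-footer pattern; selectors and text are illustrative,
# not the original (stripped) cssFooter content.
import streamlit as st

css_footer = """
<style>
.footer {
    position: fixed;
    bottom: 0;
    width: 100%;
    text-align: center;
}
</style>
<div class="footer">Configuration Check page</div>
"""

# unsafe_allow_html=True is what lets the raw <style>/<div> markup pass through
# Streamlit's default HTML escaping.
st.markdown(css_footer, unsafe_allow_html=True)
```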
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
deleted file mode 100644
index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-from .decode_head import BaseDecodeHead
-
-
-class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta):
- """Base class for cascade decode head used in
-    :class:`CascadeEncoderDecoder`."""
-
- def __init__(self, *args, **kwargs):
- super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs)
-
- @abstractmethod
- def forward(self, inputs, prev_output):
- """Placeholder of forward function."""
- pass
-
- def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg,
- train_cfg):
- """Forward function for training.
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
-                used if the architecture supports a semantic segmentation task.
- train_cfg (dict): The training config.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- seg_logits = self.forward(inputs, prev_output)
- losses = self.losses(seg_logits, gt_semantic_seg)
-
- return losses
-
- def forward_test(self, inputs, prev_output, img_metas, test_cfg):
- """Forward function for testing.
-
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- test_cfg (dict): The testing config.
-
- Returns:
- Tensor: Output segmentation map.
- """
- return self.forward(inputs, prev_output)
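BaseCascadeDecodeHead only adds the cascade contract: forward() (and the train/test wrappers above) receive the previous decode head's output alongside the multi-level features. As a rough illustration, a minimal subclass sketch, assuming the upstream BaseDecodeHead API (in_channels/channels/num_classes kwargs, _transform_inputs, cls_seg, align_corners); the fusion scheme and class name are made up:

```python
# Illustrative sketch only: a trivial cascade head that fuses the previous
# head's prediction with the current features. Assumes the upstream mmseg
# BaseDecodeHead API; it is not part of the deleted module.
import torch.nn as nn
import torch.nn.functional as F

from .cascade_decode_head import BaseCascadeDecodeHead  # assumes same package


class NaiveCascadeHead(BaseCascadeDecodeHead):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Project the previous head's logits to the working channel width (hypothetical design).
        self.prev_proj = nn.Conv2d(self.num_classes, self.channels, kernel_size=1)

    def forward(self, inputs, prev_output):
        x = self._transform_inputs(inputs)            # select/concat multi-level features
        prev = F.interpolate(prev_output, size=x.shape[2:], mode="bilinear",
                             align_corners=self.align_corners)
        x = x + self.prev_proj(prev)                  # naive fusion of the two streams
        return self.cls_seg(x)                        # conv_seg -> per-pixel logits
```

A real cascade head (e.g. OCRNet's context head) does something far more elaborate with prev_output, but the calling convention is the same.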
diff --git a/spaces/kokofixcomputers/chat-ui/src/lib/shareConversation.ts b/spaces/kokofixcomputers/chat-ui/src/lib/shareConversation.ts
deleted file mode 100644
index 4768b604a42258d5d97231dd0e44f9198ef1864c..0000000000000000000000000000000000000000
--- a/spaces/kokofixcomputers/chat-ui/src/lib/shareConversation.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { base } from "$app/paths";
-import { ERROR_MESSAGES, error } from "$lib/stores/errors";
-import { share } from "./utils/share";
-
-export async function shareConversation(id: string, title: string) {
- try {
- const res = await fetch(`${base}/conversation/${id}/share`, {
- method: "POST",
- headers: {
- "Content-Type": "application/json",
- },
- });
-
- if (!res.ok) {
- error.set("Error while sharing conversation, try again.");
- console.error("Error while sharing conversation: " + (await res.text()));
- return;
- }
-
- const { url } = await res.json();
-
- share(url, title);
- } catch (err) {
- error.set(ERROR_MESSAGES.default);
- console.error(err);
- }
-}
diff --git a/spaces/konverner/deep-voice-cloning/Dockerfile b/spaces/konverner/deep-voice-cloning/Dockerfile
deleted file mode 100644
index 58e260a4e96f3b89a15514769fb2437a43495fef..0000000000000000000000000000000000000000
--- a/spaces/konverner/deep-voice-cloning/Dockerfile
+++ /dev/null
@@ -1,4 +0,0 @@
-FROM python:3.9
-MAINTAINER Konstantin Verner
-COPY . .
-RUN pip install .
\ No newline at end of file
diff --git a/spaces/kukuhtw/AutoGPT/tests/test_config.py b/spaces/kukuhtw/AutoGPT/tests/test_config.py
deleted file mode 100644
index b472a24c78edd1f931a76c68e08ed544bbe61d98..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/AutoGPT/tests/test_config.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from unittest import TestCase
-
-from autogpt.config import Config
-
-
-class TestConfig(TestCase):
- """
- Test cases for the Config class, which handles the configuration settings
- for the AI and ensures it behaves as a singleton.
- """
-
- def setUp(self):
- """
- Set up the test environment by creating an instance of the Config class.
- """
- self.config = Config()
-
- def test_singleton(self):
- """
- Test if the Config class behaves as a singleton by ensuring that two instances are the same.
- """
- config2 = Config()
- self.assertIs(self.config, config2)
-
- def test_initial_values(self):
- """
- Test if the initial values of the Config class attributes are set correctly.
- """
- self.assertFalse(self.config.debug_mode)
- self.assertFalse(self.config.continuous_mode)
- self.assertFalse(self.config.speak_mode)
- self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo")
- self.assertEqual(self.config.smart_llm_model, "gpt-4")
- self.assertEqual(self.config.fast_token_limit, 4000)
- self.assertEqual(self.config.smart_token_limit, 8000)
-
- def test_set_continuous_mode(self):
- """
- Test if the set_continuous_mode() method updates the continuous_mode attribute.
- """
- self.config.set_continuous_mode(True)
- self.assertTrue(self.config.continuous_mode)
-
- def test_set_speak_mode(self):
- """
- Test if the set_speak_mode() method updates the speak_mode attribute.
- """
- self.config.set_speak_mode(True)
- self.assertTrue(self.config.speak_mode)
-
- def test_set_fast_llm_model(self):
- """
- Test if the set_fast_llm_model() method updates the fast_llm_model attribute.
- """
- self.config.set_fast_llm_model("gpt-3.5-turbo-test")
- self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo-test")
-
- def test_set_smart_llm_model(self):
- """
- Test if the set_smart_llm_model() method updates the smart_llm_model attribute.
- """
- self.config.set_smart_llm_model("gpt-4-test")
- self.assertEqual(self.config.smart_llm_model, "gpt-4-test")
-
- def test_set_fast_token_limit(self):
- """
- Test if the set_fast_token_limit() method updates the fast_token_limit attribute.
- """
- self.config.set_fast_token_limit(5000)
- self.assertEqual(self.config.fast_token_limit, 5000)
-
- def test_set_smart_token_limit(self):
- """
- Test if the set_smart_token_limit() method updates the smart_token_limit attribute.
- """
- self.config.set_smart_token_limit(9000)
- self.assertEqual(self.config.smart_token_limit, 9000)
-
- def test_set_debug_mode(self):
- """
- Test if the set_debug_mode() method updates the debug_mode attribute.
- """
- self.config.set_debug_mode(True)
- self.assertTrue(self.config.debug_mode)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/psLib.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/psLib.py
deleted file mode 100644
index 1e0408ce9c16f9a784f53ef1d17af88b0ab65647..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/psLib.py
+++ /dev/null
@@ -1,399 +0,0 @@
-from fontTools.misc.textTools import bytechr, byteord, bytesjoin, tobytes, tostr
-from fontTools.misc import eexec
-from .psOperators import (
- PSOperators,
- ps_StandardEncoding,
- ps_array,
- ps_boolean,
- ps_dict,
- ps_integer,
- ps_literal,
- ps_mark,
- ps_name,
- ps_operator,
- ps_procedure,
- ps_procmark,
- ps_real,
- ps_string,
-)
-import re
-from collections.abc import Callable
-from string import whitespace
-import logging
-
-
-log = logging.getLogger(__name__)
-
-ps_special = b"()<>[]{}%" # / is one too, but we take care of that one differently
-
-skipwhiteRE = re.compile(bytesjoin([b"[", whitespace, b"]*"]))
-endofthingPat = bytesjoin([b"[^][(){}<>/%", whitespace, b"]*"])
-endofthingRE = re.compile(endofthingPat)
-commentRE = re.compile(b"%[^\n\r]*")
-
-# XXX This is not entirely correct as it doesn't allow *nested* embedded parens:
-stringPat = rb"""
- \(
- (
- (
-        [^()]*   \\   [()]
- )
- |
- (
- [^()]* \( [^()]* \)
- )
- )*
- [^()]*
- \)
-"""
-stringPat = b"".join(stringPat.split())
-stringRE = re.compile(stringPat)
-
-hexstringRE = re.compile(bytesjoin([b"<[", whitespace, b"0-9A-Fa-f]*>"]))
-
-
-class PSTokenError(Exception):
- pass
-
-
-class PSError(Exception):
- pass
-
-
-class PSTokenizer(object):
- def __init__(self, buf=b"", encoding="ascii"):
- # Force self.buf to be a byte string
- buf = tobytes(buf)
- self.buf = buf
- self.len = len(buf)
- self.pos = 0
- self.closed = False
- self.encoding = encoding
-
- def read(self, n=-1):
- """Read at most 'n' bytes from the buffer, or less if the read
- hits EOF before obtaining 'n' bytes.
- If 'n' is negative or omitted, read all data until EOF is reached.
- """
- if self.closed:
- raise ValueError("I/O operation on closed file")
- if n is None or n < 0:
- newpos = self.len
- else:
- newpos = min(self.pos + n, self.len)
- r = self.buf[self.pos : newpos]
- self.pos = newpos
- return r
-
- def close(self):
- if not self.closed:
- self.closed = True
- del self.buf, self.pos
-
- def getnexttoken(
- self,
- # localize some stuff, for performance
- len=len,
- ps_special=ps_special,
- stringmatch=stringRE.match,
- hexstringmatch=hexstringRE.match,
- commentmatch=commentRE.match,
- endmatch=endofthingRE.match,
- ):
-
- self.skipwhite()
- if self.pos >= self.len:
- return None, None
- pos = self.pos
- buf = self.buf
- char = bytechr(byteord(buf[pos]))
- if char in ps_special:
- if char in b"{}[]":
- tokentype = "do_special"
- token = char
- elif char == b"%":
- tokentype = "do_comment"
- _, nextpos = commentmatch(buf, pos).span()
- token = buf[pos:nextpos]
- elif char == b"(":
- tokentype = "do_string"
- m = stringmatch(buf, pos)
- if m is None:
- raise PSTokenError("bad string at character %d" % pos)
- _, nextpos = m.span()
- token = buf[pos:nextpos]
- elif char == b"<":
- tokentype = "do_hexstring"
- m = hexstringmatch(buf, pos)
- if m is None:
- raise PSTokenError("bad hexstring at character %d" % pos)
- _, nextpos = m.span()
- token = buf[pos:nextpos]
- else:
- raise PSTokenError("bad token at character %d" % pos)
- else:
- if char == b"/":
- tokentype = "do_literal"
- m = endmatch(buf, pos + 1)
- else:
- tokentype = ""
- m = endmatch(buf, pos)
- if m is None:
- raise PSTokenError("bad token at character %d" % pos)
- _, nextpos = m.span()
- token = buf[pos:nextpos]
- self.pos = pos + len(token)
- token = tostr(token, encoding=self.encoding)
- return tokentype, token
-
- def skipwhite(self, whitematch=skipwhiteRE.match):
- _, nextpos = whitematch(self.buf, self.pos).span()
- self.pos = nextpos
-
- def starteexec(self):
- self.pos = self.pos + 1
- self.dirtybuf = self.buf[self.pos :]
- self.buf, R = eexec.decrypt(self.dirtybuf, 55665)
- self.len = len(self.buf)
- self.pos = 4
-
- def stopeexec(self):
- if not hasattr(self, "dirtybuf"):
- return
- self.buf = self.dirtybuf
- del self.dirtybuf
-
-
-class PSInterpreter(PSOperators):
- def __init__(self, encoding="ascii"):
- systemdict = {}
- userdict = {}
- self.encoding = encoding
- self.dictstack = [systemdict, userdict]
- self.stack = []
- self.proclevel = 0
- self.procmark = ps_procmark()
- self.fillsystemdict()
-
- def fillsystemdict(self):
- systemdict = self.dictstack[0]
- systemdict["["] = systemdict["mark"] = self.mark = ps_mark()
- systemdict["]"] = ps_operator("]", self.do_makearray)
- systemdict["true"] = ps_boolean(1)
- systemdict["false"] = ps_boolean(0)
- systemdict["StandardEncoding"] = ps_array(ps_StandardEncoding)
- systemdict["FontDirectory"] = ps_dict({})
- self.suckoperators(systemdict, self.__class__)
-
- def suckoperators(self, systemdict, klass):
- for name in dir(klass):
- attr = getattr(self, name)
- if isinstance(attr, Callable) and name[:3] == "ps_":
- name = name[3:]
- systemdict[name] = ps_operator(name, attr)
- for baseclass in klass.__bases__:
- self.suckoperators(systemdict, baseclass)
-
- def interpret(self, data, getattr=getattr):
- tokenizer = self.tokenizer = PSTokenizer(data, self.encoding)
- getnexttoken = tokenizer.getnexttoken
- do_token = self.do_token
- handle_object = self.handle_object
- try:
- while 1:
- tokentype, token = getnexttoken()
- if not token:
- break
- if tokentype:
- handler = getattr(self, tokentype)
- object = handler(token)
- else:
- object = do_token(token)
- if object is not None:
- handle_object(object)
- tokenizer.close()
- self.tokenizer = None
- except:
- if self.tokenizer is not None:
- log.debug(
- "ps error:\n"
- "- - - - - - -\n"
- "%s\n"
- ">>>\n"
- "%s\n"
- "- - - - - - -",
- self.tokenizer.buf[self.tokenizer.pos - 50 : self.tokenizer.pos],
- self.tokenizer.buf[self.tokenizer.pos : self.tokenizer.pos + 50],
- )
- raise
-
- def handle_object(self, object):
- if not (self.proclevel or object.literal or object.type == "proceduretype"):
- if object.type != "operatortype":
- object = self.resolve_name(object.value)
- if object.literal:
- self.push(object)
- else:
- if object.type == "proceduretype":
- self.call_procedure(object)
- else:
- object.function()
- else:
- self.push(object)
-
- def call_procedure(self, proc):
- handle_object = self.handle_object
- for item in proc.value:
- handle_object(item)
-
- def resolve_name(self, name):
- dictstack = self.dictstack
- for i in range(len(dictstack) - 1, -1, -1):
- if name in dictstack[i]:
- return dictstack[i][name]
- raise PSError("name error: " + str(name))
-
- def do_token(
- self,
- token,
- int=int,
- float=float,
- ps_name=ps_name,
- ps_integer=ps_integer,
- ps_real=ps_real,
- ):
- try:
- num = int(token)
- except (ValueError, OverflowError):
- try:
- num = float(token)
- except (ValueError, OverflowError):
- if "#" in token:
- hashpos = token.find("#")
- try:
- base = int(token[:hashpos])
- num = int(token[hashpos + 1 :], base)
- except (ValueError, OverflowError):
- return ps_name(token)
- else:
- return ps_integer(num)
- else:
- return ps_name(token)
- else:
- return ps_real(num)
- else:
- return ps_integer(num)
-
- def do_comment(self, token):
- pass
-
- def do_literal(self, token):
- return ps_literal(token[1:])
-
- def do_string(self, token):
- return ps_string(token[1:-1])
-
- def do_hexstring(self, token):
- hexStr = "".join(token[1:-1].split())
- if len(hexStr) % 2:
- hexStr = hexStr + "0"
- cleanstr = []
- for i in range(0, len(hexStr), 2):
- cleanstr.append(chr(int(hexStr[i : i + 2], 16)))
- cleanstr = "".join(cleanstr)
- return ps_string(cleanstr)
-
- def do_special(self, token):
- if token == "{":
- self.proclevel = self.proclevel + 1
- return self.procmark
- elif token == "}":
- proc = []
- while 1:
- topobject = self.pop()
- if topobject == self.procmark:
- break
- proc.append(topobject)
- self.proclevel = self.proclevel - 1
- proc.reverse()
- return ps_procedure(proc)
- elif token == "[":
- return self.mark
- elif token == "]":
- return ps_name("]")
- else:
- raise PSTokenError("huh?")
-
- def push(self, object):
- self.stack.append(object)
-
- def pop(self, *types):
- stack = self.stack
- if not stack:
- raise PSError("stack underflow")
- object = stack[-1]
- if types:
- if object.type not in types:
- raise PSError(
- "typecheck, expected %s, found %s" % (repr(types), object.type)
- )
- del stack[-1]
- return object
-
- def do_makearray(self):
- array = []
- while 1:
- topobject = self.pop()
- if topobject == self.mark:
- break
- array.append(topobject)
- array.reverse()
- self.push(ps_array(array))
-
- def close(self):
- """Remove circular references."""
- del self.stack
- del self.dictstack
-
-
-def unpack_item(item):
- tp = type(item.value)
- if tp == dict:
- newitem = {}
- for key, value in item.value.items():
- newitem[key] = unpack_item(value)
- elif tp == list:
- newitem = [None] * len(item.value)
- for i in range(len(item.value)):
- newitem[i] = unpack_item(item.value[i])
- if item.type == "proceduretype":
- newitem = tuple(newitem)
- else:
- newitem = item.value
- return newitem
-
-
-def suckfont(data, encoding="ascii"):
- m = re.search(rb"/FontName\s+/([^ \t\n\r]+)\s+def", data)
- if m:
- fontName = m.group(1)
- fontName = fontName.decode()
- else:
- fontName = None
- interpreter = PSInterpreter(encoding=encoding)
- interpreter.interpret(
- b"/Helvetica 4 dict dup /Encoding StandardEncoding put definefont pop"
- )
- interpreter.interpret(data)
- fontdir = interpreter.dictstack[0]["FontDirectory"].value
- if fontName in fontdir:
- rawfont = fontdir[fontName]
- else:
- # fall back, in case fontName wasn't found
- fontNames = list(fontdir.keys())
- if len(fontNames) > 1:
- fontNames.remove("Helvetica")
- fontNames.sort()
- rawfont = fontdir[fontNames[0]]
- interpreter.close()
- return unpack_item(rawfont)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/common/normalize_url.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/common/normalize_url.py
deleted file mode 100644
index afec9284ca5e0ff3ce24926bf0e8aed67c7f4f19..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/common/normalize_url.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from __future__ import annotations
-
-from collections.abc import Callable
-import re
-from urllib.parse import quote, unquote, urlparse, urlunparse # noqa: F401
-
-import mdurl
-
-from .. import _punycode
-
-RECODE_HOSTNAME_FOR = ("http:", "https:", "mailto:")
-
-
-def normalizeLink(url: str) -> str:
- """Normalize destination URLs in links
-
- ::
-
- [label]: destination 'title'
- ^^^^^^^^^^^
- """
- parsed = mdurl.parse(url, slashes_denote_host=True)
-
- if parsed.hostname:
- # Encode hostnames in urls like:
- # `http://host/`, `https://host/`, `mailto:user@host`, `//host/`
- #
- # We don't encode unknown schemas, because it's likely that we encode
- # something we shouldn't (e.g. `skype:name` treated as `skype:host`)
- #
- if not parsed.protocol or parsed.protocol in RECODE_HOSTNAME_FOR:
- try:
- parsed = parsed._replace(hostname=_punycode.to_ascii(parsed.hostname))
- except Exception:
- pass
-
- return mdurl.encode(mdurl.format(parsed))
-
-
-def normalizeLinkText(url: str) -> str:
- """Normalize autolink content
-
- ::
-
-        <destination>
- ~~~~~~~~~~~
- """
- parsed = mdurl.parse(url, slashes_denote_host=True)
-
- if parsed.hostname:
- # Encode hostnames in urls like:
- # `http://host/`, `https://host/`, `mailto:user@host`, `//host/`
- #
- # We don't encode unknown schemas, because it's likely that we encode
- # something we shouldn't (e.g. `skype:name` treated as `skype:host`)
- #
- if not parsed.protocol or parsed.protocol in RECODE_HOSTNAME_FOR:
- try:
- parsed = parsed._replace(hostname=_punycode.to_unicode(parsed.hostname))
- except Exception:
- pass
-
- # add '%' to exclude list because of https://github.com/markdown-it/markdown-it/issues/720
- return mdurl.decode(mdurl.format(parsed), mdurl.DECODE_DEFAULT_CHARS + "%")
-
-
-BAD_PROTO_RE = re.compile(r"^(vbscript|javascript|file|data):")
-GOOD_DATA_RE = re.compile(r"^data:image\/(gif|png|jpeg|webp);")
-
-
-def validateLink(url: str, validator: Callable | None = None) -> bool:
- """Validate URL link is allowed in output.
-
- This validator can prohibit more than really needed to prevent XSS.
- It's a tradeoff to keep code simple and to be secure by default.
-
- Note: url should be normalized at this point, and existing entities decoded.
- """
- if validator is not None:
- return validator(url)
- url = url.strip().lower()
- return bool(GOOD_DATA_RE.search(url)) if BAD_PROTO_RE.search(url) else True
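-
-
-# Usage sketch (illustrative, not part of the upstream module; run via
-# `python -m markdown_it.common.normalize_url` so the relative imports resolve):
-# validateLink rejects script/file/data URLs unless they match the data:image
-# whitelist, and normalizeLink punycode-encodes hostnames for http/https/mailto.
-if __name__ == "__main__":
-    print(validateLink("javascript:alert(1)"))         # False
-    print(validateLink("data:image/png;base64,AAAA"))  # True
-    print(validateLink("https://example.com/page"))    # True
-    # the IDN hostname below is encoded to xn--bcher-kva.example
-    print(normalizeLink("https://bücher.example/path"))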
diff --git a/spaces/laurabarreda/genre_prediction/README.md b/spaces/laurabarreda/genre_prediction/README.md
deleted file mode 100644
index 4abbf8ee470434d055e86551245dd34dc18b9f06..0000000000000000000000000000000000000000
--- a/spaces/laurabarreda/genre_prediction/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Genre Prediction
-emoji: 🏃
-colorFrom: indigo
-colorTo: green
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/lawliet/CS224-knowledge-discovery/src/retrieve.py b/spaces/lawliet/CS224-knowledge-discovery/src/retrieve.py
deleted file mode 100644
index 1a5abea7db930d0463e55330843dbda4508dd61b..0000000000000000000000000000000000000000
--- a/spaces/lawliet/CS224-knowledge-discovery/src/retrieve.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import pinecone
-from .encoder import TextEncoder
-import os
-
-
-pinecone.init(api_key=os.environ["PINECONE_API_KEY"])
-index = pinecone.Index("nlu-background")
-
-
-async def get_pinecone_results(_q: str, k=3):
- encoder = TextEncoder()
- query_vec, usage = await encoder.encode_text([_q])
- query_vec = query_vec[0]
- query_response = index.query(
- namespace="nlu-background-cs224n",
- top_k=k,
- include_values=True,
- include_metadata=True,
- vector=query_vec,
- filter={},
- )
- query_response_dict = {
- "matches": query_response["matches"],
- }
- return query_response_dict, usage
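-
-
-# Usage sketch (an illustrative assumption, not part of the original Space code):
-# get_pinecone_results is a coroutine, so callers need an event loop, and the
-# PINECONE_API_KEY variable plus the "nlu-background" index must already exist.
-# Run via `python -m src.retrieve` so the relative import of TextEncoder resolves.
-if __name__ == "__main__":
-    import asyncio
-
-    async def _demo():
-        results, usage = await get_pinecone_results("what is attention?", k=3)
-        for match in results["matches"]:
-            # each match exposes the stored vector id, similarity score and metadata
-            print(match.id, match.score, match.metadata)
-
-    asyncio.run(_demo())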
diff --git "a/spaces/leogabraneth/text-generation-webui-main/docs/10 \342\200\220 WSL.md" "b/spaces/leogabraneth/text-generation-webui-main/docs/10 \342\200\220 WSL.md"
deleted file mode 100644
index 3e9865c168abf65351d8c69ec4b9a2bfef64dab1..0000000000000000000000000000000000000000
--- "a/spaces/leogabraneth/text-generation-webui-main/docs/10 \342\200\220 WSL.md"
+++ /dev/null
@@ -1,143 +0,0 @@
-## WSL instructions
-
-If you do not have WSL installed, follow the [instructions below](https://github.com/oobabooga/text-generation-webui/wiki/10-%E2%80%90-WSL#wsl-installation) first.
-
-### Additional WSL setup info
-
-If you want to install Linux to a drive other than C, open PowerShell and enter these commands:
-
-```
-cd D:\Path\To\Linux
-$ProgressPreference = 'SilentlyContinue'
-Invoke-WebRequest -Uri <DistroURL> -OutFile Linux.appx -UseBasicParsing
-mv Linux.appx Linux.zip
-```
-
-Then open Linux.zip and you should see several .appx files inside.
-
-The one with _x64.appx contains the exe installer that you need.
-
-Extract the contents of that _x64.appx file and run the .exe inside it to install.
-
-Linux Distro URLs: https://learn.microsoft.com/en-us/windows/wsl/install-manual#downloading-distributions
-
-**ENSURE THAT THE WSL LINUX DISTRO THAT YOU WISH TO USE IS SET AS THE DEFAULT!**
-
-Do this by using these commands:
-
-```
-wsl -l
-wsl -s <DistributionName>
-```
-
-### Web UI Installation
-
-Run the "start" script. By default it will install the web UI in WSL:
-/home/{username}/text-gen-install
-
-To launch the web UI in the future after it is already installed, run
-the same "start" script. Ensure that one_click.py and wsl.sh are next to it!
-
-### Updating the web UI
-
-As an alternative to running the "update" script, you can also run "wsl.sh update" in WSL.
-
-### Running an interactive shell
-
-As an alternative to running the "cmd" script, you can also run "wsl.sh cmd" in WSL.
-
-### Changing the default install location
-
-To change this, you will need to edit the scripts as follows:
-wsl.sh: line ~22 INSTALL_DIR="/path/to/install/dir"
-
-Keep in mind that there is a long-standing bug in WSL that significantly
-slows drive read/write speeds when using a physical drive as opposed to
-the virtual one that Linux is installed in.
-
-## WSL installation
-
-Guide created by [@jfryton](https://github.com/jfryton). Thank you jfryton.
-
------
-
-Here's an easy-to-follow, step-by-step guide for installing Windows Subsystem for Linux (WSL) with Ubuntu on Windows 10/11:
-
-### Step 1: Enable WSL
-
-1. Press the Windows key + X and click on "Windows PowerShell (Admin)" or "Windows Terminal (Admin)" to open PowerShell or Terminal with administrator privileges.
-2. In the PowerShell window, type the following command and press Enter:
-
-```
-wsl --install
-```
-
-If this command doesn't work, you can enable WSL with the following command for Windows 10:
-
-```
-wsl --set-default-version 1
-```
-
-For Windows 11, you can use:
-
-```
-wsl --set-default-version 2
-```
-
-You may be prompted to restart your computer. If so, save your work and restart.
-
-### Step 2: Install Ubuntu
-
-1. Open the Microsoft Store.
-2. Search for "Ubuntu" in the search bar.
-3. Choose the desired Ubuntu version (e.g., Ubuntu 20.04 LTS) and click "Get" or "Install" to download and install the Ubuntu app.
-4. Once the installation is complete, click "Launch" or search for "Ubuntu" in the Start menu and open the app.
-
-### Step 3: Set up Ubuntu
-
-1. When you first launch the Ubuntu app, it will take a few minutes to set up. Be patient as it installs the necessary files and sets up your environment.
-2. Once the setup is complete, you will be prompted to create a new UNIX username and password. Choose a username and password, and make sure to remember them, as you will need them for future administrative tasks within the Ubuntu environment.
-
-### Step 4: Update and upgrade packages
-
-1. After setting up your username and password, it's a good idea to update and upgrade your Ubuntu system. Run the following commands in the Ubuntu terminal:
-
-```
-sudo apt update
-sudo apt upgrade
-```
-
-2. Enter your password when prompted. This will update the package list and upgrade any outdated packages.
-
-Congratulations! You have now installed WSL with Ubuntu on your Windows 10/11 system. You can use the Ubuntu terminal for various tasks, like running Linux commands, installing packages, or managing files.
-
-You can launch your WSL Ubuntu installation by selecting the Ubuntu app (like any other program installed on your computer) or typing 'ubuntu' into PowerShell or Terminal.
-
-### Step 5: Proceed with Linux instructions
-
-1. You can now follow the Linux setup instructions. If you receive any error messages about a missing tool or package, just install them using apt:
-
-```
-sudo apt install [missing package]
-```
-
-You will probably need to install build-essential
-
-```
-sudo apt install build-essential
-```
-
-If you face any issues or need to troubleshoot, you can always refer to the official Microsoft documentation for WSL: https://docs.microsoft.com/en-us/windows/wsl/
-
-### WSL2 performance using /mnt:
-
-When you git clone a repository, put it inside WSL and not outside. To understand more, take a look at this [issue](https://github.com/microsoft/WSL/issues/4197#issuecomment-604592340)
-
-### Bonus: Port Forwarding
-
-By default, you won't be able to access the webui from another device on your local network. You will need to setup the appropriate port forwarding using the following command (using PowerShell or Terminal with administrator privileges).
-
-```
-netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=localhost connectport=7860
-```
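-
-If you later want to remove that rule, the matching delete command (the standard `netsh portproxy` counterpart, nothing specific to the web UI) is:
-
-```
-netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=7860
-```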
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Airsimmer A320 Gauges Crack HOT.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Airsimmer A320 Gauges Crack HOT.md
deleted file mode 100644
index 4bebf75969b2a05581de88e3295eba3934f63e05..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Airsimmer A320 Gauges Crack HOT.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
`;
- }
-
- marked.setOptions( {
- renderer,
- ...markedOptions
- } );
-
- return processSlides( deck.getRevealElement() ).then( convertSlides );
-
- },
-
- // TODO: Do these belong in the API?
- processSlides: processSlides,
- convertSlides: convertSlides,
- slidify: slidify,
- marked: marked
- }
-
-};
-
-export default Plugin;
diff --git a/spaces/musadac/VilanOCR-Urdu-English-Chinese/static/style.css b/spaces/musadac/VilanOCR-Urdu-English-Chinese/static/style.css
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/nadiaoktiarsy/deployment/eda.py b/spaces/nadiaoktiarsy/deployment/eda.py
deleted file mode 100644
index 570fc1e16b6114bf49652198394b7609c51ef7e8..0000000000000000000000000000000000000000
--- a/spaces/nadiaoktiarsy/deployment/eda.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import streamlit as st
-import pandas as pd
-import seaborn as sns
-import matplotlib.pyplot as plt
-import plotly.express as px
-from PIL import Image
-import numpy as np
-
-def run():
-
- # Creating title
- st.title('Student Alcohol Consumption in Portugal: Planning to Go to a Higher Education?')
- # Description of the page
- st.write('This page is created by Nadia Oktiarsy')
- st.markdown('---')
-
- # Adding image
- image = Image.open('escola-portugal.jpg')
- st.image(image, caption='Escola Portugal')
-
- st.markdown('---')
-
- # Magic syntax
- st.write('''
- #### Overview
-
-    Alcohol's drawbacks for the human body have been discussed many times, from the perspectives of health, social science, economics, and others. The causes of alcohol abuse are said to include peer pressure, fraternity or sorority involvement, and stress. Among adolescents at school, students who abuse alcohol can suffer from health problems, poor academic performance, or legal consequences. It is also a concern for many parents and caregivers whether students who have been consuming alcohol will continue on to higher education or not.
-
-    This prediction aims to understand **whether students who have academic problems because of their drinking habits are likely to pass or fail to reach higher education**. Hopefully this discussion can provide insight for the related institutions and organizations to regulate underage alcohol consumption in Portugal wisely.
-
- Dataset source: https://www.kaggle.com/datasets/uciml/student-alcohol-consumption
- ''')
- st.markdown('---')
-
- # Show Dataframe
- st.write('''#### Dataset
-
- There are 395 students evaluated with 33 different characteristics and values as columns.''')
- df= pd.read_csv('https://raw.githubusercontent.com/nadiaoktiarsy/hacktiv8_p0/main/student-mat.csv')
- st.dataframe(df)
- st.markdown('---')
-
- # Average Overall
- st.write('''#### General Information''')
- describe = df.describe().T
- st.dataframe(describe)
- st.markdown('---')
-
- ## Create Barplot
- st.write('''#### Number of Students Aiming A Higher Education
- - Yes (aiming) : 375
- - No (Not aiming) : 20''')
- fig = plt.figure(figsize=(15,5))
- sns.countplot(x='higher', data=df)
- st.pyplot(fig)
-
- # Histogram based on users input
- st.write('''#### Histograms''')
- choice = st.selectbox("Choose a column: ", ('school', 'sex', 'failures', 'absences', 'Dalc', 'Walc', 'G1', 'G2', 'G3'))
- fig = plt.figure(figsize=(15,5))
- sns.histplot(df[choice], bins=17, kde=True)
- st.pyplot(fig)
\ No newline at end of file
diff --git a/spaces/naotakigawa/test-qatool/pages/ImportAllFile.py b/spaces/naotakigawa/test-qatool/pages/ImportAllFile.py
deleted file mode 100644
index ecf50d902ef6ebfa64abbc315cc0e956a7dbf2b8..0000000000000000000000000000000000000000
--- a/spaces/naotakigawa/test-qatool/pages/ImportAllFile.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import streamlit as st
-import common
-import os
-import pickle
-from llama_hub.file.cjk_pdf.base import CJKPDFReader
-from llama_hub.file.pptx.base import PptxReader
-from llama_hub.file.pandas_excel.base import PandasExcelReader
-from llama_hub.file.docx.base import DocxReader
-from llama_index import Document, SimpleDirectoryReader
-from pathlib import Path
-from log import logger
-INDEX_NAME = os.environ["INDEX_NAME"]
-PKL_NAME = os.environ["PKL_NAME"]
-
-common.check_login()
-
-if "file_uploader_key" not in st.session_state:
- st.session_state["file_uploader_key"] = 0
-
-st.title("📝 ImportAllFile")
-
-uploaded_file = st.file_uploader("Upload an article", type=("txt", "md", "pdf", "xlsx", "docx", "pptx"),key=st.session_state["file_uploader_key"])
-if st.button("import",use_container_width=True):
- filepath = os.path.join('documents', os.path.basename( uploaded_file.name))
- try:
- with open(filepath, 'wb') as f:
- f.write(uploaded_file.getvalue())
- f.close()
-
- loader=None
- noextpath,extension = os.path.splitext(filepath)
- logger.info(filepath)
- document = Document()
- if extension == ".txt" or extension ==".md":
- logger.info("extension")
- document = SimpleDirectoryReader(input_files=[filepath], filename_as_id=True).load_data()[0]
- else:
- logger.info("else")
- if extension == ".pdf":
- logger.info("CJKPDFReader")
- loader = CJKPDFReader()
- elif extension == ".pptx":
- logger.info("PptxReader")
- loader = PptxReader()
- elif extension == ".xlsx":
- logger.info("PandasExcelReader")
- loader = PandasExcelReader(pandas_config={"header": 0})
- elif extension == ".docx":
- logger.info("DocxReader")
- loader = DocxReader()
-            else:
-                logger.error("Can't read file: " + uploaded_file.name)
-                raise ValueError("Unsupported file extension: " + extension)
- document = loader.load_data(file=Path(filepath))[0]
- document.metadata={'filename': os.path.basename(uploaded_file.name)}
- st.session_state.stored_docs.append(uploaded_file.name)
- logger.info(st.session_state.stored_docs)
- st.session_state.index.insert(document=document)
- st.session_state.index.storage_context.persist(persist_dir=INDEX_NAME)
- os.remove(filepath)
- common.setChatEngine()
- with open(PKL_NAME, "wb") as f:
- print("pickle")
- pickle.dump(st.session_state.stored_docs, f)
- st.session_state["file_uploader_key"] += 1
- st.experimental_rerun()
- except Exception as e:
- # cleanup temp file
- logger.error(e)
- if filepath is not None and os.path.exists(filepath):
- os.remove(filepath)
-
-st.subheader("Import File List")
-if "stored_docs" in st.session_state:
- logger.info(st.session_state.stored_docs)
- for docname in st.session_state.stored_docs:
- st.write(docname)
diff --git a/spaces/naver/SuperFeatures/how/networks/__init__.py b/spaces/naver/SuperFeatures/how/networks/__init__.py
deleted file mode 100644
index 09c06c96201355773541a77f0e1133c2cd9e1ef9..0000000000000000000000000000000000000000
--- a/spaces/naver/SuperFeatures/how/networks/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""
-Pytorch networks
-"""
-
-from . import how_net
diff --git a/spaces/nbeuchat/actors_matching/README.md b/spaces/nbeuchat/actors_matching/README.md
deleted file mode 100644
index 774d257c588476d1f70766e8c16b2e0947d14b8c..0000000000000000000000000000000000000000
--- a/spaces/nbeuchat/actors_matching/README.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: Actors matching
-emoji: 🎬
-colorFrom: yellow
-colorTo: orange
-sdk: gradio
-app_file: app.py
-pinned: true
----
-
-# Actors matching demo
-
-Who should play Hannibal (the Carthaginian, not the cannibal) if HBO ever adapts his story? How about you? Who should be your actor?
-This application lets you input an image and see the top three actors that most closely resemble the image based on facial features.
-
-Try it out on my [Hugging Face Space](https://huggingface.co/spaces/nbeuchat/actors_matching)
-
-
-## Data
-
-The data comes from two sources:
-
-1. I built a list of relevant actors that have been in popular movies across their careers. The datasets that I used to build it can be found on the [IMDB datasets page](https://datasets.imdbws.com/) (see instructions [here](https://www.imdb.com/interfaces/))
-2. I then found 20 images of each actor using the Microsoft Bing Search API with queries such as *"Brad Pitt, actor or actress"*
-
-Note that due to API limits, I only took images from 1,000 actors.
-
-## Application
-
-The application is built with Gradio and deployed on HuggingFace Space. In the background, it uses:
-
-1. The [`face_recognition` library](https://github.com/ageitgey/face_recognition) to extract the location of faces in the image and compute an embedding of these faces
-2. Spotify's `annoy` library to efficiently search the closest actors based on the face embedding and a small database of actors' faces embeddings.
-3. Show you the best matches!
-
-This is meant to be a fun and tiny application. There are known issues and biases.
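-
-To make the three steps above concrete, here is a minimal sketch of how they fit together. It only relies on the public `face_recognition` and `annoy` APIs; the helper name, the distance metric, and the index file name are illustrative assumptions rather than the app's actual code (see `app.py` and `actors_matching/api.py` for that).
-
-```python
-import face_recognition
-from annoy import AnnoyIndex
-
-EMBEDDING_DIM = 128  # face_recognition encodings are 128-dimensional
-
-def match_actors(image_path: str, n_matches: int = 3):
-    index = AnnoyIndex(EMBEDDING_DIM, "euclidean")
-    index.load("actors_embeddings.ann")  # hypothetical index file
-    image = face_recognition.load_image_file(image_path)
-    locations = face_recognition.face_locations(image)             # 1. locate faces
-    encodings = face_recognition.face_encodings(image, locations)  # 2. embed each face
-    # 3. nearest actors in embedding space, one list of ids per detected face
-    return [index.get_nns_by_vector(e, n_matches) for e in encodings]
-```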
-
-## Known biases and limitations
-
-There are a few issues with the dataset and models used:
-
-- The dataset of actors is limited to a couple thousand actors and actresses, so it is not representative of the richness of professionals out there
-- The subset of actors and actresses selected is based on an aggregated metric that considers all movies and shows in which the person was listed as an actor/actress: the sum of the number of IMDb votes for each movie/show, weighted by its average IMDb score. This is obviously only a rough indicator of popularity, but it provided me with a quick way of getting a dataset with actors that people may know.
-- Given the above, the database sampling will have several biases that are intrinsic to (a) the IMDb database and user base itself, which is biased towards western/American movies, and (b) the movie industry itself, with its dominance of white male actors
-- The pictures of actors and actresses were collected through a simple Bing search and not manually verified, so there are several mistakes. For example, Graham Greene has a mix of pictures of Graham Greene, the Canadian actor, and Graham Greene, the writer. You may get surprising results from time to time! Let me know if you find mistakes
-
-## Next steps
-
-- Better image dataset (i.e., identify and clean up errors where multiple people were queried in the Bing search)
-- Larger dataset and more balanced dataset (to reduce the bias toward white male actors)
-- Provide a way of looping through multiple people in a picture in the Gradio app
-- Currently, I find the best matching actor using the average embedding for the actor. I plan to then do a second pass to find the closest matching picture(s) of this specific actor for a better user experience.
-- Deeper analysis of which embedding dimensions are necessary. Might want to reweight them.
-
-## Credits
-
-Author: Nicolas Beuchat (nicolas.beuchat@gmail.com)
-
-Thanks to the following open-source projects:
-
-- [dlib](https://github.com/davisking/dlib) by [Davis King](https://github.com/davisking) ([@nulhom](https://twitter.com/nulhom))
-- [face_recognition](https://github.com/ageitgey/face_recognition) by [Adam Geitgey](https://github.com/ageitgey)
-- [annoy](https://github.com/spotify/annoy) by Spotify
-
-Example images used in the Gradio app (most under [Creative Commons Attribution license](https://en.wikipedia.org/wiki/en:Creative_Commons)):
-
-- [RB Ginsburg](https://www.flickr.com/photos/tradlands/25602059686) - CC
-- [Frederick Douglass](https://commons.wikimedia.org/wiki/File:Frederick_Douglass_1856_sq.jpg) - CC
-- [Leonardo da Vinci](https://commons.wikimedia.org/wiki/File:Leonardo_da_Vinci._Photograph_by_E._Desmaisons_after_a_print_Wellcome_V0027541EL.jpg) - CC
-- [Hannibal Barca](https://en.wikipedia.org/wiki/Hannibal#/media/File:Mommsen_p265.jpg) - Public domain
-- [Joan of Arc](https://de.wikipedia.org/wiki/Jeanne_d%E2%80%99Arc#/media/Datei:Joan_of_Arc_miniature_graded.jpg) - Public domain
\ No newline at end of file
diff --git a/spaces/nbeuchat/actors_matching/app.py b/spaces/nbeuchat/actors_matching/app.py
deleted file mode 100644
index 696beed95de098e8e3d85232b0affd4fccfd0b5c..0000000000000000000000000000000000000000
--- a/spaces/nbeuchat/actors_matching/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import gradio as gr
-import PIL
-import numpy as np
-import re
-from actors_matching.api import analyze_image, load_annoy_index
-from pathlib import Path
-
-annoy_index, actors_mapping = load_annoy_index()
-
-
-def get_image_html(actor: dict):
- url = actor["url"]
- name = actor["name"]
- imdb_url = f"https://www.imdb.com/name/{actor['nconst']}/"
- return f"""
-    <a href="{imdb_url}" target="_blank">
-        <img src="{url}" alt="{name}">
-        <p>{name}</p>
-    </a>
-    """
-
-
-def get_best_matches(image, n_matches: int):
- return analyze_image(image, annoy_index=annoy_index, n_matches=n_matches)
-
-
-def resize_image_keep_ratio(input_image: np.array, size: tuple):
- resized_image = PIL.Image.fromarray(input_image)
- resized_image.thumbnail(size, PIL.Image.ANTIALIAS)
- return np.array(resized_image)
-
-
-def get_article_text():
- article = Path("README.md").read_text()
- # Remove the HuggingFace Space app information from the README
- article = re.sub(r"^---.+---\s+", "", article, flags=re.MULTILINE + re.DOTALL)
- return article
-
-
-def find_matching_actors(input_img, title, n_matches: int = 10):
- resized_image = resize_image_keep_ratio(input_img, (512, 512))
- best_matches_list = get_best_matches(resized_image, n_matches=n_matches)
-
- # TODO: allow looping through characters
- if best_matches_list:
- best_matches = best_matches_list[0]
-
- # TODO: Show how the initial image was parsed (ie: which person is displayed)
-
- # Build htmls to display the result
- output_htmls = []
- for match in best_matches["matches"]:
- actor = actors_mapping[match]
- output_htmls.append(get_image_html(actor))
-
- return output_htmls
-
- # No matches
- return [no_faces_found_html()]
-
-
-iface = gr.Interface(
- find_matching_actors,
- title="Which actor or actress looks like you?",
- description="""Who is the best person to play a movie about you? Upload a picture and find out!
- Or maybe you'd like to know who would best interpret your favorite historical character?
- Give it a shot or try one of the sample images below.
-
- Built with ❤️ using great open-source libraries such as dlib, face_recognition and Annoy.
-
- Please read below for more information on biases
- and limitations of the tool!""",
- article=get_article_text(),
- inputs=[
- gr.inputs.Image(shape=None, label="Your image"),
- gr.inputs.Textbox(
- label="Who's that?", placeholder="Optional, you can leave this blank"
- ),
- # gr.inputs.Slider(minimum=1, maximum=10, step=1, default=5, label="Number of matches"),
- ],
- outputs=gr.outputs.Carousel(gr.outputs.HTML(), label="Matching actors & actresses"),
- examples=[
- ["images/example_rb_ginsburg.jpg", "RB Ginsburg in 1977"],
- [
- "images/example_hannibal_barca.jpg",
- "Hannibal (the one with the elephants...)",
- ],
-        ["images/example_frederick_douglass.jpg", "Frederick Douglass"],
-        ["images/example_leonardo_davinci.jpg", "Leonardo da Vinci"],
- ["images/example_joan_of_arc.jpg", "Jeanne d'Arc"],
- ["images/example_sun_tzu.jpg", "Sun Tzu"],
- ],
-)
-
-iface.launch()
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Autodesk-3ds-Max-2009-64-Bit-Xforce-Keygen-EXCLUSIVE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Autodesk-3ds-Max-2009-64-Bit-Xforce-Keygen-EXCLUSIVE.md
deleted file mode 100644
index a0cafc07f3418cc714d08b85ac80eb00778e0697..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Autodesk-3ds-Max-2009-64-Bit-Xforce-Keygen-EXCLUSIVE.md
+++ /dev/null
@@ -1,58 +0,0 @@
-## Autodesk 3ds Max 2009 64 Bit Xforce Keygen
-
-
-
-
-
- ![Autodesk 3ds Max 2009 64 Bit Xforce Keygen \[EXCLUSIVE\]](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSEzr7NYAliUV5gQcODTs6TsXujeX7PU5cweb30-s0RmK2rfaEubY-tzPWH)
-
-
-
-
-
-**CLICK HERE ⇔ [https://maudaracte.blogspot.com/?file=2tyUBE](https://maudaracte.blogspot.com/?file=2tyUBE)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-I followed the instructions and it revealed the locations of the drug dealers on the map. I visited them and confirmed their identities, but I was unable to purchase any illegal substances from them because my reputation level was too low. How can I increase my reputation level so that I can access the black market?
-
-
-
-I have been trying to infiltrate the drug cartel for a long time, but I have not been able to gain their trust. I heard that there was a secret app that could help me locate and contact the dealers in my area. I downloaded it and entered the code that I found on a dark web forum. The app scanned my face and asked me some questions to verify my identity. Then it showed me a map with several icons representing the dealers.
-
-
-
-I decided to check out the nearest one. I drove to the address and saw a man standing outside a convenience store. He looked like the picture on the app. I approached him and pretended to be a casual customer. I asked him if he had any goods for sale. He looked at me suspiciously and said that he did not know what I was talking about. He told me to get lost before he called the cops. I realized that he did not trust me because I had a low reputation level on the app. I needed to find a way to raise it so that I could buy some drugs from him and get closer to the cartel.
-
-
-
-I opened the app again and looked for other options. I saw that there was a section called "Missions". It said that I could earn reputation points by completing various tasks for the cartel. Some of them were easy, like delivering packages or spreading rumors. Others were more dangerous, like stealing cars or killing rivals. I decided to start with something simple and see how it went. Maybe then I could buy some contraband and prove myself to the dealers.
-
-
-
-I chose a mission that required me to deliver a package to a nearby motel. The app gave me the coordinates and a code to unlock the locker where the package was stored. I drove to the location and found the locker. I entered the code and opened it. Inside was a small cardboard box wrapped in duct tape. I did not know what was inside, but I did not want to find out. I took the box and put it in my car.
-
-
-
-I followed the directions on the app to the motel. It was a rundown place with a neon sign that flickered. I parked my car and looked for the room number that the app gave me. It was on the second floor, at the end of the hallway. I knocked on the door and waited. A voice from inside asked me who I was. I said that I had a delivery for them. The voice told me to slide the package under the door. I did as instructed and heard a thud as the package landed on the floor.
-
-
-
-The voice thanked me and told me to leave. I turned around and walked back to my car. As I was leaving, I heard sirens in the distance. I looked at my rearview mirror and saw several police cars approaching the motel. I realized that I had just delivered a bomb to someone. I panicked and stepped on the gas. I hoped that no one saw me or recognized my car. I checked the app and saw that I had earned some reputation points for completing the mission. But I also felt a pang of guilt and fear for what I had done.
-
-
-
-
-
-
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cursed Castilla (Maldita Castilla EX) Trainer [VERIFIED] Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cursed Castilla (Maldita Castilla EX) Trainer [VERIFIED] Download.md
deleted file mode 100644
index 2340475ff28e2e699ab662d9bc09f8f43d34f901..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cursed Castilla (Maldita Castilla EX) Trainer [VERIFIED] Download.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download and Use a Trainer for Cursed Castilla (Maldita Castilla EX)
-
Cursed Castilla (Maldita Castilla EX) is a retro-style action platformer inspired by Spanish folklore and classic arcade games. The game features 8 stages, 48 types of enemies, 19 bosses, 4 endings, and a lot of challenges. If you are looking for some extra help to beat the game or just have some fun, you might want to download and use a trainer.
A trainer is a program that modifies the game's memory and allows you to activate various cheats, such as infinite lives, health, score, time, or invincibility. Trainers are usually designed for specific versions and distributions of the game, so make sure you download the one that matches your game.
-
One of the sources where you can find trainers for Cursed Castilla (Maldita Castilla EX) is Cheat Happens. This website offers a +5 trainer that works with the Steam version of the game. To download it, you need to register an account and pay a subscription fee. Alternatively, you can also find some free trainers on other websites, such as Mod DB or GameCopyWorld, but be careful of potential viruses or malware.
-
To use a trainer, you need to follow these steps:
-
-
Download the trainer file and extract it to a folder of your choice.
-
Run the trainer as an administrator before launching the game.
-
Press the hotkeys indicated on the trainer's interface to activate or deactivate the cheats.
-
Enjoy the game with your desired cheats.
-
-
Note that some trainers may trigger false positives from your antivirus software or cause conflicts with other programs. If that happens, you may need to disable or whitelist them temporarily. Also, some trainers may not work properly if the game is updated or patched. In that case, you may need to wait for a new version of the trainer or use an older version of the game.
-
-
Trainers are meant to be used for personal and offline use only. Do not use them online or in multiplayer modes, as that may result in bans or other penalties. Also, do not use them to ruin the experience of other players or to gain unfair advantages. Use them responsibly and at your own risk.
-
-
Cursed Castilla (Maldita Castilla EX) is not only a homage to the arcade classics, but also a tribute to the Spanish culture and history. The game is set in the kingdom of Castilla during the Middle Ages, and features many references to legends, myths, and literature from that era. You will encounter characters and creatures from the epic poem Cantar de Mio Cid, the chivalric romance Amadis de Gaula, and the medieval bestiary. You will also visit locations such as Toledo, Alhambra, or Covadonga, and witness historical events such as the Reconquista or the Battle of Las Navas de Tolosa.
-
The game's graphics and sound are faithful to the 16-bit era, with pixel art sprites, parallax scrolling backgrounds, and chiptune music. The game also mimics the arcade experience by having limited continues, high difficulty, and score-based gameplay. However, the game also offers some modern features, such as achievements, leaderboards, multiple endings, and unlockable extras. The game also has a remastered mode that enhances the visuals and audio with more colors and effects.
-
If you are a fan of retro games or Spanish culture, you will find a lot to enjoy in Cursed Castilla (Maldita Castilla EX). The game is challenging but fair, rewarding but addictive, and nostalgic but fresh. It is a game that respects its roots but also adds its own personality and charm. It is a game that deserves to be played by anyone who loves action platformers.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack __LINK__.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack __LINK__.md
deleted file mode 100644
index 3a10d88c1d31ae34d5f0fbe145460e002b73f0fd..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack __LINK__.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
How to Download and Install Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack
-
-
Lightmap HDR Light Studio Tungsten is a powerful piece of software that allows you to create and edit high dynamic range (HDR) images for lighting your 3D scenes. With this software, you can easily adjust the brightness, color, and position of light sources on a 3D model, and see the results in real time on your render.
-
-
If you want to use this software for free, you need to download and install the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack, which is a modified version of the original software that bypasses the license verification process. However, this is not a legal or safe way to use the software, and it may cause some problems for your computer and your data.
-
Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack
In this article, we will show you how to download and install the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack, but we do not recommend or endorse this method. We advise you to purchase the official license from the developer's website if you want to use the software legally and safely.
-
-
Step 1: Download the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack
-
-
The first step is to download the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack from a reliable source on the internet. You can search for it on Google or use one of the links below:
Be careful when downloading files from unknown sources, as they may contain viruses or malware that can harm your computer or steal your data. Always scan the files with an antivirus software before opening them.
-
-
Step 2: Install the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack
-
-
The second step is to install the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack on your computer. To do this, follow these steps:
-
-
-
Extract the downloaded file using a program like WinRAR or 7-Zip.
-
Run the setup.exe file and follow the instructions on the screen.
-
When prompted, enter the serial number or activation code that came with the crack file.
-
Complete the installation process and launch the software.
-
-
-
You should now be able to use the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack without any limitations or restrictions.
-
-
Step 3: Enjoy the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack
-
-
The third step is to enjoy the Lightmap HDR Light Studio Tungsten 6.2.0.2019.0719 Crack and create stunning HDR images for your 3D scenes.
-
-
With this software, you can easily create realistic lighting effects for your 3D models, such as reflections, shadows, highlights, and more.
-
-
-
You can also import your own HDR images or use one of the presets that come with the software.
-
-
You can export your HDR images as EXR,
-
-
\ No newline at end of file
diff --git a/spaces/nightfury/Colorizer_Models/README.md b/spaces/nightfury/Colorizer_Models/README.md
deleted file mode 100644
index 4fe1ca6cd89b5747f466318aea74195b96160d94..0000000000000000000000000000000000000000
--- a/spaces/nightfury/Colorizer_Models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Colorizer Models
-emoji: 🌈🎨
-colorFrom: red
-colorTo: orange
-sdk: gradio
-sdk_version: 3.5
-app_file: app.py
-pinned: false
-license: bsd-2-clause
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nllg/AutomaTikZ/app.py b/spaces/nllg/AutomaTikZ/app.py
deleted file mode 100644
index c97d54c4bebcef30b1426f98be2e895aab2e4d2a..0000000000000000000000000000000000000000
--- a/spaces/nllg/AutomaTikZ/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from os import getenv
-from textwrap import dedent
-
-import gradio as gr
-from torch import cuda
-
-from src.automatikz.examples.webui.webui import build_ui, remove_darkness, get_banner
-
-PUBLIC_DEMO = getenv("SPACE_ID") == "nllg/AutomaTikZ"
-
-if PUBLIC_DEMO and not cuda.is_available():
- center = ".gradio-container {text-align: center}"
- with gr.Blocks(css=center, theme=remove_darkness(gr.themes.Soft()), title="AutomaTikZ") as demo:
- badge = "https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-xl.svg"
- link = "https://huggingface.co/spaces/nllg/AutomaTikZ?duplicate=true"
-        html = f'<a href="{link}"><img src="{badge}"/></a>'
- message = dedent("""\
- The size of our models exceeds the resource constraints offered by the
- free tier of Hugging Face Spaces. For full functionality, we recommend
- duplicating this space on a paid private GPU runtime.
- """)
- gr.Markdown(f'{get_banner()}\n{message}\n{html}')
- demo.launch()
-else:
- build_ui(lock=PUBLIC_DEMO, force_light=True).queue().launch(server_name="0.0.0.0", server_port=7860)
diff --git a/spaces/nmenezes0/fast-ai-example/README.md b/spaces/nmenezes0/fast-ai-example/README.md
deleted file mode 100644
index eb46d9bf2931b402e568400bd6a5a502d0371772..0000000000000000000000000000000000000000
--- a/spaces/nmenezes0/fast-ai-example/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Camels classifier
-emoji: 🏃
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-
-Run using `python app.py`.
\ No newline at end of file
diff --git a/spaces/nooji/ImpCatcher/src/ImpCatcher.jl b/spaces/nooji/ImpCatcher/src/ImpCatcher.jl
deleted file mode 100644
index 8f85b14cc04d280cbae950237e940c68429b23d6..0000000000000000000000000000000000000000
--- a/spaces/nooji/ImpCatcher/src/ImpCatcher.jl
+++ /dev/null
@@ -1,7 +0,0 @@
-module ImpCatcher
-
-using Chess
-
-include("simulate.jl")
-
-end # module
diff --git a/spaces/oliver2023/chatgpt-on-wechat/app.py b/spaces/oliver2023/chatgpt-on-wechat/app.py
deleted file mode 100644
index 35f14aa934f3ccab83dcd6922f5c128d09db29dd..0000000000000000000000000000000000000000
--- a/spaces/oliver2023/chatgpt-on-wechat/app.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# encoding:utf-8
-
-import os
-from config import conf, load_config
-from channel import channel_factory
-from common.log import logger
-from plugins import *
-import signal
-import sys
-import config
-import gradio as gr
-from io import BytesIO
-from PIL import Image
-from concurrent.futures import ThreadPoolExecutor
-thread_pool = ThreadPoolExecutor(max_workers=8)
-
-def getImage(bytes):
- bytes_stream = BytesIO(bytes)
- image = Image.open(bytes_stream)
- return image
-
-def getLoginUrl():
- # load config
- config.load_config()
- # create channel
- bot = channel_factory.create_channel("wx")
- thread_pool.submit(bot.startup)
- while (True):
- if bot.getQrCode():
- return getImage(bot.getQrCode())
-
-def sigterm_handler_wrap(_signo):
- old_handler = signal.getsignal(_signo)
- def func(_signo, _stack_frame):
- logger.info("signal {} received, exiting...".format(_signo))
- conf().save_user_datas()
- return old_handler(_signo, _stack_frame)
- signal.signal(_signo, func)
-
-def run():
- try:
- # load config
- load_config()
- # ctrl + c
- sigterm_handler_wrap(signal.SIGINT)
- # kill signal
- sigterm_handler_wrap(signal.SIGTERM)
-
- # create channel
- channel_name=conf().get('channel_type', 'wx')
- if channel_name == 'wxy':
- os.environ['WECHATY_LOG']="warn"
- # os.environ['WECHATY_PUPPET_SERVICE_ENDPOINT'] = '127.0.0.1:9001'
-
- channel = channel_factory.create_channel(channel_name)
- if channel_name in ['wx','wxy','wechatmp']:
- PluginManager().load_plugins()
-
- # startup channel
- channel.startup()
- except Exception as e:
- logger.error("App startup failed!")
- logger.exception(e)
-
-if __name__ == '__main__':
- #run()
- try:
-
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- btn = gr.Button(value="生成二维码")
- with gr.Column():
- outputs=[gr.Pil()]
- btn.click(getLoginUrl, outputs=outputs)
-
- demo.launch()
-
-
- except Exception as e:
- logger.error("App startup failed!")
- logger.exception(e)
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index 24405ec4fa1d1ebf802813bc1af3ce2840ef2f9c..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-name: "\U0001F680 Feature request"
-about: Suggest an idea for this project
-title: ''
-labels: ''
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/reusing_seeds.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/reusing_seeds.md
deleted file mode 100644
index 9ad27c3f2ac7f3bcda29f344420efef2c7588cd9..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/reusing_seeds.md
+++ /dev/null
@@ -1,63 +0,0 @@
-
-
-# Improving image quality with deterministic generation
-
-A common way to improve the quality of generated images is *deterministic batch generation*: generate a batch of images and, in a second round of inference, select one image to improve with a more detailed prompt. The key is to pass the pipeline a list of [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html#generator)s for the batched image generation and to tie each `Generator` to a seed so it can be reused for its image.
-
-For example, let's use [`runwayml/stable-diffusion-v1-5`](runwayml/stable-diffusion-v1-5) to generate several versions of the following prompt.
-
-```py
-prompt = "Labrador in the style of Vermeer"
-```
-
-Instantiate the pipeline with [`DiffusionPipeline.from_pretrained`] and place it on a GPU (if available).
-
-```python
->>> from diffusers import DiffusionPipeline
-
->>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
->>> pipe = pipe.to("cuda")
-```
-
-Now, define four different `Generator`s and assign each `Generator` a seed (`0` to `3`) so you can reuse a `Generator` later for a specific image.
-
-```python
->>> import torch
-
->>> generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)]
-```
-
-Generate the images and take a look.
-
-```python
->>> images = pipe(prompt, generator=generator, num_images_per_prompt=4).images
->>> images
-```
-
-
-
-In this example we improve the first image, but in practice you can use any image you want (even the one with two sets of eyes!). The first image used the `Generator` with seed `0`, so we will reuse that `Generator` for the second round of inference. To improve the quality of the image, add some extra text to the prompt:
-
-```python
-prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]]
-generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)]
-```
-
-This creates four generators with seed `0` and generates another batch of images that all look like the first image of the previous round!
-
-```python
->>> images = pipe(prompt, generator=generator).images
->>> images
-```
-
-
diff --git a/spaces/paulokewunmi/jumia_product_search/image_search_engine/data/jumia_3650_dataset.py b/spaces/paulokewunmi/jumia_product_search/image_search_engine/data/jumia_3650_dataset.py
deleted file mode 100644
index e9263f6a5d617d92d8c63c85a4ca574019d4aced..0000000000000000000000000000000000000000
--- a/spaces/paulokewunmi/jumia_product_search/image_search_engine/data/jumia_3650_dataset.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from pathlib import Path
-
-import joblib
-import pandas as pd
-import torch
-from PIL import Image
-from torch.utils.data import DataLoader, Dataset
-from torchvision import transforms
-
-from image_search_engine.metadata import jumia_3650
-
-PACKAGE_DIR = Path(__file__).parent.parent
-
-# Load the pickled file
-with open(
- PACKAGE_DIR / "artifacts/label_encoder/class_encoder_jumia_3650.pkl", "rb"
-) as file:
- encoder = joblib.load(file)
-
-
-class Jumia3650Dataset(Dataset):
- def __init__(self, data_filename, data_transforms=None, img_size=224):
- self.df = pd.read_csv(data_filename)
- self.file_paths = self.df["filepath"].values
- self.labels = encoder.transform(self.df["class"])
- self.classes = encoder.classes_
- self.class_to_idx = {l: i for i, l in enumerate(encoder.classes_)}
-        if data_transforms is None:
- self.data_transforms = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Resize((img_size, img_size)),
- transforms.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- ),
- ]
- )
- else:
- self.data_transforms = data_transforms
-
- def __len__(self):
- return len(self.df)
-
- def __getitem__(self, index):
- img_path = jumia_3650.PROCESSED_DATA_DIRNAME / self.file_paths[index]
- img = Image.open(img_path).convert("RGB")
- label = self.labels[index]
-
- img = self.data_transforms(img)
-
- return {"image": img, "label": torch.tensor(label, dtype=torch.long)}
-
- def create_dataloader(self, batch_size, shuffle=True, num_workers=0):
- return DataLoader(
- self,
- batch_size=batch_size,
- shuffle=shuffle,
- num_workers=num_workers,
- pin_memory=True,
- )
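-
-
-# Usage sketch (an illustrative assumption, not part of the original module): the
-# CSV must provide the "filepath" and "class" columns read in __init__, and the
-# label-encoder artifact loaded above must be available on disk.
-if __name__ == "__main__":
-    dataset = Jumia3650Dataset("data/processed/train.csv")  # hypothetical CSV path
-    loader = dataset.create_dataloader(batch_size=32)
-    batch = next(iter(loader))
-    # each batch is a dict holding an image tensor and integer class labels
-    print(batch["image"].shape, batch["label"].shape)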
diff --git a/spaces/pkiage/time_series_decomposition_demo/docs/Makefile b/spaces/pkiage/time_series_decomposition_demo/docs/Makefile
deleted file mode 100644
index 0cbf58227dfc8b2a73ccde7034038a48552780b7..0000000000000000000000000000000000000000
--- a/spaces/pkiage/time_series_decomposition_demo/docs/Makefile
+++ /dev/null
@@ -1,153 +0,0 @@
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = sphinx-build
-PAPER =
-BUILDDIR = _build
-
-# Internal variables.
-PAPEROPT_a4 = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-# the i18n builder cannot share the environment and doctrees with the others
-I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-
-.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
-
-help:
-	@echo "Please use \`make <target>' where <target> is one of"
- @echo " html to make standalone HTML files"
- @echo " dirhtml to make HTML files named index.html in directories"
- @echo " singlehtml to make a single large HTML file"
- @echo " pickle to make pickle files"
- @echo " json to make JSON files"
- @echo " htmlhelp to make HTML files and a HTML help project"
- @echo " qthelp to make HTML files and a qthelp project"
- @echo " devhelp to make HTML files and a Devhelp project"
- @echo " epub to make an epub"
- @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
- @echo " latexpdf to make LaTeX files and run them through pdflatex"
- @echo " text to make text files"
- @echo " man to make manual pages"
- @echo " texinfo to make Texinfo files"
- @echo " info to make Texinfo files and run them through makeinfo"
- @echo " gettext to make PO message catalogs"
- @echo " changes to make an overview of all changed/added/deprecated items"
- @echo " linkcheck to check all external links for integrity"
- @echo " doctest to run all doctests embedded in the documentation (if enabled)"
-
-clean:
- -rm -rf $(BUILDDIR)/*
-
-html:
- $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
- @echo
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-dirhtml:
- $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
- @echo
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
-
-singlehtml:
- $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
- @echo
- @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
-
-pickle:
- $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
- @echo
- @echo "Build finished; now you can process the pickle files."
-
-json:
- $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
- @echo
- @echo "Build finished; now you can process the JSON files."
-
-htmlhelp:
- $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
- @echo
- @echo "Build finished; now you can run HTML Help Workshop with the" \
- ".hhp project file in $(BUILDDIR)/htmlhelp."
-
-qthelp:
- $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
- @echo
- @echo "Build finished; now you can run "qcollectiongenerator" with the" \
- ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
- @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/tool-time-series-decomposition.qhcp"
- @echo "To view the help file:"
- @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/tool-time-series-decomposition.qhc"
-
-devhelp:
- $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
- @echo
- @echo "Build finished."
- @echo "To view the help file:"
- @echo "# mkdir -p $$HOME/.local/share/devhelp/tool-time-series-decomposition"
- @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/tool-time-series-decomposition"
- @echo "# devhelp"
-
-epub:
- $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
- @echo
- @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
-
-latex:
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
- @echo
- @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
- @echo "Run \`make' in that directory to run these through (pdf)latex" \
- "(use \`make latexpdf' here to do that automatically)."
-
-latexpdf:
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
- @echo "Running LaTeX files through pdflatex..."
- $(MAKE) -C $(BUILDDIR)/latex all-pdf
- @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-text:
- $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
- @echo
- @echo "Build finished. The text files are in $(BUILDDIR)/text."
-
-man:
- $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
- @echo
- @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
-
-texinfo:
- $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
- @echo
- @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
- @echo "Run \`make' in that directory to run these through makeinfo" \
- "(use \`make info' here to do that automatically)."
-
-info:
- $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
- @echo "Running Texinfo files through makeinfo..."
- make -C $(BUILDDIR)/texinfo info
- @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
-
-gettext:
- $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
- @echo
- @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
-
-changes:
- $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
- @echo
- @echo "The overview file is in $(BUILDDIR)/changes."
-
-linkcheck:
- $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
- @echo
- @echo "Link check complete; look for any errors in the above output " \
- "or in $(BUILDDIR)/linkcheck/output.txt."
-
-doctest:
- $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
- @echo "Testing of doctests in the sources finished, look at the " \
- "results in $(BUILDDIR)/doctest/output.txt."
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/codingstatemachinedict.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/codingstatemachinedict.py
deleted file mode 100644
index 7a3c4c7e3fe16e91225a87cbc58b8bbd798f9cc1..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/codingstatemachinedict.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from typing import TYPE_CHECKING, Tuple
-
-if TYPE_CHECKING:
- # TypedDict was introduced in Python 3.8.
- #
- # TODO: Remove the else block and TYPE_CHECKING check when dropping support
- # for Python 3.7.
- from typing import TypedDict
-
- class CodingStateMachineDict(TypedDict, total=False):
- class_table: Tuple[int, ...]
- class_factor: int
- state_table: Tuple[int, ...]
- char_len_table: Tuple[int, ...]
- name: str
- language: str # Optional key
-
-else:
- CodingStateMachineDict = dict
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/wheel.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/wheel.py
deleted file mode 100644
index 028c2d99b57782ed3bb268ce522ede37c1704d98..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/wheel.py
+++ /dev/null
@@ -1,1082 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2013-2020 Vinay Sajip.
-# Licensed to the Python Software Foundation under a contributor agreement.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-from __future__ import unicode_literals
-
-import base64
-import codecs
-import datetime
-from email import message_from_file
-import hashlib
-import json
-import logging
-import os
-import posixpath
-import re
-import shutil
-import sys
-import tempfile
-import zipfile
-
-from . import __version__, DistlibException
-from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
-from .database import InstalledDistribution
-from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME,
- LEGACY_METADATA_FILENAME)
-from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
- cached_property, get_cache_base, read_exports, tempdir,
- get_platform)
-from .version import NormalizedVersion, UnsupportedVersionError
-
-logger = logging.getLogger(__name__)
-
-cache = None # created when needed
-
-if hasattr(sys, 'pypy_version_info'): # pragma: no cover
- IMP_PREFIX = 'pp'
-elif sys.platform.startswith('java'): # pragma: no cover
- IMP_PREFIX = 'jy'
-elif sys.platform == 'cli': # pragma: no cover
- IMP_PREFIX = 'ip'
-else:
- IMP_PREFIX = 'cp'
-
-VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
-if not VER_SUFFIX: # pragma: no cover
- VER_SUFFIX = '%s%s' % sys.version_info[:2]
-PYVER = 'py' + VER_SUFFIX
-IMPVER = IMP_PREFIX + VER_SUFFIX
-
-ARCH = get_platform().replace('-', '_').replace('.', '_')
-
-ABI = sysconfig.get_config_var('SOABI')
-if ABI and ABI.startswith('cpython-'):
- ABI = ABI.replace('cpython-', 'cp').split('-')[0]
-else:
- def _derive_abi():
- parts = ['cp', VER_SUFFIX]
- if sysconfig.get_config_var('Py_DEBUG'):
- parts.append('d')
- if IMP_PREFIX == 'cp':
- vi = sys.version_info[:2]
- if vi < (3, 8):
- wpm = sysconfig.get_config_var('WITH_PYMALLOC')
- if wpm is None:
- wpm = True
- if wpm:
- parts.append('m')
- if vi < (3, 3):
- us = sysconfig.get_config_var('Py_UNICODE_SIZE')
- if us == 4 or (us is None and sys.maxunicode == 0x10FFFF):
- parts.append('u')
- return ''.join(parts)
- ABI = _derive_abi()
- del _derive_abi
-
-FILENAME_RE = re.compile(r'''
-(?P<nm>[^-]+)
--(?P<vn>\d+[^-]*)
-(-(?P<bn>\d+[^-]*))?
--(?P<py>\w+\d+(\.\w+\d+)*)
--(?P<bi>\w+)
--(?P<ar>\w+(\.\w+)*)
-\.whl$
-''', re.IGNORECASE | re.VERBOSE)
-
-NAME_VERSION_RE = re.compile(r'''
-(?P<nm>[^-]+)
--(?P<vn>\d+[^-]*)
-(-(?P<bn>\d+[^-]*))?$
-''', re.IGNORECASE | re.VERBOSE)
-
-SHEBANG_RE = re.compile(br'\s*#![^\r\n]*')
-SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$')
-SHEBANG_PYTHON = b'#!python'
-SHEBANG_PYTHONW = b'#!pythonw'
-
-if os.sep == '/':
- to_posix = lambda o: o
-else:
- to_posix = lambda o: o.replace(os.sep, '/')
-
-if sys.version_info[0] < 3:
- import imp
-else:
- imp = None
- import importlib.machinery
- import importlib.util
-
-def _get_suffixes():
- if imp:
- return [s[0] for s in imp.get_suffixes()]
- else:
- return importlib.machinery.EXTENSION_SUFFIXES
-
-def _load_dynamic(name, path):
- # https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
- if imp:
- return imp.load_dynamic(name, path)
- else:
- spec = importlib.util.spec_from_file_location(name, path)
- module = importlib.util.module_from_spec(spec)
- sys.modules[name] = module
- spec.loader.exec_module(module)
- return module
-
-class Mounter(object):
- def __init__(self):
- self.impure_wheels = {}
- self.libs = {}
-
- def add(self, pathname, extensions):
- self.impure_wheels[pathname] = extensions
- self.libs.update(extensions)
-
- def remove(self, pathname):
- extensions = self.impure_wheels.pop(pathname)
- for k, v in extensions:
- if k in self.libs:
- del self.libs[k]
-
- def find_module(self, fullname, path=None):
- if fullname in self.libs:
- result = self
- else:
- result = None
- return result
-
- def load_module(self, fullname):
- if fullname in sys.modules:
- result = sys.modules[fullname]
- else:
- if fullname not in self.libs:
- raise ImportError('unable to find extension for %s' % fullname)
- result = _load_dynamic(fullname, self.libs[fullname])
- result.__loader__ = self
- parts = fullname.rsplit('.', 1)
- if len(parts) > 1:
- result.__package__ = parts[0]
- return result
-
-_hook = Mounter()
-
-
-class Wheel(object):
- """
- Class to build and install from Wheel files (PEP 427).
- """
-
- wheel_version = (1, 1)
- hash_kind = 'sha256'
-
- def __init__(self, filename=None, sign=False, verify=False):
- """
- Initialise an instance using a (valid) filename.
- """
- self.sign = sign
- self.should_verify = verify
- self.buildver = ''
- self.pyver = [PYVER]
- self.abi = ['none']
- self.arch = ['any']
- self.dirname = os.getcwd()
- if filename is None:
- self.name = 'dummy'
- self.version = '0.1'
- self._filename = self.filename
- else:
- m = NAME_VERSION_RE.match(filename)
- if m:
- info = m.groupdict('')
- self.name = info['nm']
- # Reinstate the local version separator
- self.version = info['vn'].replace('_', '-')
- self.buildver = info['bn']
- self._filename = self.filename
- else:
- dirname, filename = os.path.split(filename)
- m = FILENAME_RE.match(filename)
- if not m:
- raise DistlibException('Invalid name or '
- 'filename: %r' % filename)
- if dirname:
- self.dirname = os.path.abspath(dirname)
- self._filename = filename
- info = m.groupdict('')
- self.name = info['nm']
- self.version = info['vn']
- self.buildver = info['bn']
- self.pyver = info['py'].split('.')
- self.abi = info['bi'].split('.')
- self.arch = info['ar'].split('.')
-
- @property
- def filename(self):
- """
- Build and return a filename from the various components.
- """
- if self.buildver:
- buildver = '-' + self.buildver
- else:
- buildver = ''
- pyver = '.'.join(self.pyver)
- abi = '.'.join(self.abi)
- arch = '.'.join(self.arch)
- # replace - with _ as a local version separator
- version = self.version.replace('-', '_')
- return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver,
- pyver, abi, arch)
-
- @property
- def exists(self):
- path = os.path.join(self.dirname, self.filename)
- return os.path.isfile(path)
-
- @property
- def tags(self):
- for pyver in self.pyver:
- for abi in self.abi:
- for arch in self.arch:
- yield pyver, abi, arch
-
- @cached_property
- def metadata(self):
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- wrapper = codecs.getreader('utf-8')
- with ZipFile(pathname, 'r') as zf:
- wheel_metadata = self.get_wheel_metadata(zf)
- wv = wheel_metadata['Wheel-Version'].split('.', 1)
- file_version = tuple([int(i) for i in wv])
- # if file_version < (1, 1):
- # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME,
- # LEGACY_METADATA_FILENAME]
- # else:
- # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME]
- fns = [WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME]
- result = None
- for fn in fns:
- try:
- metadata_filename = posixpath.join(info_dir, fn)
- with zf.open(metadata_filename) as bf:
- wf = wrapper(bf)
- result = Metadata(fileobj=wf)
- if result:
- break
- except KeyError:
- pass
- if not result:
- raise ValueError('Invalid wheel, because metadata is '
- 'missing: looked in %s' % ', '.join(fns))
- return result
-
- def get_wheel_metadata(self, zf):
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- metadata_filename = posixpath.join(info_dir, 'WHEEL')
- with zf.open(metadata_filename) as bf:
- wf = codecs.getreader('utf-8')(bf)
- message = message_from_file(wf)
- return dict(message)
-
- @cached_property
- def info(self):
- pathname = os.path.join(self.dirname, self.filename)
- with ZipFile(pathname, 'r') as zf:
- result = self.get_wheel_metadata(zf)
- return result
-
- def process_shebang(self, data):
- m = SHEBANG_RE.match(data)
- if m:
- end = m.end()
- shebang, data_after_shebang = data[:end], data[end:]
- # Preserve any arguments after the interpreter
- if b'pythonw' in shebang.lower():
- shebang_python = SHEBANG_PYTHONW
- else:
- shebang_python = SHEBANG_PYTHON
- m = SHEBANG_DETAIL_RE.match(shebang)
- if m:
- args = b' ' + m.groups()[-1]
- else:
- args = b''
- shebang = shebang_python + args
- data = shebang + data_after_shebang
- else:
- cr = data.find(b'\r')
- lf = data.find(b'\n')
- if cr < 0 or cr > lf:
- term = b'\n'
- else:
- if data[cr:cr + 2] == b'\r\n':
- term = b'\r\n'
- else:
- term = b'\r'
- data = SHEBANG_PYTHON + term + data
- return data
-
- def get_hash(self, data, hash_kind=None):
- if hash_kind is None:
- hash_kind = self.hash_kind
- try:
- hasher = getattr(hashlib, hash_kind)
- except AttributeError:
- raise DistlibException('Unsupported hash algorithm: %r' % hash_kind)
- result = hasher(data).digest()
- result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii')
- return hash_kind, result
-
- def write_record(self, records, record_path, archive_record_path):
- records = list(records) # make a copy, as mutated
- records.append((archive_record_path, '', ''))
- with CSVWriter(record_path) as writer:
- for row in records:
- writer.writerow(row)
-
- def write_records(self, info, libdir, archive_paths):
- records = []
- distinfo, info_dir = info
- hasher = getattr(hashlib, self.hash_kind)
- for ap, p in archive_paths:
- with open(p, 'rb') as f:
- data = f.read()
- digest = '%s=%s' % self.get_hash(data)
- size = os.path.getsize(p)
- records.append((ap, digest, size))
-
- p = os.path.join(distinfo, 'RECORD')
- ap = to_posix(os.path.join(info_dir, 'RECORD'))
- self.write_record(records, p, ap)
- archive_paths.append((ap, p))
-
- def build_zip(self, pathname, archive_paths):
- with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf:
- for ap, p in archive_paths:
- logger.debug('Wrote %s to %s in wheel', p, ap)
- zf.write(p, ap)
-
- def build(self, paths, tags=None, wheel_version=None):
- """
- Build a wheel from files in specified paths, and use any specified tags
- when determining the name of the wheel.
- """
- if tags is None:
- tags = {}
-
- libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0]
- if libkey == 'platlib':
- is_pure = 'false'
- default_pyver = [IMPVER]
- default_abi = [ABI]
- default_arch = [ARCH]
- else:
- is_pure = 'true'
- default_pyver = [PYVER]
- default_abi = ['none']
- default_arch = ['any']
-
- self.pyver = tags.get('pyver', default_pyver)
- self.abi = tags.get('abi', default_abi)
- self.arch = tags.get('arch', default_arch)
-
- libdir = paths[libkey]
-
- name_ver = '%s-%s' % (self.name, self.version)
- data_dir = '%s.data' % name_ver
- info_dir = '%s.dist-info' % name_ver
-
- archive_paths = []
-
- # First, stuff which is not in site-packages
- for key in ('data', 'headers', 'scripts'):
- if key not in paths:
- continue
- path = paths[key]
- if os.path.isdir(path):
- for root, dirs, files in os.walk(path):
- for fn in files:
- p = fsdecode(os.path.join(root, fn))
- rp = os.path.relpath(p, path)
- ap = to_posix(os.path.join(data_dir, key, rp))
- archive_paths.append((ap, p))
- if key == 'scripts' and not p.endswith('.exe'):
- with open(p, 'rb') as f:
- data = f.read()
- data = self.process_shebang(data)
- with open(p, 'wb') as f:
- f.write(data)
-
- # Now, stuff which is in site-packages, other than the
- # distinfo stuff.
- path = libdir
- distinfo = None
- for root, dirs, files in os.walk(path):
- if root == path:
- # At the top level only, save distinfo for later
- # and skip it for now
- for i, dn in enumerate(dirs):
- dn = fsdecode(dn)
- if dn.endswith('.dist-info'):
- distinfo = os.path.join(root, dn)
- del dirs[i]
- break
- assert distinfo, '.dist-info directory expected, not found'
-
- for fn in files:
- # comment out next suite to leave .pyc files in
- if fsdecode(fn).endswith(('.pyc', '.pyo')):
- continue
- p = os.path.join(root, fn)
- rp = to_posix(os.path.relpath(p, path))
- archive_paths.append((rp, p))
-
- # Now distinfo. Assumed to be flat, i.e. os.listdir is enough.
- files = os.listdir(distinfo)
- for fn in files:
- if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'):
- p = fsdecode(os.path.join(distinfo, fn))
- ap = to_posix(os.path.join(info_dir, fn))
- archive_paths.append((ap, p))
-
- wheel_metadata = [
- 'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version),
- 'Generator: distlib %s' % __version__,
- 'Root-Is-Purelib: %s' % is_pure,
- ]
- for pyver, abi, arch in self.tags:
- wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch))
- p = os.path.join(distinfo, 'WHEEL')
- with open(p, 'w') as f:
- f.write('\n'.join(wheel_metadata))
- ap = to_posix(os.path.join(info_dir, 'WHEEL'))
- archive_paths.append((ap, p))
-
- # sort the entries by archive path. Not needed by any spec, but it
- # keeps the archive listing and RECORD tidier than they would otherwise
- # be. Use the number of path segments to keep directory entries together,
- # and keep the dist-info stuff at the end.
- def sorter(t):
- ap = t[0]
- n = ap.count('/')
- if '.dist-info' in ap:
- n += 10000
- return (n, ap)
- archive_paths = sorted(archive_paths, key=sorter)
-
- # Now, at last, RECORD.
- # Paths in here are archive paths - nothing else makes sense.
- self.write_records((distinfo, info_dir), libdir, archive_paths)
- # Now, ready to build the zip file
- pathname = os.path.join(self.dirname, self.filename)
- self.build_zip(pathname, archive_paths)
- return pathname
-
- def skip_entry(self, arcname):
- """
- Determine whether an archive entry should be skipped when verifying
- or installing.
- """
- # The signature file won't be in RECORD,
- # and we don't currently do anything with it.
- # We also skip directories, as they won't be in RECORD
- # either. See:
- #
- # https://github.com/pypa/wheel/issues/294
- # https://github.com/pypa/wheel/issues/287
- # https://github.com/pypa/wheel/pull/289
- #
- return arcname.endswith(('/', '/RECORD.jws'))
-
- def install(self, paths, maker, **kwargs):
- """
- Install a wheel to the specified paths. If kwarg ``warner`` is
- specified, it should be a callable, which will be called with two
- tuples indicating the wheel version of this software and the wheel
- version in the file, if there is a discrepancy in the versions.
- This can be used to issue any warnings or raise any exceptions.
- If kwarg ``lib_only`` is True, only the purelib/platlib files are
- installed, and the headers, scripts, data and dist-info metadata are
- not written. If kwarg ``bytecode_hashed_invalidation`` is True, written
- bytecode will try to use file-hash based invalidation (PEP-552) on
- supported interpreter versions (CPython 2.7+).
-
- The return value is a :class:`InstalledDistribution` instance unless
- ``options.lib_only`` is True, in which case the return value is ``None``.
- """
-
- dry_run = maker.dry_run
- warner = kwargs.get('warner')
- lib_only = kwargs.get('lib_only', False)
- bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False)
-
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- data_dir = '%s.data' % name_ver
- info_dir = '%s.dist-info' % name_ver
-
- metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
- wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
- record_name = posixpath.join(info_dir, 'RECORD')
-
- wrapper = codecs.getreader('utf-8')
-
- with ZipFile(pathname, 'r') as zf:
- with zf.open(wheel_metadata_name) as bwf:
- wf = wrapper(bwf)
- message = message_from_file(wf)
- wv = message['Wheel-Version'].split('.', 1)
- file_version = tuple([int(i) for i in wv])
- if (file_version != self.wheel_version) and warner:
- warner(self.wheel_version, file_version)
-
- if message['Root-Is-Purelib'] == 'true':
- libdir = paths['purelib']
- else:
- libdir = paths['platlib']
-
- records = {}
- with zf.open(record_name) as bf:
- with CSVReader(stream=bf) as reader:
- for row in reader:
- p = row[0]
- records[p] = row
-
- data_pfx = posixpath.join(data_dir, '')
- info_pfx = posixpath.join(info_dir, '')
- script_pfx = posixpath.join(data_dir, 'scripts', '')
-
- # make a new instance rather than a copy of maker's,
- # as we mutate it
- fileop = FileOperator(dry_run=dry_run)
- fileop.record = True # so we can rollback if needed
-
- bc = not sys.dont_write_bytecode # Double negatives. Lovely!
-
- outfiles = [] # for RECORD writing
-
- # for script copying/shebang processing
- workdir = tempfile.mkdtemp()
- # set target dir later
- # we default add_launchers to False, as the
- # Python Launcher should be used instead
- maker.source_dir = workdir
- maker.target_dir = None
- try:
- for zinfo in zf.infolist():
- arcname = zinfo.filename
- if isinstance(arcname, text_type):
- u_arcname = arcname
- else:
- u_arcname = arcname.decode('utf-8')
- if self.skip_entry(u_arcname):
- continue
- row = records[u_arcname]
- if row[2] and str(zinfo.file_size) != row[2]:
- raise DistlibException('size mismatch for '
- '%s' % u_arcname)
- if row[1]:
- kind, value = row[1].split('=', 1)
- with zf.open(arcname) as bf:
- data = bf.read()
- _, digest = self.get_hash(data, kind)
- if digest != value:
- raise DistlibException('digest mismatch for '
- '%s' % arcname)
-
- if lib_only and u_arcname.startswith((info_pfx, data_pfx)):
- logger.debug('lib_only: skipping %s', u_arcname)
- continue
- is_script = (u_arcname.startswith(script_pfx)
- and not u_arcname.endswith('.exe'))
-
- if u_arcname.startswith(data_pfx):
- _, where, rp = u_arcname.split('/', 2)
- outfile = os.path.join(paths[where], convert_path(rp))
- else:
- # meant for site-packages.
- if u_arcname in (wheel_metadata_name, record_name):
- continue
- outfile = os.path.join(libdir, convert_path(u_arcname))
- if not is_script:
- with zf.open(arcname) as bf:
- fileop.copy_stream(bf, outfile)
- # Issue #147: permission bits aren't preserved. Using
- # zf.extract(zinfo, libdir) should have worked, but didn't,
- # see https://www.thetopsites.net/article/53834422.shtml
- # So ... manually preserve permission bits as given in zinfo
- if os.name == 'posix':
- # just set the normal permission bits
- os.chmod(outfile, (zinfo.external_attr >> 16) & 0x1FF)
- outfiles.append(outfile)
- # Double check the digest of the written file
- if not dry_run and row[1]:
- with open(outfile, 'rb') as bf:
- data = bf.read()
- _, newdigest = self.get_hash(data, kind)
- if newdigest != digest:
- raise DistlibException('digest mismatch '
- 'on write for '
- '%s' % outfile)
- if bc and outfile.endswith('.py'):
- try:
- pyc = fileop.byte_compile(outfile,
- hashed_invalidation=bc_hashed_invalidation)
- outfiles.append(pyc)
- except Exception:
- # Don't give up if byte-compilation fails,
- # but log it and perhaps warn the user
- logger.warning('Byte-compilation failed',
- exc_info=True)
- else:
- fn = os.path.basename(convert_path(arcname))
- workname = os.path.join(workdir, fn)
- with zf.open(arcname) as bf:
- fileop.copy_stream(bf, workname)
-
- dn, fn = os.path.split(outfile)
- maker.target_dir = dn
- filenames = maker.make(fn)
- fileop.set_executable_mode(filenames)
- outfiles.extend(filenames)
-
- if lib_only:
- logger.debug('lib_only: returning None')
- dist = None
- else:
- # Generate scripts
-
- # Try to get pydist.json so we can see if there are
- # any commands to generate. If this fails (e.g. because
- # of a legacy wheel), log a warning but don't give up.
- commands = None
- file_version = self.info['Wheel-Version']
- if file_version == '1.0':
- # Use legacy info
- ep = posixpath.join(info_dir, 'entry_points.txt')
- try:
- with zf.open(ep) as bwf:
- epdata = read_exports(bwf)
- commands = {}
- for key in ('console', 'gui'):
- k = '%s_scripts' % key
- if k in epdata:
- commands['wrap_%s' % key] = d = {}
- for v in epdata[k].values():
- s = '%s:%s' % (v.prefix, v.suffix)
- if v.flags:
- s += ' [%s]' % ','.join(v.flags)
- d[v.name] = s
- except Exception:
- logger.warning('Unable to read legacy script '
- 'metadata, so cannot generate '
- 'scripts')
- else:
- try:
- with zf.open(metadata_name) as bwf:
- wf = wrapper(bwf)
- commands = json.load(wf).get('extensions')
- if commands:
- commands = commands.get('python.commands')
- except Exception:
- logger.warning('Unable to read JSON metadata, so '
- 'cannot generate scripts')
- if commands:
- console_scripts = commands.get('wrap_console', {})
- gui_scripts = commands.get('wrap_gui', {})
- if console_scripts or gui_scripts:
- script_dir = paths.get('scripts', '')
- if not os.path.isdir(script_dir):
- raise ValueError('Valid script path not '
- 'specified')
- maker.target_dir = script_dir
- for k, v in console_scripts.items():
- script = '%s = %s' % (k, v)
- filenames = maker.make(script)
- fileop.set_executable_mode(filenames)
-
- if gui_scripts:
- options = {'gui': True }
- for k, v in gui_scripts.items():
- script = '%s = %s' % (k, v)
- filenames = maker.make(script, options)
- fileop.set_executable_mode(filenames)
-
- p = os.path.join(libdir, info_dir)
- dist = InstalledDistribution(p)
-
- # Write SHARED
- paths = dict(paths) # don't change passed in dict
- del paths['purelib']
- del paths['platlib']
- paths['lib'] = libdir
- p = dist.write_shared_locations(paths, dry_run)
- if p:
- outfiles.append(p)
-
- # Write RECORD
- dist.write_installed_files(outfiles, paths['prefix'],
- dry_run)
- return dist
- except Exception: # pragma: no cover
- logger.exception('installation failed.')
- fileop.rollback()
- raise
- finally:
- shutil.rmtree(workdir)
-
- def _get_dylib_cache(self):
- global cache
- if cache is None:
- # Use native string to avoid issues on 2.x: see Python #20140.
- base = os.path.join(get_cache_base(), str('dylib-cache'),
- '%s.%s' % sys.version_info[:2])
- cache = Cache(base)
- return cache
-
- def _get_extensions(self):
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- arcname = posixpath.join(info_dir, 'EXTENSIONS')
- wrapper = codecs.getreader('utf-8')
- result = []
- with ZipFile(pathname, 'r') as zf:
- try:
- with zf.open(arcname) as bf:
- wf = wrapper(bf)
- extensions = json.load(wf)
- cache = self._get_dylib_cache()
- prefix = cache.prefix_to_dir(pathname)
- cache_base = os.path.join(cache.base, prefix)
- if not os.path.isdir(cache_base):
- os.makedirs(cache_base)
- for name, relpath in extensions.items():
- dest = os.path.join(cache_base, convert_path(relpath))
- if not os.path.exists(dest):
- extract = True
- else:
- file_time = os.stat(dest).st_mtime
- file_time = datetime.datetime.fromtimestamp(file_time)
- info = zf.getinfo(relpath)
- wheel_time = datetime.datetime(*info.date_time)
- extract = wheel_time > file_time
- if extract:
- zf.extract(relpath, cache_base)
- result.append((name, dest))
- except KeyError:
- pass
- return result
-
- def is_compatible(self):
- """
- Determine if a wheel is compatible with the running system.
- """
- return is_compatible(self)
-
- def is_mountable(self):
- """
- Determine if a wheel is asserted as mountable by its metadata.
- """
- return True # for now - metadata details TBD
-
- def mount(self, append=False):
- pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
- if not self.is_compatible():
- msg = 'Wheel %s not compatible with this Python.' % pathname
- raise DistlibException(msg)
- if not self.is_mountable():
- msg = 'Wheel %s is marked as not mountable.' % pathname
- raise DistlibException(msg)
- if pathname in sys.path:
- logger.debug('%s already in path', pathname)
- else:
- if append:
- sys.path.append(pathname)
- else:
- sys.path.insert(0, pathname)
- extensions = self._get_extensions()
- if extensions:
- if _hook not in sys.meta_path:
- sys.meta_path.append(_hook)
- _hook.add(pathname, extensions)
-
- def unmount(self):
- pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
- if pathname not in sys.path:
- logger.debug('%s not in path', pathname)
- else:
- sys.path.remove(pathname)
- if pathname in _hook.impure_wheels:
- _hook.remove(pathname)
- if not _hook.impure_wheels:
- if _hook in sys.meta_path:
- sys.meta_path.remove(_hook)
-
- def verify(self):
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- data_dir = '%s.data' % name_ver
- info_dir = '%s.dist-info' % name_ver
-
- metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
- wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
- record_name = posixpath.join(info_dir, 'RECORD')
-
- wrapper = codecs.getreader('utf-8')
-
- with ZipFile(pathname, 'r') as zf:
- with zf.open(wheel_metadata_name) as bwf:
- wf = wrapper(bwf)
- message = message_from_file(wf)
- wv = message['Wheel-Version'].split('.', 1)
- file_version = tuple([int(i) for i in wv])
- # TODO version verification
-
- records = {}
- with zf.open(record_name) as bf:
- with CSVReader(stream=bf) as reader:
- for row in reader:
- p = row[0]
- records[p] = row
-
- for zinfo in zf.infolist():
- arcname = zinfo.filename
- if isinstance(arcname, text_type):
- u_arcname = arcname
- else:
- u_arcname = arcname.decode('utf-8')
- # See issue #115: some wheels have .. in their entries, but
- # in the filename ... e.g. __main__..py ! So the check is
- # updated to look for .. in the directory portions
- p = u_arcname.split('/')
- if '..' in p:
- raise DistlibException('invalid entry in '
- 'wheel: %r' % u_arcname)
-
- if self.skip_entry(u_arcname):
- continue
- row = records[u_arcname]
- if row[2] and str(zinfo.file_size) != row[2]:
- raise DistlibException('size mismatch for '
- '%s' % u_arcname)
- if row[1]:
- kind, value = row[1].split('=', 1)
- with zf.open(arcname) as bf:
- data = bf.read()
- _, digest = self.get_hash(data, kind)
- if digest != value:
- raise DistlibException('digest mismatch for '
- '%s' % arcname)
-
- def update(self, modifier, dest_dir=None, **kwargs):
- """
- Update the contents of a wheel in a generic way. The modifier should
- be a callable which expects a dictionary argument: its keys are
- archive-entry paths, and its values are absolute filesystem paths
- where the contents of the corresponding archive entries can be found. The
- modifier is free to change the contents of the files pointed to, add
- new entries and remove entries, before returning. This method will
- extract the entire contents of the wheel to a temporary location, call
- the modifier, and then use the passed (and possibly updated)
- dictionary to write a new wheel. If ``dest_dir`` is specified, the new
- wheel is written there -- otherwise, the original wheel is overwritten.
-
- The modifier should return True if it updated the wheel, else False.
- This method returns the same value the modifier returns.
- """
-
- def get_version(path_map, info_dir):
- version = path = None
- key = '%s/%s' % (info_dir, LEGACY_METADATA_FILENAME)
- if key not in path_map:
- key = '%s/PKG-INFO' % info_dir
- if key in path_map:
- path = path_map[key]
- version = Metadata(path=path).version
- return version, path
-
- def update_version(version, path):
- updated = None
- try:
- v = NormalizedVersion(version)
- i = version.find('-')
- if i < 0:
- updated = '%s+1' % version
- else:
- parts = [int(s) for s in version[i + 1:].split('.')]
- parts[-1] += 1
- updated = '%s+%s' % (version[:i],
- '.'.join(str(i) for i in parts))
- except UnsupportedVersionError:
- logger.debug('Cannot update non-compliant (PEP-440) '
- 'version %r', version)
- if updated:
- md = Metadata(path=path)
- md.version = updated
- legacy = path.endswith(LEGACY_METADATA_FILENAME)
- md.write(path=path, legacy=legacy)
- logger.debug('Version updated from %r to %r', version,
- updated)
-
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- record_name = posixpath.join(info_dir, 'RECORD')
- with tempdir() as workdir:
- with ZipFile(pathname, 'r') as zf:
- path_map = {}
- for zinfo in zf.infolist():
- arcname = zinfo.filename
- if isinstance(arcname, text_type):
- u_arcname = arcname
- else:
- u_arcname = arcname.decode('utf-8')
- if u_arcname == record_name:
- continue
- if '..' in u_arcname:
- raise DistlibException('invalid entry in '
- 'wheel: %r' % u_arcname)
- zf.extract(zinfo, workdir)
- path = os.path.join(workdir, convert_path(u_arcname))
- path_map[u_arcname] = path
-
- # Remember the version.
- original_version, _ = get_version(path_map, info_dir)
- # Files extracted. Call the modifier.
- modified = modifier(path_map, **kwargs)
- if modified:
- # Something changed - need to build a new wheel.
- current_version, path = get_version(path_map, info_dir)
- if current_version and (current_version == original_version):
- # Add or update local version to signify changes.
- update_version(current_version, path)
- # Decide where the new wheel goes.
- if dest_dir is None:
- fd, newpath = tempfile.mkstemp(suffix='.whl',
- prefix='wheel-update-',
- dir=workdir)
- os.close(fd)
- else:
- if not os.path.isdir(dest_dir):
- raise DistlibException('Not a directory: %r' % dest_dir)
- newpath = os.path.join(dest_dir, self.filename)
- archive_paths = list(path_map.items())
- distinfo = os.path.join(workdir, info_dir)
- info = distinfo, info_dir
- self.write_records(info, workdir, archive_paths)
- self.build_zip(newpath, archive_paths)
- if dest_dir is None:
- shutil.copyfile(newpath, pathname)
- return modified
-
-def _get_glibc_version():
- import platform
- ver = platform.libc_ver()
- result = []
- if ver[0] == 'glibc':
- for s in ver[1].split('.'):
- result.append(int(s) if s.isdigit() else 0)
- result = tuple(result)
- return result
-
-def compatible_tags():
- """
- Return (pyver, abi, arch) tuples compatible with this Python.
- """
- versions = [VER_SUFFIX]
- major = VER_SUFFIX[0]
- for minor in range(sys.version_info[1] - 1, - 1, -1):
- versions.append(''.join([major, str(minor)]))
-
- abis = []
- for suffix in _get_suffixes():
- if suffix.startswith('.abi'):
- abis.append(suffix.split('.', 2)[1])
- abis.sort()
- if ABI != 'none':
- abis.insert(0, ABI)
- abis.append('none')
- result = []
-
- arches = [ARCH]
- if sys.platform == 'darwin':
- m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH)
- if m:
- name, major, minor, arch = m.groups()
- minor = int(minor)
- matches = [arch]
- if arch in ('i386', 'ppc'):
- matches.append('fat')
- if arch in ('i386', 'ppc', 'x86_64'):
- matches.append('fat3')
- if arch in ('ppc64', 'x86_64'):
- matches.append('fat64')
- if arch in ('i386', 'x86_64'):
- matches.append('intel')
- if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'):
- matches.append('universal')
- while minor >= 0:
- for match in matches:
- s = '%s_%s_%s_%s' % (name, major, minor, match)
- if s != ARCH: # already there
- arches.append(s)
- minor -= 1
-
- # Most specific - our Python version, ABI and arch
- for abi in abis:
- for arch in arches:
- result.append((''.join((IMP_PREFIX, versions[0])), abi, arch))
- # manylinux
- if abi != 'none' and sys.platform.startswith('linux'):
- arch = arch.replace('linux_', '')
- parts = _get_glibc_version()
- if len(parts) == 2:
- if parts >= (2, 5):
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux1_%s' % arch))
- if parts >= (2, 12):
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux2010_%s' % arch))
- if parts >= (2, 17):
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux2014_%s' % arch))
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux_%s_%s_%s' % (parts[0], parts[1],
- arch)))
-
- # where no ABI / arch dependency, but IMP_PREFIX dependency
- for i, version in enumerate(versions):
- result.append((''.join((IMP_PREFIX, version)), 'none', 'any'))
- if i == 0:
- result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any'))
-
- # no IMP_PREFIX, ABI or arch dependency
- for i, version in enumerate(versions):
- result.append((''.join(('py', version)), 'none', 'any'))
- if i == 0:
- result.append((''.join(('py', version[0])), 'none', 'any'))
-
- return set(result)
-
-
-COMPATIBLE_TAGS = compatible_tags()
-
-del compatible_tags
-
-
-def is_compatible(wheel, tags=None):
- if not isinstance(wheel, Wheel):
- wheel = Wheel(wheel) # assume it's a filename
- result = False
- if tags is None:
- tags = COMPATIBLE_TAGS
- for ver, abi, arch in tags:
- if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch:
- result = True
- break
- return result
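
To make the API above concrete, here is a hedged usage sketch. It assumes the standalone distlib package (the file above is pip's vendored copy) and a hypothetical wheel filename; the values in the comments follow from FILENAME_RE, the filename property and compatible_tags() as defined above.

    # Illustrative sketch, not part of the original module.
    from distlib.wheel import Wheel

    w = Wheel('requests-2.31.0-py3-none-any.whl')   # parsed via FILENAME_RE
    print(w.name, w.version)                        # requests 2.31.0
    print(w.pyver, w.abi, w.arch)                   # ['py3'] ['none'] ['any']

    # Compatibility is a tag intersection with COMPATIBLE_TAGS (see is_compatible()).
    if w.is_compatible():
        print('installable on this interpreter')

    # Going the other way, the filename property assembles a name from components;
    # the defaults give a pure-Python tag for the running interpreter (e.g. py311).
    w2 = Wheel()
    w2.name, w2.version = 'mypkg', '1.0'
    print(w2.filename)                              # mypkg-1.0-py311-none-any.whl (or similar)
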
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/packages/backports/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/prerna9811/Chord/portaudio/src/common/pa_gitrevision.h b/spaces/prerna9811/Chord/portaudio/src/common/pa_gitrevision.h
deleted file mode 100644
index b5a042ca2a58738c0c7e714630c8c0a4aad13474..0000000000000000000000000000000000000000
--- a/spaces/prerna9811/Chord/portaudio/src/common/pa_gitrevision.h
+++ /dev/null
@@ -1 +0,0 @@
-#define PA_GIT_REVISION 147dd722548358763a8b649b3e4b41dfffbcfbb6
diff --git a/spaces/princeml/emotion_streamlite_app/README.md b/spaces/princeml/emotion_streamlite_app/README.md
deleted file mode 100644
index c881c4f2bbea160ea43e611155bfceb65a04b45c..0000000000000000000000000000000000000000
--- a/spaces/princeml/emotion_streamlite_app/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Emotion Streamlite App
-emoji: 🔥
-colorFrom: pink
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/classifyTools.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/classifyTools.py
deleted file mode 100644
index e46386230e5c826486963cf47640ae0a920377cb..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/classifyTools.py
+++ /dev/null
@@ -1,172 +0,0 @@
-""" fontTools.misc.classifyTools.py -- tools for classifying things.
-"""
-
-
-class Classifier(object):
-
- """
- Main Classifier object, used to classify things into similar sets.
- """
-
- def __init__(self, sort=True):
-
- self._things = set() # set of all things known so far
- self._sets = [] # list of class sets produced so far
- self._mapping = {} # map from things to their class set
- self._dirty = False
- self._sort = sort
-
- def add(self, set_of_things):
- """
- Add a set to the classifier. Any iterable is accepted.
- """
- if not set_of_things:
- return
-
- self._dirty = True
-
- things, sets, mapping = self._things, self._sets, self._mapping
-
- s = set(set_of_things)
- intersection = s.intersection(things) # existing things
- s.difference_update(intersection) # new things
- difference = s
- del s
-
- # Add new class for new things
- if difference:
- things.update(difference)
- sets.append(difference)
- for thing in difference:
- mapping[thing] = difference
- del difference
-
- while intersection:
- # Take one item and process the old class it belongs to
- old_class = mapping[next(iter(intersection))]
- old_class_intersection = old_class.intersection(intersection)
-
- # Update old class to remove items from new set
- old_class.difference_update(old_class_intersection)
-
- # Remove processed items from todo list
- intersection.difference_update(old_class_intersection)
-
- # Add new class for the intersection with old class
- sets.append(old_class_intersection)
- for thing in old_class_intersection:
- mapping[thing] = old_class_intersection
- del old_class_intersection
-
- def update(self, list_of_sets):
- """
- Add a list of sets to the classifier. Any iterable of iterables is accepted.
- """
- for s in list_of_sets:
- self.add(s)
-
- def _process(self):
- if not self._dirty:
- return
-
- # Do any deferred processing
- sets = self._sets
- self._sets = [s for s in sets if s]
-
- if self._sort:
- self._sets = sorted(self._sets, key=lambda s: (-len(s), sorted(s)))
-
- self._dirty = False
-
- # Output methods
-
- def getThings(self):
- """Returns the set of all things known so far.
-
- The return value belongs to the Classifier object and should NOT
- be modified while the classifier is still in use.
- """
- self._process()
- return self._things
-
- def getMapping(self):
- """Returns the mapping from things to their class set.
-
- The return value belongs to the Classifier object and should NOT
- be modified while the classifier is still in use.
- """
- self._process()
- return self._mapping
-
- def getClasses(self):
- """Returns the list of class sets.
-
- The return value belongs to the Classifier object and should NOT
- be modified while the classifier is still in use.
- """
- self._process()
- return self._sets
-
-
-def classify(list_of_sets, sort=True):
- """
- Takes an iterable of iterables (a list of sets from here on, though any
- iterable works) and returns the smallest list of sets such that
- each set is either a subset of, or disjoint from, each of the input
- sets.
-
- In other words, this function classifies all the things present in
- any of the input sets, into similar classes, based on which sets
- things are a member of.
-
- If sort=True, return class sets are sorted by decreasing size and
- their natural sort order within each class size. Otherwise, class
- sets are returned in the order that they were identified, which is
- generally not significant.
-
- >>> classify([]) == ([], {})
- True
- >>> classify([[]]) == ([], {})
- True
- >>> classify([[], []]) == ([], {})
- True
- >>> classify([[1]]) == ([{1}], {1: {1}})
- True
- >>> classify([[1,2]]) == ([{1, 2}], {1: {1, 2}, 2: {1, 2}})
- True
- >>> classify([[1],[2]]) == ([{1}, {2}], {1: {1}, 2: {2}})
- True
- >>> classify([[1,2],[2]]) == ([{1}, {2}], {1: {1}, 2: {2}})
- True
- >>> classify([[1,2],[2,4]]) == ([{1}, {2}, {4}], {1: {1}, 2: {2}, 4: {4}})
- True
- >>> classify([[1,2],[2,4,5]]) == (
- ... [{4, 5}, {1}, {2}], {1: {1}, 2: {2}, 4: {4, 5}, 5: {4, 5}})
- True
- >>> classify([[1,2],[2,4,5]], sort=False) == (
- ... [{1}, {4, 5}, {2}], {1: {1}, 2: {2}, 4: {4, 5}, 5: {4, 5}})
- True
- >>> classify([[1,2,9],[2,4,5]], sort=False) == (
- ... [{1, 9}, {4, 5}, {2}], {1: {1, 9}, 2: {2}, 4: {4, 5}, 5: {4, 5},
- ... 9: {1, 9}})
- True
- >>> classify([[1,2,9,15],[2,4,5]], sort=False) == (
- ... [{1, 9, 15}, {4, 5}, {2}], {1: {1, 9, 15}, 2: {2}, 4: {4, 5},
- ... 5: {4, 5}, 9: {1, 9, 15}, 15: {1, 9, 15}})
- True
- >>> classes, mapping = classify([[1,2,9,15],[2,4,5],[15,5]], sort=False)
- >>> set([frozenset(c) for c in classes]) == set(
- ... [frozenset(s) for s in ({1, 9}, {4}, {2}, {5}, {15})])
- True
- >>> mapping == {1: {1, 9}, 2: {2}, 4: {4}, 5: {5}, 9: {1, 9}, 15: {15}}
- True
- """
- classifier = Classifier(sort=sort)
- classifier.update(list_of_sets)
- return classifier.getClasses(), classifier.getMapping()
-
-
-if __name__ == "__main__":
- import sys, doctest
-
- sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed)
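
The classify() helper above already carries doctests; the sketch below (not part of the original module) just drives the same machinery incrementally through the Classifier class, as its docstrings describe.

    from fontTools.misc.classifyTools import Classifier

    c = Classifier(sort=False)
    c.add({1, 2, 9})
    c.add({2, 4, 5})
    c.update([{15, 5}])        # any iterable of iterables

    print(c.getClasses())      # [{1, 9}, {4}, {2}, {15}, {5}] with sort=False
    print(c.getMapping()[2])   # {2} -- the class set containing 2
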
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/filenames.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/filenames.py
deleted file mode 100644
index d279f89cc82cc280370d09ebdb16cb301f62aa57..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/filenames.py
+++ /dev/null
@@ -1,246 +0,0 @@
-"""
-This module implements the algorithm for converting between a "user name" -
-something that a user can choose arbitrarily inside a font editor - and a file
-name suitable for use in a wide range of operating systems and filesystems.
-
-The `UFO 3 specification `_
-provides an example of an algorithm for such conversion, which avoids illegal
-characters, reserved file names, ambiguity between upper- and lower-case
-characters, and clashes with existing files.
-
-This code was originally copied from
-`ufoLib `_
-by Tal Leming and is copyright (c) 2005-2016, The RoboFab Developers:
-
-- Erik van Blokland
-- Tal Leming
-- Just van Rossum
-"""
-
-
-illegalCharacters = r"\" * + / : < > ? [ \ ] | \0".split(" ")
-illegalCharacters += [chr(i) for i in range(1, 32)]
-illegalCharacters += [chr(0x7F)]
-reservedFileNames = "CON PRN AUX CLOCK$ NUL A:-Z: COM1".lower().split(" ")
-reservedFileNames += "LPT1 LPT2 LPT3 COM2 COM3 COM4".lower().split(" ")
-maxFileNameLength = 255
-
-
-class NameTranslationError(Exception):
- pass
-
-
-def userNameToFileName(userName, existing=[], prefix="", suffix=""):
- """Converts from a user name to a file name.
-
- Takes care to avoid illegal characters, reserved file names, ambiguity between
- upper- and lower-case characters, and clashes with existing files.
-
- Args:
- userName (str): The input user name.
- existing: A case-insensitive list of all existing file names.
- prefix: Prefix to be prepended to the file name.
- suffix: Suffix to be appended to the file name.
-
- Returns:
- A suitable filename.
-
- Raises:
- NameTranslationError: If no suitable name could be generated.
-
- Examples::
-
- >>> userNameToFileName("a") == "a"
- True
- >>> userNameToFileName("A") == "A_"
- True
- >>> userNameToFileName("AE") == "A_E_"
- True
- >>> userNameToFileName("Ae") == "A_e"
- True
- >>> userNameToFileName("ae") == "ae"
- True
- >>> userNameToFileName("aE") == "aE_"
- True
- >>> userNameToFileName("a.alt") == "a.alt"
- True
- >>> userNameToFileName("A.alt") == "A_.alt"
- True
- >>> userNameToFileName("A.Alt") == "A_.A_lt"
- True
- >>> userNameToFileName("A.aLt") == "A_.aL_t"
- True
- >>> userNameToFileName(u"A.alT") == "A_.alT_"
- True
- >>> userNameToFileName("T_H") == "T__H_"
- True
- >>> userNameToFileName("T_h") == "T__h"
- True
- >>> userNameToFileName("t_h") == "t_h"
- True
- >>> userNameToFileName("F_F_I") == "F__F__I_"
- True
- >>> userNameToFileName("f_f_i") == "f_f_i"
- True
- >>> userNameToFileName("Aacute_V.swash") == "A_acute_V_.swash"
- True
- >>> userNameToFileName(".notdef") == "_notdef"
- True
- >>> userNameToFileName("con") == "_con"
- True
- >>> userNameToFileName("CON") == "C_O_N_"
- True
- >>> userNameToFileName("con.alt") == "_con.alt"
- True
- >>> userNameToFileName("alt.con") == "alt._con"
- True
- """
- # the incoming name must be a str
- if not isinstance(userName, str):
- raise ValueError("The value for userName must be a string.")
- # establish the prefix and suffix lengths
- prefixLength = len(prefix)
- suffixLength = len(suffix)
- # replace an initial period with an _
- # if no prefix is to be added
- if not prefix and userName[0] == ".":
- userName = "_" + userName[1:]
- # filter the user name
- filteredUserName = []
- for character in userName:
- # replace illegal characters with _
- if character in illegalCharacters:
- character = "_"
- # add _ to all non-lower characters
- elif character != character.lower():
- character += "_"
- filteredUserName.append(character)
- userName = "".join(filteredUserName)
- # clip to 255
- sliceLength = maxFileNameLength - prefixLength - suffixLength
- userName = userName[:sliceLength]
- # test for illegal file names
- parts = []
- for part in userName.split("."):
- if part.lower() in reservedFileNames:
- part = "_" + part
- parts.append(part)
- userName = ".".join(parts)
- # test for clash
- fullName = prefix + userName + suffix
- if fullName.lower() in existing:
- fullName = handleClash1(userName, existing, prefix, suffix)
- # finished
- return fullName
-
-
-def handleClash1(userName, existing=[], prefix="", suffix=""):
- """
- existing should be a case-insensitive list
- of all existing file names.
-
- >>> prefix = ("0" * 5) + "."
- >>> suffix = "." + ("0" * 10)
- >>> existing = ["a" * 5]
-
- >>> e = list(existing)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000001.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.append(prefix + "aaaaa" + "1".zfill(15) + suffix)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000002.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.append(prefix + "AAAAA" + "2".zfill(15) + suffix)
- >>> handleClash1(userName="A" * 5, existing=e,
- ... prefix=prefix, suffix=suffix) == (
- ... '00000.AAAAA000000000000001.0000000000')
- True
- """
- # if the prefix length + user name length + suffix length + 15 is at
- # or past the maximum length, slice 15 characters off of the user name
- prefixLength = len(prefix)
- suffixLength = len(suffix)
- if prefixLength + len(userName) + suffixLength + 15 > maxFileNameLength:
- l = prefixLength + len(userName) + suffixLength + 15
- sliceLength = maxFileNameLength - l
- userName = userName[:sliceLength]
- finalName = None
- # try to add numbers to create a unique name
- counter = 1
- while finalName is None:
- name = userName + str(counter).zfill(15)
- fullName = prefix + name + suffix
- if fullName.lower() not in existing:
- finalName = fullName
- break
- else:
- counter += 1
- if counter >= 999999999999999:
- break
- # if there is a clash, go to the next fallback
- if finalName is None:
- finalName = handleClash2(existing, prefix, suffix)
- # finished
- return finalName
-
-
-def handleClash2(existing=[], prefix="", suffix=""):
- """
- existing should be a case-insensitive list
- of all existing file names.
-
- >>> prefix = ("0" * 5) + "."
- >>> suffix = "." + ("0" * 10)
- >>> existing = [prefix + str(i) + suffix for i in range(100)]
-
- >>> e = list(existing)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.100.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.remove(prefix + "1" + suffix)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.1.0000000000')
- True
-
- >>> e = list(existing)
- >>> e.remove(prefix + "2" + suffix)
- >>> handleClash2(existing=e, prefix=prefix, suffix=suffix) == (
- ... '00000.2.0000000000')
- True
- """
- # calculate the longest possible string
- maxLength = maxFileNameLength - len(prefix) - len(suffix)
- maxValue = int("9" * maxLength)
- # try to find a number
- finalName = None
- counter = 1
- while finalName is None:
- fullName = prefix + str(counter) + suffix
- if fullName.lower() not in existing:
- finalName = fullName
- break
- else:
- counter += 1
- if counter >= maxValue:
- break
- # raise an error if nothing has been found
- if finalName is None:
- raise NameTranslationError("No unique name could be found.")
- # finished
- return finalName
-
-
-if __name__ == "__main__":
- import doctest
- import sys
-
- sys.exit(doctest.testmod().failed)
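
As a small complement to the doctests above, a hedged sketch of generating clash-free file names for a handful of user names; the ".glif" suffix is just an illustrative choice, and `existing` is kept lowercased as the docstrings require.

    from fontTools.misc.filenames import userNameToFileName

    existing = []                  # case-insensitive list of names already taken
    for userName in ("A", "a", "A.alt", "con"):
        fileName = userNameToFileName(userName, existing=existing, suffix=".glif")
        existing.append(fileName.lower())
        print(userName, "->", fileName)
    # A     -> A_.glif
    # a     -> a.glif
    # A.alt -> A_.alt.glif
    # con   -> _con.glif
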
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_C_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_C_.py
deleted file mode 100644
index 573b3f9c3970766ea817994509f4939ef4f70f0c..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_C_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_T_S_I_C_(BaseTTXConverter):
- pass
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/code.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/code.py
deleted file mode 100644
index a35cc08225b063e75a7177c6b9913812c5262360..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/code.py
+++ /dev/null
@@ -1,127 +0,0 @@
-"""gr.Code() component"""
-
-from __future__ import annotations
-
-from pathlib import Path
-from typing import Any, Literal
-
-from gradio_client.documentation import document, set_documentation_group
-
-from gradio.components.base import Component
-from gradio.events import Events
-
-set_documentation_group("component")
-
-
-@document("languages")
-class Code(Component):
- """
- Creates a Code editor for entering, editing or viewing code.
- Preprocessing: passes a {str} of code into the function.
- Postprocessing: expects the function to return a {str} of code or a single-element {tuple}: {(string_filepath,)}
- """
-
- languages = [
- "python",
- "markdown",
- "json",
- "html",
- "css",
- "javascript",
- "typescript",
- "yaml",
- "dockerfile",
- "shell",
- "r",
- None,
- ]
-
- EVENTS = [Events.change, Events.input]
-
- def __init__(
- self,
- value: str | tuple[str] | None = None,
- language: Literal[
- "python",
- "markdown",
- "json",
- "html",
- "css",
- "javascript",
- "typescript",
- "yaml",
- "dockerfile",
- "shell",
- "r",
- ]
- | None = None,
- *,
- every: float | None = None,
- lines: int = 5,
- label: str | None = None,
- interactive: bool | None = None,
- show_label: bool | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- render: bool = True,
- ):
- """
- Parameters:
- value: Default value to show in the code editor. If callable, the function will be called whenever the app loads to set the initial value of the component.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- language: The language to display the code as. Supported languages listed in `gr.Code.languages`.
- label: The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
- interactive: Whether user should be able to enter code or only view it.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- render: If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
- """
- if language not in Code.languages:
- raise ValueError(f"Language {language} not supported.")
-
- self.language = language
- self.lines = lines
- super().__init__(
- label=label,
- every=every,
- interactive=interactive,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- render=render,
- value=value,
- )
-
- def preprocess(self, payload: Any) -> Any:
- return payload
-
- def postprocess(self, value: tuple | str | None) -> None | str:
- if value is None:
- return None
- elif isinstance(value, tuple):
- with open(value[0]) as file_data:
- return file_data.read()
- else:
- return value.strip()
-
- def flag(self, payload: Any, flag_dir: str | Path = "") -> str:
- return super().flag(payload, flag_dir)
-
- def api_info(self) -> dict[str, Any]:
- return {"type": "string"}
-
- def example_inputs(self) -> Any:
- return "print('Hello World')"
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-6be916c4.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-6be916c4.js
deleted file mode 100644
index 0406643c95a69e083e1210199704c6b6bff9474e..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-6be916c4.js
+++ /dev/null
@@ -1,2 +0,0 @@
-const{SvelteComponent:f,append:u,attr:d,detach:o,element:y,init:g,insert:v,noop:_,safe_not_equal:c,set_data:m,text:b,toggle_class:r}=window.__gradio__svelte__internal;function A(t){let e,n=(Array.isArray(t[0])?t[0].join(", "):t[0])+"",s;return{c(){e=y("div"),s=b(n),d(e,"class","svelte-rgtszb"),r(e,"table",t[1]==="table"),r(e,"gallery",t[1]==="gallery"),r(e,"selected",t[2])},m(l,a){v(l,e,a),u(e,s)},p(l,[a]){a&1&&n!==(n=(Array.isArray(l[0])?l[0].join(", "):l[0])+"")&&m(s,n),a&2&&r(e,"table",l[1]==="table"),a&2&&r(e,"gallery",l[1]==="gallery"),a&4&&r(e,"selected",l[2])},i:_,o:_,d(l){l&&o(e)}}}function h(t,e,n){let{value:s}=e,{type:l}=e,{selected:a=!1}=e;return t.$$set=i=>{"value"in i&&n(0,s=i.value),"type"in i&&n(1,l=i.type),"selected"in i&&n(2,a=i.selected)},[s,l,a]}class j extends f{constructor(e){super(),g(this,e,h,A,c,{value:0,type:1,selected:2})}}export{j as default};
-//# sourceMappingURL=Example-6be916c4.js.map
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/__init__.py
deleted file mode 100644
index 47703b7d492d3788178b6c3d544c9abcad1d2ded..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/__init__.py
+++ /dev/null
@@ -1,456 +0,0 @@
-"""
-NumPy
-=====
-
-Provides
- 1. An array object of arbitrary homogeneous items
- 2. Fast mathematical operations over arrays
- 3. Linear Algebra, Fourier Transforms, Random Number Generation
-
-How to use the documentation
-----------------------------
-Documentation is available in two forms: docstrings provided
-with the code, and a loose standing reference guide, available from
-`the NumPy homepage `_.
-
-We recommend exploring the docstrings using
-`IPython `_, an advanced Python shell with
-TAB-completion and introspection capabilities. See below for further
-instructions.
-
-The docstring examples assume that `numpy` has been imported as ``np``::
-
- >>> import numpy as np
-
-Code snippets are indicated by three greater-than signs::
-
- >>> x = 42
- >>> x = x + 1
-
-Use the built-in ``help`` function to view a function's docstring::
-
- >>> help(np.sort)
- ... # doctest: +SKIP
-
-For some objects, ``np.info(obj)`` may provide additional help. This is
-particularly true if you see the line "Help on ufunc object:" at the top
-of the help() page. Ufuncs are implemented in C, not Python, for speed.
-The native Python help() does not know how to view their help, but our
-np.info() function does.
-
-To search for documents containing a keyword, do::
-
- >>> np.lookfor('keyword')
- ... # doctest: +SKIP
-
-General-purpose documents like a glossary and help on the basic concepts
-of numpy are available under the ``doc`` sub-module::
-
- >>> from numpy import doc
- >>> help(doc)
- ... # doctest: +SKIP
-
-Available subpackages
----------------------
-lib
- Basic functions used by several sub-packages.
-random
- Core Random Tools
-linalg
- Core Linear Algebra Tools
-fft
- Core FFT routines
-polynomial
- Polynomial tools
-testing
- NumPy testing tools
-distutils
- Enhancements to distutils with support for
- Fortran compilers and more (for Python <= 3.11).
-
-Utilities
----------
-test
- Run numpy unittests
-show_config
- Show numpy build configuration
-matlib
- Make everything matrices.
-__version__
- NumPy version string
-
-Viewing documentation using IPython
------------------------------------
-
-Start IPython and import `numpy` usually under the alias ``np``: `import
-numpy as np`. Then, directly paste or use the ``%cpaste`` magic to paste
-examples into the shell. To see which functions are available in `numpy`,
-type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use
-``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow
-down the list. To view the docstring for a function, use
-``np.cos?`` (to view the docstring) and ``np.cos??`` (to view
-the source code).
-
-Copies vs. in-place operation
------------------------------
-Most of the functions in `numpy` return a copy of the array argument
-(e.g., `np.sort`). In-place versions of these functions are often
-available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.
-Exceptions to this rule are documented.
-
-"""
-import sys
-import warnings
-
-from ._globals import _NoValue, _CopyMode
-# These exceptions were moved in 1.25 and are hidden from __dir__()
-from .exceptions import (
- ComplexWarning, ModuleDeprecationWarning, VisibleDeprecationWarning,
- TooHardError, AxisError)
-
-
-# If a version with git hash was stored, use that instead
-from . import version
-from .version import __version__
-
-# We first need to detect if we're being called as part of the numpy setup
-# procedure itself in a reliable manner.
-try:
- __NUMPY_SETUP__
-except NameError:
- __NUMPY_SETUP__ = False
-
-if __NUMPY_SETUP__:
- sys.stderr.write('Running from numpy source directory.\n')
-else:
- # Allow distributors to run custom init code before importing numpy.core
- from . import _distributor_init
-
- try:
- from numpy.__config__ import show as show_config
- except ImportError as e:
- msg = """Error importing numpy: you should not try to import numpy from
- its source directory; please exit the numpy source tree, and relaunch
- your python interpreter from there."""
- raise ImportError(msg) from e
-
- __all__ = [
- 'exceptions', 'ModuleDeprecationWarning', 'VisibleDeprecationWarning',
- 'ComplexWarning', 'TooHardError', 'AxisError']
-
- # mapping of {name: (value, deprecation_msg)}
- __deprecated_attrs__ = {}
-
- from . import core
- from .core import *
- from . import compat
- from . import exceptions
- from . import dtypes
- from . import lib
- # NOTE: to be revisited following future namespace cleanup.
- # See gh-14454 and gh-15672 for discussion.
- from .lib import *
-
- from . import linalg
- from . import fft
- from . import polynomial
- from . import random
- from . import ctypeslib
- from . import ma
- from . import matrixlib as _mat
- from .matrixlib import *
-
- # Deprecations introduced in NumPy 1.20.0, 2020-06-06
- import builtins as _builtins
-
- _msg = (
- "module 'numpy' has no attribute '{n}'.\n"
- "`np.{n}` was a deprecated alias for the builtin `{n}`. "
- "To avoid this error in existing code, use `{n}` by itself. "
- "Doing this will not modify any behavior and is safe. {extended_msg}\n"
- "The aliases was originally deprecated in NumPy 1.20; for more "
- "details and guidance see the original release note at:\n"
- " https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations")
-
- _specific_msg = (
- "If you specifically wanted the numpy scalar type, use `np.{}` here.")
-
- _int_extended_msg = (
- "When replacing `np.{}`, you may wish to use e.g. `np.int64` "
- "or `np.int32` to specify the precision. If you wish to review "
- "your current use, check the release note link for "
- "additional information.")
-
- _type_info = [
- ("object", ""), # The NumPy scalar only exists by name.
- ("bool", _specific_msg.format("bool_")),
- ("float", _specific_msg.format("float64")),
- ("complex", _specific_msg.format("complex128")),
- ("str", _specific_msg.format("str_")),
- ("int", _int_extended_msg.format("int"))]
-
- __former_attrs__ = {
- n: _msg.format(n=n, extended_msg=extended_msg)
- for n, extended_msg in _type_info
- }
-
- # Future warning introduced in NumPy 1.24.0, 2022-11-17
- _msg = (
- "`np.{n}` is a deprecated alias for `{an}`. (Deprecated NumPy 1.24)")
-
- # Some of these are awkward (since `np.str` may be preferable in the long
- # term), but overall the names ending in 0 seem undesirable
- _type_info = [
- ("bool8", bool_, "np.bool_"),
- ("int0", intp, "np.intp"),
- ("uint0", uintp, "np.uintp"),
- ("str0", str_, "np.str_"),
- ("bytes0", bytes_, "np.bytes_"),
- ("void0", void, "np.void"),
- ("object0", object_,
- "`np.object0` is a deprecated alias for `np.object_`. "
- "`object` can be used instead. (Deprecated NumPy 1.24)")]
-
- # Some of these could be defined right away, but most were aliases to
- # the Python objects and only removed in NumPy 1.24. Defining them should
- # probably wait for NumPy 1.26 or 2.0.
- # When defined, these should possibly not be added to `__all__` to avoid
- # import with `from numpy import *`.
- __future_scalars__ = {"bool", "long", "ulong", "str", "bytes", "object"}
-
- __deprecated_attrs__.update({
- n: (alias, _msg.format(n=n, an=an)) for n, alias, an in _type_info})
-
- import math
-
- __deprecated_attrs__['math'] = (math,
- "`np.math` is a deprecated alias for the standard library `math` "
- "module (Deprecated Numpy 1.25). Replace usages of `np.math` with "
- "`math`")
-
- del math, _msg, _type_info
-
- from .core import abs
- # now that numpy modules are imported, can initialize limits
- core.getlimits._register_known_types()
-
- __all__.extend(['__version__', 'show_config'])
- __all__.extend(core.__all__)
- __all__.extend(_mat.__all__)
- __all__.extend(lib.__all__)
- __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
-
- # Remove min and max from __all__ to avoid `from numpy import *` overriding
- # the builtins min/max. Temporary fix for 1.25.x/1.26.x, see gh-24229.
- __all__.remove('min')
- __all__.remove('max')
- __all__.remove('round')
-
- # Remove one of the two occurrences of `issubdtype`, which is exposed as
- # both `numpy.core.issubdtype` and `numpy.lib.issubdtype`.
- __all__.remove('issubdtype')
-
- # These are exported by np.core, but are replaced by the builtins below
- # remove them to ensure that we don't end up with `np.long == np.int_`,
- # which would be a breaking change.
- del long, unicode
- __all__.remove('long')
- __all__.remove('unicode')
-
- # Remove things that are in the numpy.lib but not in the numpy namespace
- # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)
- # that prevents adding more things to the main namespace by accident.
- # The list below will grow until the `from .lib import *` fixme above is
- # taken care of
- __all__.remove('Arrayterator')
- del Arrayterator
-
- # These names were removed in NumPy 1.20. For at least one release,
- # attempts to access these names in the numpy namespace will trigger
- # a warning, and calling the function will raise an exception.
- _financial_names = ['fv', 'ipmt', 'irr', 'mirr', 'nper', 'npv', 'pmt',
- 'ppmt', 'pv', 'rate']
- __expired_functions__ = {
- name: (f'In accordance with NEP 32, the function {name} was removed '
- 'from NumPy version 1.20. A replacement for this function '
- 'is available in the numpy_financial library: '
- 'https://pypi.org/project/numpy-financial')
- for name in _financial_names}
-
- # Filter out Cython harmless warnings
- warnings.filterwarnings("ignore", message="numpy.dtype size changed")
- warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
- warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
-
- # oldnumeric and numarray were removed in 1.9. In case some packages import
- # but do not use them, we define them here for backward compatibility.
- oldnumeric = 'removed'
- numarray = 'removed'
-
- def __getattr__(attr):
- # Warn for expired attributes, and return a dummy function
- # that always raises an exception.
- import warnings
- import math
- try:
- msg = __expired_functions__[attr]
- except KeyError:
- pass
- else:
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
-
- def _expired(*args, **kwds):
- raise RuntimeError(msg)
-
- return _expired
-
- # Emit warnings for deprecated attributes
- try:
- val, msg = __deprecated_attrs__[attr]
- except KeyError:
- pass
- else:
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
- return val
-
- if attr in __future_scalars__:
- # And future warnings for those that will change, but also give
- # the AttributeError
- warnings.warn(
- f"In the future `np.{attr}` will be defined as the "
- "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
-
- if attr in __former_attrs__:
- raise AttributeError(__former_attrs__[attr])
-
- if attr == 'testing':
- import numpy.testing as testing
- return testing
- elif attr == 'Tester':
- "Removed in NumPy 1.25.0"
- raise RuntimeError("Tester was removed in NumPy 1.25.")
-
- raise AttributeError("module {!r} has no attribute "
- "{!r}".format(__name__, attr))
-
- def __dir__():
- public_symbols = globals().keys() | {'testing'}
- public_symbols -= {
- "core", "matrixlib",
- # These were moved in 1.25 and may be deprecated eventually:
- "ModuleDeprecationWarning", "VisibleDeprecationWarning",
- "ComplexWarning", "TooHardError", "AxisError"
- }
- return list(public_symbols)
-
- # Pytest testing
- from numpy._pytesttester import PytestTester
- test = PytestTester(__name__)
- del PytestTester
-
- def _sanity_check():
- """
- Quick sanity checks for common bugs caused by environment.
- There are some cases e.g. with wrong BLAS ABI that cause wrong
- results under specific runtime conditions that are not necessarily
- achieved during test suite runs, and it is useful to catch those early.
-
- See https://github.com/numpy/numpy/issues/8577 and other
- similar bug reports.
-
- """
- try:
- x = ones(2, dtype=float32)
- if not abs(x.dot(x) - float32(2.0)) < 1e-5:
- raise AssertionError()
- except AssertionError:
- msg = ("The current Numpy installation ({!r}) fails to "
- "pass simple sanity checks. This can be caused for example "
- "by incorrect BLAS library being linked in, or by mixing "
- "package managers (pip, conda, apt, ...). Search closed "
- "numpy issues for similar problems.")
- raise RuntimeError(msg.format(__file__)) from None
-
- _sanity_check()
- del _sanity_check
-
- def _mac_os_check():
- """
- Quick sanity check for macOS: look for Accelerate build bugs.
- Testing numpy polyfit, which calls init_dgelsd (LAPACK).
- """
- try:
- c = array([3., 2., 1.])
- x = linspace(0, 2, 5)
- y = polyval(c, x)
- _ = polyfit(x, y, 2, cov=True)
- except ValueError:
- pass
-
- if sys.platform == "darwin":
- with warnings.catch_warnings(record=True) as w:
- _mac_os_check()
- # Throw a runtime error if the test failed; check for a warning and build the error message
- error_message = ""
- if len(w) > 0:
- error_message = "{}: {}".format(w[-1].category.__name__, str(w[-1].message))
- msg = (
- "Polyfit sanity test emitted a warning, most likely due "
- "to using a buggy Accelerate backend."
- "\nIf you compiled yourself, more information is available at:"
- "\nhttps://numpy.org/doc/stable/user/building.html#accelerated-blas-lapack-libraries"
- "\nOtherwise report this to the vendor "
- "that provided NumPy.\n{}\n".format(error_message))
- raise RuntimeError(msg)
- del _mac_os_check
-
- # We usually use madvise hugepages support, but on some old kernels it
- # is slow and thus better avoided.
- # Specifically kernel version 4.6 had a bug fix which probably fixed this:
- # https://github.com/torvalds/linux/commit/7cf91a98e607c2f935dbcc177d70011e95b8faff
- import os
- use_hugepage = os.environ.get("NUMPY_MADVISE_HUGEPAGE", None)
- if sys.platform == "linux" and use_hugepage is None:
- # If there is an issue with parsing the kernel version,
- # set use_hugepage to 0. Using LooseVersion would handle
- # kernel version parsing better, but it is avoided here since it
- # would increase the import time. See: #16679 for related discussion.
- try:
- use_hugepage = 1
- kernel_version = os.uname().release.split(".")[:2]
- kernel_version = tuple(int(v) for v in kernel_version)
- if kernel_version < (4, 6):
- use_hugepage = 0
- except ValueError:
- use_hugepage = 0
- elif use_hugepage is None:
- # This is not Linux, so it should not matter, just enable anyway
- use_hugepage = 1
- else:
- use_hugepage = int(use_hugepage)
-
- # Note that this will currently only make a difference on Linux
- core.multiarray._set_madvise_hugepage(use_hugepage)
- del use_hugepage
-
- # Give a warning if NumPy is reloaded or imported on a sub-interpreter
- # We do this from python, since the C-module may not be reloaded and
- # it is tidier to organize it here.
- core.multiarray._multiarray_umath._reload_guard()
-
- # default to "weak" promotion for "NumPy 2".
- core._set_promotion_state(
- os.environ.get("NPY_PROMOTION_STATE",
- "weak" if _using_numpy2_behavior() else "legacy"))
-
- # Tell PyInstaller where to find hook-numpy.py
- def _pyinstaller_hooks_dir():
- from pathlib import Path
- return [str(Path(__file__).with_name("_pyinstaller").resolve())]
-
- # Remove symbols imported for internal use
- del os
-
-
-# Remove symbols imported for internal use
-del sys, warnings
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_pyinstaller/test_pyinstaller.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_pyinstaller/test_pyinstaller.py
deleted file mode 100644
index a9061da19b88c4243a3fd28bf05fd2986292d836..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_pyinstaller/test_pyinstaller.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import subprocess
-from pathlib import Path
-
-import pytest
-
-
-# PyInstaller has been very unproactive about replacing 'imp' with 'importlib'.
-@pytest.mark.filterwarnings('ignore::DeprecationWarning')
-# It also leaks io.BytesIO()s.
-@pytest.mark.filterwarnings('ignore::ResourceWarning')
-@pytest.mark.parametrize("mode", ["--onedir", "--onefile"])
-@pytest.mark.slow
-def test_pyinstaller(mode, tmp_path):
- """Compile and run pyinstaller-smoke.py using PyInstaller."""
-
- pyinstaller_cli = pytest.importorskip("PyInstaller.__main__").run
-
- source = Path(__file__).with_name("pyinstaller-smoke.py").resolve()
- args = [
- # Place all generated files in ``tmp_path``.
- '--workpath', str(tmp_path / "build"),
- '--distpath', str(tmp_path / "dist"),
- '--specpath', str(tmp_path),
- mode,
- str(source),
- ]
- pyinstaller_cli(args)
-
- if mode == "--onefile":
- exe = tmp_path / "dist" / source.stem
- else:
- exe = tmp_path / "dist" / source.stem / source.stem
-
- p = subprocess.run([str(exe)], check=True, stdout=subprocess.PIPE)
- assert p.stdout.strip() == b"I made it!"
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_typing.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_typing.py
deleted file mode 100644
index 743815b91210d2e7ca12125eedb3224147ffffe0..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/_typing.py
+++ /dev/null
@@ -1,476 +0,0 @@
-from __future__ import annotations
-
-from collections.abc import (
- Hashable,
- Iterator,
- Mapping,
- Sequence,
-)
-from datetime import (
- date,
- datetime,
- timedelta,
- tzinfo,
-)
-from os import PathLike
-import sys
-from typing import (
- TYPE_CHECKING,
- Any,
- Callable,
- Literal,
- Optional,
- Protocol,
- Type as type_t,
- TypeVar,
- Union,
-)
-
-import numpy as np
-
-# To prevent import cycles place any internal imports in the branch below
-# and use a string literal forward reference to it in subsequent types
-# https://mypy.readthedocs.io/en/latest/common_issues.html#import-cycles
-if TYPE_CHECKING:
- import numpy.typing as npt
-
- from pandas._libs import (
- NaTType,
- Period,
- Timedelta,
- Timestamp,
- )
- from pandas._libs.tslibs import BaseOffset
-
- from pandas.core.dtypes.dtypes import ExtensionDtype
-
- from pandas import Interval
- from pandas.arrays import (
- DatetimeArray,
- TimedeltaArray,
- )
- from pandas.core.arrays.base import ExtensionArray
- from pandas.core.frame import DataFrame
- from pandas.core.generic import NDFrame
- from pandas.core.groupby.generic import (
- DataFrameGroupBy,
- GroupBy,
- SeriesGroupBy,
- )
- from pandas.core.indexes.base import Index
- from pandas.core.internals import (
- ArrayManager,
- BlockManager,
- SingleArrayManager,
- SingleBlockManager,
- )
- from pandas.core.resample import Resampler
- from pandas.core.series import Series
- from pandas.core.window.rolling import BaseWindow
-
- from pandas.io.formats.format import EngFormatter
- from pandas.tseries.holiday import AbstractHolidayCalendar
-
- ScalarLike_co = Union[
- int,
- float,
- complex,
- str,
- bytes,
- np.generic,
- ]
-
- # numpy compatible types
- NumpyValueArrayLike = Union[ScalarLike_co, npt.ArrayLike]
- # Name "npt._ArrayLikeInt_co" is not defined [name-defined]
- NumpySorter = Optional[npt._ArrayLikeInt_co] # type: ignore[name-defined]
-
- if sys.version_info >= (3, 10):
- from typing import TypeGuard # pyright: ignore[reportUnusedImport]
- else:
- from typing_extensions import TypeGuard # pyright: ignore[reportUnusedImport]
-
- if sys.version_info >= (3, 11):
- from typing import Self # pyright: ignore[reportUnusedImport]
- else:
- from typing_extensions import Self # pyright: ignore[reportUnusedImport]
-else:
- npt: Any = None
- Self: Any = None
- TypeGuard: Any = None
-
-HashableT = TypeVar("HashableT", bound=Hashable)
-
-# array-like
-
-ArrayLike = Union["ExtensionArray", np.ndarray]
-AnyArrayLike = Union[ArrayLike, "Index", "Series"]
-TimeArrayLike = Union["DatetimeArray", "TimedeltaArray"]
-
-# list-like
-
-# Cannot use `Sequence` because a string is a sequence, and we don't want to
-# accept that. Could refine if https://github.com/python/typing/issues/256 is
-# resolved to differentiate between Sequence[str] and str
-ListLike = Union[AnyArrayLike, list, range]
-
-# scalars
-
-PythonScalar = Union[str, float, bool]
-DatetimeLikeScalar = Union["Period", "Timestamp", "Timedelta"]
-PandasScalar = Union["Period", "Timestamp", "Timedelta", "Interval"]
-Scalar = Union[PythonScalar, PandasScalar, np.datetime64, np.timedelta64, date]
-IntStrT = TypeVar("IntStrT", int, str)
-
-
-# timestamp and timedelta convertible types
-
-TimestampConvertibleTypes = Union[
- "Timestamp", date, np.datetime64, np.int64, float, str
-]
-TimestampNonexistent = Union[
- Literal["shift_forward", "shift_backward", "NaT", "raise"], timedelta
-]
-TimedeltaConvertibleTypes = Union[
- "Timedelta", timedelta, np.timedelta64, np.int64, float, str
-]
-Timezone = Union[str, tzinfo]
-
-ToTimestampHow = Literal["s", "e", "start", "end"]
-
-# NDFrameT is stricter and ensures that the same subclass of NDFrame always is
-# used. E.g. `def func(a: NDFrameT) -> NDFrameT: ...` means that if a
-# Series is passed into a function, a Series is always returned and if a DataFrame is
-# passed in, a DataFrame is always returned.
-NDFrameT = TypeVar("NDFrameT", bound="NDFrame")
-
-NumpyIndexT = TypeVar("NumpyIndexT", np.ndarray, "Index")
-
-AxisInt = int
-Axis = Union[AxisInt, Literal["index", "columns", "rows"]]
-IndexLabel = Union[Hashable, Sequence[Hashable]]
-Level = Hashable
-Shape = tuple[int, ...]
-Suffixes = tuple[Optional[str], Optional[str]]
-Ordered = Optional[bool]
-JSONSerializable = Optional[Union[PythonScalar, list, dict]]
-Frequency = Union[str, "BaseOffset"]
-Axes = ListLike
-
-RandomState = Union[
- int,
- np.ndarray,
- np.random.Generator,
- np.random.BitGenerator,
- np.random.RandomState,
-]
-
-# dtypes
-NpDtype = Union[str, np.dtype, type_t[Union[str, complex, bool, object]]]
-Dtype = Union["ExtensionDtype", NpDtype]
-AstypeArg = Union["ExtensionDtype", "npt.DTypeLike"]
-# DtypeArg specifies all allowable dtypes in a function's dtype argument
-DtypeArg = Union[Dtype, dict[Hashable, Dtype]]
-DtypeObj = Union[np.dtype, "ExtensionDtype"]
-
-# converters
-ConvertersArg = dict[Hashable, Callable[[Dtype], Dtype]]
-
-# parse_dates
-ParseDatesArg = Union[
- bool, list[Hashable], list[list[Hashable]], dict[Hashable, list[Hashable]]
-]
-
-# For functions like rename that convert one label to another
-Renamer = Union[Mapping[Any, Hashable], Callable[[Any], Hashable]]
-
-# to maintain type information across generic functions and parametrization
-T = TypeVar("T")
-
-# used in decorators to preserve the signature of the function it decorates
-# see https://mypy.readthedocs.io/en/stable/generics.html#declaring-decorators
-FuncType = Callable[..., Any]
-F = TypeVar("F", bound=FuncType)
-
-# types of vectorized key functions for DataFrame::sort_values and
-# DataFrame::sort_index, among others
-ValueKeyFunc = Optional[Callable[["Series"], Union["Series", AnyArrayLike]]]
-IndexKeyFunc = Optional[Callable[["Index"], Union["Index", AnyArrayLike]]]
-
-# types of `func` kwarg for DataFrame.aggregate and Series.aggregate
-AggFuncTypeBase = Union[Callable, str]
-AggFuncTypeDict = dict[Hashable, Union[AggFuncTypeBase, list[AggFuncTypeBase]]]
-AggFuncType = Union[
- AggFuncTypeBase,
- list[AggFuncTypeBase],
- AggFuncTypeDict,
-]
-AggObjType = Union[
- "Series",
- "DataFrame",
- "GroupBy",
- "SeriesGroupBy",
- "DataFrameGroupBy",
- "BaseWindow",
- "Resampler",
-]
-
-PythonFuncType = Callable[[Any], Any]
-
-# filenames and file-like-objects
-AnyStr_co = TypeVar("AnyStr_co", str, bytes, covariant=True)
-AnyStr_contra = TypeVar("AnyStr_contra", str, bytes, contravariant=True)
-
-
-class BaseBuffer(Protocol):
- @property
- def mode(self) -> str:
- # for _get_filepath_or_buffer
- ...
-
- def seek(self, __offset: int, __whence: int = ...) -> int:
- # with one argument: gzip.GzipFile, bz2.BZ2File
- # with two arguments: zip.ZipFile, read_sas
- ...
-
- def seekable(self) -> bool:
- # for bz2.BZ2File
- ...
-
- def tell(self) -> int:
- # for zip.ZipFile, read_stata, to_stata
- ...
-
-
-class ReadBuffer(BaseBuffer, Protocol[AnyStr_co]):
- def read(self, __n: int = ...) -> AnyStr_co:
- # for BytesIOWrapper, gzip.GzipFile, bz2.BZ2File
- ...
-
-
-class WriteBuffer(BaseBuffer, Protocol[AnyStr_contra]):
- def write(self, __b: AnyStr_contra) -> Any:
- # for gzip.GzipFile, bz2.BZ2File
- ...
-
- def flush(self) -> Any:
- # for gzip.GzipFile, bz2.BZ2File
- ...
-
-
-class ReadPickleBuffer(ReadBuffer[bytes], Protocol):
- def readline(self) -> bytes:
- ...
-
-
-class WriteExcelBuffer(WriteBuffer[bytes], Protocol):
- def truncate(self, size: int | None = ...) -> int:
- ...
-
-
-class ReadCsvBuffer(ReadBuffer[AnyStr_co], Protocol):
- def __iter__(self) -> Iterator[AnyStr_co]:
- # for engine=python
- ...
-
- def fileno(self) -> int:
- # for _MMapWrapper
- ...
-
- def readline(self) -> AnyStr_co:
- # for engine=python
- ...
-
- @property
- def closed(self) -> bool:
- # for engine=pyarrow
- ...
-
-
-FilePath = Union[str, "PathLike[str]"]
-
-# for arbitrary kwargs passed during reading/writing files
-StorageOptions = Optional[dict[str, Any]]
-
-
-# compression keywords and compression
-CompressionDict = dict[str, Any]
-CompressionOptions = Optional[
- Union[Literal["infer", "gzip", "bz2", "zip", "xz", "zstd", "tar"], CompressionDict]
-]
-
-# types in DataFrameFormatter
-FormattersType = Union[
- list[Callable], tuple[Callable, ...], Mapping[Union[str, int], Callable]
-]
-ColspaceType = Mapping[Hashable, Union[str, int]]
-FloatFormatType = Union[str, Callable, "EngFormatter"]
-ColspaceArgType = Union[
- str, int, Sequence[Union[str, int]], Mapping[Hashable, Union[str, int]]
-]
-
-# Arguments for fillna()
-FillnaOptions = Literal["backfill", "bfill", "ffill", "pad"]
-InterpolateOptions = Literal[
- "linear",
- "time",
- "index",
- "values",
- "nearest",
- "zero",
- "slinear",
- "quadratic",
- "cubic",
- "barycentric",
- "polynomial",
- "krogh",
- "piecewise_polynomial",
- "spline",
- "pchip",
- "akima",
- "cubicspline",
- "from_derivatives",
-]
-
-# internals
-Manager = Union[
- "ArrayManager", "SingleArrayManager", "BlockManager", "SingleBlockManager"
-]
-SingleManager = Union["SingleArrayManager", "SingleBlockManager"]
-Manager2D = Union["ArrayManager", "BlockManager"]
-
-# indexing
-# PositionalIndexer -> valid 1D positional indexer, e.g. can pass
-# to ndarray.__getitem__
-# ScalarIndexer is for a single value as the index
-# SequenceIndexer is for list like or slices (but not tuples)
-# PositionalIndexerTuple extends the PositionalIndexer for 2D arrays
-# These are used in various __getitem__ overloads
-# TODO(typing#684): add Ellipsis, see
-# https://github.com/python/typing/issues/684#issuecomment-548203158
-# https://bugs.python.org/issue41810
-# Using List[int] here rather than Sequence[int] to disallow tuples.
-ScalarIndexer = Union[int, np.integer]
-SequenceIndexer = Union[slice, list[int], np.ndarray]
-PositionalIndexer = Union[ScalarIndexer, SequenceIndexer]
-PositionalIndexerTuple = tuple[PositionalIndexer, PositionalIndexer]
-PositionalIndexer2D = Union[PositionalIndexer, PositionalIndexerTuple]
-if TYPE_CHECKING:
- TakeIndexer = Union[Sequence[int], Sequence[np.integer], npt.NDArray[np.integer]]
-else:
- TakeIndexer = Any
-
-# Shared by functions such as drop and astype
-IgnoreRaise = Literal["ignore", "raise"]
-
-# Windowing rank methods
-WindowingRankType = Literal["average", "min", "max"]
-
-# read_csv engines
-CSVEngine = Literal["c", "python", "pyarrow", "python-fwf"]
-
-# read_json engines
-JSONEngine = Literal["ujson", "pyarrow"]
-
-# read_xml parsers
-XMLParsers = Literal["lxml", "etree"]
-
-# Interval closed type
-IntervalLeftRight = Literal["left", "right"]
-IntervalClosedType = Union[IntervalLeftRight, Literal["both", "neither"]]
-
-# datetime and NaTType
-DatetimeNaTType = Union[datetime, "NaTType"]
-DateTimeErrorChoices = Union[IgnoreRaise, Literal["coerce"]]
-
-# sort_index
-SortKind = Literal["quicksort", "mergesort", "heapsort", "stable"]
-NaPosition = Literal["first", "last"]
-
-# Arguments for nsmallest and nlargest
-NsmallestNlargestKeep = Literal["first", "last", "all"]
-
-# quantile interpolation
-QuantileInterpolation = Literal["linear", "lower", "higher", "midpoint", "nearest"]
-
-# plotting
-PlottingOrientation = Literal["horizontal", "vertical"]
-
-# dropna
-AnyAll = Literal["any", "all"]
-
-# merge
-MergeHow = Literal["left", "right", "inner", "outer", "cross"]
-MergeValidate = Literal[
- "one_to_one",
- "1:1",
- "one_to_many",
- "1:m",
- "many_to_one",
- "m:1",
- "many_to_many",
- "m:m",
-]
-
-# join
-JoinHow = Literal["left", "right", "inner", "outer"]
-JoinValidate = Literal[
- "one_to_one",
- "1:1",
- "one_to_many",
- "1:m",
- "many_to_one",
- "m:1",
- "many_to_many",
- "m:m",
-]
-
-# reindex
-ReindexMethod = Union[FillnaOptions, Literal["nearest"]]
-
-MatplotlibColor = Union[str, Sequence[float]]
-TimeGrouperOrigin = Union[
- "Timestamp", Literal["epoch", "start", "start_day", "end", "end_day"]
-]
-TimeAmbiguous = Union[Literal["infer", "NaT", "raise"], "npt.NDArray[np.bool_]"]
-TimeNonexistent = Union[
- Literal["shift_forward", "shift_backward", "NaT", "raise"], timedelta
-]
-DropKeep = Literal["first", "last", False]
-CorrelationMethod = Union[
- Literal["pearson", "kendall", "spearman"], Callable[[np.ndarray, np.ndarray], float]
-]
-AlignJoin = Literal["outer", "inner", "left", "right"]
-DtypeBackend = Literal["pyarrow", "numpy_nullable"]
-
-TimeUnit = Literal["s", "ms", "us", "ns"]
-OpenFileErrors = Literal[
- "strict",
- "ignore",
- "replace",
- "surrogateescape",
- "xmlcharrefreplace",
- "backslashreplace",
- "namereplace",
-]
-
-# update
-UpdateJoin = Literal["left"]
-
-# applymap
-NaAction = Literal["ignore"]
-
-# from_dict
-FromDictOrient = Literal["columns", "index", "tight"]
-
-# to_gbq
-ToGbqIfexist = Literal["fail", "replace", "append"]
-
-# to_stata
-ToStataByteorder = Literal[">", "<", "little", "big"]
-
-# ExcelWriter
-ExcelWriterIfSheetExists = Literal["error", "new", "replace", "overlay"]
-
-# Offsets
-OffsetCalendar = Union[np.busdaycalendar, "AbstractHolidayCalendar"]
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/list/array.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/list/array.py
deleted file mode 100644
index 5b8955087436e87d1b43ef1fcd5a4cdcb98e05bf..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/list/array.py
+++ /dev/null
@@ -1,134 +0,0 @@
-"""
-Test extension array for storing nested data in a pandas container.
-
-The ListArray stores an ndarray of lists.
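-
-A minimal construction sketch (illustrative only; ragged Python lists are
-placed into an object-dtype ndarray, the same pattern used by ``make_data``
-below)::
-
- >>> import numpy as np
- >>> values = np.empty(2, dtype=object)
- >>> values[:] = [[1, 2], [3]]
- >>> arr = ListArray(values)  # a length-2 extension array holding lists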
-"""
-from __future__ import annotations
-
-import numbers
-import string
-from typing import TYPE_CHECKING
-
-import numpy as np
-
-from pandas.core.dtypes.base import ExtensionDtype
-
-import pandas as pd
-from pandas.api.types import (
- is_object_dtype,
- is_string_dtype,
-)
-from pandas.core.arrays import ExtensionArray
-
-if TYPE_CHECKING:
- from pandas._typing import type_t
-
-
-class ListDtype(ExtensionDtype):
- type = list
- name = "list"
- na_value = np.nan
-
- @classmethod
- def construct_array_type(cls) -> type_t[ListArray]:
- """
- Return the array type associated with this dtype.
-
- Returns
- -------
- type
- """
- return ListArray
-
-
-class ListArray(ExtensionArray):
- dtype = ListDtype()
- __array_priority__ = 1000
-
- def __init__(self, values, dtype=None, copy=False) -> None:
- if not isinstance(values, np.ndarray):
- raise TypeError("Need to pass a numpy array as values")
- for val in values:
- if not isinstance(val, self.dtype.type) and not pd.isna(val):
- raise TypeError("All values must be of type " + str(self.dtype.type))
- self.data = values
-
- @classmethod
- def _from_sequence(cls, scalars, dtype=None, copy=False):
- data = np.empty(len(scalars), dtype=object)
- data[:] = scalars
- return cls(data)
-
- def __getitem__(self, item):
- if isinstance(item, numbers.Integral):
- return self.data[item]
- else:
- # slice, list-like, mask
- return type(self)(self.data[item])
-
- def __len__(self) -> int:
- return len(self.data)
-
- def isna(self):
- return np.array(
- [not isinstance(x, list) and np.isnan(x) for x in self.data], dtype=bool
- )
-
- def take(self, indexer, allow_fill=False, fill_value=None):
- # re-implement here, since NumPy has trouble setting
- # sized objects like UserDicts into scalar slots of
- # an ndarray.
- indexer = np.asarray(indexer)
- msg = (
- "Index is out of bounds or cannot do a "
- "non-empty take from an empty array."
- )
-
- if allow_fill:
- if fill_value is None:
- fill_value = self.dtype.na_value
- # bounds check
- if (indexer < -1).any():
- raise ValueError
- try:
- output = [
- self.data[loc] if loc != -1 else fill_value for loc in indexer
- ]
- except IndexError as err:
- raise IndexError(msg) from err
- else:
- try:
- output = [self.data[loc] for loc in indexer]
- except IndexError as err:
- raise IndexError(msg) from err
-
- return self._from_sequence(output)
-
- def copy(self):
- return type(self)(self.data[:])
-
- def astype(self, dtype, copy=True):
- if isinstance(dtype, type(self.dtype)) and dtype == self.dtype:
- if copy:
- return self.copy()
- return self
- elif is_string_dtype(dtype) and not is_object_dtype(dtype):
- # numpy has problems with astype(str) for nested elements
- return np.array([str(x) for x in self.data], dtype=dtype)
- return np.array(self.data, dtype=dtype, copy=copy)
-
- @classmethod
- def _concat_same_type(cls, to_concat):
- data = np.concatenate([x.data for x in to_concat])
- return cls(data)
-
-
-def make_data():
- # TODO: Use a regular dict. See _NDFrameIndexer._setitem_with_indexer
- rng = np.random.default_rng(2)
- data = np.empty(100, dtype=object)
- data[:] = [
- [rng.choice(list(string.ascii_letters)) for _ in range(rng.integers(0, 10))]
- for _ in range(100)
- ]
- return data
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_numpy.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_numpy.py
deleted file mode 100644
index a54729de57a97c3bc46de5aab1f6495afc5b922f..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_numpy.py
+++ /dev/null
@@ -1,437 +0,0 @@
-"""
-This file contains a minimal set of tests for compliance with the extension
-array interface test suite, and should contain no other tests.
-The test suite for the full functionality of the array is located in
-`pandas/tests/arrays/`.
-
-The tests in this file are inherited from the BaseExtensionTests, and only
-minimal tweaks should be applied to get the tests passing (by overwriting a
-parent method).
-
-Additional tests should either be added to one of the BaseExtensionTests
-classes (if they are relevant for the extension interface for all dtypes), or
-be added to the array-specific tests in `pandas/tests/arrays/`.
-
-Note: we do not bother with base.BaseIndexTests because NumpyExtensionArray
-will never be held in an Index.
-"""
-import numpy as np
-import pytest
-
-from pandas.core.dtypes.cast import can_hold_element
-from pandas.core.dtypes.dtypes import NumpyEADtype
-
-import pandas as pd
-import pandas._testing as tm
-from pandas.api.types import is_object_dtype
-from pandas.core.arrays.numpy_ import NumpyExtensionArray
-from pandas.core.internals import blocks
-from pandas.tests.extension import base
-
-
-def _can_hold_element_patched(obj, element) -> bool:
- if isinstance(element, NumpyExtensionArray):
- element = element.to_numpy()
- return can_hold_element(obj, element)
-
-
-orig_assert_attr_equal = tm.assert_attr_equal
-
-
-def _assert_attr_equal(attr: str, left, right, obj: str = "Attributes"):
- """
- patch tm.assert_attr_equal so NumpyEADtype("object") is close enough to
- np.dtype("object")
- """
- if attr == "dtype":
- lattr = getattr(left, "dtype", None)
- rattr = getattr(right, "dtype", None)
- if isinstance(lattr, NumpyEADtype) and not isinstance(rattr, NumpyEADtype):
- left = left.astype(lattr.numpy_dtype)
- elif isinstance(rattr, NumpyEADtype) and not isinstance(lattr, NumpyEADtype):
- right = right.astype(rattr.numpy_dtype)
-
- orig_assert_attr_equal(attr, left, right, obj)
-
-
-@pytest.fixture(params=["float", "object"])
-def dtype(request):
- return NumpyEADtype(np.dtype(request.param))
-
-
-@pytest.fixture
-def allow_in_pandas(monkeypatch):
- """
- A monkeypatch that tells pandas to let us in.
-
- By default, passing a NumpyExtensionArray to an index / series / frame
- constructor will unbox that NumpyExtensionArray to an ndarray, and treat
- it as a non-EA column. We don't want people using EAs without
- reason.
-
- The mechanism for this is a check against ABCNumpyExtensionArray
- in each constructor.
-
- But, for testing, we need to allow them in pandas. So we patch
- the _typ of NumpyExtensionArray, so that we evade the ABCNumpyExtensionArray
- check.
- """
- with monkeypatch.context() as m:
- m.setattr(NumpyExtensionArray, "_typ", "extension")
- m.setattr(blocks, "can_hold_element", _can_hold_element_patched)
- m.setattr(tm.asserters, "assert_attr_equal", _assert_attr_equal)
- yield
-
-
-@pytest.fixture
-def data(allow_in_pandas, dtype):
- if dtype.numpy_dtype == "object":
- return pd.Series([(i,) for i in range(100)]).array
- return NumpyExtensionArray(np.arange(1, 101, dtype=dtype._dtype))
-
-
-@pytest.fixture
-def data_missing(allow_in_pandas, dtype):
- if dtype.numpy_dtype == "object":
- return NumpyExtensionArray(np.array([np.nan, (1,)], dtype=object))
- return NumpyExtensionArray(np.array([np.nan, 1.0]))
-
-
-@pytest.fixture
-def na_cmp():
- def cmp(a, b):
- return np.isnan(a) and np.isnan(b)
-
- return cmp
-
-
-@pytest.fixture
-def data_for_sorting(allow_in_pandas, dtype):
- """Length-3 array with a known sort order.
-
- This should be three items [B, C, A] with
- A < B < C
- """
- if dtype.numpy_dtype == "object":
- # Use an empty tuple for first element, then remove,
- # to disable np.array's shape inference.
- return NumpyExtensionArray(np.array([(), (2,), (3,), (1,)], dtype=object)[1:])
- return NumpyExtensionArray(np.array([1, 2, 0]))
-
-
-@pytest.fixture
-def data_missing_for_sorting(allow_in_pandas, dtype):
- """Length-3 array with a known sort order.
-
- This should be three items [B, NA, A] with
- A < B and NA missing.
- """
- if dtype.numpy_dtype == "object":
- return NumpyExtensionArray(np.array([(1,), np.nan, (0,)], dtype=object))
- return NumpyExtensionArray(np.array([1, np.nan, 0]))
-
-
-@pytest.fixture
-def data_for_grouping(allow_in_pandas, dtype):
- """Data for factorization, grouping, and unique tests.
-
- Expected to be like [B, B, NA, NA, A, A, B, C]
-
- Where A < B < C and NA is missing
- """
- if dtype.numpy_dtype == "object":
- a, b, c = (1,), (2,), (3,)
- else:
- a, b, c = np.arange(3)
- return NumpyExtensionArray(
- np.array([b, b, np.nan, np.nan, a, a, b, c], dtype=dtype.numpy_dtype)
- )
-
-
-@pytest.fixture
-def data_for_twos(dtype):
- if dtype.kind == "O":
- pytest.skip("Not a numeric dtype")
- arr = np.ones(100) * 2
- return NumpyExtensionArray._from_sequence(arr, dtype=dtype)
-
-
-@pytest.fixture
-def skip_numpy_object(dtype, request):
- """
- Tests for NumpyExtensionArray with nested data. Users typically won't create
- these objects via `pd.array`, but they can show up through `.array`
- on a Series with nested data. Many of the base tests fail, as they aren't
- appropriate for nested data.
-
- This fixture allows these tests to be skipped when used as a usefixtures
- marker to either an individual test or a test class.
- """
- if dtype == "object":
- mark = pytest.mark.xfail(reason="Fails for object dtype")
- request.node.add_marker(mark)
-
-
-skip_nested = pytest.mark.usefixtures("skip_numpy_object")
-
-
-class BaseNumPyTests:
- pass
-
-
-class TestCasting(BaseNumPyTests, base.BaseCastingTests):
- pass
-
-
-class TestConstructors(BaseNumPyTests, base.BaseConstructorsTests):
- @pytest.mark.skip(reason="We don't register our dtype")
- # We don't want to register. This test should probably be split in two.
- def test_from_dtype(self, data):
- pass
-
- @skip_nested
- def test_series_constructor_scalar_with_index(self, data, dtype):
- # ValueError: Length of passed values is 1, index implies 3.
- super().test_series_constructor_scalar_with_index(data, dtype)
-
-
-class TestDtype(BaseNumPyTests, base.BaseDtypeTests):
- def test_check_dtype(self, data, request):
- if data.dtype.numpy_dtype == "object":
- request.node.add_marker(
- pytest.mark.xfail(
- reason=f"NumpyExtensionArray expectedly clashes with a "
- f"NumPy name: {data.dtype.numpy_dtype}"
- )
- )
- super().test_check_dtype(data)
-
- def test_is_not_object_type(self, dtype, request):
- if dtype.numpy_dtype == "object":
- # Different from BaseDtypeTests.test_is_not_object_type
- # because NumpyEADtype(object) is an object type
- assert is_object_dtype(dtype)
- else:
- super().test_is_not_object_type(dtype)
-
-
-class TestGetitem(BaseNumPyTests, base.BaseGetitemTests):
- @skip_nested
- def test_getitem_scalar(self, data):
- # AssertionError
- super().test_getitem_scalar(data)
-
-
-class TestGroupby(BaseNumPyTests, base.BaseGroupbyTests):
- pass
-
-
-class TestInterface(BaseNumPyTests, base.BaseInterfaceTests):
- @skip_nested
- def test_array_interface(self, data):
- # NumPy array shape inference
- super().test_array_interface(data)
-
-
-class TestMethods(BaseNumPyTests, base.BaseMethodsTests):
- @skip_nested
- def test_shift_fill_value(self, data):
- # np.array shape inference. Shift implementation fails.
- super().test_shift_fill_value(data)
-
- @skip_nested
- def test_fillna_copy_frame(self, data_missing):
- # The "scalar" for this array isn't a scalar.
- super().test_fillna_copy_frame(data_missing)
-
- @skip_nested
- def test_fillna_copy_series(self, data_missing):
- # The "scalar" for this array isn't a scalar.
- super().test_fillna_copy_series(data_missing)
-
- @skip_nested
- def test_searchsorted(self, data_for_sorting, as_series):
- # Test setup fails.
- super().test_searchsorted(data_for_sorting, as_series)
-
- @pytest.mark.xfail(reason="NumpyExtensionArray.diff may fail on dtype")
- def test_diff(self, data, periods):
- return super().test_diff(data, periods)
-
- def test_insert(self, data, request):
- if data.dtype.numpy_dtype == object:
- mark = pytest.mark.xfail(reason="Dimension mismatch in np.concatenate")
- request.node.add_marker(mark)
-
- super().test_insert(data)
-
- @skip_nested
- def test_insert_invalid(self, data, invalid_scalar):
- # NumpyExtensionArray[object] can hold anything, so skip
- super().test_insert_invalid(data, invalid_scalar)
-
-
-class TestArithmetics(BaseNumPyTests, base.BaseArithmeticOpsTests):
- divmod_exc = None
- series_scalar_exc = None
- frame_scalar_exc = None
- series_array_exc = None
-
- @skip_nested
- def test_divmod(self, data):
- super().test_divmod(data)
-
- @skip_nested
- def test_arith_series_with_scalar(self, data, all_arithmetic_operators):
- super().test_arith_series_with_scalar(data, all_arithmetic_operators)
-
- def test_arith_series_with_array(self, data, all_arithmetic_operators, request):
- opname = all_arithmetic_operators
- if data.dtype.numpy_dtype == object and opname not in ["__add__", "__radd__"]:
- mark = pytest.mark.xfail(reason="Fails for object dtype")
- request.node.add_marker(mark)
- super().test_arith_series_with_array(data, all_arithmetic_operators)
-
- @skip_nested
- def test_arith_frame_with_scalar(self, data, all_arithmetic_operators):
- super().test_arith_frame_with_scalar(data, all_arithmetic_operators)
-
-
-class TestPrinting(BaseNumPyTests, base.BasePrintingTests):
- pass
-
-
-class TestReduce(BaseNumPyTests, base.BaseReduceTests):
- def _supports_reduction(self, obj, op_name: str) -> bool:
- if tm.get_dtype(obj).kind == "O":
- return op_name in ["sum", "min", "max", "any", "all"]
- return True
-
- def check_reduce(self, s, op_name, skipna):
- res_op = getattr(s, op_name)
- # avoid coercing int -> float. Just cast to the actual numpy type.
- exp_op = getattr(s.astype(s.dtype._dtype), op_name)
- if op_name == "count":
- result = res_op()
- expected = exp_op()
- else:
- result = res_op(skipna=skipna)
- expected = exp_op(skipna=skipna)
- tm.assert_almost_equal(result, expected)
-
- @pytest.mark.skip("tests not written yet")
- @pytest.mark.parametrize("skipna", [True, False])
- def test_reduce_frame(self, data, all_numeric_reductions, skipna):
- pass
-
-
-class TestMissing(BaseNumPyTests, base.BaseMissingTests):
- @skip_nested
- def test_fillna_series(self, data_missing):
- # Non-scalar "scalar" values.
- super().test_fillna_series(data_missing)
-
- @skip_nested
- def test_fillna_frame(self, data_missing):
- # Non-scalar "scalar" values.
- super().test_fillna_frame(data_missing)
-
-
-class TestReshaping(BaseNumPyTests, base.BaseReshapingTests):
- pass
-
-
-class TestSetitem(BaseNumPyTests, base.BaseSetitemTests):
- @skip_nested
- def test_setitem_invalid(self, data, invalid_scalar):
- # object dtype can hold anything, so doesn't raise
- super().test_setitem_invalid(data, invalid_scalar)
-
- @skip_nested
- def test_setitem_sequence_broadcasts(self, data, box_in_series):
- # ValueError: cannot set using a list-like indexer with a different
- # length than the value
- super().test_setitem_sequence_broadcasts(data, box_in_series)
-
- @skip_nested
- @pytest.mark.parametrize("setter", ["loc", None])
- def test_setitem_mask_broadcast(self, data, setter):
- # ValueError: cannot set using a list-like indexer with a different
- # length than the value
- super().test_setitem_mask_broadcast(data, setter)
-
- @skip_nested
- def test_setitem_scalar_key_sequence_raise(self, data):
- # Failed: DID NOT RAISE
- super().test_setitem_scalar_key_sequence_raise(data)
-
- # TODO: there is some issue with NumpyExtensionArray, therefore,
- # skip the setitem test for now, and fix it later (GH 31446)
-
- @skip_nested
- @pytest.mark.parametrize(
- "mask",
- [
- np.array([True, True, True, False, False]),
- pd.array([True, True, True, False, False], dtype="boolean"),
- ],
- ids=["numpy-array", "boolean-array"],
- )
- def test_setitem_mask(self, data, mask, box_in_series):
- super().test_setitem_mask(data, mask, box_in_series)
-
- @skip_nested
- @pytest.mark.parametrize(
- "idx",
- [[0, 1, 2], pd.array([0, 1, 2], dtype="Int64"), np.array([0, 1, 2])],
- ids=["list", "integer-array", "numpy-array"],
- )
- def test_setitem_integer_array(self, data, idx, box_in_series):
- super().test_setitem_integer_array(data, idx, box_in_series)
-
- @pytest.mark.parametrize(
- "idx, box_in_series",
- [
- ([0, 1, 2, pd.NA], False),
- pytest.param([0, 1, 2, pd.NA], True, marks=pytest.mark.xfail),
- (pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
- (pd.array([0, 1, 2, pd.NA], dtype="Int64"), True),
- ],
- ids=["list-False", "list-True", "integer-array-False", "integer-array-True"],
- )
- def test_setitem_integer_with_missing_raises(self, data, idx, box_in_series):
- super().test_setitem_integer_with_missing_raises(data, idx, box_in_series)
-
- @skip_nested
- def test_setitem_slice(self, data, box_in_series):
- super().test_setitem_slice(data, box_in_series)
-
- @skip_nested
- def test_setitem_loc_iloc_slice(self, data):
- super().test_setitem_loc_iloc_slice(data)
-
- def test_setitem_with_expansion_dataframe_column(self, data, full_indexer):
- # https://github.com/pandas-dev/pandas/issues/32395
- df = expected = pd.DataFrame({"data": pd.Series(data)})
- result = pd.DataFrame(index=df.index)
-
- # because result has object dtype, the attempt to do setting inplace
- # is successful, and object dtype is retained
- key = full_indexer(df)
- result.loc[key, "data"] = df["data"]
-
- # base class method has expected = df; NumpyExtensionArray behaves oddly because
- # we patch _typ for these tests.
- if data.dtype.numpy_dtype != object:
- if not isinstance(key, slice) or key != slice(None):
- expected = pd.DataFrame({"data": data.to_numpy()})
- tm.assert_frame_equal(result, expected)
-
-
-@skip_nested
-class TestParsing(BaseNumPyTests, base.BaseParsingTests):
- pass
-
-
-class Test2DCompat(BaseNumPyTests, base.NDArrayBacked2DTests):
- pass
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_backend.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_backend.py
deleted file mode 100644
index c0ad8e0c9608d3d04723f472a5956d3e366ffcac..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_backend.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import sys
-import types
-
-import pytest
-
-import pandas.util._test_decorators as td
-
-import pandas
-
-
-@pytest.fixture
-def dummy_backend():
- db = types.ModuleType("pandas_dummy_backend")
- setattr(db, "plot", lambda *args, **kwargs: "used_dummy")
- return db
-
-
-@pytest.fixture
-def restore_backend():
- """Restore the plotting backend to matplotlib"""
- with pandas.option_context("plotting.backend", "matplotlib"):
- yield
-
-
-def test_backend_is_not_module():
- msg = "Could not find plotting backend 'not_an_existing_module'."
- with pytest.raises(ValueError, match=msg):
- pandas.set_option("plotting.backend", "not_an_existing_module")
-
- assert pandas.options.plotting.backend == "matplotlib"
-
-
-def test_backend_is_correct(monkeypatch, restore_backend, dummy_backend):
- monkeypatch.setitem(sys.modules, "pandas_dummy_backend", dummy_backend)
-
- pandas.set_option("plotting.backend", "pandas_dummy_backend")
- assert pandas.get_option("plotting.backend") == "pandas_dummy_backend"
- assert (
- pandas.plotting._core._get_plot_backend("pandas_dummy_backend") is dummy_backend
- )
-
-
-def test_backend_can_be_set_in_plot_call(monkeypatch, restore_backend, dummy_backend):
- monkeypatch.setitem(sys.modules, "pandas_dummy_backend", dummy_backend)
- df = pandas.DataFrame([1, 2, 3])
-
- assert pandas.get_option("plotting.backend") == "matplotlib"
- assert df.plot(backend="pandas_dummy_backend") == "used_dummy"
-
-
-def test_register_entrypoint(restore_backend, tmp_path, monkeypatch, dummy_backend):
- monkeypatch.syspath_prepend(tmp_path)
- monkeypatch.setitem(sys.modules, "pandas_dummy_backend", dummy_backend)
-
- dist_info = tmp_path / "my_backend-0.0.0.dist-info"
- dist_info.mkdir()
- # entry_point name should not match module name - otherwise pandas will
- # fall back to backend lookup by module name
- (dist_info / "entry_points.txt").write_bytes(
- b"[pandas_plotting_backends]\nmy_ep_backend = pandas_dummy_backend\n"
- )
-
- assert pandas.plotting._core._get_plot_backend("my_ep_backend") is dummy_backend
-
- with pandas.option_context("plotting.backend", "my_ep_backend"):
- assert pandas.plotting._core._get_plot_backend() is dummy_backend
-
-
-def test_setting_backend_without_plot_raises(monkeypatch):
- # GH-28163
- module = types.ModuleType("pandas_plot_backend")
- monkeypatch.setitem(sys.modules, "pandas_plot_backend", module)
-
- assert pandas.options.plotting.backend == "matplotlib"
- with pytest.raises(
- ValueError, match="Could not find plotting backend 'pandas_plot_backend'."
- ):
- pandas.set_option("plotting.backend", "pandas_plot_backend")
-
- assert pandas.options.plotting.backend == "matplotlib"
-
-
-@td.skip_if_mpl
-def test_no_matplotlib_ok():
- msg = (
- 'matplotlib is required for plotting when the default backend "matplotlib" is '
- "selected."
- )
- with pytest.raises(ImportError, match=msg):
- pandas.plotting._core._get_plot_backend("matplotlib")
-
-
-def test_extra_kinds_ok(monkeypatch, restore_backend, dummy_backend):
- # https://github.com/pandas-dev/pandas/pull/28647
- monkeypatch.setitem(sys.modules, "pandas_dummy_backend", dummy_backend)
- pandas.set_option("plotting.backend", "pandas_dummy_backend")
- df = pandas.DataFrame({"A": [1, 2, 3]})
- df.plot(kind="not a real kind")
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_repeat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_repeat.py
deleted file mode 100644
index 8ecc8052ff49c150444cf395b68e6163fb761775..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_repeat.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas import (
- MultiIndex,
- Series,
-)
-import pandas._testing as tm
-
-
-class TestRepeat:
- def test_repeat(self):
- ser = Series(np.random.default_rng(2).standard_normal(3), index=["a", "b", "c"])
-
- reps = ser.repeat(5)
- exp = Series(ser.values.repeat(5), index=ser.index.values.repeat(5))
- tm.assert_series_equal(reps, exp)
-
- to_rep = [2, 3, 4]
- reps = ser.repeat(to_rep)
- exp = Series(ser.values.repeat(to_rep), index=ser.index.values.repeat(to_rep))
- tm.assert_series_equal(reps, exp)
-
- def test_numpy_repeat(self):
- ser = Series(np.arange(3), name="x")
- expected = Series(
- ser.values.repeat(2), name="x", index=ser.index.values.repeat(2)
- )
- tm.assert_series_equal(np.repeat(ser, 2), expected)
-
- msg = "the 'axis' parameter is not supported"
- with pytest.raises(ValueError, match=msg):
- np.repeat(ser, 2, axis=0)
-
- def test_repeat_with_multiindex(self):
- # GH#9361, fixed by GH#7891
- m_idx = MultiIndex.from_tuples([(1, 2), (3, 4), (5, 6), (7, 8)])
- data = ["a", "b", "c", "d"]
- m_df = Series(data, index=m_idx)
- assert m_df.repeat(3).shape == (3 * len(data),)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_tolist.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_tolist.py
deleted file mode 100644
index 4af473528e23850794139ac563cc04c6d3c54617..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_tolist.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import pytest
-
-import pandas.util._test_decorators as td
-
-from pandas import (
- Interval,
- Period,
- Series,
- Timedelta,
- Timestamp,
-)
-
-
-@pytest.mark.parametrize(
- "values, dtype, expected_dtype",
- (
- ([1], "int64", int),
- ([1], "Int64", int),
- ([1.0], "float64", float),
- ([1.0], "Float64", float),
- (["abc"], "object", str),
- (["abc"], "string", str),
- ([Interval(1, 3)], "interval", Interval),
- ([Period("2000-01-01", "D")], "period[D]", Period),
- ([Timedelta(days=1)], "timedelta64[ns]", Timedelta),
- ([Timestamp("2000-01-01")], "datetime64[ns]", Timestamp),
- pytest.param([1], "int64[pyarrow]", int, marks=td.skip_if_no("pyarrow")),
- pytest.param([1.0], "float64[pyarrow]", float, marks=td.skip_if_no("pyarrow")),
- pytest.param(["abc"], "string[pyarrow]", str, marks=td.skip_if_no("pyarrow")),
- ),
-)
-def test_tolist_scalar_dtype(values, dtype, expected_dtype):
- # GH49890
- ser = Series(values, dtype=dtype)
- result_dtype = type(ser.tolist()[0])
- assert result_dtype == expected_dtype
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/index/package_finder.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/index/package_finder.py
deleted file mode 100644
index 223d06df67e21ff59ae191613d8c905ea646e877..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/index/package_finder.py
+++ /dev/null
@@ -1,1004 +0,0 @@
-"""Routines related to PyPI, indexes"""
-
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import functools
-import itertools
-import logging
-import re
-from typing import FrozenSet, Iterable, List, Optional, Set, Tuple, Union
-
-from pip._vendor.packaging import specifiers
-from pip._vendor.packaging.tags import Tag
-from pip._vendor.packaging.utils import canonicalize_name
-from pip._vendor.packaging.version import _BaseVersion
-from pip._vendor.packaging.version import parse as parse_version
-
-from pip._internal.exceptions import (
- BestVersionAlreadyInstalled,
- DistributionNotFound,
- InvalidWheelFilename,
- UnsupportedWheel,
-)
-from pip._internal.index.collector import LinkCollector, parse_links
-from pip._internal.models.candidate import InstallationCandidate
-from pip._internal.models.format_control import FormatControl
-from pip._internal.models.link import Link
-from pip._internal.models.search_scope import SearchScope
-from pip._internal.models.selection_prefs import SelectionPreferences
-from pip._internal.models.target_python import TargetPython
-from pip._internal.models.wheel import Wheel
-from pip._internal.req import InstallRequirement
-from pip._internal.utils._log import getLogger
-from pip._internal.utils.filetypes import WHEEL_EXTENSION
-from pip._internal.utils.hashes import Hashes
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.misc import build_netloc
-from pip._internal.utils.packaging import check_requires_python
-from pip._internal.utils.unpacking import SUPPORTED_EXTENSIONS
-
-__all__ = ["FormatControl", "BestCandidateResult", "PackageFinder"]
-
-
-logger = getLogger(__name__)
-
-BuildTag = Union[Tuple[()], Tuple[int, str]]
-CandidateSortingKey = Tuple[int, int, int, _BaseVersion, Optional[int], BuildTag]
-
-
-def _check_link_requires_python(
- link: Link,
- version_info: Tuple[int, int, int],
- ignore_requires_python: bool = False,
-) -> bool:
- """
- Return whether the given Python version is compatible with a link's
- "Requires-Python" value.
-
- :param version_info: A 3-tuple of ints representing the Python
- major-minor-micro version to check.
- :param ignore_requires_python: Whether to ignore the "Requires-Python"
- value if the given Python version isn't compatible.
- """
- try:
- is_compatible = check_requires_python(
- link.requires_python,
- version_info=version_info,
- )
- except specifiers.InvalidSpecifier:
- logger.debug(
- "Ignoring invalid Requires-Python (%r) for link: %s",
- link.requires_python,
- link,
- )
- else:
- if not is_compatible:
- version = ".".join(map(str, version_info))
- if not ignore_requires_python:
- logger.verbose(
- "Link requires a different Python (%s not in: %r): %s",
- version,
- link.requires_python,
- link,
- )
- return False
-
- logger.debug(
- "Ignoring failed Requires-Python check (%s not in: %r) for link: %s",
- version,
- link.requires_python,
- link,
- )
-
- return True
-
-
-class LinkEvaluator:
-
- """
- Responsible for evaluating links for a particular project.
- """
-
- _py_version_re = re.compile(r"-py([123]\.?[0-9]?)$")
-
- # Don't include an allow_yanked default value to make sure each call
- # site considers whether yanked releases are allowed. This also causes
- # that decision to be made explicit in the calling code, which helps
- # people when reading the code.
- def __init__(
- self,
- project_name: str,
- canonical_name: str,
- formats: FrozenSet[str],
- target_python: TargetPython,
- allow_yanked: bool,
- ignore_requires_python: Optional[bool] = None,
- ) -> None:
- """
- :param project_name: The user supplied package name.
- :param canonical_name: The canonical package name.
- :param formats: The formats allowed for this package. Should be a set
- with 'binary' or 'source' or both in it.
- :param target_python: The target Python interpreter to use when
- evaluating link compatibility. This is used, for example, to
- check wheel compatibility, as well as when checking the Python
- version, e.g. the Python version embedded in a link filename
- (or egg fragment) and against an HTML link's optional PEP 503
- "data-requires-python" attribute.
- :param allow_yanked: Whether files marked as yanked (in the sense
- of PEP 592) are permitted to be candidates for install.
- :param ignore_requires_python: Whether to ignore incompatible
- PEP 503 "data-requires-python" values in HTML links. Defaults
- to False.
- """
- if ignore_requires_python is None:
- ignore_requires_python = False
-
- self._allow_yanked = allow_yanked
- self._canonical_name = canonical_name
- self._ignore_requires_python = ignore_requires_python
- self._formats = formats
- self._target_python = target_python
-
- self.project_name = project_name
-
- def evaluate_link(self, link: Link) -> Tuple[bool, Optional[str]]:
- """
- Determine whether a link is a candidate for installation.
-
- :return: A tuple (is_candidate, result), where `result` is (1) a
- version string if `is_candidate` is True, and (2) if
- `is_candidate` is False, an optional string to log the reason
- the link fails to qualify.
- """
- version = None
- if link.is_yanked and not self._allow_yanked:
- reason = link.yanked_reason or ""
- return (False, f"yanked for reason: {reason}")
-
- if link.egg_fragment:
- egg_info = link.egg_fragment
- ext = link.ext
- else:
- egg_info, ext = link.splitext()
- if not ext:
- return (False, "not a file")
- if ext not in SUPPORTED_EXTENSIONS:
- return (False, f"unsupported archive format: {ext}")
- if "binary" not in self._formats and ext == WHEEL_EXTENSION:
- reason = "No binaries permitted for {}".format(self.project_name)
- return (False, reason)
- if "macosx10" in link.path and ext == ".zip":
- return (False, "macosx10 one")
- if ext == WHEEL_EXTENSION:
- try:
- wheel = Wheel(link.filename)
- except InvalidWheelFilename:
- return (False, "invalid wheel filename")
- if canonicalize_name(wheel.name) != self._canonical_name:
- reason = "wrong project name (not {})".format(self.project_name)
- return (False, reason)
-
- supported_tags = self._target_python.get_tags()
- if not wheel.supported(supported_tags):
- # Include the wheel's tags in the reason string to
- # simplify troubleshooting compatibility issues.
- file_tags = wheel.get_formatted_file_tags()
- reason = (
- "none of the wheel's tags ({}) are compatible "
- "(run pip debug --verbose to show compatible tags)".format(
- ", ".join(file_tags)
- )
- )
- return (False, reason)
-
- version = wheel.version
-
- # This should be up by the self.ok_binary check, but see issue 2700.
- if "source" not in self._formats and ext != WHEEL_EXTENSION:
- reason = f"No sources permitted for {self.project_name}"
- return (False, reason)
-
- if not version:
- version = _extract_version_from_fragment(
- egg_info,
- self._canonical_name,
- )
- if not version:
- reason = f"Missing project version for {self.project_name}"
- return (False, reason)
-
- match = self._py_version_re.search(version)
- if match:
- version = version[: match.start()]
- py_version = match.group(1)
- if py_version != self._target_python.py_version:
- return (False, "Python version is incorrect")
-
- supports_python = _check_link_requires_python(
- link,
- version_info=self._target_python.py_version_info,
- ignore_requires_python=self._ignore_requires_python,
- )
- if not supports_python:
- # Return None for the reason text to suppress calling
- # _log_skipped_link().
- return (False, None)
-
- logger.debug("Found link %s, version: %s", link, version)
-
- return (True, version)
-
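
One small piece of `evaluate_link()` worth calling out is the `_py_version_re` handling: a version recovered from an egg fragment or filename stem may end in a `-pyX.Y` marker, which is split off and compared against the target interpreter. A standalone sketch of just that step (the helper name is invented):

```python
import re

# Same pattern as LinkEvaluator._py_version_re above.
_py_version_re = re.compile(r"-py([123]\.?[0-9]?)$")


def split_py_marker(version: str):
    """Hypothetical helper: split a trailing '-pyX.Y' marker off a version string."""
    match = _py_version_re.search(version)
    if not match:
        return version, None
    return version[: match.start()], match.group(1)


print(split_py_marker("1.2.3-py3.9"))  # ('1.2.3', '3.9')
print(split_py_marker("1.2.3"))        # ('1.2.3', None)
```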
-
-def filter_unallowed_hashes(
- candidates: List[InstallationCandidate],
- hashes: Hashes,
- project_name: str,
-) -> List[InstallationCandidate]:
- """
- Filter out candidates whose hashes aren't allowed, and return a new
- list of candidates.
-
- If at least one candidate has an allowed hash, then all candidates with
- either an allowed hash or no hash specified are returned. Otherwise,
- the given candidates are returned.
-
- Including the candidates with no hash specified when there is a match
- allows a warning to be logged if there is a more preferred candidate
- with no hash specified. Returning all candidates in the case of no
- matches lets pip report the hash of the candidate that would otherwise
- have been installed (e.g. permitting the user to more easily update
- their requirements file with the desired hash).
- """
- if not hashes:
- logger.debug(
- "Given no hashes to check %s links for project %r: "
- "discarding no candidates",
- len(candidates),
- project_name,
- )
- # Make sure we're not returning back the given value.
- return list(candidates)
-
- matches_or_no_digest = []
- # Collect the non-matches for logging purposes.
- non_matches = []
- match_count = 0
- for candidate in candidates:
- link = candidate.link
- if not link.has_hash:
- pass
- elif link.is_hash_allowed(hashes=hashes):
- match_count += 1
- else:
- non_matches.append(candidate)
- continue
-
- matches_or_no_digest.append(candidate)
-
- if match_count:
- filtered = matches_or_no_digest
- else:
- # Make sure we're not returning back the given value.
- filtered = list(candidates)
-
- if len(filtered) == len(candidates):
- discard_message = "discarding no candidates"
- else:
- discard_message = "discarding {} non-matches:\n {}".format(
- len(non_matches),
- "\n ".join(str(candidate.link) for candidate in non_matches),
- )
-
- logger.debug(
- "Checked %s links for project %r against %s hashes "
- "(%s matches, %s no digest): %s",
- len(candidates),
- project_name,
- hashes.digest_count,
- match_count,
- len(matches_or_no_digest) - match_count,
- discard_message,
- )
-
- return filtered
-
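
The rule in the docstring above is easier to see on toy data: keep the hash matches plus the hash-less candidates when at least one candidate matches, otherwise keep everything so the caller can report what would have been installed. A simplified, self-contained rendition (not pip's code) over `(version, digest)` pairs:

```python
def filter_unallowed(candidates, allowed):
    """Toy version of the rule documented above; candidates are (version, digest) pairs."""
    matches_or_no_digest = [c for c in candidates if c[1] is None or c[1] in allowed]
    any_match = any(c[1] in allowed for c in candidates if c[1] is not None)
    # No match at all: return everything so the best candidate's hash can be reported.
    return matches_or_no_digest if any_match else list(candidates)


cands = [("1.0", "aaa"), ("1.1", None), ("1.2", "bbb")]
print(filter_unallowed(cands, {"aaa"}))  # [('1.0', 'aaa'), ('1.1', None)]
print(filter_unallowed(cands, {"zzz"}))  # all three candidates are kept
```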
-
-class CandidatePreferences:
-
- """
- Encapsulates some of the preferences for filtering and sorting
- InstallationCandidate objects.
- """
-
- def __init__(
- self,
- prefer_binary: bool = False,
- allow_all_prereleases: bool = False,
- ) -> None:
- """
- :param allow_all_prereleases: Whether to allow all pre-releases.
- """
- self.allow_all_prereleases = allow_all_prereleases
- self.prefer_binary = prefer_binary
-
-
-class BestCandidateResult:
- """A collection of candidates, returned by `PackageFinder.find_best_candidate`.
-
- This class is only intended to be instantiated by CandidateEvaluator's
- `compute_best_candidate()` method.
- """
-
- def __init__(
- self,
- candidates: List[InstallationCandidate],
- applicable_candidates: List[InstallationCandidate],
- best_candidate: Optional[InstallationCandidate],
- ) -> None:
- """
- :param candidates: A sequence of all available candidates found.
- :param applicable_candidates: The applicable candidates.
- :param best_candidate: The most preferred candidate found, or None
- if no applicable candidates were found.
- """
- assert set(applicable_candidates) <= set(candidates)
-
- if best_candidate is None:
- assert not applicable_candidates
- else:
- assert best_candidate in applicable_candidates
-
- self._applicable_candidates = applicable_candidates
- self._candidates = candidates
-
- self.best_candidate = best_candidate
-
- def iter_all(self) -> Iterable[InstallationCandidate]:
- """Iterate through all candidates."""
- return iter(self._candidates)
-
- def iter_applicable(self) -> Iterable[InstallationCandidate]:
- """Iterate through the applicable candidates."""
- return iter(self._applicable_candidates)
-
-
-class CandidateEvaluator:
-
- """
- Responsible for filtering and sorting candidates for installation based
- on what tags are valid.
- """
-
- @classmethod
- def create(
- cls,
- project_name: str,
- target_python: Optional[TargetPython] = None,
- prefer_binary: bool = False,
- allow_all_prereleases: bool = False,
- specifier: Optional[specifiers.BaseSpecifier] = None,
- hashes: Optional[Hashes] = None,
- ) -> "CandidateEvaluator":
- """Create a CandidateEvaluator object.
-
- :param target_python: The target Python interpreter to use when
- checking compatibility. If None (the default), a TargetPython
- object will be constructed from the running Python.
- :param specifier: An optional object implementing `filter`
- (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable
- versions.
- :param hashes: An optional collection of allowed hashes.
- """
- if target_python is None:
- target_python = TargetPython()
- if specifier is None:
- specifier = specifiers.SpecifierSet()
-
- supported_tags = target_python.get_tags()
-
- return cls(
- project_name=project_name,
- supported_tags=supported_tags,
- specifier=specifier,
- prefer_binary=prefer_binary,
- allow_all_prereleases=allow_all_prereleases,
- hashes=hashes,
- )
-
- def __init__(
- self,
- project_name: str,
- supported_tags: List[Tag],
- specifier: specifiers.BaseSpecifier,
- prefer_binary: bool = False,
- allow_all_prereleases: bool = False,
- hashes: Optional[Hashes] = None,
- ) -> None:
- """
- :param supported_tags: The PEP 425 tags supported by the target
- Python in order of preference (most preferred first).
- """
- self._allow_all_prereleases = allow_all_prereleases
- self._hashes = hashes
- self._prefer_binary = prefer_binary
- self._project_name = project_name
- self._specifier = specifier
- self._supported_tags = supported_tags
- # Since the index of the tag in the _supported_tags list is used
- # as a priority, precompute a map from tag to index/priority to be
- # used in wheel.find_most_preferred_tag.
- self._wheel_tag_preferences = {
- tag: idx for idx, tag in enumerate(supported_tags)
- }
-
- def get_applicable_candidates(
- self,
- candidates: List[InstallationCandidate],
- ) -> List[InstallationCandidate]:
- """
- Return the applicable candidates from a list of candidates.
- """
- # Using None infers from the specifier instead.
- allow_prereleases = self._allow_all_prereleases or None
- specifier = self._specifier
- versions = {
- str(v)
- for v in specifier.filter(
- # We turn the version object into a str here because otherwise
- # when we're debundled but setuptools isn't, Python will see
- # packaging.version.Version and
- # pkg_resources._vendor.packaging.version.Version as different
- # types. This way we'll use a str as a common data interchange
- # format. If we stop using the pkg_resources provided specifier
- # and start using our own, we can drop the cast to str().
- (str(c.version) for c in candidates),
- prereleases=allow_prereleases,
- )
- }
-
- # Again, converting version to str to deal with debundling.
- applicable_candidates = [c for c in candidates if str(c.version) in versions]
-
- filtered_applicable_candidates = filter_unallowed_hashes(
- candidates=applicable_candidates,
- hashes=self._hashes,
- project_name=self._project_name,
- )
-
- return sorted(filtered_applicable_candidates, key=self._sort_key)
-
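
The `allow_prereleases = self._allow_all_prereleases or None` trick above leans on `SpecifierSet.filter()`: `None` means "let the specifier decide", while `True` force-includes pre-releases. A quick illustration with `packaging` directly:

```python
from packaging.specifiers import SpecifierSet

versions = ["1.0", "1.1", "2.0a1"]
spec = SpecifierSet(">=1.0")

print(list(spec.filter(versions, prereleases=None)))  # ['1.0', '1.1']
print(list(spec.filter(versions, prereleases=True)))  # ['1.0', '1.1', '2.0a1']
```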
- def _sort_key(self, candidate: InstallationCandidate) -> CandidateSortingKey:
- """
- Function to pass as the `key` argument to a call to sorted() to sort
- InstallationCandidates by preference.
-
- Returns a tuple such that tuples sorting as greater using Python's
- default comparison operator are more preferred.
-
- The preference is as follows:
-
- First and foremost, candidates with allowed (matching) hashes are
- always preferred over candidates without matching hashes. This is
- because e.g. if the only candidate with an allowed hash is yanked,
- we still want to use that candidate.
-
- Second, excepting hash considerations, candidates that have been
- yanked (in the sense of PEP 592) are always less preferred than
- candidates that haven't been yanked. Then:
-
- If not finding wheels, they are sorted by version only.
- If finding wheels, then the sort order is by version, then:
- 1. existing installs
- 2. wheels ordered via Wheel.support_index_min(self._supported_tags)
- 3. source archives
- If prefer_binary was set, then all wheels are sorted above sources.
-
- Note: it was considered to embed this logic into the Link
- comparison operators, but then different sdist links
- with the same version, would have to be considered equal
- """
- valid_tags = self._supported_tags
- support_num = len(valid_tags)
- build_tag: BuildTag = ()
- binary_preference = 0
- link = candidate.link
- if link.is_wheel:
- # can raise InvalidWheelFilename
- wheel = Wheel(link.filename)
- try:
- pri = -(
- wheel.find_most_preferred_tag(
- valid_tags, self._wheel_tag_preferences
- )
- )
- except ValueError:
- raise UnsupportedWheel(
- "{} is not a supported wheel for this platform. It "
- "can't be sorted.".format(wheel.filename)
- )
- if self._prefer_binary:
- binary_preference = 1
- if wheel.build_tag is not None:
- match = re.match(r"^(\d+)(.*)$", wheel.build_tag)
- build_tag_groups = match.groups()
- build_tag = (int(build_tag_groups[0]), build_tag_groups[1])
- else: # sdist
- pri = -(support_num)
- has_allowed_hash = int(link.is_hash_allowed(self._hashes))
- yank_value = -1 * int(link.is_yanked) # -1 for yanked.
- return (
- has_allowed_hash,
- yank_value,
- binary_preference,
- candidate.version,
- pri,
- build_tag,
- )
-
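
Because the sort key is an ordinary tuple, the preference order described in the docstring falls out of plain tuple comparison; for example, a candidate with an allowed hash wins even against a newer, hash-less one. A toy illustration of the tuple shape only (these are not pip's real keys):

```python
from packaging.version import Version

# (has_allowed_hash, yank_value, binary_preference, version, tag_priority)
keys = {
    "2.0, no hash": (0, 0, 0, Version("2.0"), -1),
    "1.5, hash ok": (1, 0, 0, Version("1.5"), -1),
    "2.1, yanked":  (0, -1, 0, Version("2.1"), -1),
}
print(max(keys, key=keys.get))  # '1.5, hash ok'
```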
- def sort_best_candidate(
- self,
- candidates: List[InstallationCandidate],
- ) -> Optional[InstallationCandidate]:
- """
- Return the best candidate per the instance's sort order, or None if
- no candidate is acceptable.
- """
- if not candidates:
- return None
- best_candidate = max(candidates, key=self._sort_key)
- return best_candidate
-
- def compute_best_candidate(
- self,
- candidates: List[InstallationCandidate],
- ) -> BestCandidateResult:
- """
- Compute and return a `BestCandidateResult` instance.
- """
- applicable_candidates = self.get_applicable_candidates(candidates)
-
- best_candidate = self.sort_best_candidate(applicable_candidates)
-
- return BestCandidateResult(
- candidates,
- applicable_candidates=applicable_candidates,
- best_candidate=best_candidate,
- )
-
-
-class PackageFinder:
- """This finds packages.
-
- This is meant to match easy_install's technique for looking for
- packages, by reading pages and looking for appropriate links.
- """
-
- def __init__(
- self,
- link_collector: LinkCollector,
- target_python: TargetPython,
- allow_yanked: bool,
- use_deprecated_html5lib: bool,
- format_control: Optional[FormatControl] = None,
- candidate_prefs: Optional[CandidatePreferences] = None,
- ignore_requires_python: Optional[bool] = None,
- ) -> None:
- """
- This constructor is primarily meant to be used by the create() class
- method and from tests.
-
- :param format_control: A FormatControl object, used to control
- the selection of source packages / binary packages when consulting
- the index and links.
- :param candidate_prefs: Options to use when creating a
- CandidateEvaluator object.
- """
- if candidate_prefs is None:
- candidate_prefs = CandidatePreferences()
-
- format_control = format_control or FormatControl(set(), set())
-
- self._allow_yanked = allow_yanked
- self._candidate_prefs = candidate_prefs
- self._ignore_requires_python = ignore_requires_python
- self._link_collector = link_collector
- self._target_python = target_python
- self._use_deprecated_html5lib = use_deprecated_html5lib
-
- self.format_control = format_control
-
- # These are boring links that have already been logged somehow.
- self._logged_links: Set[Link] = set()
-
- # Don't include an allow_yanked default value to make sure each call
- # site considers whether yanked releases are allowed. This also causes
- # that decision to be made explicit in the calling code, which helps
- # people when reading the code.
- @classmethod
- def create(
- cls,
- link_collector: LinkCollector,
- selection_prefs: SelectionPreferences,
- target_python: Optional[TargetPython] = None,
- *,
- use_deprecated_html5lib: bool,
- ) -> "PackageFinder":
- """Create a PackageFinder.
-
- :param selection_prefs: The candidate selection preferences, as a
- SelectionPreferences object.
- :param target_python: The target Python interpreter to use when
- checking compatibility. If None (the default), a TargetPython
- object will be constructed from the running Python.
- """
- if target_python is None:
- target_python = TargetPython()
-
- candidate_prefs = CandidatePreferences(
- prefer_binary=selection_prefs.prefer_binary,
- allow_all_prereleases=selection_prefs.allow_all_prereleases,
- )
-
- return cls(
- candidate_prefs=candidate_prefs,
- link_collector=link_collector,
- target_python=target_python,
- allow_yanked=selection_prefs.allow_yanked,
- format_control=selection_prefs.format_control,
- ignore_requires_python=selection_prefs.ignore_requires_python,
- use_deprecated_html5lib=use_deprecated_html5lib,
- )
-
- @property
- def target_python(self) -> TargetPython:
- return self._target_python
-
- @property
- def search_scope(self) -> SearchScope:
- return self._link_collector.search_scope
-
- @search_scope.setter
- def search_scope(self, search_scope: SearchScope) -> None:
- self._link_collector.search_scope = search_scope
-
- @property
- def find_links(self) -> List[str]:
- return self._link_collector.find_links
-
- @property
- def index_urls(self) -> List[str]:
- return self.search_scope.index_urls
-
- @property
- def trusted_hosts(self) -> Iterable[str]:
- for host_port in self._link_collector.session.pip_trusted_origins:
- yield build_netloc(*host_port)
-
- @property
- def allow_all_prereleases(self) -> bool:
- return self._candidate_prefs.allow_all_prereleases
-
- def set_allow_all_prereleases(self) -> None:
- self._candidate_prefs.allow_all_prereleases = True
-
- @property
- def prefer_binary(self) -> bool:
- return self._candidate_prefs.prefer_binary
-
- def set_prefer_binary(self) -> None:
- self._candidate_prefs.prefer_binary = True
-
- def make_link_evaluator(self, project_name: str) -> LinkEvaluator:
- canonical_name = canonicalize_name(project_name)
- formats = self.format_control.get_allowed_formats(canonical_name)
-
- return LinkEvaluator(
- project_name=project_name,
- canonical_name=canonical_name,
- formats=formats,
- target_python=self._target_python,
- allow_yanked=self._allow_yanked,
- ignore_requires_python=self._ignore_requires_python,
- )
-
- def _sort_links(self, links: Iterable[Link]) -> List[Link]:
- """
- Returns elements of links in order, non-egg links first, egg links
- second, while eliminating duplicates
- """
- eggs, no_eggs = [], []
- seen: Set[Link] = set()
- for link in links:
- if link not in seen:
- seen.add(link)
- if link.egg_fragment:
- eggs.append(link)
- else:
- no_eggs.append(link)
- return no_eggs + eggs
-
- def _log_skipped_link(self, link: Link, reason: str) -> None:
- if link not in self._logged_links:
- # Put the link at the end so the reason is more visible and because
- # the link string is usually very long.
- logger.debug("Skipping link: %s: %s", reason, link)
- self._logged_links.add(link)
-
- def get_install_candidate(
- self, link_evaluator: LinkEvaluator, link: Link
- ) -> Optional[InstallationCandidate]:
- """
- If the link is a candidate for install, convert it to an
- InstallationCandidate and return it. Otherwise, return None.
- """
- is_candidate, result = link_evaluator.evaluate_link(link)
- if not is_candidate:
- if result:
- self._log_skipped_link(link, reason=result)
- return None
-
- return InstallationCandidate(
- name=link_evaluator.project_name,
- link=link,
- version=result,
- )
-
- def evaluate_links(
- self, link_evaluator: LinkEvaluator, links: Iterable[Link]
- ) -> List[InstallationCandidate]:
- """
- Convert links that are candidates to InstallationCandidate objects.
- """
- candidates = []
- for link in self._sort_links(links):
- candidate = self.get_install_candidate(link_evaluator, link)
- if candidate is not None:
- candidates.append(candidate)
-
- return candidates
-
- def process_project_url(
- self, project_url: Link, link_evaluator: LinkEvaluator
- ) -> List[InstallationCandidate]:
- logger.debug(
- "Fetching project page and analyzing links: %s",
- project_url,
- )
- html_page = self._link_collector.fetch_page(project_url)
- if html_page is None:
- return []
-
- page_links = list(parse_links(html_page, self._use_deprecated_html5lib))
-
- with indent_log():
- package_links = self.evaluate_links(
- link_evaluator,
- links=page_links,
- )
-
- return package_links
-
- @functools.lru_cache(maxsize=None)
- def find_all_candidates(self, project_name: str) -> List[InstallationCandidate]:
- """Find all available InstallationCandidate for project_name
-
- This checks index_urls and find_links.
- All versions found are returned as an InstallationCandidate list.
-
- See LinkEvaluator.evaluate_link() for details on which files
- are accepted.
- """
- link_evaluator = self.make_link_evaluator(project_name)
-
- collected_sources = self._link_collector.collect_sources(
- project_name=project_name,
- candidates_from_page=functools.partial(
- self.process_project_url,
- link_evaluator=link_evaluator,
- ),
- )
-
- page_candidates_it = itertools.chain.from_iterable(
- source.page_candidates()
- for sources in collected_sources
- for source in sources
- if source is not None
- )
- page_candidates = list(page_candidates_it)
-
- file_links_it = itertools.chain.from_iterable(
- source.file_links()
- for sources in collected_sources
- for source in sources
- if source is not None
- )
- file_candidates = self.evaluate_links(
- link_evaluator,
- sorted(file_links_it, reverse=True),
- )
-
- if logger.isEnabledFor(logging.DEBUG) and file_candidates:
- paths = []
- for candidate in file_candidates:
- assert candidate.link.url # we need to have a URL
- try:
- paths.append(candidate.link.file_path)
- except Exception:
- paths.append(candidate.link.url) # it's not a local file
-
- logger.debug("Local files found: %s", ", ".join(paths))
-
- # This is an intentional priority ordering
- return file_candidates + page_candidates
-
- def make_candidate_evaluator(
- self,
- project_name: str,
- specifier: Optional[specifiers.BaseSpecifier] = None,
- hashes: Optional[Hashes] = None,
- ) -> CandidateEvaluator:
- """Create a CandidateEvaluator object to use."""
- candidate_prefs = self._candidate_prefs
- return CandidateEvaluator.create(
- project_name=project_name,
- target_python=self._target_python,
- prefer_binary=candidate_prefs.prefer_binary,
- allow_all_prereleases=candidate_prefs.allow_all_prereleases,
- specifier=specifier,
- hashes=hashes,
- )
-
- @functools.lru_cache(maxsize=None)
- def find_best_candidate(
- self,
- project_name: str,
- specifier: Optional[specifiers.BaseSpecifier] = None,
- hashes: Optional[Hashes] = None,
- ) -> BestCandidateResult:
- """Find matches for the given project and specifier.
-
- :param specifier: An optional object implementing `filter`
- (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable
- versions.
-
- :return: A `BestCandidateResult` instance.
- """
- candidates = self.find_all_candidates(project_name)
- candidate_evaluator = self.make_candidate_evaluator(
- project_name=project_name,
- specifier=specifier,
- hashes=hashes,
- )
- return candidate_evaluator.compute_best_candidate(candidates)
-
- def find_requirement(
- self, req: InstallRequirement, upgrade: bool
- ) -> Optional[InstallationCandidate]:
- """Try to find a Link matching req
-
- Expects req, an InstallRequirement and upgrade, a boolean
- Returns an InstallationCandidate if found,
- Raises DistributionNotFound or BestVersionAlreadyInstalled otherwise
- """
- hashes = req.hashes(trust_internet=False)
- best_candidate_result = self.find_best_candidate(
- req.name,
- specifier=req.specifier,
- hashes=hashes,
- )
- best_candidate = best_candidate_result.best_candidate
-
- installed_version: Optional[_BaseVersion] = None
- if req.satisfied_by is not None:
- installed_version = req.satisfied_by.version
-
- def _format_versions(cand_iter: Iterable[InstallationCandidate]) -> str:
- # This repeated parse_version and str() conversion is needed to
- # handle different vendoring sources from pip and pkg_resources.
- # If we stop using the pkg_resources provided specifier and start
- # using our own, we can drop the cast to str().
- return (
- ", ".join(
- sorted(
- {str(c.version) for c in cand_iter},
- key=parse_version,
- )
- )
- or "none"
- )
-
- if installed_version is None and best_candidate is None:
- logger.critical(
- "Could not find a version that satisfies the requirement %s "
- "(from versions: %s)",
- req,
- _format_versions(best_candidate_result.iter_all()),
- )
-
- raise DistributionNotFound(
- "No matching distribution found for {}".format(req)
- )
-
- best_installed = False
- if installed_version and (
- best_candidate is None or best_candidate.version <= installed_version
- ):
- best_installed = True
-
- if not upgrade and installed_version is not None:
- if best_installed:
- logger.debug(
- "Existing installed version (%s) is most up-to-date and "
- "satisfies requirement",
- installed_version,
- )
- else:
- logger.debug(
- "Existing installed version (%s) satisfies requirement "
- "(most up-to-date version is %s)",
- installed_version,
- best_candidate.version,
- )
- return None
-
- if best_installed:
- # We have an existing version, and it's the best version
- logger.debug(
- "Installed version (%s) is most up-to-date (past versions: %s)",
- installed_version,
- _format_versions(best_candidate_result.iter_applicable()),
- )
- raise BestVersionAlreadyInstalled
-
- logger.debug(
- "Using version %s (newest of versions: %s)",
- best_candidate.version,
- _format_versions(best_candidate_result.iter_applicable()),
- )
- return best_candidate
-
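
The upgrade logic in `find_requirement()` reduces to a small decision table: keep the installed version when not upgrading, raise `BestVersionAlreadyInstalled` when the installed version is already the best, and otherwise return the best candidate. A toy rendition of just that branch structure (the `DistributionNotFound` case is omitted):

```python
from packaging.version import Version


def decide(installed, best, upgrade):
    """Toy mirror of the branching in find_requirement() above."""
    best_installed = installed is not None and (best is None or best <= installed)
    if not upgrade and installed is not None:
        return None  # keep what is installed
    if best_installed:
        return "raise BestVersionAlreadyInstalled"
    return best


print(decide(Version("1.0"), Version("2.0"), upgrade=False))  # None
print(decide(Version("1.0"), Version("2.0"), upgrade=True))   # 2.0
print(decide(Version("2.0"), Version("2.0"), upgrade=True))   # raise BestVersionAlreadyInstalled
```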
-
-def _find_name_version_sep(fragment: str, canonical_name: str) -> int:
- """Find the separator's index based on the package's canonical name.
-
- :param fragment: A <package>+<version> filename "fragment" (stem) or
- egg fragment.
- :param canonical_name: The package's canonical name.
-
- This function is needed since the canonicalized name does not necessarily
- have the same length as the egg info's name part. An example::
-
- >>> fragment = 'foo__bar-1.0'
- >>> canonical_name = 'foo-bar'
- >>> _find_name_version_sep(fragment, canonical_name)
- 8
- """
- # Project name and version must be separated by one single dash. Find all
- # occurrences of dashes; if the string in front of it matches the canonical
- # name, this is the one separating the name and version parts.
- for i, c in enumerate(fragment):
- if c != "-":
- continue
- if canonicalize_name(fragment[:i]) == canonical_name:
- return i
- raise ValueError(f"{fragment} does not match {canonical_name}")
-
-
-def _extract_version_from_fragment(fragment: str, canonical_name: str) -> Optional[str]:
- """Parse the version string from a + filename
- "fragment" (stem) or egg fragment.
-
- :param fragment: The string to parse. E.g. foo-2.1
- :param canonical_name: The canonicalized name of the package this
- belongs to.
- """
- try:
- version_start = _find_name_version_sep(fragment, canonical_name) + 1
- except ValueError:
- return None
- version = fragment[version_start:]
- if not version:
- return None
- return version
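
The two helpers above only work because the canonicalizer is applied to both sides of the comparison; a stem like `foo__bar` and a requirement spelled `Foo.Bar` both canonicalize to `foo-bar`. A compact usage sketch built on `packaging.utils.canonicalize_name` (pip vendors the same function):

```python
from packaging.utils import canonicalize_name


def extract_version(fragment: str, name: str):
    """Simplified combination of the two helpers above."""
    canonical = canonicalize_name(name)
    for i, c in enumerate(fragment):
        if c == "-" and canonicalize_name(fragment[:i]) == canonical:
            return fragment[i + 1:] or None
    return None


print(extract_version("foo__bar-1.0", "Foo.Bar"))  # '1.0'
print(extract_version("foobar-1.0", "Foo.Bar"))    # None
```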
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_wininst.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_wininst.py
deleted file mode 100644
index 0e9ddaa21419e9581392d170a51dfcf53203d5e8..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_wininst.py
+++ /dev/null
@@ -1,377 +0,0 @@
-"""distutils.command.bdist_wininst
-
-Implements the Distutils 'bdist_wininst' command: create a windows installer
-exe-program."""
-
-import os
-import sys
-import warnings
-from distutils.core import Command
-from distutils.util import get_platform
-from distutils.dir_util import remove_tree
-from distutils.errors import *
-from distutils.sysconfig import get_python_version
-from distutils import log
-
-class bdist_wininst(Command):
-
- description = "create an executable installer for MS Windows"
-
- user_options = [('bdist-dir=', None,
- "temporary directory for creating the distribution"),
- ('plat-name=', 'p',
- "platform name to embed in generated filenames "
- "(default: %s)" % get_platform()),
- ('keep-temp', 'k',
- "keep the pseudo-installation tree around after " +
- "creating the distribution archive"),
- ('target-version=', None,
- "require a specific python version" +
- " on the target system"),
- ('no-target-compile', 'c',
- "do not compile .py to .pyc on the target system"),
- ('no-target-optimize', 'o',
- "do not compile .py to .pyo (optimized) "
- "on the target system"),
- ('dist-dir=', 'd',
- "directory to put final built distributions in"),
- ('bitmap=', 'b',
- "bitmap to use for the installer instead of python-powered logo"),
- ('title=', 't',
- "title to display on the installer background instead of default"),
- ('skip-build', None,
- "skip rebuilding everything (for testing/debugging)"),
- ('install-script=', None,
- "basename of installation script to be run after "
- "installation or before deinstallation"),
- ('pre-install-script=', None,
- "Fully qualified filename of a script to be run before "
- "any files are installed. This script need not be in the "
- "distribution"),
- ('user-access-control=', None,
- "specify Vista's UAC handling - 'none'/default=no "
- "handling, 'auto'=use UAC if target Python installed for "
- "all users, 'force'=always use UAC"),
- ]
-
- boolean_options = ['keep-temp', 'no-target-compile', 'no-target-optimize',
- 'skip-build']
-
- # bpo-10945: bdist_wininst requires mbcs encoding only available on Windows
- _unsupported = (sys.platform != "win32")
-
- def __init__(self, *args, **kw):
- super().__init__(*args, **kw)
- warnings.warn("bdist_wininst command is deprecated since Python 3.8, "
- "use bdist_wheel (wheel packages) instead",
- DeprecationWarning, 2)
-
- def initialize_options(self):
- self.bdist_dir = None
- self.plat_name = None
- self.keep_temp = 0
- self.no_target_compile = 0
- self.no_target_optimize = 0
- self.target_version = None
- self.dist_dir = None
- self.bitmap = None
- self.title = None
- self.skip_build = None
- self.install_script = None
- self.pre_install_script = None
- self.user_access_control = None
-
-
- def finalize_options(self):
- self.set_undefined_options('bdist', ('skip_build', 'skip_build'))
-
- if self.bdist_dir is None:
- if self.skip_build and self.plat_name:
- # If build is skipped and plat_name is overridden, bdist will
- # not see the correct 'plat_name' - so set that up manually.
- bdist = self.distribution.get_command_obj('bdist')
- bdist.plat_name = self.plat_name
- # next the command will be initialized using that name
- bdist_base = self.get_finalized_command('bdist').bdist_base
- self.bdist_dir = os.path.join(bdist_base, 'wininst')
-
- if not self.target_version:
- self.target_version = ""
-
- if not self.skip_build and self.distribution.has_ext_modules():
- short_version = get_python_version()
- if self.target_version and self.target_version != short_version:
- raise DistutilsOptionError(
- "target version can only be %s, or the '--skip-build'" \
- " option must be specified" % (short_version,))
- self.target_version = short_version
-
- self.set_undefined_options('bdist',
- ('dist_dir', 'dist_dir'),
- ('plat_name', 'plat_name'),
- )
-
- if self.install_script:
- for script in self.distribution.scripts:
- if self.install_script == os.path.basename(script):
- break
- else:
- raise DistutilsOptionError(
- "install_script '%s' not found in scripts"
- % self.install_script)
-
- def run(self):
- if (sys.platform != "win32" and
- (self.distribution.has_ext_modules() or
- self.distribution.has_c_libraries())):
- raise DistutilsPlatformError \
- ("distribution contains extensions and/or C libraries; "
- "must be compiled on a Windows 32 platform")
-
- if not self.skip_build:
- self.run_command('build')
-
- install = self.reinitialize_command('install', reinit_subcommands=1)
- install.root = self.bdist_dir
- install.skip_build = self.skip_build
- install.warn_dir = 0
- install.plat_name = self.plat_name
-
- install_lib = self.reinitialize_command('install_lib')
- # we do not want to include pyc or pyo files
- install_lib.compile = 0
- install_lib.optimize = 0
-
- if self.distribution.has_ext_modules():
- # If we are building an installer for a Python version other
- # than the one we are currently running, then we need to ensure
- # our build_lib reflects the other Python version rather than ours.
- # Note that for target_version!=sys.version, we must have skipped the
- # build step, so there is no issue with enforcing the build of this
- # version.
- target_version = self.target_version
- if not target_version:
- assert self.skip_build, "Should have already checked this"
- target_version = '%d.%d' % sys.version_info[:2]
- plat_specifier = ".%s-%s" % (self.plat_name, target_version)
- build = self.get_finalized_command('build')
- build.build_lib = os.path.join(build.build_base,
- 'lib' + plat_specifier)
-
- # Use a custom scheme for the zip-file, because we have to decide
- # at installation time which scheme to use.
- for key in ('purelib', 'platlib', 'headers', 'scripts', 'data'):
- value = key.upper()
- if key == 'headers':
- value = value + '/Include/$dist_name'
- setattr(install,
- 'install_' + key,
- value)
-
- log.info("installing to %s", self.bdist_dir)
- install.ensure_finalized()
-
- # avoid warning of 'install_lib' about installing
- # into a directory not in sys.path
- sys.path.insert(0, os.path.join(self.bdist_dir, 'PURELIB'))
-
- install.run()
-
- del sys.path[0]
-
- # And make an archive relative to the root of the
- # pseudo-installation tree.
- from tempfile import mktemp
- archive_basename = mktemp()
- fullname = self.distribution.get_fullname()
- arcname = self.make_archive(archive_basename, "zip",
- root_dir=self.bdist_dir)
- # create an exe containing the zip-file
- self.create_exe(arcname, fullname, self.bitmap)
- if self.distribution.has_ext_modules():
- pyversion = get_python_version()
- else:
- pyversion = 'any'
- self.distribution.dist_files.append(('bdist_wininst', pyversion,
- self.get_installer_filename(fullname)))
- # remove the zip-file again
- log.debug("removing temporary file '%s'", arcname)
- os.remove(arcname)
-
- if not self.keep_temp:
- remove_tree(self.bdist_dir, dry_run=self.dry_run)
-
- def get_inidata(self):
- # Return data describing the installation.
- lines = []
- metadata = self.distribution.metadata
-
- # Write the [metadata] section.
- lines.append("[metadata]")
-
- # 'info' will be displayed in the installer's dialog box,
- # describing the items to be installed.
- info = (metadata.long_description or '') + '\n'
-
- # Escape newline characters
- def escape(s):
- return s.replace("\n", "\\n")
-
- for name in ["author", "author_email", "description", "maintainer",
- "maintainer_email", "name", "url", "version"]:
- data = getattr(metadata, name, "")
- if data:
- info = info + ("\n %s: %s" % \
- (name.capitalize(), escape(data)))
- lines.append("%s=%s" % (name, escape(data)))
-
- # The [setup] section contains entries controlling
- # the installer runtime.
- lines.append("\n[Setup]")
- if self.install_script:
- lines.append("install_script=%s" % self.install_script)
- lines.append("info=%s" % escape(info))
- lines.append("target_compile=%d" % (not self.no_target_compile))
- lines.append("target_optimize=%d" % (not self.no_target_optimize))
- if self.target_version:
- lines.append("target_version=%s" % self.target_version)
- if self.user_access_control:
- lines.append("user_access_control=%s" % self.user_access_control)
-
- title = self.title or self.distribution.get_fullname()
- lines.append("title=%s" % escape(title))
- import time
- import distutils
- build_info = "Built %s with distutils-%s" % \
- (time.ctime(time.time()), distutils.__version__)
- lines.append("build_info=%s" % build_info)
- return "\n".join(lines)
-
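
`get_inidata()` serializes the distribution metadata into an ini-style blob that the wininst stub reads at install time. A rough standalone sketch of that shape (the field handling follows the loop above; the concrete values are invented):

```python
def build_inidata(metadata: dict, title: str) -> str:
    """Simplified stand-in for get_inidata(); not the real implementation."""
    def escape(s: str) -> str:
        return s.replace("\n", "\\n")

    lines = ["[metadata]"]
    for name, value in metadata.items():
        lines.append(f"{name}={escape(value)}")
    lines.append("\n[Setup]")
    lines.append(f"title={escape(title)}")
    return "\n".join(lines)


print(build_inidata({"name": "demo", "version": "1.0"}, "demo 1.0"))
```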
- def create_exe(self, arcname, fullname, bitmap=None):
- import struct
-
- self.mkpath(self.dist_dir)
-
- cfgdata = self.get_inidata()
-
- installer_name = self.get_installer_filename(fullname)
- self.announce("creating %s" % installer_name)
-
- if bitmap:
- with open(bitmap, "rb") as f:
- bitmapdata = f.read()
- bitmaplen = len(bitmapdata)
- else:
- bitmaplen = 0
-
- with open(installer_name, "wb") as file:
- file.write(self.get_exe_bytes())
- if bitmap:
- file.write(bitmapdata)
-
- # Convert cfgdata from unicode to ascii, mbcs encoded
- if isinstance(cfgdata, str):
- cfgdata = cfgdata.encode("mbcs")
-
- # Append the pre-install script
- cfgdata = cfgdata + b"\0"
- if self.pre_install_script:
- # We need to normalize newlines, so we open in text mode and
- # convert back to bytes. "latin-1" simply avoids any possible
- # failures.
- with open(self.pre_install_script, "r",
- encoding="latin-1") as script:
- script_data = script.read().encode("latin-1")
- cfgdata = cfgdata + script_data + b"\n\0"
- else:
- # empty pre-install script
- cfgdata = cfgdata + b"\0"
- file.write(cfgdata)
-
- # The 'magic number' 0x1234567B is used to make sure that the
- # binary layout of 'cfgdata' is what the wininst.exe binary
- # expects. If the layout changes, increment that number, make
- # the corresponding changes to the wininst.exe sources, and
- # recompile them.
- header = struct.pack("<iii", 0x1234567B, len(cfgdata), bitmaplen)
-@overload
-def item(value: bool, _parent: Item | None = ..., _sort_keys: bool = ...) -> Bool:
- ...
-
-
-@overload
-def item(value: int, _parent: Item | None = ..., _sort_keys: bool = ...) -> Integer:
- ...
-
-
-@overload
-def item(value: float, _parent: Item | None = ..., _sort_keys: bool = ...) -> Float:
- ...
-
-
-@overload
-def item(value: str, _parent: Item | None = ..., _sort_keys: bool = ...) -> String:
- ...
-
-
-@overload
-def item(
- value: datetime, _parent: Item | None = ..., _sort_keys: bool = ...
-) -> DateTime:
- ...
-
-
-@overload
-def item(value: date, _parent: Item | None = ..., _sort_keys: bool = ...) -> Date:
- ...
-
-
-@overload
-def item(value: time, _parent: Item | None = ..., _sort_keys: bool = ...) -> Time:
- ...
-
-
-@overload
-def item(
- value: Sequence[dict], _parent: Item | None = ..., _sort_keys: bool = ...
-) -> AoT:
- ...
-
-
-@overload
-def item(value: Sequence, _parent: Item | None = ..., _sort_keys: bool = ...) -> Array:
- ...
-
-
-@overload
-def item(value: dict, _parent: Array = ..., _sort_keys: bool = ...) -> InlineTable:
- ...
-
-
-@overload
-def item(value: dict, _parent: Item | None = ..., _sort_keys: bool = ...) -> Table:
- ...
-
-
-@overload
-def item(value: ItemT, _parent: Item | None = ..., _sort_keys: bool = ...) -> ItemT:
- ...
-
-
-def item(value: Any, _parent: Item | None = None, _sort_keys: bool = False) -> Item:
- """Create a TOML item from a Python object.
-
- :Example:
-
- >>> item(42)
- 42
- >>> item([1, 2, 3])
- [1, 2, 3]
- >>> item({'a': 1, 'b': 2})
- a = 1
- b = 2
- """
-
- from tomlkit.container import Container
-
- if isinstance(value, Item):
- return value
-
- if isinstance(value, bool):
- return Bool(value, Trivia())
- elif isinstance(value, int):
- return Integer(value, Trivia(), str(value))
- elif isinstance(value, float):
- return Float(value, Trivia(), str(value))
- elif isinstance(value, dict):
- table_constructor = (
- InlineTable if isinstance(_parent, (Array, InlineTable)) else Table
- )
- val = table_constructor(Container(), Trivia(), False)
- for k, v in sorted(
- value.items(),
- key=lambda i: (isinstance(i[1], dict), i[0]) if _sort_keys else 1,
- ):
- val[k] = item(v, _parent=val, _sort_keys=_sort_keys)
-
- return val
- elif isinstance(value, (list, tuple)):
- if (
- value
- and all(isinstance(v, dict) for v in value)
- and (_parent is None or isinstance(_parent, Table))
- ):
- a = AoT([])
- table_constructor = Table
- else:
- a = Array([], Trivia())
- table_constructor = InlineTable
-
- for v in value:
- if isinstance(v, dict):
- table = table_constructor(Container(), Trivia(), True)
-
- for k, _v in sorted(
- v.items(),
- key=lambda i: (isinstance(i[1], dict), i[0] if _sort_keys else 1),
- ):
- i = item(_v, _parent=table, _sort_keys=_sort_keys)
- if isinstance(table, InlineTable):
- i.trivia.trail = ""
-
- table[k] = i
-
- v = table
-
- a.append(v)
-
- return a
- elif isinstance(value, str):
- return String.from_raw(value)
- elif isinstance(value, datetime):
- return DateTime(
- value.year,
- value.month,
- value.day,
- value.hour,
- value.minute,
- value.second,
- value.microsecond,
- value.tzinfo,
- Trivia(),
- value.isoformat().replace("+00:00", "Z"),
- )
- elif isinstance(value, date):
- return Date(value.year, value.month, value.day, Trivia(), value.isoformat())
- elif isinstance(value, time):
- return Time(
- value.hour,
- value.minute,
- value.second,
- value.microsecond,
- value.tzinfo,
- Trivia(),
- value.isoformat(),
- )
- else:
- for encoder in CUSTOM_ENCODERS:
- try:
- rv = encoder(value)
- except TypeError:
- pass
- else:
- if not isinstance(rv, Item):
- raise _ConvertError(
- f"Custom encoder returned {type(rv)}, not a subclass of Item"
- )
- return rv
-
- raise _ConvertError(f"Invalid type {type(value)}")
-
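
A short usage sketch for `item()` above, assuming it is also importable as `tomlkit.item` (the package re-exports it); the expected output mirrors the doctest in the docstring:

```python
import tomlkit

tbl = tomlkit.item({"a": 1, "b": 2})
print(tbl.as_string())    # a = 1
                          # b = 2
print(tbl["a"].unwrap())  # 1
```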
-
-class StringType(Enum):
- # Single Line Basic
- SLB = '"'
- # Multi Line Basic
- MLB = '"""'
- # Single Line Literal
- SLL = "'"
- # Multi Line Literal
- MLL = "'''"
-
- @classmethod
- def select(cls, literal=False, multiline=False) -> StringType:
- return {
- (False, False): cls.SLB,
- (False, True): cls.MLB,
- (True, False): cls.SLL,
- (True, True): cls.MLL,
- }[(literal, multiline)]
-
- @property
- def escaped_sequences(self) -> Collection[str]:
- # https://toml.io/en/v1.0.0#string
- escaped_in_basic = CONTROL_CHARS | {"\\"}
- allowed_in_multiline = {"\n", "\r"}
- return {
- StringType.SLB: escaped_in_basic | {'"'},
- StringType.MLB: (escaped_in_basic | {'"""'}) - allowed_in_multiline,
- StringType.SLL: (),
- StringType.MLL: (),
- }[self]
-
- @property
- def invalid_sequences(self) -> Collection[str]:
- # https://toml.io/en/v1.0.0#string
- forbidden_in_literal = CONTROL_CHARS - {"\t"}
- allowed_in_multiline = {"\n", "\r"}
- return {
- StringType.SLB: (),
- StringType.MLB: (),
- StringType.SLL: forbidden_in_literal | {"'"},
- StringType.MLL: (forbidden_in_literal | {"'''"}) - allowed_in_multiline,
- }[self]
-
- @property
- def unit(self) -> str:
- return self.value[0]
-
- def is_basic(self) -> bool:
- return self in {StringType.SLB, StringType.MLB}
-
- def is_literal(self) -> bool:
- return self in {StringType.SLL, StringType.MLL}
-
- def is_singleline(self) -> bool:
- return self in {StringType.SLB, StringType.SLL}
-
- def is_multiline(self) -> bool:
- return self in {StringType.MLB, StringType.MLL}
-
- def toggle(self) -> StringType:
- return {
- StringType.SLB: StringType.MLB,
- StringType.MLB: StringType.SLB,
- StringType.SLL: StringType.MLL,
- StringType.MLL: StringType.SLL,
- }[self]
-
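
A quick look at the `StringType` helpers above, assuming the module is importable as `tomlkit.items` (as it is in the vendored copy being deleted here):

```python
from tomlkit.items import StringType

st = StringType.select(literal=True, multiline=False)
print(st)           # StringType.SLL
print(st.unit)      # '
print(st.toggle())  # StringType.MLL
print('"' in StringType.SLB.escaped_sequences)  # True
```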
-
-class BoolType(Enum):
- TRUE = "true"
- FALSE = "false"
-
- def __bool__(self):
- return {BoolType.TRUE: True, BoolType.FALSE: False}[self]
-
- def __iter__(self):
- return iter(self.value)
-
- def __len__(self):
- return len(self.value)
-
-
-@dataclasses.dataclass
-class Trivia:
- """
- Trivia information (aka metadata).
- """
-
- # Whitespace before a value.
- indent: str = ""
- # Whitespace after a value, but before a comment.
- comment_ws: str = ""
- # Comment, starting with # character, or empty string if no comment.
- comment: str = ""
- # Trailing newline.
- trail: str = "\n"
-
- def copy(self) -> Trivia:
- return dataclasses.replace(self)
-
-
-class KeyType(Enum):
- """
- The type of a Key.
-
- Keys can be bare (unquoted), or quoted using basic ("), or literal (')
- quotes following the same escaping rules as single-line StringType.
- """
-
- Bare = ""
- Basic = '"'
- Literal = "'"
-
-
-class Key(abc.ABC):
- """Base class for a key"""
-
- sep: str
- _original: str
- _keys: list[SingleKey]
- _dotted: bool
- key: str
-
- @abc.abstractmethod
- def __hash__(self) -> int:
- pass
-
- @abc.abstractmethod
- def __eq__(self, __o: object) -> bool:
- pass
-
- def is_dotted(self) -> bool:
- """If the key is followed by other keys"""
- return self._dotted
-
- def __iter__(self) -> Iterator[SingleKey]:
- return iter(self._keys)
-
- def concat(self, other: Key) -> DottedKey:
- """Concatenate keys into a dotted key"""
- keys = self._keys + other._keys
- return DottedKey(keys, sep=self.sep)
-
- def is_multi(self) -> bool:
- """Check if the key contains multiple keys"""
- return len(self._keys) > 1
-
- def as_string(self) -> str:
- """The TOML representation"""
- return self._original
-
- def __str__(self) -> str:
- return self.as_string()
-
- def __repr__(self) -> str:
- return f""
-
-
-class SingleKey(Key):
- """A single key"""
-
- def __init__(
- self,
- k: str,
- t: KeyType | None = None,
- sep: str | None = None,
- original: str | None = None,
- ) -> None:
- if t is None:
- if not k or any(
- c not in string.ascii_letters + string.digits + "-" + "_" for c in k
- ):
- t = KeyType.Basic
- else:
- t = KeyType.Bare
-
- self.t = t
- if sep is None:
- sep = " = "
-
- self.sep = sep
- self.key = k
- if original is None:
- key_str = escape_string(k) if t == KeyType.Basic else k
- original = f"{t.value}{key_str}{t.value}"
-
- self._original = original
- self._keys = [self]
- self._dotted = False
-
- @property
- def delimiter(self) -> str:
- """The delimiter: double quote/single quote/none"""
- return self.t.value
-
- def is_bare(self) -> bool:
- """Check if the key is bare"""
- return self.t == KeyType.Bare
-
- def __hash__(self) -> int:
- return hash(self.key)
-
- def __eq__(self, other: Any) -> bool:
- if isinstance(other, Key):
- return isinstance(other, SingleKey) and self.key == other.key
-
- return self.key == other
-
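
The bare-versus-quoted decision in `SingleKey.__init__` above can be seen directly, assuming `SingleKey` is importable from `tomlkit.items`:

```python
from tomlkit.items import SingleKey

print(SingleKey("name").as_string())       # name
print(SingleKey("name").is_bare())         # True
print(SingleKey("has space").as_string())  # "has space"
```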
-
-class DottedKey(Key):
- def __init__(
- self,
- keys: Iterable[SingleKey],
- sep: str | None = None,
- original: str | None = None,
- ) -> None:
- self._keys = list(keys)
- if original is None:
- original = ".".join(k.as_string() for k in self._keys)
-
- self.sep = " = " if sep is None else sep
- self._original = original
- self._dotted = False
- self.key = ".".join(k.key for k in self._keys)
-
- def __hash__(self) -> int:
- return hash(tuple(self._keys))
-
- def __eq__(self, __o: object) -> bool:
- return isinstance(__o, DottedKey) and self._keys == __o._keys
-
-
-class Item:
- """
- An item within a TOML document.
- """
-
- def __init__(self, trivia: Trivia) -> None:
- self._trivia = trivia
-
- @property
- def trivia(self) -> Trivia:
- """The trivia element associated with this item"""
- return self._trivia
-
- @property
- def discriminant(self) -> int:
- raise NotImplementedError()
-
- def as_string(self) -> str:
- """The TOML representation"""
- raise NotImplementedError()
-
- @property
- def value(self) -> Any:
- return self
-
- def unwrap(self) -> Any:
- """Returns as pure python object (ppo)"""
- raise NotImplementedError()
-
- # Helpers
-
- def comment(self, comment: str) -> Item:
- """Attach a comment to this item"""
- if not comment.strip().startswith("#"):
- comment = "# " + comment
-
- self._trivia.comment_ws = " "
- self._trivia.comment = comment
-
- return self
-
- def indent(self, indent: int) -> Item:
- """Indent this item with given number of spaces"""
- if self._trivia.indent.startswith("\n"):
- self._trivia.indent = "\n" + " " * indent
- else:
- self._trivia.indent = " " * indent
-
- return self
-
- def is_boolean(self) -> bool:
- return isinstance(self, Bool)
-
- def is_table(self) -> bool:
- return isinstance(self, Table)
-
- def is_inline_table(self) -> bool:
- return isinstance(self, InlineTable)
-
- def is_aot(self) -> bool:
- return isinstance(self, AoT)
-
- def _getstate(self, protocol=3):
- return (self._trivia,)
-
- def __reduce__(self):
- return self.__reduce_ex__(2)
-
- def __reduce_ex__(self, protocol):
- return self.__class__, self._getstate(protocol)
-
-
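
The `comment()` and `indent()` helpers above only touch the item's trivia, so the value's own `as_string()` stays unchanged. A small sketch via `tomlkit.item()` (hedged: exact whitespace handling can differ between tomlkit releases):

```python
import tomlkit

i = tomlkit.item(42).comment("answer").indent(4)
print(repr(i.as_string()))     # '42'
print(repr(i.trivia.comment))  # '# answer'
print(repr(i.trivia.indent))   # '    '
```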
-class Whitespace(Item):
- """
- A whitespace literal.
- """
-
- def __init__(self, s: str, fixed: bool = False) -> None:
- self._s = s
- self._fixed = fixed
-
- @property
- def s(self) -> str:
- return self._s
-
- @property
- def value(self) -> str:
- """The wrapped string of the whitespace"""
- return self._s
-
- @property
- def trivia(self) -> Trivia:
- raise RuntimeError("Called trivia on a Whitespace variant.")
-
- @property
- def discriminant(self) -> int:
- return 0
-
- def is_fixed(self) -> bool:
- """If the whitespace is fixed, it can't be merged or discarded from the output."""
- return self._fixed
-
- def as_string(self) -> str:
- return self._s
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} {repr(self._s)}>"
-
- def _getstate(self, protocol=3):
- return self._s, self._fixed
-
-
-class Comment(Item):
- """
- A comment literal.
- """
-
- @property
- def discriminant(self) -> int:
- return 1
-
- def as_string(self) -> str:
- return (
- f"{self._trivia.indent}{decode(self._trivia.comment)}{self._trivia.trail}"
- )
-
- def __str__(self) -> str:
- return f"{self._trivia.indent}{decode(self._trivia.comment)}"
-
-
-class Integer(Item, _CustomInt):
- """
- An integer literal.
- """
-
- def __new__(cls, value: int, trivia: Trivia, raw: str) -> Integer:
- return int.__new__(cls, value)
-
- def __init__(self, value: int, trivia: Trivia, raw: str) -> None:
- super().__init__(trivia)
- self._original = value
- self._raw = raw
- self._sign = False
-
- if re.match(r"^[+\-]\d+$", raw):
- self._sign = True
-
- def unwrap(self) -> int:
- return self._original
-
- __int__ = unwrap
-
- @property
- def discriminant(self) -> int:
- return 2
-
- @property
- def value(self) -> int:
- """The wrapped integer value"""
- return self
-
- def as_string(self) -> str:
- return self._raw
-
- def _new(self, result):
- raw = str(result)
- if self._sign:
- sign = "+" if result >= 0 else "-"
- raw = sign + raw
-
- return Integer(result, self._trivia, raw)
-
- def _getstate(self, protocol=3):
- return int(self), self._trivia, self._raw
-
- # int methods
- __abs__ = wrap_method(int.__abs__)
- __add__ = wrap_method(int.__add__)
- __and__ = wrap_method(int.__and__)
- __ceil__ = wrap_method(int.__ceil__)
- __eq__ = int.__eq__
- __floor__ = wrap_method(int.__floor__)
- __floordiv__ = wrap_method(int.__floordiv__)
- __invert__ = wrap_method(int.__invert__)
- __le__ = int.__le__
- __lshift__ = wrap_method(int.__lshift__)
- __lt__ = int.__lt__
- __mod__ = wrap_method(int.__mod__)
- __mul__ = wrap_method(int.__mul__)
- __neg__ = wrap_method(int.__neg__)
- __or__ = wrap_method(int.__or__)
- __pos__ = wrap_method(int.__pos__)
- __pow__ = wrap_method(int.__pow__)
- __radd__ = wrap_method(int.__radd__)
- __rand__ = wrap_method(int.__rand__)
- __rfloordiv__ = wrap_method(int.__rfloordiv__)
- __rlshift__ = wrap_method(int.__rlshift__)
- __rmod__ = wrap_method(int.__rmod__)
- __rmul__ = wrap_method(int.__rmul__)
- __ror__ = wrap_method(int.__ror__)
- __round__ = wrap_method(int.__round__)
- __rpow__ = wrap_method(int.__rpow__)
- __rrshift__ = wrap_method(int.__rrshift__)
- __rshift__ = wrap_method(int.__rshift__)
- __rtruediv__ = wrap_method(int.__rtruediv__)
- __rxor__ = wrap_method(int.__rxor__)
- __truediv__ = wrap_method(int.__truediv__)
- __trunc__ = wrap_method(int.__trunc__)
- __xor__ = wrap_method(int.__xor__)
-
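
Arithmetic on the wrapped integer goes through `wrap_method`/`_new`, so the result is again a TOML `Integer`, and an explicit leading sign in the original literal is preserved. A small sketch, assuming `Integer` and `Trivia` are importable from `tomlkit.items`:

```python
import tomlkit
from tomlkit.items import Integer, Trivia

n = tomlkit.item(10) + 5
print(type(n) is Integer, n.as_string())  # True 15

signed = Integer(2, Trivia(), "+2")
print((signed + 1).as_string())           # +3
```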
-
-class Float(Item, _CustomFloat):
- """
- A float literal.
- """
-
- def __new__(cls, value: float, trivia: Trivia, raw: str) -> Float:
- return float.__new__(cls, value)
-
- def __init__(self, value: float, trivia: Trivia, raw: str) -> None:
- super().__init__(trivia)
- self._original = value
- self._raw = raw
- self._sign = False
-
- if re.match(r"^[+\-].+$", raw):
- self._sign = True
-
- def unwrap(self) -> float:
- return self._original
-
- __float__ = unwrap
-
- @property
- def discriminant(self) -> int:
- return 3
-
- @property
- def value(self) -> float:
- """The wrapped float value"""
- return self
-
- def as_string(self) -> str:
- return self._raw
-
- def _new(self, result):
- raw = str(result)
-
- if self._sign:
- sign = "+" if result >= 0 else "-"
- raw = sign + raw
-
- return Float(result, self._trivia, raw)
-
- def _getstate(self, protocol=3):
- return float(self), self._trivia, self._raw
-
- # float methods
- __abs__ = wrap_method(float.__abs__)
- __add__ = wrap_method(float.__add__)
- __eq__ = float.__eq__
- __floordiv__ = wrap_method(float.__floordiv__)
- __le__ = float.__le__
- __lt__ = float.__lt__
- __mod__ = wrap_method(float.__mod__)
- __mul__ = wrap_method(float.__mul__)
- __neg__ = wrap_method(float.__neg__)
- __pos__ = wrap_method(float.__pos__)
- __pow__ = wrap_method(float.__pow__)
- __radd__ = wrap_method(float.__radd__)
- __rfloordiv__ = wrap_method(float.__rfloordiv__)
- __rmod__ = wrap_method(float.__rmod__)
- __rmul__ = wrap_method(float.__rmul__)
- __round__ = wrap_method(float.__round__)
- __rpow__ = wrap_method(float.__rpow__)
- __rtruediv__ = wrap_method(float.__rtruediv__)
- __truediv__ = wrap_method(float.__truediv__)
- __trunc__ = float.__trunc__
-
- if sys.version_info >= (3, 9):
- __ceil__ = float.__ceil__
- __floor__ = float.__floor__
- else:
- __ceil__ = math.ceil
- __floor__ = math.floor
-
-
-class Bool(Item):
- """
- A boolean literal.
- """
-
- def __init__(self, t: int, trivia: Trivia) -> None:
- super().__init__(trivia)
-
- self._value = bool(t)
-
- def unwrap(self) -> bool:
- return bool(self)
-
- @property
- def discriminant(self) -> int:
- return 4
-
- @property
- def value(self) -> bool:
- """The wrapped boolean value"""
- return self._value
-
- def as_string(self) -> str:
- return str(self._value).lower()
-
- def _getstate(self, protocol=3):
- return self._value, self._trivia
-
- def __bool__(self):
- return self._value
-
- __nonzero__ = __bool__
-
- def __eq__(self, other):
- if not isinstance(other, bool):
- return NotImplemented
-
- return other == self._value
-
- def __hash__(self):
- return hash(self._value)
-
- def __repr__(self):
- return repr(self._value)
-
-
-class DateTime(Item, datetime):
- """
- A datetime literal.
- """
-
- def __new__(
- cls,
- year: int,
- month: int,
- day: int,
- hour: int,
- minute: int,
- second: int,
- microsecond: int,
- tzinfo: tzinfo | None,
- *_: Any,
- **kwargs: Any,
- ) -> datetime:
- return datetime.__new__(
- cls,
- year,
- month,
- day,
- hour,
- minute,
- second,
- microsecond,
- tzinfo=tzinfo,
- **kwargs,
- )
-
- def __init__(
- self,
- year: int,
- month: int,
- day: int,
- hour: int,
- minute: int,
- second: int,
- microsecond: int,
- tzinfo: tzinfo | None,
- trivia: Trivia | None = None,
- raw: str | None = None,
- **kwargs: Any,
- ) -> None:
- super().__init__(trivia or Trivia())
-
- self._raw = raw or self.isoformat()
-
- def unwrap(self) -> datetime:
- (
- year,
- month,
- day,
- hour,
- minute,
- second,
- microsecond,
- tzinfo,
- _,
- _,
- ) = self._getstate()
- return datetime(year, month, day, hour, minute, second, microsecond, tzinfo)
-
- @property
- def discriminant(self) -> int:
- return 5
-
- @property
- def value(self) -> datetime:
- return self
-
- def as_string(self) -> str:
- return self._raw
-
- def __add__(self, other):
- if PY38:
- result = datetime(
- self.year,
- self.month,
- self.day,
- self.hour,
- self.minute,
- self.second,
- self.microsecond,
- self.tzinfo,
- ).__add__(other)
- else:
- result = super().__add__(other)
-
- return self._new(result)
-
- def __sub__(self, other):
- if PY38:
- result = datetime(
- self.year,
- self.month,
- self.day,
- self.hour,
- self.minute,
- self.second,
- self.microsecond,
- self.tzinfo,
- ).__sub__(other)
- else:
- result = super().__sub__(other)
-
- if isinstance(result, datetime):
- result = self._new(result)
-
- return result
-
- def replace(self, *args: Any, **kwargs: Any) -> datetime:
- return self._new(super().replace(*args, **kwargs))
-
- def astimezone(self, tz: tzinfo) -> datetime:
- result = super().astimezone(tz)
- if PY38:
- return result
- return self._new(result)
-
- def _new(self, result) -> DateTime:
- raw = result.isoformat()
-
- return DateTime(
- result.year,
- result.month,
- result.day,
- result.hour,
- result.minute,
- result.second,
- result.microsecond,
- result.tzinfo,
- self._trivia,
- raw,
- )
-
- def _getstate(self, protocol=3):
- return (
- self.year,
- self.month,
- self.day,
- self.hour,
- self.minute,
- self.second,
- self.microsecond,
- self.tzinfo,
- self._trivia,
- self._raw,
- )
-
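
Datetime arithmetic behaves the same way: `__add__`/`__sub__` re-wrap the result and regenerate the raw text from `isoformat()`. A brief sketch through `tomlkit.item()`:

```python
from datetime import datetime, timedelta

import tomlkit

dt = tomlkit.item(datetime(2024, 1, 1, 12, 0, 0))
later = dt + timedelta(hours=1)
print(later.as_string())  # 2024-01-01T13:00:00
```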
-
-class Date(Item, date):
- """
- A date literal.
- """
-
- def __new__(cls, year: int, month: int, day: int, *_: Any) -> date:
- return date.__new__(cls, year, month, day)
-
- def __init__(
- self, year: int, month: int, day: int, trivia: Trivia, raw: str
- ) -> None:
- super().__init__(trivia)
-
- self._raw = raw
-
- def unwrap(self) -> date:
- (year, month, day, _, _) = self._getstate()
- return date(year, month, day)
-
- @property
- def discriminant(self) -> int:
- return 6
-
- @property
- def value(self) -> date:
- return self
-
- def as_string(self) -> str:
- return self._raw
-
- def __add__(self, other):
- if PY38:
- result = date(self.year, self.month, self.day).__add__(other)
- else:
- result = super().__add__(other)
-
- return self._new(result)
-
- def __sub__(self, other):
- if PY38:
- result = date(self.year, self.month, self.day).__sub__(other)
- else:
- result = super().__sub__(other)
-
- if isinstance(result, date):
- result = self._new(result)
-
- return result
-
- def replace(self, *args: Any, **kwargs: Any) -> date:
- return self._new(super().replace(*args, **kwargs))
-
- def _new(self, result):
- raw = result.isoformat()
-
- return Date(result.year, result.month, result.day, self._trivia, raw)
-
- def _getstate(self, protocol=3):
- return (self.year, self.month, self.day, self._trivia, self._raw)
-
-
-class Time(Item, time):
- """
- A time literal.
- """
-
- def __new__(
- cls,
- hour: int,
- minute: int,
- second: int,
- microsecond: int,
- tzinfo: tzinfo | None,
- *_: Any,
- ) -> time:
- return time.__new__(cls, hour, minute, second, microsecond, tzinfo)
-
- def __init__(
- self,
- hour: int,
- minute: int,
- second: int,
- microsecond: int,
- tzinfo: tzinfo | None,
- trivia: Trivia,
- raw: str,
- ) -> None:
- super().__init__(trivia)
-
- self._raw = raw
-
- def unwrap(self) -> time:
- (hour, minute, second, microsecond, tzinfo, _, _) = self._getstate()
- return time(hour, minute, second, microsecond, tzinfo)
-
- @property
- def discriminant(self) -> int:
- return 7
-
- @property
- def value(self) -> time:
- return self
-
- def as_string(self) -> str:
- return self._raw
-
- def replace(self, *args: Any, **kwargs: Any) -> time:
- return self._new(super().replace(*args, **kwargs))
-
- def _new(self, result):
- raw = result.isoformat()
-
- return Time(
- result.hour,
- result.minute,
- result.second,
- result.microsecond,
- result.tzinfo,
- self._trivia,
- raw,
- )
-
- def _getstate(self, protocol: int = 3) -> tuple:
- return (
- self.hour,
- self.minute,
- self.second,
- self.microsecond,
- self.tzinfo,
- self._trivia,
- self._raw,
- )
-
-
-class _ArrayItemGroup:
- __slots__ = ("value", "indent", "comma", "comment")
-
- def __init__(
- self,
- value: Item | None = None,
- indent: Whitespace | None = None,
- comma: Whitespace | None = None,
- comment: Comment | None = None,
- ) -> None:
- self.value = value
- self.indent = indent
- self.comma = comma
- self.comment = comment
-
- def __iter__(self) -> Iterator[Item]:
- return filter(
- lambda x: x is not None, (self.indent, self.value, self.comma, self.comment)
- )
-
- def __repr__(self) -> str:
- return repr(tuple(self))
-
- def is_whitespace(self) -> bool:
- return self.value is None and self.comment is None
-
- def __bool__(self) -> bool:
- try:
- next(iter(self))
- except StopIteration:
- return False
- return True
-
-
-class Array(Item, _CustomList):
- """
- An array literal
- """
-
- def __init__(
- self, value: list[Item], trivia: Trivia, multiline: bool = False
- ) -> None:
- super().__init__(trivia)
- list.__init__(
- self,
- [v for v in value if not isinstance(v, (Whitespace, Comment, Null))],
- )
- self._index_map: dict[int, int] = {}
- self._value = self._group_values(value)
- self._multiline = multiline
- self._reindex()
-
- def _group_values(self, value: list[Item]) -> list[_ArrayItemGroup]:
- """Group the values into (indent, value, comma, comment) tuples"""
- groups = []
- this_group = _ArrayItemGroup()
- for item in value:
- if isinstance(item, Whitespace):
- if "," not in item.s:
- groups.append(this_group)
- this_group = _ArrayItemGroup(indent=item)
- else:
- if this_group.value is None:
- # when comma is met and no value is provided, add a dummy Null
- this_group.value = Null()
- this_group.comma = item
- elif isinstance(item, Comment):
- if this_group.value is None:
- this_group.value = Null()
- this_group.comment = item
- elif this_group.value is None:
- this_group.value = item
- else:
- groups.append(this_group)
- this_group = _ArrayItemGroup(value=item)
- groups.append(this_group)
- return [group for group in groups if group]
-
- def unwrap(self) -> list[Any]:
- unwrapped = []
- for v in self:
- if hasattr(v, "unwrap"):
- unwrapped.append(v.unwrap())
- else:
- unwrapped.append(v)
- return unwrapped
-
- @property
- def discriminant(self) -> int:
- return 8
-
- @property
- def value(self) -> list:
- return self
-
- def _iter_items(self) -> Iterator[Item]:
- for v in self._value:
- yield from v
-
- def multiline(self, multiline: bool) -> Array:
- """Change the array to display in multiline or not.
-
- :Example:
-
- >>> a = item([1, 2, 3])
- >>> print(a.as_string())
- [1, 2, 3]
- >>> print(a.multiline(True).as_string())
- [
- 1,
- 2,
- 3,
- ]
- """
- self._multiline = multiline
-
- return self
-
- def as_string(self) -> str:
- if not self._multiline or not self._value:
- return f'[{"".join(v.as_string() for v in self._iter_items())}]'
-
- s = "[\n"
- s += "".join(
- self.trivia.indent
- + " " * 4
- + v.value.as_string()
- + ("," if not isinstance(v.value, Null) else "")
- + (v.comment.as_string() if v.comment is not None else "")
- + "\n"
- for v in self._value
- if v.value is not None
- )
- s += self.trivia.indent + "]"
-
- return s
-
- def _reindex(self) -> None:
- self._index_map.clear()
- index = 0
- for i, v in enumerate(self._value):
- if v.value is None or isinstance(v.value, Null):
- continue
- self._index_map[index] = i
- index += 1
-
- def add_line(
- self,
- *items: Any,
- indent: str = " ",
- comment: str | None = None,
- add_comma: bool = True,
- newline: bool = True,
- ) -> None:
- """Add multiple items in a line to control the format precisely.
- When add_comma is True, only accept actual values and
- ", " will be added between values automatically.
-
- :Example:
-
- >>> a = array()
- >>> a.add_line(1, 2, 3)
- >>> a.add_line(4, 5, 6)
- >>> a.add_line(indent="")
- >>> print(a.as_string())
- [
- 1, 2, 3,
- 4, 5, 6,
- ]
- """
- new_values: list[Item] = []
- first_indent = f"\n{indent}" if newline else indent
- if first_indent:
- new_values.append(Whitespace(first_indent))
- whitespace = ""
- data_values = []
- for i, el in enumerate(items):
- it = item(el, _parent=self)
- if isinstance(it, Comment) or add_comma and isinstance(el, Whitespace):
- raise ValueError(f"item type {type(it)} is not allowed in add_line")
- if not isinstance(it, Whitespace):
- if whitespace:
- new_values.append(Whitespace(whitespace))
- whitespace = ""
- new_values.append(it)
- data_values.append(it.value)
- if add_comma:
- new_values.append(Whitespace(","))
- if i != len(items) - 1:
- new_values.append(Whitespace(" "))
- elif "," not in it.s:
- whitespace += it.s
- else:
- new_values.append(it)
- if whitespace:
- new_values.append(Whitespace(whitespace))
- if comment:
- indent = " " if items else ""
- new_values.append(
- Comment(Trivia(indent=indent, comment=f"# {comment}", trail=""))
- )
- list.extend(self, data_values)
- if len(self._value) > 0:
- last_item = self._value[-1]
- last_value_item = next(
- (
- v
- for v in self._value[::-1]
- if v.value is not None and not isinstance(v.value, Null)
- ),
- None,
- )
- if last_value_item is not None:
- last_value_item.comma = Whitespace(",")
- if last_item.is_whitespace():
- self._value[-1:-1] = self._group_values(new_values)
- else:
- self._value.extend(self._group_values(new_values))
- else:
- self._value.extend(self._group_values(new_values))
- self._reindex()
-
- def clear(self) -> None:
- """Clear the array."""
- list.clear(self)
- self._index_map.clear()
- self._value.clear()
-
- def __len__(self) -> int:
- return list.__len__(self)
-
- def __getitem__(self, key: int | slice) -> Any:
- rv = cast(Item, list.__getitem__(self, key))
- if rv.is_boolean():
- return bool(rv)
- return rv
-
- def __setitem__(self, key: int | slice, value: Any) -> Any:
- it = item(value, _parent=self)
- list.__setitem__(self, key, it)
- if isinstance(key, slice):
- raise ValueError("slice assignment is not supported")
- if key < 0:
- key += len(self)
- self._value[self._index_map[key]].value = it
-
- def insert(self, pos: int, value: Any) -> None:
- it = item(value, _parent=self)
- length = len(self)
- if not isinstance(it, (Comment, Whitespace)):
- list.insert(self, pos, it)
- if pos < 0:
- pos += length
- if pos < 0:
- pos = 0
-
- idx = 0 # insert position of the self._value list
- default_indent = " "
- if pos < length:
- try:
- idx = self._index_map[pos]
- except KeyError as e:
- raise IndexError("list index out of range") from e
- else:
- idx = len(self._value)
- if idx >= 1 and self._value[idx - 1].is_whitespace():
- # The last item is a pure whitespace(\n ), insert before it
- idx -= 1
- if (
- self._value[idx].indent is not None
- and "\n" in self._value[idx].indent.s
- ):
- default_indent = "\n "
- indent: Item | None = None
- comma: Item | None = Whitespace(",") if pos < length else None
- if idx < len(self._value) and not self._value[idx].is_whitespace():
- # Prefer to copy the indentation from the item after
- indent = self._value[idx].indent
- if idx > 0:
- last_item = self._value[idx - 1]
- if indent is None:
- indent = last_item.indent
- if not isinstance(last_item.value, Null) and "\n" in default_indent:
- # Copy the comma from the last item if 1) it contains a value and
- # 2) the array is multiline
- comma = last_item.comma
- if last_item.comma is None and not isinstance(last_item.value, Null):
- # Add comma to the last item to separate it from the following items.
- last_item.comma = Whitespace(",")
- if indent is None and (idx > 0 or "\n" in default_indent):
- # apply default indent if it isn't the first item or the array is multiline.
- indent = Whitespace(default_indent)
- new_item = _ArrayItemGroup(value=it, indent=indent, comma=comma)
- self._value.insert(idx, new_item)
- self._reindex()
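# Behavioural sketch for Array.insert (assuming the public tomlkit API; this
# block is not part of the deleted module). Inserting in the middle reuses the
# neighbouring indentation and adds the separating comma automatically.
import tomlkit

arr = tomlkit.array("[1, 3]")
arr.insert(1, 2)
print(arr.as_string())   # expected: [1, 2, 3]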
-
- def __delitem__(self, key: int | slice):
- length = len(self)
- list.__delitem__(self, key)
-
- if isinstance(key, slice):
- indices_to_remove = list(
- range(key.start or 0, key.stop or length, key.step or 1)
- )
- else:
- indices_to_remove = [length + key if key < 0 else key]
- for i in sorted(indices_to_remove, reverse=True):
- try:
- idx = self._index_map[i]
- except KeyError as e:
- if not isinstance(key, slice):
- raise IndexError("list index out of range") from e
- else:
- del self._value[idx]
- if (
- idx == 0
- and len(self._value) > 0
- and "\n" not in self._value[idx].indent.s
- ):
- # Remove the indentation of the first item if not newline
- self._value[idx].indent = None
- if len(self._value) > 0:
- v = self._value[-1]
- if not v.is_whitespace():
- # remove the comma of the last item
- v.comma = None
-
- self._reindex()
-
- def _getstate(self, protocol=3):
- return list(self._iter_items()), self._trivia, self._multiline
-
-
-class AbstractTable(Item, _CustomDict):
- """Common behaviour of both :class:`Table` and :class:`InlineTable`"""
-
- def __init__(self, value: container.Container, trivia: Trivia):
- Item.__init__(self, trivia)
-
- self._value = value
-
- for k, v in self._value.body:
- if k is not None:
- dict.__setitem__(self, k.key, v)
-
- def unwrap(self) -> dict[str, Any]:
- unwrapped = {}
- for k, v in self.items():
- if isinstance(k, Key):
- k = k.key
- if hasattr(v, "unwrap"):
- v = v.unwrap()
- unwrapped[k] = v
-
- return unwrapped
-
- @property
- def value(self) -> container.Container:
- return self._value
-
- @overload
- def append(self: AT, key: None, value: Comment | Whitespace) -> AT:
- ...
-
- @overload
- def append(self: AT, key: Key | str, value: Any) -> AT:
- ...
-
- def append(self, key, value):
- raise NotImplementedError
-
- @overload
- def add(self: AT, key: Comment | Whitespace) -> AT:
- ...
-
- @overload
- def add(self: AT, key: Key | str, value: Any = ...) -> AT:
- ...
-
- def add(self, key, value=None):
- if value is None:
- if not isinstance(key, (Comment, Whitespace)):
- msg = "Non comment/whitespace items must have an associated key"
- raise ValueError(msg)
-
- key, value = None, key
-
- return self.append(key, value)
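# Usage sketch for add() (assuming the public tomlkit API; not part of the
# deleted module): with a value, add() behaves like append(key, value); called
# with a single Comment/Whitespace argument, it appends that item key-less.
import tomlkit

tbl = tomlkit.table()
tbl.add("name", "value")             # -> append("name", <String "value">)
tbl.add(tomlkit.comment("a note"))   # -> append(None, <Comment>)
# Any other value passed without a key raises ValueError, as enforced above.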
-
- def remove(self: AT, key: Key | str) -> AT:
- self._value.remove(key)
-
- if isinstance(key, Key):
- key = key.key
-
- if key is not None:
- dict.__delitem__(self, key)
-
- return self
-
- def setdefault(self, key: Key | str, default: Any) -> Any:
- super().setdefault(key, default)
- return self[key]
-
- def __str__(self):
- return str(self.value)
-
- def copy(self: AT) -> AT:
- return copy.copy(self)
-
- def __repr__(self) -> str:
- return repr(self.value)
-
- def __iter__(self) -> Iterator[str]:
- return iter(self._value)
-
- def __len__(self) -> int:
- return len(self._value)
-
- def __delitem__(self, key: Key | str) -> None:
- self.remove(key)
-
- def __getitem__(self, key: Key | str) -> Item:
- return cast(Item, self._value[key])
-
- def __setitem__(self, key: Key | str, value: Any) -> None:
- if not isinstance(value, Item):
- value = item(value, _parent=self)
-
- is_replace = key in self
- self._value[key] = value
-
- if key is not None:
- dict.__setitem__(self, key, value)
-
- if is_replace:
- return
- m = re.match("(?s)^[^ ]*([ ]+).*$", self._trivia.indent)
- if not m:
- return
-
- indent = m.group(1)
-
- if not isinstance(value, Whitespace):
- m = re.match("(?s)^([^ ]*)(.*)$", value.trivia.indent)
- if not m:
- value.trivia.indent = indent
- else:
- value.trivia.indent = m.group(1) + indent + m.group(2)
-
-
-class Table(AbstractTable):
- """
- A table literal.
- """
-
- def __init__(
- self,
- value: container.Container,
- trivia: Trivia,
- is_aot_element: bool,
- is_super_table: bool | None = None,
- name: str | None = None,
- display_name: str | None = None,
- ) -> None:
- super().__init__(value, trivia)
-
- self.name = name
- self.display_name = display_name
- self._is_aot_element = is_aot_element
- self._is_super_table = is_super_table
-
- @property
- def discriminant(self) -> int:
- return 9
-
- def __copy__(self) -> Table:
- return type(self)(
- self._value.copy(),
- self._trivia.copy(),
- self._is_aot_element,
- self._is_super_table,
- self.name,
- self.display_name,
- )
-
- def append(self, key: Key | str | None, _item: Any) -> Table:
- """
- Appends a (key, item) to the table.
- """
- if not isinstance(_item, Item):
- _item = item(_item, _parent=self)
-
- self._value.append(key, _item)
-
- if isinstance(key, Key):
- key = next(iter(key)).key
- _item = self._value[key]
-
- if key is not None:
- dict.__setitem__(self, key, _item)
-
- m = re.match(r"(?s)^[^ ]*([ ]+).*$", self._trivia.indent)
- if not m:
- return self
-
- indent = m.group(1)
-
- if not isinstance(_item, Whitespace):
- m = re.match("(?s)^([^ ]*)(.*)$", _item.trivia.indent)
- if not m:
- _item.trivia.indent = indent
- else:
- _item.trivia.indent = m.group(1) + indent + m.group(2)
-
- return self
-
- def raw_append(self, key: Key | str | None, _item: Any) -> Table:
- """Similar to :meth:`append` but does not copy indentation."""
- if not isinstance(_item, Item):
- _item = item(_item)
-
- self._value.append(key, _item)
-
- if isinstance(key, Key):
- key = next(iter(key)).key
- _item = self._value[key]
-
- if key is not None:
- dict.__setitem__(self, key, _item)
-
- return self
-
- def is_aot_element(self) -> bool:
- """True if the table is the direct child of an AOT element."""
- return self._is_aot_element
-
- def is_super_table(self) -> bool:
- """A super table is the intermediate parent of a nested table as in [a.b.c].
- If true, it won't appear in the TOML representation."""
- if self._is_super_table is not None:
- return self._is_super_table
- # If the table has only one child and that child is a table, then it is a super table.
- if len(self) != 1:
- return False
- only_child = next(iter(self.values()))
- return isinstance(only_child, (Table, AoT))
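# Illustration of the super-table rule (assuming this is tomlkit; not part of
# the deleted module). In the document below, "a" and "a.b" each hold a single
# child table, so they are super tables and never get their own [a] / [a.b]
# headers when the document is rendered back out.
import tomlkit

doc = tomlkit.parse("[a.b.c]\nx = 1\n")
print(tomlkit.dumps(doc))   # round-trips unchanged: only [a.b.c] appears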
-
- def as_string(self) -> str:
- return self._value.as_string()
-
- # Helpers
-
- def indent(self, indent: int) -> Table:
- """Indent the table with given number of spaces."""
- super().indent(indent)
-
- m = re.match("(?s)^[^ ]*([ ]+).*$", self._trivia.indent)
- if not m:
- indent_str = ""
- else:
- indent_str = m.group(1)
-
- for _, item in self._value.body:
- if not isinstance(item, Whitespace):
- item.trivia.indent = indent_str + item.trivia.indent
-
- return self
-
- def invalidate_display_name(self):
- self.display_name = None
-
- for child in self.values():
- if hasattr(child, "invalidate_display_name"):
- child.invalidate_display_name()
-
- def _getstate(self, protocol: int = 3) -> tuple:
- return (
- self._value,
- self._trivia,
- self._is_aot_element,
- self._is_super_table,
- self.name,
- self.display_name,
- )
-
-
-class InlineTable(AbstractTable):
- """
- An inline table literal.
- """
-
- def __init__(
- self, value: container.Container, trivia: Trivia, new: bool = False
- ) -> None:
- super().__init__(value, trivia)
-
- self._new = new
-
- @property
- def discriminant(self) -> int:
- return 10
-
- def append(self, key: Key | str | None, _item: Any) -> InlineTable:
- """
- Appends a (key, item) to the table.
- """
- if not isinstance(_item, Item):
- _item = item(_item, _parent=self)
-
- if not isinstance(_item, (Whitespace, Comment)):
- if not _item.trivia.indent and len(self._value) > 0 and not self._new:
- _item.trivia.indent = " "
- if _item.trivia.comment:
- _item.trivia.comment = ""
-
- self._value.append(key, _item)
-
- if isinstance(key, Key):
- key = key.key
-
- if key is not None:
- dict.__setitem__(self, key, _item)
-
- return self
-
- def as_string(self) -> str:
- buf = "{"
- last_item_idx = next(
- (
- i
- for i in range(len(self._value.body) - 1, -1, -1)
- if self._value.body[i][0] is not None
- ),
- None,
- )
- for i, (k, v) in enumerate(self._value.body):
- if k is None:
- if i == len(self._value.body) - 1:
- if self._new:
- buf = buf.rstrip(", ")
- else:
- buf = buf.rstrip(",")
-
- buf += v.as_string()
-
- continue
-
- v_trivia_trail = v.trivia.trail.replace("\n", "")
- buf += (
- f"{v.trivia.indent}"
- f'{k.as_string() + ("." if k.is_dotted() else "")}'
- f"{k.sep}"
- f"{v.as_string()}"
- f"{v.trivia.comment}"
- f"{v_trivia_trail}"
- )
-
- if last_item_idx is not None and i < last_item_idx:
- buf += ","
- if self._new:
- buf += " "
-
- buf += "}"
-
- return buf
-
- def __setitem__(self, key: Key | str, value: Any) -> None:
- if hasattr(value, "trivia") and value.trivia.comment:
- value.trivia.comment = ""
- super().__setitem__(key, value)
-
- def __copy__(self) -> InlineTable:
- return type(self)(self._value.copy(), self._trivia.copy(), self._new)
-
- def _getstate(self, protocol: int = 3) -> tuple:
- return (self._value, self._trivia)
-
-
-class String(str, Item):
- """
- A string literal.
- """
-
- def __new__(cls, t, value, original, trivia):
- return super().__new__(cls, value)
-
- def __init__(self, t: StringType, _: str, original: str, trivia: Trivia) -> None:
- super().__init__(trivia)
-
- self._t = t
- self._original = original
-
- def unwrap(self) -> str:
- return str(self)
-
- @property
- def discriminant(self) -> int:
- return 11
-
- @property
- def value(self) -> str:
- return self
-
- def as_string(self) -> str:
- return f"{self._t.value}{decode(self._original)}{self._t.value}"
-
- def __add__(self: ItemT, other: str) -> ItemT:
- if not isinstance(other, str):
- return NotImplemented
- result = super().__add__(other)
- original = self._original + getattr(other, "_original", other)
-
- return self._new(result, original)
-
- def _new(self, result: str, original: str) -> String:
- return String(self._t, result, original, self._trivia)
-
- def _getstate(self, protocol=3):
- return self._t, str(self), self._original, self._trivia
-
- @classmethod
- def from_raw(cls, value: str, type_=StringType.SLB, escape=True) -> String:
- value = decode(value)
-
- invalid = type_.invalid_sequences
- if any(c in value for c in invalid):
- raise InvalidStringError(value, invalid, type_.value)
-
- escaped = type_.escaped_sequences
- string_value = escape_string(value, escaped) if escape and escaped else value
-
- return cls(type_, decode(value), string_value, Trivia())
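# Usage sketch for String.from_raw (assuming this module is importable as
# tomlkit.items; the block is not part of the deleted file).
from tomlkit.items import String

s = String.from_raw('say "hi"')   # default type: single-line basic string
print(str(s))                     # say "hi"        (the plain value)
print(s.as_string())              # "say \"hi\""    (escaped TOML form)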
-
-
-class AoT(Item, _CustomList):
- """
- An array of table literal
- """
-
- def __init__(
- self, body: list[Table], name: str | None = None, parsed: bool = False
- ) -> None:
- self.name = name
- self._body: list[Table] = []
- self._parsed = parsed
-
- super().__init__(Trivia(trail=""))
-
- for table in body:
- self.append(table)
-
- def unwrap(self) -> list[dict[str, Any]]:
- unwrapped = []
- for t in self._body:
- if hasattr(t, "unwrap"):
- unwrapped.append(t.unwrap())
- else:
- unwrapped.append(t)
- return unwrapped
-
- @property
- def body(self) -> list[Table]:
- return self._body
-
- @property
- def discriminant(self) -> int:
- return 12
-
- @property
- def value(self) -> list[dict[Any, Any]]:
- return [v.value for v in self._body]
-
- def __len__(self) -> int:
- return len(self._body)
-
- @overload
- def __getitem__(self, key: slice) -> list[Table]:
- ...
-
- @overload
- def __getitem__(self, key: int) -> Table:
- ...
-
- def __getitem__(self, key):
- return self._body[key]
-
- def __setitem__(self, key: slice | int, value: Any) -> None:
- raise NotImplementedError
-
- def __delitem__(self, key: slice | int) -> None:
- del self._body[key]
- list.__delitem__(self, key)
-
- def insert(self, index: int, value: dict) -> None:
- value = item(value, _parent=self)
- if not isinstance(value, Table):
- raise ValueError(f"Unsupported insert value type: {type(value)}")
- length = len(self)
- if index < 0:
- index += length
- if index < 0:
- index = 0
- elif index >= length:
- index = length
- m = re.match("(?s)^[^ ]*([ ]+).*$", self._trivia.indent)
- if m:
- indent = m.group(1)
-
- m = re.match("(?s)^([^ ]*)(.*)$", value.trivia.indent)
- if not m:
- value.trivia.indent = indent
- else:
- value.trivia.indent = m.group(1) + indent + m.group(2)
- prev_table = self._body[index - 1] if 0 < index and length else None
- next_table = self._body[index + 1] if index < length - 1 else None
- if not self._parsed:
- if prev_table and "\n" not in value.trivia.indent:
- value.trivia.indent = "\n" + value.trivia.indent
- if next_table and "\n" not in next_table.trivia.indent:
- next_table.trivia.indent = "\n" + next_table.trivia.indent
- self._body.insert(index, value)
- list.insert(self, index, value)
-
- def invalidate_display_name(self):
- """Call ``invalidate_display_name`` on the contained tables"""
- for child in self:
- if hasattr(child, "invalidate_display_name"):
- child.invalidate_display_name()
-
- def as_string(self) -> str:
- b = ""
- for table in self._body:
- b += table.as_string()
-
- return b
-
- def __repr__(self) -> str:
- return f"<AoT {self.body}>"
-
- def _getstate(self, protocol=3):
- return self._body, self.name, self._parsed
-
-
-class Null(Item):
- """
- A null item.
- """
-
- def __init__(self) -> None:
- pass
-
- def unwrap(self) -> None:
- return None
-
- @property
- def discriminant(self) -> int:
- return -1
-
- @property
- def value(self) -> None:
- return None
-
- def as_string(self) -> str:
- return ""
-
- def _getstate(self, protocol=3) -> tuple:
- return ()
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/headers.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/headers.py
deleted file mode 100644
index 9ae3035a5b8fc2254f1c45f97c7d7f02779315f3..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/headers.py
+++ /dev/null
@@ -1,587 +0,0 @@
-from __future__ import annotations
-
-import base64
-import binascii
-import ipaddress
-import re
-from typing import Callable, List, Optional, Sequence, Tuple, TypeVar, cast
-
-from . import exceptions
-from .typing import (
- ConnectionOption,
- ExtensionHeader,
- ExtensionName,
- ExtensionParameter,
- Subprotocol,
- UpgradeProtocol,
-)
-
-
-__all__ = [
- "build_host",
- "parse_connection",
- "parse_upgrade",
- "parse_extension",
- "build_extension",
- "parse_subprotocol",
- "build_subprotocol",
- "validate_subprotocols",
- "build_www_authenticate_basic",
- "parse_authorization_basic",
- "build_authorization_basic",
-]
-
-
-T = TypeVar("T")
-
-
-def build_host(host: str, port: int, secure: bool) -> str:
- """
- Build a ``Host`` header.
-
- """
- # https://www.rfc-editor.org/rfc/rfc3986.html#section-3.2.2
- # IPv6 addresses must be enclosed in brackets.
- try:
- address = ipaddress.ip_address(host)
- except ValueError:
- # host is a hostname
- pass
- else:
- # host is an IP address
- if address.version == 6:
- host = f"[{host}]"
-
- if port != (443 if secure else 80):
- host = f"{host}:{port}"
-
- return host
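# Quick illustration of build_host(), using the function defined above (this
# block is not part of the deleted module).
assert build_host("example.com", 80, secure=False) == "example.com"
assert build_host("example.com", 8080, secure=False) == "example.com:8080"
assert build_host("::1", 443, secure=True) == "[::1]"        # IPv6 is bracketed
assert build_host("::1", 8443, secure=True) == "[::1]:8443"  # non-default port kept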
-
-
-# To avoid a dependency on a parsing library, we implement manually the ABNF
-# described in https://www.rfc-editor.org/rfc/rfc6455.html#section-9.1 and
-# https://www.rfc-editor.org/rfc/rfc7230.html#appendix-B.
-
-
-def peek_ahead(header: str, pos: int) -> Optional[str]:
- """
- Return the next character from ``header`` at the given position.
-
- Return :obj:`None` at the end of ``header``.
-
- We never need to peek more than one character ahead.
-
- """
- return None if pos == len(header) else header[pos]
-
-
-_OWS_re = re.compile(r"[\t ]*")
-
-
-def parse_OWS(header: str, pos: int) -> int:
- """
- Parse optional whitespace from ``header`` at the given position.
-
- Return the new position.
-
- The whitespace itself isn't returned because it isn't significant.
-
- """
- # There's always a match, possibly empty, whose content doesn't matter.
- match = _OWS_re.match(header, pos)
- assert match is not None
- return match.end()
-
-
-_token_re = re.compile(r"[-!#$%&\'*+.^_`|~0-9a-zA-Z]+")
-
-
-def parse_token(header: str, pos: int, header_name: str) -> Tuple[str, int]:
- """
- Parse a token from ``header`` at the given position.
-
- Return the token value and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- match = _token_re.match(header, pos)
- if match is None:
- raise exceptions.InvalidHeaderFormat(header_name, "expected token", header, pos)
- return match.group(), match.end()
-
-
-_quoted_string_re = re.compile(
- r'"(?:[\x09\x20-\x21\x23-\x5b\x5d-\x7e]|\\[\x09\x20-\x7e\x80-\xff])*"'
-)
-
-
-_unquote_re = re.compile(r"\\([\x09\x20-\x7e\x80-\xff])")
-
-
-def parse_quoted_string(header: str, pos: int, header_name: str) -> Tuple[str, int]:
- """
- Parse a quoted string from ``header`` at the given position.
-
- Return the unquoted value and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- match = _quoted_string_re.match(header, pos)
- if match is None:
- raise exceptions.InvalidHeaderFormat(
- header_name, "expected quoted string", header, pos
- )
- return _unquote_re.sub(r"\1", match.group()[1:-1]), match.end()
-
-
-_quotable_re = re.compile(r"[\x09\x20-\x7e\x80-\xff]*")
-
-
-_quote_re = re.compile(r"([\x22\x5c])")
-
-
-def build_quoted_string(value: str) -> str:
- """
- Format ``value`` as a quoted string.
-
- This is the reverse of :func:`parse_quoted_string`.
-
- """
- match = _quotable_re.fullmatch(value)
- if match is None:
- raise ValueError("invalid characters for quoted-string encoding")
- return '"' + _quote_re.sub(r"\\\1", value) + '"'
-
-
-def parse_list(
- parse_item: Callable[[str, int, str], Tuple[T, int]],
- header: str,
- pos: int,
- header_name: str,
-) -> List[T]:
- """
- Parse a comma-separated list from ``header`` at the given position.
-
- This is appropriate for parsing values with the following grammar:
-
- 1#item
-
- ``parse_item`` parses one item.
-
- ``header`` is assumed not to start or end with whitespace.
-
- (This function is designed for parsing an entire header value and
- :func:`~websockets.http.read_headers` strips whitespace from values.)
-
- Return a list of items.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- # Per https://www.rfc-editor.org/rfc/rfc7230.html#section-7, "a recipient
- # MUST parse and ignore a reasonable number of empty list elements";
- # hence while loops that remove extra delimiters.
-
- # Remove extra delimiters before the first item.
- while peek_ahead(header, pos) == ",":
- pos = parse_OWS(header, pos + 1)
-
- items = []
- while True:
- # Loop invariant: an item starts at pos in header.
- item, pos = parse_item(header, pos, header_name)
- items.append(item)
- pos = parse_OWS(header, pos)
-
- # We may have reached the end of the header.
- if pos == len(header):
- break
-
- # There must be a delimiter after each element except the last one.
- if peek_ahead(header, pos) == ",":
- pos = parse_OWS(header, pos + 1)
- else:
- raise exceptions.InvalidHeaderFormat(
- header_name, "expected comma", header, pos
- )
-
- # Remove extra delimiters before the next item.
- while peek_ahead(header, pos) == ",":
- pos = parse_OWS(header, pos + 1)
-
- # We may have reached the end of the header.
- if pos == len(header):
- break
-
- # Since we only advance in the header by one character with peek_ahead()
- # or with the end position of a regex match, we can't overshoot the end.
- assert pos == len(header)
-
- return items
-
-
-def parse_connection_option(
- header: str, pos: int, header_name: str
-) -> Tuple[ConnectionOption, int]:
- """
- Parse a Connection option from ``header`` at the given position.
-
- Return the protocol value and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- item, pos = parse_token(header, pos, header_name)
- return cast(ConnectionOption, item), pos
-
-
-def parse_connection(header: str) -> List[ConnectionOption]:
- """
- Parse a ``Connection`` header.
-
- Return a list of HTTP connection options.
-
- Args:
- header: value of the ``Connection`` header.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- return parse_list(parse_connection_option, header, 0, "Connection")
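# Example (not part of the deleted module): parsing a typical handshake header
# with the helpers defined above.
assert parse_connection("keep-alive, Upgrade") == ["keep-alive", "Upgrade"]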
-
-
-_protocol_re = re.compile(
- r"[-!#$%&\'*+.^_`|~0-9a-zA-Z]+(?:/[-!#$%&\'*+.^_`|~0-9a-zA-Z]+)?"
-)
-
-
-def parse_upgrade_protocol(
- header: str, pos: int, header_name: str
-) -> Tuple[UpgradeProtocol, int]:
- """
- Parse an Upgrade protocol from ``header`` at the given position.
-
- Return the protocol value and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- match = _protocol_re.match(header, pos)
- if match is None:
- raise exceptions.InvalidHeaderFormat(
- header_name, "expected protocol", header, pos
- )
- return cast(UpgradeProtocol, match.group()), match.end()
-
-
-def parse_upgrade(header: str) -> List[UpgradeProtocol]:
- """
- Parse an ``Upgrade`` header.
-
- Return a list of HTTP protocols.
-
- Args:
- header: value of the ``Upgrade`` header.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- return parse_list(parse_upgrade_protocol, header, 0, "Upgrade")
-
-
-def parse_extension_item_param(
- header: str, pos: int, header_name: str
-) -> Tuple[ExtensionParameter, int]:
- """
- Parse a single extension parameter from ``header`` at the given position.
-
- Return a ``(name, value)`` pair and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- # Extract parameter name.
- name, pos = parse_token(header, pos, header_name)
- pos = parse_OWS(header, pos)
- # Extract parameter value, if there is one.
- value: Optional[str] = None
- if peek_ahead(header, pos) == "=":
- pos = parse_OWS(header, pos + 1)
- if peek_ahead(header, pos) == '"':
- pos_before = pos # for proper error reporting below
- value, pos = parse_quoted_string(header, pos, header_name)
- # https://www.rfc-editor.org/rfc/rfc6455.html#section-9.1 says:
- # the value after quoted-string unescaping MUST conform to
- # the 'token' ABNF.
- if _token_re.fullmatch(value) is None:
- raise exceptions.InvalidHeaderFormat(
- header_name, "invalid quoted header content", header, pos_before
- )
- else:
- value, pos = parse_token(header, pos, header_name)
- pos = parse_OWS(header, pos)
-
- return (name, value), pos
-
-
-def parse_extension_item(
- header: str, pos: int, header_name: str
-) -> Tuple[ExtensionHeader, int]:
- """
- Parse an extension definition from ``header`` at the given position.
-
- Return an ``(extension name, parameters)`` pair, where ``parameters`` is a
- list of ``(name, value)`` pairs, and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- # Extract extension name.
- name, pos = parse_token(header, pos, header_name)
- pos = parse_OWS(header, pos)
- # Extract all parameters.
- parameters = []
- while peek_ahead(header, pos) == ";":
- pos = parse_OWS(header, pos + 1)
- parameter, pos = parse_extension_item_param(header, pos, header_name)
- parameters.append(parameter)
- return (cast(ExtensionName, name), parameters), pos
-
-
-def parse_extension(header: str) -> List[ExtensionHeader]:
- """
- Parse a ``Sec-WebSocket-Extensions`` header.
-
- Return a list of WebSocket extensions and their parameters in this format::
-
- [
- (
- 'extension name',
- [
- ('parameter name', 'parameter value'),
- ....
- ]
- ),
- ...
- ]
-
- Parameter values are :obj:`None` when no value is provided.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- return parse_list(parse_extension_item, header, 0, "Sec-WebSocket-Extensions")
-
-
-parse_extension_list = parse_extension # alias for backwards compatibility
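# Example (not part of the deleted module): a permessage-deflate offer with one
# value-less parameter parses to a (name, parameters) pair.
assert parse_extension("permessage-deflate; client_max_window_bits") == [
    ("permessage-deflate", [("client_max_window_bits", None)])
]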
-
-
-def build_extension_item(
- name: ExtensionName, parameters: List[ExtensionParameter]
-) -> str:
- """
- Build an extension definition.
-
- This is the reverse of :func:`parse_extension_item`.
-
- """
- return "; ".join(
- [cast(str, name)]
- + [
- # Quoted strings aren't necessary because values are always tokens.
- name if value is None else f"{name}={value}"
- for name, value in parameters
- ]
- )
-
-
-def build_extension(extensions: Sequence[ExtensionHeader]) -> str:
- """
- Build a ``Sec-WebSocket-Extensions`` header.
-
- This is the reverse of :func:`parse_extension`.
-
- """
- return ", ".join(
- build_extension_item(name, parameters) for name, parameters in extensions
- )
-
-
-build_extension_list = build_extension # alias for backwards compatibility
-
-
-def parse_subprotocol_item(
- header: str, pos: int, header_name: str
-) -> Tuple[Subprotocol, int]:
- """
- Parse a subprotocol from ``header`` at the given position.
-
- Return the subprotocol value and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- item, pos = parse_token(header, pos, header_name)
- return cast(Subprotocol, item), pos
-
-
-def parse_subprotocol(header: str) -> List[Subprotocol]:
- """
- Parse a ``Sec-WebSocket-Protocol`` header.
-
- Return a list of WebSocket subprotocols.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- return parse_list(parse_subprotocol_item, header, 0, "Sec-WebSocket-Protocol")
-
-
-parse_subprotocol_list = parse_subprotocol # alias for backwards compatibility
-
-
-def build_subprotocol(subprotocols: Sequence[Subprotocol]) -> str:
- """
- Build a ``Sec-WebSocket-Protocol`` header.
-
- This is the reverse of :func:`parse_subprotocol`.
-
- """
- return ", ".join(subprotocols)
-
-
-build_subprotocol_list = build_subprotocol # alias for backwards compatibility
-
-
-def validate_subprotocols(subprotocols: Sequence[Subprotocol]) -> None:
- """
- Validate that ``subprotocols`` is suitable for :func:`build_subprotocol`.
-
- """
- if not isinstance(subprotocols, Sequence):
- raise TypeError("subprotocols must be a list")
- if isinstance(subprotocols, str):
- raise TypeError("subprotocols must be a list, not a str")
- for subprotocol in subprotocols:
- if not _token_re.fullmatch(subprotocol):
- raise ValueError(f"invalid subprotocol: {subprotocol}")
-
-
-def build_www_authenticate_basic(realm: str) -> str:
- """
- Build a ``WWW-Authenticate`` header for HTTP Basic Auth.
-
- Args:
- realm: identifier of the protection space.
-
- """
- # https://www.rfc-editor.org/rfc/rfc7617.html#section-2
- realm = build_quoted_string(realm)
- charset = build_quoted_string("UTF-8")
- return f"Basic realm={realm}, charset={charset}"
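# Example output (not part of the deleted module): the realm and charset are
# emitted as quoted strings.
assert build_www_authenticate_basic("chat") == 'Basic realm="chat", charset="UTF-8"'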
-
-
-_token68_re = re.compile(r"[A-Za-z0-9-._~+/]+=*")
-
-
-def parse_token68(header: str, pos: int, header_name: str) -> Tuple[str, int]:
- """
- Parse a token68 from ``header`` at the given position.
-
- Return the token value and the new position.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
-
- """
- match = _token68_re.match(header, pos)
- if match is None:
- raise exceptions.InvalidHeaderFormat(
- header_name, "expected token68", header, pos
- )
- return match.group(), match.end()
-
-
-def parse_end(header: str, pos: int, header_name: str) -> None:
- """
- Check that parsing reached the end of header.
-
- """
- if pos < len(header):
- raise exceptions.InvalidHeaderFormat(header_name, "trailing data", header, pos)
-
-
-def parse_authorization_basic(header: str) -> Tuple[str, str]:
- """
- Parse an ``Authorization`` header for HTTP Basic Auth.
-
- Return a ``(username, password)`` tuple.
-
- Args:
- header: value of the ``Authorization`` header.
-
- Raises:
- InvalidHeaderFormat: on invalid inputs.
- InvalidHeaderValue: on unsupported inputs.
-
- """
- # https://www.rfc-editor.org/rfc/rfc7235.html#section-2.1
- # https://www.rfc-editor.org/rfc/rfc7617.html#section-2
- scheme, pos = parse_token(header, 0, "Authorization")
- if scheme.lower() != "basic":
- raise exceptions.InvalidHeaderValue(
- "Authorization",
- f"unsupported scheme: {scheme}",
- )
- if peek_ahead(header, pos) != " ":
- raise exceptions.InvalidHeaderFormat(
- "Authorization", "expected space after scheme", header, pos
- )
- pos += 1
- basic_credentials, pos = parse_token68(header, pos, "Authorization")
- parse_end(header, pos, "Authorization")
-
- try:
- user_pass = base64.b64decode(basic_credentials.encode()).decode()
- except binascii.Error:
- raise exceptions.InvalidHeaderValue(
- "Authorization",
- "expected base64-encoded credentials",
- ) from None
- try:
- username, password = user_pass.split(":", 1)
- except ValueError:
- raise exceptions.InvalidHeaderValue(
- "Authorization",
- "expected username:password credentials",
- ) from None
-
- return username, password
-
-
-def build_authorization_basic(username: str, password: str) -> str:
- """
- Build an ``Authorization`` header for HTTP Basic Auth.
-
- This is the reverse of :func:`parse_authorization_basic`.
-
- """
- # https://www.rfc-editor.org/rfc/rfc7617.html#section-2
- assert ":" not in username
- user_pass = f"{username}:{password}"
- basic_credentials = base64.b64encode(user_pass.encode()).decode()
- return "Basic " + basic_credentials
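# Round-trip example (not part of the deleted module): build_authorization_basic
# and parse_authorization_basic are inverses for well-formed credentials.
header = build_authorization_basic("user", "pass")
assert header == "Basic dXNlcjpwYXNz"
assert parse_authorization_basic(header) == ("user", "pass")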
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/sync/client.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/sync/client.py
deleted file mode 100644
index 087ff5f569a3705109b5bd92071f1422c920f8d5..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/sync/client.py
+++ /dev/null
@@ -1,328 +0,0 @@
-from __future__ import annotations
-
-import socket
-import ssl
-import threading
-from typing import Any, Optional, Sequence, Type
-
-from ..client import ClientProtocol
-from ..datastructures import HeadersLike
-from ..extensions.base import ClientExtensionFactory
-from ..extensions.permessage_deflate import enable_client_permessage_deflate
-from ..headers import validate_subprotocols
-from ..http import USER_AGENT
-from ..http11 import Response
-from ..protocol import CONNECTING, OPEN, Event
-from ..typing import LoggerLike, Origin, Subprotocol
-from ..uri import parse_uri
-from .connection import Connection
-from .utils import Deadline
-
-
-__all__ = ["connect", "unix_connect", "ClientConnection"]
-
-
-class ClientConnection(Connection):
- """
- Threaded implementation of a WebSocket client connection.
-
- :class:`ClientConnection` provides :meth:`recv` and :meth:`send` methods for
- receiving and sending messages.
-
- It supports iteration to receive messages::
-
- for message in websocket:
- process(message)
-
- The iterator exits normally when the connection is closed with close code
- 1000 (OK) or 1001 (going away) or without a close code. It raises a
- :exc:`~websockets.exceptions.ConnectionClosedError` when the connection is
- closed with any other code.
-
- Args:
- socket: Socket connected to a WebSocket server.
- protocol: Sans-I/O connection.
- close_timeout: Timeout for closing the connection in seconds.
-
- """
-
- def __init__(
- self,
- socket: socket.socket,
- protocol: ClientProtocol,
- *,
- close_timeout: Optional[float] = 10,
- ) -> None:
- self.protocol: ClientProtocol
- self.response_rcvd = threading.Event()
- super().__init__(
- socket,
- protocol,
- close_timeout=close_timeout,
- )
-
- def handshake(
- self,
- additional_headers: Optional[HeadersLike] = None,
- user_agent_header: Optional[str] = USER_AGENT,
- timeout: Optional[float] = None,
- ) -> None:
- """
- Perform the opening handshake.
-
- """
- with self.send_context(expected_state=CONNECTING):
- self.request = self.protocol.connect()
- if additional_headers is not None:
- self.request.headers.update(additional_headers)
- if user_agent_header is not None:
- self.request.headers["User-Agent"] = user_agent_header
- self.protocol.send_request(self.request)
-
- if not self.response_rcvd.wait(timeout):
- self.close_socket()
- self.recv_events_thread.join()
- raise TimeoutError("timed out during handshake")
-
- if self.response is None:
- self.close_socket()
- self.recv_events_thread.join()
- raise ConnectionError("connection closed during handshake")
-
- if self.protocol.state is not OPEN:
- self.recv_events_thread.join(self.close_timeout)
- self.close_socket()
- self.recv_events_thread.join()
-
- if self.protocol.handshake_exc is not None:
- raise self.protocol.handshake_exc
-
- def process_event(self, event: Event) -> None:
- """
- Process one incoming event.
-
- """
- # First event - handshake response.
- if self.response is None:
- assert isinstance(event, Response)
- self.response = event
- self.response_rcvd.set()
- # Later events - frames.
- else:
- super().process_event(event)
-
- def recv_events(self) -> None:
- """
- Read incoming data from the socket and process events.
-
- """
- try:
- super().recv_events()
- finally:
- # If the connection is closed during the handshake, unblock it.
- self.response_rcvd.set()
-
-
-def connect(
- uri: str,
- *,
- # TCP/TLS — unix and path are only for unix_connect()
- sock: Optional[socket.socket] = None,
- ssl_context: Optional[ssl.SSLContext] = None,
- server_hostname: Optional[str] = None,
- unix: bool = False,
- path: Optional[str] = None,
- # WebSocket
- origin: Optional[Origin] = None,
- extensions: Optional[Sequence[ClientExtensionFactory]] = None,
- subprotocols: Optional[Sequence[Subprotocol]] = None,
- additional_headers: Optional[HeadersLike] = None,
- user_agent_header: Optional[str] = USER_AGENT,
- compression: Optional[str] = "deflate",
- # Timeouts
- open_timeout: Optional[float] = 10,
- close_timeout: Optional[float] = 10,
- # Limits
- max_size: Optional[int] = 2**20,
- # Logging
- logger: Optional[LoggerLike] = None,
- # Escape hatch for advanced customization
- create_connection: Optional[Type[ClientConnection]] = None,
-) -> ClientConnection:
- """
- Connect to the WebSocket server at ``uri``.
-
- This function returns a :class:`ClientConnection` instance, which you can
- use to send and receive messages.
-
- :func:`connect` may be used as a context manager::
-
- with websockets.sync.client.connect(...) as websocket:
- ...
-
- The connection is closed automatically when exiting the context.
-
- Args:
- uri: URI of the WebSocket server.
- sock: Preexisting TCP socket. ``sock`` overrides the host and port
- from ``uri``. You may call :func:`socket.create_connection` to
- create a suitable TCP socket.
- ssl_context: Configuration for enabling TLS on the connection.
- server_hostname: Host name for the TLS handshake. ``server_hostname``
- overrides the host name from ``uri``.
- origin: Value of the ``Origin`` header, for servers that require it.
- extensions: List of supported extensions, in order in which they
- should be negotiated and run.
- subprotocols: List of supported subprotocols, in order of decreasing
- preference.
- additional_headers (HeadersLike | None): Arbitrary HTTP headers to add
- to the handshake request.
- user_agent_header: Value of the ``User-Agent`` request header.
- It defaults to ``"Python/x.y.z websockets/X.Y"``.
- Setting it to :obj:`None` removes the header.
- compression: The "permessage-deflate" extension is enabled by default.
- Set ``compression`` to :obj:`None` to disable it. See the
- :doc:`compression guide <../../topics/compression>` for details.
- open_timeout: Timeout for opening the connection in seconds.
- :obj:`None` disables the timeout.
- close_timeout: Timeout for closing the connection in seconds.
- :obj:`None` disables the timeout.
- max_size: Maximum size of incoming messages in bytes.
- :obj:`None` disables the limit.
- logger: Logger for this client.
- It defaults to ``logging.getLogger("websockets.client")``.
- See the :doc:`logging guide <../../topics/logging>` for details.
- create_connection: Factory for the :class:`ClientConnection` managing
- the connection. Set it to a wrapper or a subclass to customize
- connection handling.
-
- Raises:
- InvalidURI: If ``uri`` isn't a valid WebSocket URI.
- OSError: If the TCP connection fails.
- InvalidHandshake: If the opening handshake fails.
- TimeoutError: If the opening handshake times out.
-
- """
-
- # Process parameters
-
- wsuri = parse_uri(uri)
- if not wsuri.secure and ssl_context is not None:
- raise TypeError("ssl_context argument is incompatible with a ws:// URI")
-
- if unix:
- if path is None and sock is None:
- raise TypeError("missing path argument")
- elif path is not None and sock is not None:
- raise TypeError("path and sock arguments are incompatible")
- else:
- assert path is None # private argument, only set by unix_connect()
-
- if subprotocols is not None:
- validate_subprotocols(subprotocols)
-
- if compression == "deflate":
- extensions = enable_client_permessage_deflate(extensions)
- elif compression is not None:
- raise ValueError(f"unsupported compression: {compression}")
-
- # Calculate timeouts on the TCP, TLS, and WebSocket handshakes.
- # The TCP and TLS timeouts must be set on the socket, then removed
- # to avoid conflicting with the WebSocket timeout in handshake().
- deadline = Deadline(open_timeout)
-
- if create_connection is None:
- create_connection = ClientConnection
-
- try:
- # Connect socket
-
- if sock is None:
- if unix:
- sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
- sock.settimeout(deadline.timeout())
- assert path is not None # validated above -- this is for mypy
- sock.connect(path)
- else:
- sock = socket.create_connection(
- (wsuri.host, wsuri.port),
- deadline.timeout(),
- )
- sock.settimeout(None)
-
- # Disable Nagle algorithm
-
- if not unix:
- sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
-
- # Initialize TLS wrapper and perform TLS handshake
-
- if wsuri.secure:
- if ssl_context is None:
- ssl_context = ssl.create_default_context()
- if server_hostname is None:
- server_hostname = wsuri.host
- sock.settimeout(deadline.timeout())
- sock = ssl_context.wrap_socket(sock, server_hostname=server_hostname)
- sock.settimeout(None)
-
- # Initialize WebSocket connection
-
- protocol = ClientProtocol(
- wsuri,
- origin=origin,
- extensions=extensions,
- subprotocols=subprotocols,
- state=CONNECTING,
- max_size=max_size,
- logger=logger,
- )
-
- # Initialize WebSocket protocol
-
- connection = create_connection(
- sock,
- protocol,
- close_timeout=close_timeout,
- )
- # On failure, handshake() closes the socket and raises an exception.
- connection.handshake(
- additional_headers,
- user_agent_header,
- deadline.timeout(),
- )
-
- except Exception:
- if sock is not None:
- sock.close()
- raise
-
- return connection
-
-
-def unix_connect(
- path: Optional[str] = None,
- uri: Optional[str] = None,
- **kwargs: Any,
-) -> ClientConnection:
- """
- Connect to a WebSocket server listening on a Unix socket.
-
- This function is identical to :func:`connect`, except for the additional
- ``path`` argument. It's only available on Unix.
-
- It's mainly useful for debugging servers listening on Unix sockets.
-
- Args:
- path: File system path to the Unix socket.
- uri: URI of the WebSocket server. ``uri`` defaults to
- ``ws://localhost/`` or, when a ``ssl_context`` is provided, to
- ``wss://localhost/``.
-
- """
- if uri is None:
- if kwargs.get("ssl_context") is None:
- uri = "ws://localhost/"
- else:
- uri = "wss://localhost/"
- return connect(uri=uri, unix=True, path=path, **kwargs)
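# Minimal usage sketch for the threaded client above (assumes a WebSocket
# server is reachable at the given URI; this block is not part of the deleted
# module).
from websockets.sync.client import connect

with connect("ws://localhost:8765") as websocket:
    websocket.send("Hello world!")
    print(websocket.recv())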
diff --git a/spaces/pyesonekyaw/faceforgerydetection/Scripts/__init__.py b/spaces/pyesonekyaw/faceforgerydetection/Scripts/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Acrobat Pro DC 2018.012.20039 Crack BEST Utorrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Acrobat Pro DC 2018.012.20039 Crack BEST Utorrent.md
deleted file mode 100644
index c45c793c79d1b585acee96be7e57b3a791d67b1d..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Acrobat Pro DC 2018.012.20039 Crack BEST Utorrent.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
Adobe Acrobat Pro DC 2018.012.20039 Crack utorrent
-
-Mark each item with a simple letter of the alphabet from A to Z.
-
-March 15, 2565 BE n Each food item is identified with a number from 1 to 50. Use the markers to label each item from 1 to 50.
-
-July 13, 2565 BE n Use the pattern to make the drawing below.
-
-The pattern is the same as the version that you used in Section 9.1, except that this version has a different name, is of different size, and uses different point sizes for the filling and the shading.
-
-September 2, 2566 BE n This version of the pie chart presents the same information as the pie chart that you made in Section 9.1. The pie chart has a different name and a different size, and it uses different point sizes for the pie and for the filling. Compare the pie chart below to the pie chart that you made in Section 9.1.
-
-October 27, 2566 BE n This pie chart has a different name, a different size, and it uses different point sizes for the pie and for the filling. Compare the pie chart below to the pie chart that you made in Section 9.1.
-
-December 2, 2567 BE n This chart presents information about the amount of time spent on various tasks. Use the markers to label the tasks.
-
-January 6, 2568 BE n Use the markers to label the tasks.
-
-March 13, 2568 BE n Use the markers to label the tasks.
-
-May 14, 2569 BE n Use the markers to label the tasks.
-
-June 6, 2569 BE n Use the markers to label the tasks.
-
-August 5, 2570 BE n Use the markers to label the tasks.
-
-September 9, 2570 BE n This is a year-by-year breakdown of how much time is spent on a given task. Each row in the table presents a year. You should use the markers to label the columns.
-
-October 21, 2570 BE n Each row presents the same information as the pie chart that you made in Section 9.2. The pie chart has a different name and a different size, and it uses different point sizes for the pie and for the filling. Compare the pie chart that you made in Section 9.2 to the pie chart below.
-
-December 23, 2570 BE n This pie chart has a different name, a different size, and it uses different point sizes for the 4fefd39f24
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Barkod Etiket Pro V5.0 Crack _HOT_.md b/spaces/quidiaMuxgu/Expedit-SAM/Barkod Etiket Pro V5.0 Crack _HOT_.md
deleted file mode 100644
index 13e35d36bb0ebe7565b5976c723bab29f65b4018..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Barkod Etiket Pro V5.0 Crack _HOT_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Download Steinberg WaveLab LE 7 Keygen Crack No Survey 0. ... Serial number, Steinberg... ... June 12 2020 0 ... barkod etiket pro v5.1 crack 4d29de3e1b
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Descargar __LINK__ Crack Principe De Persia Las Arenas Del Tiempo.md b/spaces/quidiaMuxgu/Expedit-SAM/Descargar __LINK__ Crack Principe De Persia Las Arenas Del Tiempo.md
deleted file mode 100644
index 2af19008ce655dd0491e769bceadc942b7d56f74..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Descargar __LINK__ Crack Principe De Persia Las Arenas Del Tiempo.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
They are different, shocking, incredibly emotional, and full of physical and emotional joy. An extension for the gameplay in all kinds of special ways. 32-bit. Load the Prince of Persia crack.
-
Prince of Persia Forgotten Sands crack, game full free download. Helped you to get out and enjoy your unsecured. Prince of Persia free download for PC Windows 7, 8, 10 64-bit. Prince of Persia, Prince of Persia after the sands, Prince of Persia: The Sands of Time, Prince of Persia: The Sands of Time remake, Prince of Persia: The Sands of Time remake for PC, Prince of Persia: The Sands of Time tdmv, Prince of Persia: The Sands of Time vc run, Prince of Persia: The Sands of Time remove. Printing and, CD/DVD drive, tape audio CD drive. HTML to download, download torrent. Prince of Persia, please unlock The Sands of Time game in the Play Store; my friend works for Prince of Persia related piracy software in XP Pro, Windows 7, 8. Home - download, downloads - games downloads, torrents downloads. Prince of Persia Forgotten Sands, cracked game has 342 downloads, last checked. Prince of Persia Forgotten Sands cracked game for PC has bla, bla, bla.
-
descargar crack principe de persia las arenas del tiempo
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/LiveCD Windows XPE-7PE.md b/spaces/quidiaMuxgu/Expedit-SAM/LiveCD Windows XPE-7PE.md
deleted file mode 100644
index 9b947c85836b2f101173b4c207a18bf345aaca08..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/LiveCD Windows XPE-7PE.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Windows PE has the best hardware support and most users would be familiar with it. However, Windows PE may have a higher system requirement, because the newest Windows PE 5.1 already requires at least 512 MB just for the base, and adding more drivers, packages, or apps will obviously need more space.
-
To install a certificate by using the System Certificates dialog box:
Open the Windows Start menu, then go to All Programs and select Windows Accessories > System Tools > System Certificates.
In the System Certificates dialog box that opens, click Add to add a certificate.
In the Import Certificates dialog box, select "Add subject to store in following store", type the name of the certificate file in the File Name field, and then click OK to import the certificate.
Go to the personal certificate store in the System Certificates dialog box and select the certificate.
The certificate details will be displayed.
Once you know that the thumbprint has been added to the certificate, close the certificate store and restart the computer.
-
To launch the built-in Windows PE rescue functionality from a USB key, you can use the following Windows PE boot options:
Press F8 when you hear the startup sound and see the boot options screen.
Select "Boot from first hard disk".
Select "Run" from the next menu option.
Select "Repair" and, if your Windows PE rescue CD appears in the list, select it.
-
For instructions on creating a Windows PE rescue disk from a Windows 10 installation disk, see the Windows PE image. In this blog post, we'll demonstrate the process for creating a Windows PE rescue disk from a Windows 7 installation disk.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bodhidharma Full Movie in Tamil HD 1080p A Bio-War Against India and the Secret of the DNA Memory.md b/spaces/raedeXanto/academic-chatgpt-beta/Bodhidharma Full Movie in Tamil HD 1080p A Bio-War Against India and the Secret of the DNA Memory.md
deleted file mode 100644
index 14bca1681e807f9318c80e079576c3ead3c0d770..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Bodhidharma Full Movie in Tamil HD 1080p A Bio-War Against India and the Secret of the DNA Memory.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Bodhidharma: The Legendary Monk Who Brought Zen and Kung Fu to China
-
If you are a fan of action, thriller, and historical movies, you might have heard of Bodhidharma, a Tamil movie that was released in 2011. The movie tells the story of a legendary monk who traveled from India to China in the 6th century AD and became the founder of Zen Buddhism and Shaolin Kung Fu. The movie also features a modern-day plot involving a genetic engineering student, a circus worker, and a Chinese spy who are all connected to Bodhidharma's legacy.
-
In this article, we will explore the origins, journey, and legacy of Bodhidharma, as well as review the movie's plot, characters, and quality. We will also provide some FAQs for those who want to know more about this fascinating figure.
The movie opens with a flashback to the 6th century AD and introduces three characters whose stories are tied to Bodhidharma:
-
Subha
-
Subha is a genetic engineering student who is researching the DNA samples of ancient people. She believes that the DNA contains the memory strands of their ancestors, and that by activating them, she can revive their skills and abilities.
-
-
She finds out that one of her subjects, Arvind, has a rare genetic marker that links him to Bodhidharma, a legendary monk who lived in India more than 1500 years ago.
-
Arvind
-
Arvind is a circus worker who performs acrobatic stunts and tricks for a living. He is unaware of his ancestral connection to Bodhidharma, until Subha approaches him and tells him about her research.
-
She convinces him to participate in her experiment, hoping to unlock his hidden potential as a fighter and a healer.
-
Bhodi Dharma
-
Bhodi Dharma is the main protagonist of the movie's historical plot. He is an exceptionally skilled fighter and a medic who belongs to the Pallava dynasty in South India.
-
He is also a devout Buddhist who follows the teachings of his master, Prajnatara. Prajnatara sends him on a mission to spread Buddhism in China, where it is facing decline and corruption.
-
The Journey of Bodhidharma
-
The movie then follows Bodhidharma's journey from India to China, where he faces many challenges and obstacles:
-
The mission
-
Bodhidharma travels by sea to China, along with his loyal followers. He arrives at the port city of Guangzhou, where he meets a friendly monk named Dazu Huike.
-
Huike tells him that Buddhism in China is in a sorry state, as the emperor Liang Wudi is obsessed with immortality and has corrupted the Buddhist teachings with his superstitions.
-
Bodhidharma decides to go to the emperor's palace and try to enlighten him with the true essence of Buddhism.
-
The challenges
-
Bodhidharma's meeting with the emperor does not go well. The emperor asks him what merit he has gained by building temples and donating money to Buddhism.
-
Bodhidharma replies that he has gained no merit at all, as these actions are based on worldly attachments and ego.
-
The emperor then asks him what is the highest truth of Buddhism.
-
Bodhidharma answers that there is no truth at all, as everything is empty and illusory.
-
The emperor is offended by Bodhidharma's answers and dismisses him as a barbarian.
-
Bodhidharma then leaves the palace and heads north, where he encounters more hostility from the local monks who are jealous of his skills and wisdom.
-
They try to sabotage his teachings and challenge him to debates and fights.
-
The legend
-
Bodhidharma eventually reaches the Shaolin temple, where he finds a group of monks who are sincere in their practice but lack physical strength and stamina.
-
He decides to stay there and teach them meditation and martial arts.
-
He also enters a cave near the temple and meditates for nine years without moving or speaking.
-
During this time, he attains enlightenment and becomes known as Damo, the first patriarch of Zen Buddhism in China.
-
He also passes on his teachings to Huike, who becomes his successor.
-
The Legacy of Bodhidharma
-
The movie then switches back to the present day, where we see how Bodhidharma's legacy affects the lives of Subha, Arvind, and China:
-
The impact
-
Bodhidharma's impact on China is immense. He is revered as the founder of Zen Buddhism, which emphasizes direct experience over scriptures and rituals.
-
He is also credited with creating Shaolin Kung Fu, which combines physical training with spiritual cultivation.
-
His teachings inspire millions of people across Asia and beyond, who seek to follow his example of wisdom and compassion.
-
The threat
-
However, not everyone appreciates Bodhidharma's legacy. China is plotting to wage a bio-war against India using a deadly virus that can wipe out millions of people.
-
The virus is derived from an ancient strain that was found in Bodhidharma's blood sample.
-
China wants to use this virus to erase Bodhidharma's history from India and claim him as their own hero.
-
To do this, they send a spy named Dong Lee to India to carry out Operation Red, which involves infecting Arvind with the virus and spreading it across the country.
-
The solution
-
Subha discovers China's plan when she analyzes Arvind's DNA sample after he falls ill. She realizes that he has been infected with the virus and that he is also carrying Bodhidharma's memory strands.
-
She decides to activate those memory strands using her genetic device, hoping that they will help Arvind fight off the virus and recover his health.
-
She also contacts her professor Imran Saahil, who helps her track down Dong Lee and stop his operation.
-
Subha and Arvind use their genetic skills and martial arts skills to confront Dong Lee and his agents in various locations across India.
-
They manage to stop Operation Red before it causes too much damage, but not before losing some of their friends along the way.
-
Conclusion
-
In conclusion, Bodhidharma is an action-packed movie that combines history, science fiction, and thriller elements. It tells the story of a legendary monk who brought Zen Buddhism and Shaolin Kung Fu to China in the 6th century AD, as well as his modern-day descendants who use his skills to save India from a bio-war attack by China.
-
The movie has some strengths such as its impressive action scenes, its intriguing premise, its star cast (Suriya plays both Bhodi Dharma and Arvind), its patriotic message (the movie was released on Diwali), and its catchy songs (the song "Oh Ringa Ringa" features more
than 1000 dancers in the busy streets of Chennai).
-
However, the movie also has some weaknesses such as its historical inaccuracies (Bodhidharma's origin, journey, and legacy are not well documented and are subject to debate), its clichéd characters (Subha is a stereotypical nerdy girl, Dong Lee is a one-dimensional villain), its unrealistic plot (the virus and the genetic device are not scientifically plausible), and its lengthy duration (the movie runs for almost three hours).
-
Overall, Bodhidharma is a movie that can be enjoyed by fans of action and thriller genres, as well as by those who are interested in learning more about Bodhidharma's legend. However, it is not a movie that can be taken too seriously or too literally, as it is more of a fictionalized and dramatized version of Bodhidharma's story than a factual and accurate one.
-
FAQs
-
Here are some frequently asked questions about Bodhidharma and the movie:
-
Q: Who was Bodhidharma?
-
A: Bodhidharma was a Buddhist monk who lived in the 6th century AD. He is regarded as the first patriarch of Zen Buddhism in China and the founder of Shaolin Kung Fu. He is also known as Damo in Chinese and Daruma in Japanese.
-
Q: Where did Bodhidharma come from?
-
A: According to some sources, Bodhidharma was born in South India and belonged to the Pallava dynasty. According to others, he was born in Persia or Central Asia and belonged to the royal family of Kanchipuram. However, there is no definitive evidence for either claim.
-
Q: What did Bodhidharma do in China?
-
A: Bodhidharma traveled to China to spread Buddhism and to revive its original teachings. He met with the emperor Liang Wudi but failed to impress him with his answers. He then went to the Shaolin temple where he taught meditation and martial arts to the monks. He also meditated for nine years in a cave near the temple and attained enlightenment.
-
Q: How did Bodhidharma die?
-
A: There are different accounts of how Bodhidharma died. Some say he died peacefully in his cave. Some say he was poisoned by a jealous monk. Some say he faked his death and returned to India. Some say he never died and became immortal.
-
Q: Is the movie Bodhidharma based on a true story?
-
A: The movie Bodhidharma is loosely based on some historical facts and legends about Bodhidharma, but it also adds a lot of fictional elements and twists to make it more entertaining and appealing. The movie is not meant to be a documentary or a biography of Bodhidharma, but rather a creative interpretation of his story.
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download HOT Chrome Google.md b/spaces/raedeXanto/academic-chatgpt-beta/Download HOT Chrome Google.md
deleted file mode 100644
index 8fa4c7534e72dd8249010cecabd8861ee611d42d..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download HOT Chrome Google.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
How to Download Chrome Google: A Step-by-Step Guide
-
Chrome Google is a fast, secure and easy-to-use web browser that offers many features and benefits. If you want to download Chrome Google for your computer, here are the steps you need to follow:
1. Open your current browser and go to the official download page at google.com/chrome.
2. Click the Download Chrome button and accept the terms of service.
3. Run the downloaded installer and wait for the setup to complete.
4. Launch Chrome and, if you like, sign in with your Google account to sync your data.
Chrome Google is more than just a web browser. It is also a platform that allows you to access various Google services and apps, such as Gmail, Google Drive, Google Photos, Google Maps, Google Translate and more. You can sign in to Chrome Google with your Google account and sync your bookmarks, history, passwords and settings across all your devices. You can also customize your browser with themes, extensions and apps from the Chrome Web Store.
-
One of the best features of Chrome Google is its speed and performance. Chrome Google uses a powerful engine that can load web pages quickly and smoothly. It also supports the latest web standards and technologies, such as HTML5, CSS3, JavaScript and WebAssembly. Chrome Google can also run multiple tabs and processes without slowing down your computer or crashing.
-
Another great feature of Chrome Google is its security and privacy. Chrome Google protects you from malicious websites, phishing, malware and other online threats. It also warns you before you visit a site that may harm your computer or steal your personal information. Chrome Google also gives you control over your data and how it is shared with websites and third parties. You can manage your cookies, permissions, passwords and autofill settings in the Chrome Google settings. You can also use the incognito mode to browse the web without saving any history or cookies.
-
-
Chrome Google also offers many features that enhance your browsing experience and productivity. For example, you can use the omnibox to search the web, enter web addresses, perform calculations, convert units and more. You can also use voice search to speak your queries instead of typing them. You can also use the tab search feature to find and switch to any open tab in Chrome Google. You can also use the reading list feature to save articles for later reading.
-
-
Another feature that Chrome Google provides is the ability to cast your browser content to your TV or other devices. You can use the cast button in Chrome Google to stream videos, music, photos and web pages from your computer to your Chromecast-enabled device. You can also mirror your entire desktop or browser tab to your TV or other device. This way, you can enjoy your favorite content on a bigger screen.
-
Chrome Google is constantly updating and improving its features and performance. You can always check for updates in the Chrome Google settings and install them with a click. You can also give feedback and suggestions to the Chrome Google team through the help menu or the Chrome Google community forum. By downloading Chrome Google, you are joining millions of users who enjoy a fast, secure and smart web browser.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Fire Service Drill Book Download The Essential Resource for Firefighters and Fire Officers.md b/spaces/raedeXanto/academic-chatgpt-beta/Fire Service Drill Book Download The Essential Resource for Firefighters and Fire Officers.md
deleted file mode 100644
index 86275faa9e595c461cce324e43d136733a8c3698..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Fire Service Drill Book Download The Essential Resource for Firefighters and Fire Officers.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Fire Service Drill Book Download
-
If you are a firefighter or aspire to become one, you might be interested in downloading a fire service drill book. A fire service drill book is a manual that contains practical instructions and exercises for firefighters to learn and practice various aspects of firemanship, such as methods of rescue, decontamination, ventilation, salvage, chemicals, etc. A fire service drill book is an essential resource for firefighters to enhance their skills and knowledge, improve their safety and efficiency, and standardize their procedures and practices.
-
In this article, we will explore the different types of fire service drill books available, the benefits of using them, and how to download them from online or offline sources. We will also provide some tips and precautions for downloading fire service drill books.
There are many fire service drill books available in the market, but some of the most popular and widely used ones are:
-
Fire Service Drill Book by Home Office (UK)
-
This is a comprehensive manual that covers all aspects of fire service drills, such as ladder drills, hose drills, pump drills, rescue drills, breathing apparatus drills, etc. It also includes diagrams and illustrations to explain the procedures and techniques. It was first published in 1950 and has been revised several times since then. The latest edition was published in 1985 by HMSO (Her Majesty's Stationery Office).
-
Practical Firemanship by Home Office (UK)
-
This is another manual that focuses on the practical aspects of firemanship, such as methods of rescue, decontamination, ventilation, salvage, chemicals, etc. It also provides information on the types and causes of fires, fire behavior, fire prevention, fire investigation, etc. It was first published in 1974 and has been updated several times since then. The latest edition was published in 1990 by HMSO.
-
Other fire service drill books
-
There are also other fire service drill books that are specific to certain countries or regions, such as the US Fire Administration's Firefighter's Handbook, the Australian Fire Service's Firefighter's Handbook, the Canadian Fire Service's Firefighter's Handbook, etc. These books may have different formats and contents depending on the local laws, regulations, standards, and practices.
-
Benefits of fire service drill books
-
Fire service drill books are not only useful for firefighters but also for anyone who wants to learn more about firemanship. Some of the benefits of using fire service drill books are:
-
Enhance skills and knowledge of firefighters
-
Fire service drill books provide detailed instructions and exercises for firefighters to learn and practice various aspects of firemanship. By following these drills regularly, firefighters can improve their skills and knowledge in handling different types of fires and emergencies. They can also refresh their memory and keep up with the latest developments and innovations in the field.
-
Improve safety and efficiency of fire operations
-
Fire service drill books also help firefighters to improve their safety and efficiency in performing their duties. By following the standardized procedures and techniques described in these books, firefighters can reduce the risks of injuries and accidents, increase their speed and accuracy, coordinate better with their team members, and use their equipment more effectively.
-
-
Standardize fire service procedures and practices
-
Fire service drill books also help to standardize the procedures and practices of the fire service across different regions and countries. By using these books as a common reference point, firefighters can ensure that they follow the same rules and guidelines as their counterparts in other places. This can facilitate communication and cooperation among different fire departments and agencies.
-
How to download fire service drill books
-
If you want to download a fire service drill book, you have two options: online or offline.
-
Online sources and links
-
The easiest way to download a fire service drill book is to use online sources and links. There are many websites that offer free or paid downloads of various fire service drill books in PDF or other formats. Some examples are:
You can also use search engines like Google or Bing to find more online sources and links for downloading fire service drill books.
-
Offline sources and libraries
-
If you prefer to have a physical copy of a fire service drill book, you can also use offline sources and libraries. There are many bookstores that sell new or used copies of various fire service drill books. You can also borrow them from public or private libraries that have them in their collections. Some examples are:
You can also use online catalogs like WorldCat or LibraryThing to find more offline sources and libraries for obtaining fire service drill books.
-
Tips and precautions for downloading fire service drill books
-
Before you download a fire service drill book from any source, you should follow some tips and precautions to ensure that you get a quality product that meets your needs. Here are some suggestions:
- Check the edition, date, author, publisher, format, size, language, etc. of the book before downloading it.
- Compare different sources and links for downloading the same book and choose the one that offers the best quality, price, speed, security, etc.
- Read reviews and ratings from other users who have downloaded the same book before.
- Scan the downloaded file for viruses or malware before opening it (a simple checksum check is sketched after this list).
- Respect the intellectual property rights of the authors and publishers of the book.
- Use a reliable device and internet connection for downloading the book.
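To illustrate the "scan and verify before opening" advice, here is a minimal Python sketch that computes the SHA-256 checksum of a downloaded file so it can be compared against a checksum published by the source, if one is available; the file name is hypothetical:

```python
import hashlib

# Hypothetical path to the downloaded drill book file.
download_path = "fire_service_drill_book.pdf"

sha256 = hashlib.sha256()
with open(download_path, "rb") as f:
    # Read in chunks so large files do not need to fit in memory.
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)

print("SHA-256:", sha256.hexdigest())
```

A checksum is not a virus scan, but it does confirm that the file you received is byte-for-byte identical to the one the publisher intended to distribute.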
Conclusion
-
A fire service drill book is a valuable resource for anyone who wants to learn more about firemanship. It contains practical instructions and exercises for firefighters to enhance their skills and knowledge, improve their safety and efficiency, and standardize their procedures and practices. You can download a fire service drill book from online or offline sources using various links or catalogs. However, you should follow some tips and precautions before downloading any book to ensure that you get a quality product that meets your needs.
- **FAQs**

Q: What is a fire service drill book?

A: A fire service drill book is a manual that contains practical instructions and exercises for firefighters to learn and practice various aspects of firemanship, such as methods of rescue, decontamination, ventilation, salvage, chemicals, etc.

Q: Why is a fire service drill book important?

A: A fire service drill book is important because it helps firefighters to enhance their skills and knowledge, improve their safety and efficiency, and standardize their procedures and practices.

Q: How can I download a fire service drill book?

A: You can download a fire service drill book from online or offline sources using various links or catalogs. You can also use search engines like Google or Bing to find more online sources and links for downloading fire service drill books.

Q: What are some examples of fire service drill books?

A: Some examples of fire service drill books are:

- Fire Service Drill Book by Home Office (UK)
- Practical Firemanship by Home Office (UK)
- Firefighter's Handbook by US Fire Administration
- Firefighter's Handbook by Australian Fire Service

Q: What are some tips and precautions for downloading fire service drill books?

A: Some tips and precautions for downloading fire service drill books are:

- Check the edition, date, author, publisher, format, size, language, etc. of the book before downloading it.
- Compare different sources and links for downloading the same book and choose the one that offers the best quality, price, speed, security, etc.
- Read reviews and ratings from other users who have downloaded the same book before.
- Scan the downloaded file for viruses or malware before opening it.
- Respect the intellectual property rights of the authors and publishers of the book.
- Use a reliable device and internet connection for downloading the book.

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Alien Covenant English 3 In Hindi Hd.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Alien Covenant English 3 In Hindi Hd.md
deleted file mode 100644
index 2c6515e943df3e1053b3fa1a1d61c548f41f4dfa..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Alien Covenant English 3 In Hindi Hd.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Dostudio Authoring Edition Keygen Torrent: A Guide for Blu-ray Enthusiasts
-
-
If you are looking for a way to create professional Blu-ray discs with interactive menus, complex interactivity, and dual 1080p 3D streams, you might be interested in Dostudio Authoring Edition. This is a software that allows you to create replication-ready Blu-ray projects fast and easily. However, this software is not cheap and you might be tempted to look for a Dostudio Authoring Edition keygen torrent to get it for free.
-
-
In this article, we will explain what Dostudio Authoring Edition is, what are its features and benefits, and why you should avoid downloading a Dostudio Authoring Edition keygen torrent. We will also give you some alternatives to get this software legally and safely.
What is Dostudio Authoring Edition?

Dostudio Authoring Edition is a program that was developed by Sony Creative Software Inc. It is part of the DoStudio line, which is a series of applications focused on professional Blu-ray Disc authoring. Dostudio Authoring Edition empowers you to create high-quality, replication ready Blu-ray Disc titles with interactive pop-up menus, complex interactivity, and dual 1080p 3D streams.
-
-
Some of the features of Dostudio Authoring Edition are:
-
-
-
It supports Blu-ray Disc specification version 2.0.
-
It allows you to create interactive pop-up menus with up to 32 buttons per page.
-
It supports multiple audio and subtitle tracks, including Dolby TrueHD and DTS-HD Master Audio.
-
It supports BD-Java interactivity, including advanced scripting and graphics capabilities.
-
It supports dual 1080p 3D streams for Blu-ray 3D titles.
-
It allows you to transcode your files into Blu-ray disc-compliant MVC and AVC files for Blu-ray 3D.
-
It has a user-friendly interface that guides you through the authoring process.
-
It has a preview mode that lets you test your project before burning it.
-
-
-
Dostudio Authoring Edition is compatible with Windows XP / XP 64 bit / Vista / Vista 64 bit / 7 / 7 64 bit / 8 / 8 64 bit. It requires a minimum of 2 GB of RAM and 100 GB of free disk space. It also requires a Blu-ray burner and a Blu-ray player for testing.
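Since the 100 GB free-space requirement is easy to overlook, here is a minimal Python sketch that checks whether a drive has enough room before you start an authoring project; the drive letter and threshold are assumptions based on the requirements quoted above:

```python
import shutil

# Assumed values: the drive used for authoring and the free-space requirement quoted above.
drive = "C:\\"
required_gb = 100

free_gb = shutil.disk_usage(drive).free / (1024 ** 3)
if free_gb < required_gb:
    print(f"Only {free_gb:.1f} GB free on {drive}; at least {required_gb} GB is recommended.")
else:
    print(f"{free_gb:.1f} GB free on {drive}; enough space for a Blu-ray authoring project.")
```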
-
-
Why should you avoid downloading a Dostudio Authoring Edition keygen torrent?
-
-
A Dostudio Authoring Edition keygen torrent is a file that contains a program that generates a serial number or a license key for activating the software without paying for it. This might sound like an easy way to get the software for free, but it comes with many risks and disadvantages.
-
-
Some of the reasons why you should avoid downloading a Dostudio Authoring Edition keygen torrent are:
-
-
-
It is illegal. Downloading and using a Dostudio Authoring Edition keygen torrent is a form of software piracy, which is a violation of intellectual property rights. You could face legal consequences if you are caught using pirated software.
-
It is unsafe. Downloading a Dostudio Authoring Edition keygen torrent from unknown sources could expose your computer to viruses, malware, spyware, ransomware, or other harmful programs. These could damage your system, steal your personal information, or lock your files until you pay a ransom.
-
It is unreliable. Downloading a Dostudio Authoring Edition keygen torrent does not guarantee that you will get a working key or that the software will function properly. You could end up with an invalid key, a corrupted file, or a software that crashes or freezes frequently.
-
It is unethical. Downloading and using a Dostudio Authoring Edition keygen torrent deprives the developers of their rightful income and discourages them from creating more quality products. You are also hurting other users who pay for the software and expect to receive updates and support.
-
-
-
Therefore, downloading a Dostudio Authoring Edition keygen torrent is not worth the risk and the hassle. You are better off looking for other ways to get the software legally and safely.
-
-
What are some alternatives to get Dostudio Authoring Edition legally and safely?
-
-
If you want to get Dostudio Authoring Edition legally and safely, you have some options to choose from. Some of them are:
-
-
-
-
Buy the software from the official website. This is the best way to get the software as you will receive the latest version, updates, support, and warranty. You can buy the software from https://www.sonycreativesoftware.com/dostudio. The price of the software is $2395 USD.
-
Look for discounts or promotions. Sometimes, the developers or authorized resellers might offer discounts or promotions on the software. You can look for these on their website, social media pages, newsletters, or online forums. You might be able to save some money while getting the software legally.
-
Use a free trial or a demo version. If you are not sure if you want to buy the software or if you want to test it before buying it, you can use a free trial or a demo version of the software. These versions usually have limited features or time restrictions, but they allow you to try the software without paying for it. You can download a free trial or a demo version of Dostudio Authoring Edition from https://www.sonycreativesoftware.com/download/trials/dostudio.
-
Use an alternative software. If you cannot afford or do not want to buy Dostudio Authoring Edition, you can look for other software that can perform similar functions. There are many other Blu-ray authoring software available on the market, some of them are free or cheaper than Dostudio Authoring Edition. However, they might not have all the features or quality that Dostudio Authoring Edition offers. Some examples of alternative software are DVDFab Blu-ray Creator, Leawo Blu-ray Creator, Aiseesoft Blu-ray Creator, etc.
-
-
-
In conclusion, Dostudio Authoring Edition is a powerful and professional software that allows you to create replication-ready Blu-ray projects fast and easily. However, downloading a Dostudio Authoring Edition keygen torrent is not a good idea as it is illegal, unsafe, unreliable, and unethical. You should look for other ways to get the software legally and safely, such as buying it from the official website, looking for discounts or promotions, using a free trial or a demo version, or using an alternative software.
-
How to use Dostudio Authoring Edition?
-
-
Dostudio Authoring Edition has a user-friendly interface that guides you through the authoring process. You can create your Blu-ray project in four steps:
-
-
-
Import your video, audio, and subtitle files. You can use various formats, such as AVI, MOV, MP4, MKV, M2TS, etc. You can also import existing Blu-ray folders or ISO files.
-
Edit your project settings. You can choose the disc type, the output format, the playback mode, the region code, etc. You can also customize the disc label and volume name.
-
Create your menus and interactivity. You can use the built-in menu templates or create your own from scratch. You can add buttons, images, text, animations, sounds, etc. You can also add BD-Java interactivity, such as pop-up menus, bookmarks, playlists, etc.
-
Preview and burn your project. You can test your project in the preview mode and check for errors or warnings. You can also export your project as a Blu-ray folder or an ISO file. Finally, you can burn your project to a Blu-ray disc using a compatible burner.
-
-
-
Dostudio Authoring Edition also provides you with some tools and features to help you with your authoring process. For example, you can use the DoStudio Encoder to transcode your files into Blu-ray disc-compliant MVC and AVC files for Blu-ray 3D. You can also use the DoStudio Subtitle Editor to create and edit subtitles for your project.
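DoStudio's own encoder handles the Blu-ray-compliant transcoding, but it can still save time to confirm a source clip's codec and resolution before importing it. The sketch below is not part of DoStudio; it simply calls the widely used ffprobe tool (assumed to be installed and on your PATH) against a hypothetical source file:

```python
import json
import subprocess

# Hypothetical source clip to inspect before importing it into the authoring project.
source = "my_feature_1080p.mov"

result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,avg_frame_rate",
        "-of", "json",
        source,
    ],
    capture_output=True, text=True, check=True,
)

stream = json.loads(result.stdout)["streams"][0]
print(f"{source}: {stream['codec_name']}, "
      f"{stream['width']}x{stream['height']}, "
      f"avg_frame_rate={stream['avg_frame_rate']}")
```

Anything that is not already Blu-ray-compliant AVC (or MVC for 3D) will need to go through the DoStudio Encoder step described above.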
-
-
What are the advantages and disadvantages of Dostudio Authoring Edition?
-
-
Dostudio Authoring Edition is a powerful and professional software that has many advantages for Blu-ray enthusiasts. Some of them are:
-
-
-
It allows you to create high-quality Blu-ray projects with interactive menus, complex interactivity, and dual 1080p 3D streams.
-
It supports Blu-ray Disc specification version 2.0 and various audio and subtitle formats.
-
It has a user-friendly interface that guides you through the authoring process.
-
It has a preview mode that lets you test your project before burning it.
-
It provides you with some tools and features to help you with your authoring process.
-
-
-
However, Dostudio Authoring Edition also has some disadvantages that you should consider before buying it. Some of them are:
-
-
-
It is expensive. The price of the software is $2395 USD, which might be too high for some users.
-
It requires a lot of system resources. The software requires a minimum of 2 GB of RAM and 100 GB of free disk space. It also requires a Blu-ray burner and a Blu-ray player for testing.
-
It has a steep learning curve. The software has many features and options that might be overwhelming for beginners or casual users.
-
It does not support some formats or features. The software does not support UHD Blu-ray discs or HDR content. It also does not support some advanced BD-Java features, such as internet connectivity or persistent storage.
-
-
-
Therefore, Dostudio Authoring Edition is a software that has many advantages and disadvantages for Blu-ray enthusiasts. You should weigh them carefully before deciding whether to buy it or not.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fifa World Cup 2006 Download Full Version Pc Tpb Season.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fifa World Cup 2006 Download Full Version Pc Tpb Season.md
deleted file mode 100644
index ad65bad9cb5978a24a777f7c2495e548109b1d49..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fifa World Cup 2006 Download Full Version Pc Tpb Season.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
fifa world cup 2006 download full version pc tpb season
-
-Download FIFA World Cup Germany 2006 for PS2 via torrent for free. ... FIFA 06 PC Free Download PC Game Cracked in Direct Link and Torrent. ... the previous FIFA games great, including season, competition, shoot-out, ... 4d29de3e1b
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Game Maker 7 Exe Decompiler.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Game Maker 7 Exe Decompiler.md
deleted file mode 100644
index 2a11311a659ef252459ca48a6227dff242701080..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Game Maker 7 Exe Decompiler.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
How to Decompile Game Maker 7 Executables
-
If you have ever wanted to reverse engineer a game made with Game Maker 7, you might have wondered if there is a way to decompile the executable file back to its original project file. In this article, we will show you how to use a tool called GM8Decompiler, which can decompile Game Maker 8.x executables, including Game Maker 7 ones.
What is GM8Decompiler?

GM8Decompiler is an open-source decompiler for Game Maker 8.x executables, developed by OpenGMK. It can revert a GameMaker 8.0 or 8.1 game back to its original .gmk or .gm81 project format, respectively. It works by reading the gamedata section of the executable, which contains all the game's assets (sprites, rooms, GML code, etc.), and reconstructing the project file from it. It is faster, safer, more thorough, and supports more games than previous decompilers[^1^].
-
How to use GM8Decompiler?
-
To use GM8Decompiler, you will need to download the latest release from its GitHub repository[^3^]. If you prefer to build it from source instead, you will also need Rust installed on your system, which you can get from https://rustup.rs or a package manager of your choice. Once you have downloaded and extracted the GM8Decompiler binary, you can run it from the command line with the following syntax:
gm8decompiler [flags] input.exe output.gmk
The input argument is the path to the executable file you want to decompile, and the output argument is the path where you want to save the project file. You can also use various flags and options to customize the decompilation process, such as:
-
-
-d or --deobfuscate: This flag will attempt to deobfuscate any obfuscated code in the executable, such as variable names or string literals.
-
-p or --preserve-broken-events: This flag will preserve any broken events in the project file, such as empty events or events with invalid IDs. By default, these events are repaired or removed.
-
-v or --verbose: This flag will print more information about the decompilation process to the standard output.
-
--help: This flag will display a help message with all the available flags and options.
-
-
For example, if you want to decompile a game called "mygame.exe" and save it as "mygame.gmk" with deobfuscation enabled, you can use this command:
-gm8decompiler -d mygame.exe mygame.gmk
-
The decompilation process may take some time depending on the size and complexity of the game. Once it is done, you should have a project file that you can open with Game Maker 7 or 8.
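If you have several executables to convert, the same command can be driven from a script. Below is a minimal Python sketch that calls the gm8decompiler binary with the -d flag described above for every .exe in a folder; the folder names are hypothetical and the binary is assumed to be on your PATH:

```python
import pathlib
import subprocess

# Hypothetical folders; adjust to wherever your games live and where output should go.
games_dir = pathlib.Path("games")
output_dir = pathlib.Path("decompiled")
output_dir.mkdir(exist_ok=True)

for exe in games_dir.glob("*.exe"):
    gmk = output_dir / (exe.stem + ".gmk")
    print(f"Decompiling {exe.name} -> {gmk.name}")
    # -d enables deobfuscation, as described in the list of flags above.
    subprocess.run(["gm8decompiler", "-d", str(exe), str(gmk)], check=True)
```

Each run produces a .gmk project in the output folder, which you can then open in Game Maker as described above.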
-
-
Limitations and Caveats
-
While GM8Decompiler is a powerful tool that can decompile most Game Maker 7 executables, it is not perfect and has some limitations and caveats that you should be aware of:
-
-
GM8Decompiler does not support games that use external DLLs or extensions. If you try to decompile such games, you may encounter errors or incomplete results.
-
GM8Decompiler does not preserve any comments or formatting in the GML code. The code will be decompiled as plain text with minimal indentation.
-
GM8Decompiler does not guarantee that the decompiled project file will work exactly as the original executable. There may be some differences or errors due to limitations of Game Maker or differences between versions.
-
GM8Decompiler does not support games that use encryption or anti-decompilation techniques. If you try to decompile such games, you may get corrupted or unreadable results.
-
GM8Decompiler is intended for educational and research purposes only. You should not use it to steal or plagiarize other people's games without their permission. You should respect the intellectual property rights of the original game developers.
-
-
Conclusion
-
In this article, we have shown how to use GM8Decompiler to decompile Game Maker 7 executables back into editable project files, covered its main flags and options, and outlined its limitations and caveats. Use it responsibly, and only on games you have the right to inspect.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (supernatural Season 1 5 720p Torrent) [PORTABLE].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (supernatural Season 1 5 720p Torrent) [PORTABLE].md
deleted file mode 100644
index de1028a329177010d35c50fedb93cf4a1fd9fa13..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (supernatural Season 1 5 720p Torrent) [PORTABLE].md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
How to Watch Supernatural Season 1-5 Online in HD
-
If you are a fan of the hit TV show Supernatural, you might be wondering how to watch the first five seasons online in high definition. Supernatural is a thrilling drama that follows the adventures of two brothers, Sam and Dean Winchester, who hunt demons, ghosts, vampires, and other supernatural creatures. The show has been running for 15 seasons and has a loyal fan base.
-
However, not all streaming platforms offer the show in HD quality, and some might not have all the episodes available. So, how can you watch Supernatural season 1-5 online in HD without missing any of the action? Here are some options:
-
Torrents: One of the most popular ways to watch Supernatural online is to download torrents. Torrents are files that contain data from various sources that can be downloaded using a torrent client. You can find torrents for Supernatural season 1-5 on various websites, such as daxn3dy7.wixsite.com, scribd.com, or soundcloud.com. However, downloading torrents can be risky, as they might contain viruses, malware, or illegal content. You should always use a VPN and an antivirus software when downloading torrents.
-
Streaming services: Another way to watch Supernatural online is to use streaming services that offer the show in HD quality. Some of the streaming platforms that have Supernatural season 1-5 are Netflix, Amazon Prime Video, Hulu, and HBO Max. However, these services might not be available in all regions, and they might require a subscription fee. You should check the availability and pricing of these services before signing up.
-
Online players: A third option to watch Supernatural online is to use online players that stream the show in HD quality. Online players are websites that host video files that can be played on your browser. You can find online players for Supernatural season 1-5 on various websites, such as fmovies.to, watchserieshd.tv, or putlockers.cr. However, online players can be unreliable, as they might have low-quality videos, broken links, or intrusive ads. You should always use an ad blocker and a VPN when using online players.
-
-
As you can see, there are many ways to watch Supernatural season 1-5 online in HD quality. However, each option has its pros and cons, and you should choose the one that suits your preferences and budget. Whichever option you choose, you will enjoy watching the thrilling adventures of Sam and Dean Winchester as they fight against evil forces.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/robin0307/MMOCR/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py b/spaces/robin0307/MMOCR/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py
deleted file mode 100644
index e22571e74511bab4303138f0e4816687fadac69e..0000000000000000000000000000000000000000
--- a/spaces/robin0307/MMOCR/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py',
- '../../_base_/schedules/schedule_sgd_160e.py',
- '../../_base_/det_datasets/icdar2017.py',
- '../../_base_/det_pipelines/maskrcnn_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}}
-
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=4,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/ohem_sampler.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/ohem_sampler.py
deleted file mode 100644
index 7eb066633809ff8d70240062c2dacd0e7283a1c5..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/ohem_sampler.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from ..transforms import bbox2roi
-from .base_sampler import BaseSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class OHEMSampler(BaseSampler):
- r"""Online Hard Example Mining Sampler described in `Training Region-based
- Object Detectors with Online Hard Example Mining
- `_.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- context,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- loss_key='loss_cls',
- **kwargs):
- super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub,
- add_gt_as_proposals)
- self.context = context
- if not hasattr(self.context, 'num_stages'):
- self.bbox_head = self.context.bbox_head
- else:
- self.bbox_head = self.context.bbox_head[self.context.current_stage]
-
- self.loss_key = loss_key
-
- def hard_mining(self, inds, num_expected, bboxes, labels, feats):
- with torch.no_grad():
- rois = bbox2roi([bboxes])
- if not hasattr(self.context, 'num_stages'):
- bbox_results = self.context._bbox_forward(feats, rois)
- else:
- bbox_results = self.context._bbox_forward(
- self.context.current_stage, feats, rois)
- cls_score = bbox_results['cls_score']
- loss = self.bbox_head.loss(
- cls_score=cls_score,
- bbox_pred=None,
- rois=rois,
- labels=labels,
- label_weights=cls_score.new_ones(cls_score.size(0)),
- bbox_targets=None,
- bbox_weights=None,
- reduction_override='none')[self.loss_key]
- _, topk_loss_inds = loss.topk(num_expected)
- return inds[topk_loss_inds]
-
- def _sample_pos(self,
- assign_result,
- num_expected,
- bboxes=None,
- feats=None,
- **kwargs):
- """Sample positive boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): Assigned results
- num_expected (int): Number of expected positive samples
- bboxes (torch.Tensor, optional): Boxes. Defaults to None.
- feats (list[torch.Tensor], optional): Multi-level features.
- Defaults to None.
-
- Returns:
- torch.Tensor: Indices of positive samples
- """
- # Sample some hard positive samples
- pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False)
- if pos_inds.numel() != 0:
- pos_inds = pos_inds.squeeze(1)
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds],
- assign_result.labels[pos_inds], feats)
-
- def _sample_neg(self,
- assign_result,
- num_expected,
- bboxes=None,
- feats=None,
- **kwargs):
- """Sample negative boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): Assigned results
- num_expected (int): Number of expected negative samples
- bboxes (torch.Tensor, optional): Boxes. Defaults to None.
- feats (list[torch.Tensor], optional): Multi-level features.
- Defaults to None.
-
- Returns:
- torch.Tensor: Indices of negative samples
- """
- # Sample some hard negative samples
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- neg_labels = assign_result.labels.new_empty(
- neg_inds.size(0)).fill_(self.bbox_head.num_classes)
- return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds],
- neg_labels, feats)
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/misc.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/misc.py
deleted file mode 100644
index 2017cbb94660c919a99e522393e83b42b27e46fe..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/misc.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import glob
-import os
-import os.path as osp
-import warnings
-
-import mmcv
-import torch
-from mmcv.utils import TORCH_VERSION, digit_version, print_log
-
-
-def find_latest_checkpoint(path, suffix='pth'):
- """Find the latest checkpoint from the working directory.
-
- Args:
- path(str): The path to find checkpoints.
- suffix(str): File extension.
- Defaults to pth.
-
- Returns:
- latest_path(str | None): File path of the latest checkpoint.
- References:
- .. [1] https://github.com/microsoft/SoftTeacher
- /blob/main/ssod/utils/patch.py
- """
- if not osp.exists(path):
- warnings.warn('The path of checkpoints does not exist.')
- return None
- if osp.exists(osp.join(path, f'latest.{suffix}')):
- return osp.join(path, f'latest.{suffix}')
-
- checkpoints = glob.glob(osp.join(path, f'*.{suffix}'))
- if len(checkpoints) == 0:
- warnings.warn('There are no checkpoints in the path.')
- return None
- latest = -1
- latest_path = None
- for checkpoint in checkpoints:
- count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0])
- if count > latest:
- latest = count
- latest_path = checkpoint
- return latest_path
-
-
-def update_data_root(cfg, logger=None):
- """Update data root according to env MMDET_DATASETS.
-
- If set env MMDET_DATASETS, update cfg.data_root according to
- MMDET_DATASETS. Otherwise, using cfg.data_root as default.
-
- Args:
- cfg (mmcv.Config): The model config need to modify
- logger (logging.Logger | str | None): the way to print msg
- """
- assert isinstance(cfg, mmcv.Config), \
- f'cfg got wrong type: {type(cfg)}, expected mmcv.Config'
-
- if 'MMDET_DATASETS' in os.environ:
- dst_root = os.environ['MMDET_DATASETS']
- print_log(f'MMDET_DATASETS has been set to be {dst_root}.'
- f'Using {dst_root} as data root.')
- else:
- return
-
- assert isinstance(cfg, mmcv.Config), \
- f'cfg got wrong type: {type(cfg)}, expected mmcv.Config'
-
- def update(cfg, src_str, dst_str):
- for k, v in cfg.items():
- if isinstance(v, mmcv.ConfigDict):
- update(cfg[k], src_str, dst_str)
- if isinstance(v, str) and src_str in v:
- cfg[k] = v.replace(src_str, dst_str)
-
- update(cfg.data, cfg.data_root, dst_root)
- cfg.data_root = dst_root
-
-
-_torch_version_div_indexing = (
- 'parrots' not in TORCH_VERSION
- and digit_version(TORCH_VERSION) >= digit_version('1.8'))
-
-
-def floordiv(dividend, divisor, rounding_mode='trunc'):
- if _torch_version_div_indexing:
- return torch.div(dividend, divisor, rounding_mode=rounding_mode)
- else:
- return dividend // divisor
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/version.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/version.py
deleted file mode 100644
index fecd645024d90770d008d94fe62c532189a5f6b2..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/version.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-__version__ = '2.28.2'
-short_version = __version__
-
-
-def parse_version_info(version_str):
- version_info = []
- for x in version_str.split('.'):
- if x.isdigit():
- version_info.append(int(x))
- elif x.find('rc') != -1:
- patch_version = x.split('rc')
- version_info.append(int(patch_version[0]))
- version_info.append(f'rc{patch_version[1]}')
- return tuple(version_info)
-
-
-version_info = parse_version_info(__version__)
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/_base_/datasets/coco_instance.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/_base_/datasets/coco_instance.py
deleted file mode 100644
index 9901a858414465d19d8ec6ced316b460166176b4..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/_base_/datasets/coco_instance.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# dataset settings
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_train2017.json',
- img_prefix=data_root + 'train2017/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline))
-evaluation = dict(metric=['bbox', 'segm'])
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Filesflash Premium Account Username And Password Get Unlimited Access to Files.md b/spaces/rorallitri/biomedical-language-models/logs/Filesflash Premium Account Username And Password Get Unlimited Access to Files.md
deleted file mode 100644
index dc909c9cd3fab4e707b2bd5c0f2b2be630f1d4b5..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Filesflash Premium Account Username And Password Get Unlimited Access to Files.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-for the western world is straightforward, but it’s also a great way to explore the world. Building a character in this game is simple and straightforward, but the amount of customization available is astounding. Focusing on seven major traits, you can add the rest using color, face and hair options.
-
-You can use the new unified attribute system to reflect your character’s personality, giving you a voice in what you are. This not only helps you with the story-driven character arcs, but also gives you a huge amount of ways to interact with NPCs. Crafting, cooking and fishing are some of the most useful, and can often be done by non-combat characters.
-
-A big change in Fallout: New Vegas was the addition of romance options for both male and female companions. They are not as important as the other attributes, but they add a lot of fun and an extra option when it comes to choosing a companion. Your companions level up, allowing you to improve their skills and attributes, as well as improve the build of your companions in-game.
-
-The biggest issue with Fallout: New Vegas is the questing, which is often difficult to achieve. The quests are designed to be very linear, often leading you to areas where other quests are being conducted. This means you often have to backtrack to complete quests you’ve missed, or re-visit areas to gain back quest givers. The biggest issue with this is that it’s incredibly repetitive. After a while, you will have completed several quests for individual NPCs, but it’s impossible to avoid.
-
-Enemies are a little more common than in previous Fallout titles, but they are also a bit easier to defeat. Most of the challenges are in building or conserving your health, rather than fighting with your fists. It’s a change that Bethesda is hoping will help move Fallout: New Vegas away from the post-apocalyptic role-playing game genre. A lot of RPG players are still nostalgic for the turn-based combat of titles like Final Fantasy Tactics and Dragon Quest VII, and prefer the combat of games like Dragon Age: Origins. If this is you, then this change in combat may turn you off.
-
-It seems Bethesda are still struggling with the RPG-lite genre of the Fallout games, as they have added a fairly traditional leveling up system in New Vegas. Enemies have increased in number and complexity, but the XP system is a little easier to use. New Vegas 4fefd39f24
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Kingdom Hearts 1 Final Mix Ita BEST Download Ps2.md b/spaces/rorallitri/biomedical-language-models/logs/Kingdom Hearts 1 Final Mix Ita BEST Download Ps2.md
deleted file mode 100644
index a0c311190f92e5914f43eee3fca2b58a2d191775..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Kingdom Hearts 1 Final Mix Ita BEST Download Ps2.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Kingdom Hearts: Birth by Sleep Final Mix is one of the most popular action games in the series. A Kingdom Hearts game, at its core, is about running around and beating the crap out of amorphous blob enemies in stylish ways. For the game to work, the single most critical element is that the combat has to be fun. And in Birth by Sleep, it is *fun* with a capital F-U-N. The simple hack-and-slash is a pretty good formula to start with, but in a long game, you need to continuously be mixing things up to keep the combat fresh and exciting.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/model_list.py b/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/model_list.py
deleted file mode 100644
index 119a27df498e76f5270bdf30da501730837a212d..0000000000000000000000000000000000000000
--- a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/model_list.py
+++ /dev/null
@@ -1,48 +0,0 @@
-stable_model_list = [
- "runwayml/stable-diffusion-v1-5",
- "stabilityai/stable-diffusion-2-1",
- "prompthero/openjourney-v4",
- "wavymulder/Analog-Diffusion",
- "dreamlike-art/dreamlike-diffusion-1.0",
- "gsdf/Counterfeit-V2.5",
- "dreamlike-art/dreamlike-photoreal-2.0"
-
-
-]
-
-controlnet_canny_model_list = [
- "lllyasviel/sd-controlnet-canny",
- "thibaud/controlnet-sd21-canny-diffusers",
-]
-
-controlnet_depth_model_list = [
- "lllyasviel/sd-controlnet-depth",
- "thibaud/controlnet-sd21-depth-diffusers",
-]
-
-controlnet_pose_model_list = [
- "lllyasviel/sd-controlnet-openpose",
- "thibaud/controlnet-sd21-openpose-diffusers",
-]
-
-controlnet_hed_model_list = [
- "lllyasviel/sd-controlnet-hed",
- "thibaud/controlnet-sd21-hed-diffusers",
-]
-
-controlnet_scribble_model_list = [
- "lllyasviel/sd-controlnet-scribble",
- "thibaud/controlnet-sd21-scribble-diffusers",
-]
-stable_inpiant_model_list = [
- "stabilityai/stable-diffusion-2-inpainting",
- "runwayml/stable-diffusion-inpainting",
-]
-
-controlnet_mlsd_model_list = [
- "lllyasviel/sd-controlnet-mlsd",
-]
-
-controlnet_seg_model_list = [
- "lllyasviel/sd-controlnet-seg",
-]
diff --git a/spaces/scedlatioru/img-to-music/Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA.md b/spaces/scedlatioru/img-to-music/Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA.md
deleted file mode 100644
index 7bd1c88fd1162359d916a280376dc309ccea32bc..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA.md
+++ /dev/null
@@ -1,142 +0,0 @@
-## Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA
-
-
-
-
-
- 
-
-
-
-
-
-**LINK 🆓 [https://urlca.com/2txvPn](https://urlca.com/2txvPn)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA: What's New and How to Download
-
-
-
-Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA is the latest patch for the action RPG game that adds new features, improvements and fixes. Here is everything you need to know about this update and how to download it.
-
-
-
-## What is Titan Quest Anniversary Edition Atlantis?
-
-
-
-Titan Quest Anniversary Edition Atlantis is an expansion for the classic game Titan Quest Anniversary Edition, which is a remastered version of the original Titan Quest and its expansion Immortal Throne. The expansion adds a new story campaign that takes you on a quest to find the mythical kingdom of Atlantis, as well as a new endless mode, new skills, new items and more.
-
-
-
-## What is Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA?
-
-
-
-Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA is the latest patch for the game that was released on April 16, 2023. The patch fixes some bugs, improves performance and stability, and adds some new features. Some of the highlights of the patch are:
-
-
-
-- A new in-game commentary for the soundtrack featuring voice actors from the game and rock band Aerosmith
-
-- A new casino merchant that lets you spend your excess money on randomly generated loot
-
-- A new quick cast option that lets you cast spells faster
-
-- A new color grading option that enhances the graphics
-
-- Various balance changes and bug fixes
-
-
-
-## How to Download Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA?
-
-
-
-To download Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA, you need to have the base game Titan Quest Anniversary Edition and the expansion Atlantis installed on your PC. You also need to have the previous patches up to v2.8 installed. You can download the patch from various sources online. The patch size is about 441 MB. To install the patch, follow these steps:
-
-
-
-1. Extract the release
-
-2. Run setup.exe
-
-3. Install the update
-
-4. Copy the crack from the PLAZA folder
-
-5. Play!
-
-
-
-Titan Quest Anniversary Edition Atlantis Update V2 9-PLAZA is a great update for fans of the game who want to enjoy more content and better performance. If you are looking for a classic action RPG with a rich mythology and a lot of replay value, you should give Titan Quest Anniversary Edition Atlantis a try.
-
-
-
-## What are the Pros and Cons of Titan Quest Anniversary Edition Atlantis?
-
-
-
-Titan Quest Anniversary Edition Atlantis is not a perfect expansion, and it has its share of pros and cons. Here are some of the main ones:
-
-
-
-### Pros
-
-
-
-- The new story campaign is well-written and has some interesting twists and surprises
-
-- The new endless mode is a fun and challenging way to test your skills and gear
-
-- The new skills and items add more variety and customization to your character build
-
-- The new graphical options make the game look more modern and vibrant
-
-- The new soundtrack commentary and casino merchant add some humor and personality to the game
-
-
-
-### Cons
-
-
-
-- The new story campaign is too short and easy compared to the previous ones
-
-- The new endless mode is too repetitive and grindy after a while
-
-- The new skills and items are not well-balanced and some of them are overpowered or useless
-
-- The new graphical options can cause performance issues and glitches on some systems
-
-- The new soundtrack commentary and casino merchant can be annoying and distracting at times
-
-
-
-## Is Titan Quest Anniversary Edition Atlantis Worth It?
-
-
-
-Titan Quest Anniversary Edition Atlantis is a mixed bag of an expansion. It has some good ideas and features, but it also has some flaws and shortcomings. It is not as good as the previous expansion Ragnarok, which added a whole new act, a new mastery, a higher level cap, and more. Atlantis feels more like a side quest than a main quest, and it does not add much to the core gameplay or the overall experience.
-
-
-
-However, that does not mean that Atlantis is a bad expansion. It still offers some enjoyable content and enhancements for fans of the game who want more of it. It also has a reasonable price tag of $14.99, which is not too expensive for what it offers. If you love Titan Quest Anniversary Edition and you want to explore a new setting, try a new mode, or experiment with new skills and items, you might find Atlantis worth your time and money. But if you are looking for a substantial improvement or a fresh challenge, you might be disappointed by Atlantis.
-
- 1b8d091108
-
-
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Baghban 2015 Full Movie Download 720p.md b/spaces/scedlatioru/img-to-music/example/Baghban 2015 Full Movie Download 720p.md
deleted file mode 100644
index 8e222b013e5584c08147a3b523ff0c1bb1e5aabf..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Baghban 2015 Full Movie Download 720p.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-Rascals Full Bollywood Hindi Movie (2015) 720p. Hazaaron Khwaishein Aisi Full Bollywood Hindi Movie (2015) 720p. Mera Pyaar Karega Full Bollywood Hindi Movie (2015) 720p. Mere Khayal Ramaanayak (2015) Full Bollywood Hindi Movie 720p. Hawaizaada Full Bollywood Hindi Movie (2015) 720p. Bollywood movies in Hindi. 50:07. When a mentally ill man causes problems for his relatives, he ends up involved in a crime; the director Joshi has already. This movie has been released under the [India] film category on [October 26] at [City] [Country]. It will cost you $4.
-
-Hazaaron Khwaishein Aisi Full Bollywood Hindi Movie (2015) 720p
-
-Bollywood movies in Hindi. 50:07. Download Mer 4fefd39f24
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Ishaqzaade Full Movie 720p Free Download.md b/spaces/scedlatioru/img-to-music/example/Ishaqzaade Full Movie 720p Free Download.md
deleted file mode 100644
index cad4d6ec4d4259cc5fefb272418c83f4c785bbff..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Ishaqzaade Full Movie 720p Free Download.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
Ishaqzaade Full Movie 720p Free Download: A Forbidden Love Story
-
Ishaqzaade is a 2012 Bollywood movie that tells the story of a Hindu man and a Muslim woman who fall in love despite their families' political rivalry. The movie stars Arjun Kapoor and Parineeti Chopra as the lead pair, and Gauhar Khan as a supporting character. The movie was directed by Habib Faisal and produced by Yash Raj Films.
The movie is set in the town of Almore, where the Qureshis and the Chauhans are competing for the upcoming MLA election. Zoya Qureshi is a fiery and fearless daughter of the Qureshi leader, who campaigns for her father's victory. Parma Chauhan is a reckless and rebellious grandson of the Chauhan leader, who will do anything to help his grandfather win. The two young enemies use guns and insults to fight each other. However, Parma is attracted to Zoya's beauty and courage, and Zoya is intrigued by Parma's charm and audacity. As the election approaches, they secretly meet and their hatred ignites a passionate romance.
-
But their love story is not an easy one. They have to face the wrath of their families, their communities, and their own conscience. They have to deal with the consequences of their actions, and the price they have to pay for their love. They have to fight for their right to be together, against all odds.
-
Ishaqzaade is a movie that explores the themes of honor killings, communal violence, and interfaith relationships. It is a movie that challenges the stereotypes and prejudices that divide people on the basis of religion and caste. It is a movie that celebrates the power of love over hate.
-
If you want to watch this movie in high quality, you can download it for free from Ocean of Movies[^1^]. This website offers you a direct link to download Ishaqzaade in 720p resolution, with fast downloading speed. You can also find other movies in different genres and languages on this website.
-
-
So don't wait any longer. Download Ishaqzaade full movie 720p free from Ocean of Movies[^1^] and enjoy this thrilling and romantic movie with your loved ones.
-
-
Ishaqzaade is a movie that received critical acclaim and commercial success. It was praised for its realistic portrayal of the social issues and the chemistry of the lead actors. It was nominated for several awards, including the Filmfare Award for Best Debut Male for Arjun Kapoor and the Filmfare Award for Best Actress for Parineeti Chopra. It was also one of the highest-grossing movies of 2012.
-
Ishaqzaade is a movie that will make you laugh, cry, and feel. It will make you question your beliefs and values. It will make you root for the lovers who dare to defy the norms. It will make you witness the tragedy and triumph of their love.
-
Ishaqzaade is a movie that you should not miss. It is a movie that will stay with you long after it ends. It is a movie that will touch your heart and soul.
-
-
If you are wondering where to watch Ishaqzaade full movie 720p free, you can find it on Ocean of Movies. This website is a one-stop destination for all your movie needs. You can download movies in various formats and resolutions, from 300 MB to 1080p. You can also browse movies by genre, year, actor, and language. You can find Bollywood, Hollywood, Hindi dubbed, Telugu, Tamil, Punjabi, and other movies on this website.
-
Ocean of Movies is a safe and reliable website that offers you free and fast downloads. You don't have to worry about viruses, malware, or pop-ups. You don't have to register or sign up to access the movies. You just have to click on the download link and enjoy the movie.
-
So what are you waiting for? Download Ishaqzaade full movie 720p free from Ocean of Movies and watch this amazing movie with your friends and family. You will not regret it.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/scedlatioru/img-to-music/example/Redsn0w Win 0.9.10b8b .rarl [BEST].md b/spaces/scedlatioru/img-to-music/example/Redsn0w Win 0.9.10b8b .rarl [BEST].md
deleted file mode 100644
index c432b4b6de7ebae14e41acc375b22ea21f6fe776..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Redsn0w Win 0.9.10b8b .rarl [BEST].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Taboo is one thing, realism is another. And there should be more options. Unfortunately if you want polygamy in his version, you have to turn on ... 4d29de3e1b
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Whole Tomato Visual Assist X 10.9.2258.5.md b/spaces/scedlatioru/img-to-music/example/Whole Tomato Visual Assist X 10.9.2258.5.md
deleted file mode 100644
index c0641b2c1b734566f79fab7a4edddca24755d1a4..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Whole Tomato Visual Assist X 10.9.2258.5.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-Oct 8, 2020 - Visual Assist X dramatically reduces application development time with key new features and enhancements to existing features in Visual ... News | Microsoft Visual Studio
-Oct 10, 2019 - Microsoft Visual Studio 2020 offers a new level of support for .NET Core, Windows Server, Azure, Cassandra, Kafka, Data Stash, ...
-Project Management Magazine
-Mar 29, 2019 - Visual Studio Online. ...
-Visual Studio.
-Visual Studio Tools for Office.
-Visual Studio Tools for .NET Core.
-Visual Studio Tools for ...
-Microsoft Visual Studio Community ...
-Microsoft Visual Studio Ultimate 2019 for .NET Core.
- Microsoft Visual Basic 2019 - ...
-Microsoft Visual C# 2019 - ...
-Microsoft Visual C++ 2019 - ...
-Microsoft Visual Studio 2019 for Mac
-Microsoft Visual Studio Professional 2019 ...
-Microsoft Visual Basic 2019 - ...
-Visual Studio for Mac 2019
-Visual Studio Community 2019 - ...
-Visual Studio Ultimate 2019 for .NET Core.
-Visual Studio Code 2019 ... 8a78ff9644
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/[DVDRIP Xvid] Eternal Sunshine Of Spotless Mind [VOSTFR].md b/spaces/scedlatioru/img-to-music/example/[DVDRIP Xvid] Eternal Sunshine Of Spotless Mind [VOSTFR].md
deleted file mode 100644
index 51f2dfd79b6e950339ed9615db16ea4db0dec4e9..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/[DVDRIP Xvid] Eternal Sunshine Of Spotless Mind [VOSTFR].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
[DVDRIP Xvid] Eternal Sunshine Of Spotless Mind [VOSTFR]
""")
- gr.Markdown(description)
-
-if __name__ == '__main__':
- demo.queue(concurrency_count=3).launch(height=2500)
diff --git a/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/modeling/layers/misc.py b/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/modeling/layers/misc.py
deleted file mode 100644
index a8cf1c680c06b57412bfdf7a1c4a9c53f4acdbbd..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/modeling/layers/misc.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-"""
-helper class that supports empty tensors on some nn functions.
-
-Ideally, add support directly in PyTorch to empty tensors in
-those functions.
-
-This can be removed once https://github.com/pytorch/pytorch/issues/12013
-is implemented
-"""
-
-import math
-import torch
-from torch.nn.modules.utils import _ntuple
-
-
-class _NewEmptyTensorOp(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x, new_shape):
- ctx.shape = x.shape
- return x.new_empty(new_shape)
-
- @staticmethod
- def backward(ctx, grad):
- shape = ctx.shape
- return _NewEmptyTensorOp.apply(grad, shape), None
-
-
-class Conv2d(torch.nn.Conv2d):
- def forward(self, x):
- if x.numel() > 0:
- return super(Conv2d, self).forward(x)
- # get output shape
-
- output_shape = [
- (i + 2 * p - (di * (k - 1) + 1)) // d + 1
- for i, p, di, k, d in zip(
- x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride
- )
- ]
- output_shape = [x.shape[0], self.weight.shape[0]] + output_shape
- return _NewEmptyTensorOp.apply(x, output_shape)
-
-
-class ConvTranspose2d(torch.nn.ConvTranspose2d):
- def forward(self, x):
- if x.numel() > 0:
- return super(ConvTranspose2d, self).forward(x)
- # get output shape
-
- output_shape = [
- (i - 1) * d - 2 * p + (di * (k - 1) + 1) + op
- for i, p, di, k, d, op in zip(
- x.shape[-2:],
- self.padding,
- self.dilation,
- self.kernel_size,
- self.stride,
- self.output_padding,
- )
- ]
- output_shape = [x.shape[0], self.bias.shape[0]] + output_shape
- return _NewEmptyTensorOp.apply(x, output_shape)
-
-
-class BatchNorm2d(torch.nn.BatchNorm2d):
- def forward(self, x):
- if x.numel() > 0:
- return super(BatchNorm2d, self).forward(x)
- # get output shape
- output_shape = x.shape
- return _NewEmptyTensorOp.apply(x, output_shape)
-
-
-def interpolate(
- input, size=None, scale_factor=None, mode="nearest", align_corners=None
-):
- if input.numel() > 0:
- return torch.nn.functional.interpolate(
- input, size, scale_factor, mode, align_corners
- )
-
- def _check_size_scale_factor(dim):
- if size is None and scale_factor is None:
- raise ValueError("either size or scale_factor should be defined")
- if size is not None and scale_factor is not None:
- raise ValueError("only one of size or scale_factor should be defined")
- if (
- scale_factor is not None
- and isinstance(scale_factor, tuple)
- and len(scale_factor) != dim
- ):
- raise ValueError(
- "scale_factor shape must match input shape. "
- "Input is {}D, scale_factor size is {}".format(dim, len(scale_factor))
- )
-
- def _output_size(dim):
- _check_size_scale_factor(dim)
- if size is not None:
- return size
- scale_factors = _ntuple(dim)(scale_factor)
- # math.floor might return float in py2.7
- return [
- int(math.floor(input.size(i + 2) * scale_factors[i])) for i in range(dim)
- ]
-
- output_shape = tuple(_output_size(2))
- output_shape = input.shape[:-2] + output_shape
- return _NewEmptyTensorOp.apply(input, output_shape)
diff --git a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/metrics/perceptual_path_length.py b/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/metrics/perceptual_path_length.py
deleted file mode 100644
index 17271cfdf1545a26ab71d309ce2180532f513bd6..0000000000000000000000000000000000000000
--- a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/metrics/perceptual_path_length.py
+++ /dev/null
@@ -1,108 +0,0 @@
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# This work is licensed under the Creative Commons Attribution-NonCommercial
-# 4.0 International License. To view a copy of this license, visit
-# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
-# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
-
-"""Perceptual Path Length (PPL)."""
-
-import numpy as np
-import tensorflow as tf
-import dnnlib.tflib as tflib
-
-from metrics import metric_base
-from training import misc
-
-#----------------------------------------------------------------------------
-
-# Normalize batch of vectors.
-def normalize(v):
- return v / tf.sqrt(tf.reduce_sum(tf.square(v), axis=-1, keepdims=True))
-
-# Spherical interpolation of a batch of vectors.
-def slerp(a, b, t):
- a = normalize(a)
- b = normalize(b)
- d = tf.reduce_sum(a * b, axis=-1, keepdims=True)
- p = t * tf.math.acos(d)
- c = normalize(b - d * a)
- d = a * tf.math.cos(p) + c * tf.math.sin(p)
- return normalize(d)
-
-#----------------------------------------------------------------------------
-
-class PPL(metric_base.MetricBase):
- def __init__(self, num_samples, epsilon, space, sampling, minibatch_per_gpu, **kwargs):
- assert space in ['z', 'w']
- assert sampling in ['full', 'end']
- super().__init__(**kwargs)
- self.num_samples = num_samples
- self.epsilon = epsilon
- self.space = space
- self.sampling = sampling
- self.minibatch_per_gpu = minibatch_per_gpu
-
- def _evaluate(self, Gs, num_gpus):
- minibatch_size = num_gpus * self.minibatch_per_gpu
-
- # Construct TensorFlow graph.
- distance_expr = []
- for gpu_idx in range(num_gpus):
- with tf.device('/gpu:%d' % gpu_idx):
- Gs_clone = Gs.clone()
- noise_vars = [var for name, var in Gs_clone.components.synthesis.vars.items() if name.startswith('noise')]
-
- # Generate random latents and interpolation t-values.
- lat_t01 = tf.random_normal([self.minibatch_per_gpu * 2] + Gs_clone.input_shape[1:])
- lerp_t = tf.random_uniform([self.minibatch_per_gpu], 0.0, 1.0 if self.sampling == 'full' else 0.0)
-
- # Interpolate in W or Z.
- if self.space == 'w':
- dlat_t01 = Gs_clone.components.mapping.get_output_for(lat_t01, None, is_validation=True)
- dlat_t0, dlat_t1 = dlat_t01[0::2], dlat_t01[1::2]
- dlat_e0 = tflib.lerp(dlat_t0, dlat_t1, lerp_t[:, np.newaxis, np.newaxis])
- dlat_e1 = tflib.lerp(dlat_t0, dlat_t1, lerp_t[:, np.newaxis, np.newaxis] + self.epsilon)
- dlat_e01 = tf.reshape(tf.stack([dlat_e0, dlat_e1], axis=1), dlat_t01.shape)
- else: # space == 'z'
- lat_t0, lat_t1 = lat_t01[0::2], lat_t01[1::2]
- lat_e0 = slerp(lat_t0, lat_t1, lerp_t[:, np.newaxis])
- lat_e1 = slerp(lat_t0, lat_t1, lerp_t[:, np.newaxis] + self.epsilon)
- lat_e01 = tf.reshape(tf.stack([lat_e0, lat_e1], axis=1), lat_t01.shape)
- dlat_e01 = Gs_clone.components.mapping.get_output_for(lat_e01, None, is_validation=True)
-
- # Synthesize images.
- with tf.control_dependencies([var.initializer for var in noise_vars]): # use same noise inputs for the entire minibatch
- images = Gs_clone.components.synthesis.get_output_for(dlat_e01, is_validation=True, randomize_noise=False)
-
- # Crop only the face region.
- c = int(images.shape[2] // 8)
- images = images[:, :, c*3 : c*7, c*2 : c*6]
-
- # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images.
- if images.shape[2] > 256:
- factor = images.shape[2] // 256
- images = tf.reshape(images, [-1, images.shape[1], images.shape[2] // factor, factor, images.shape[3] // factor, factor])
- images = tf.reduce_mean(images, axis=[3,5])
-
- # Scale dynamic range from [-1,1] to [0,255] for VGG.
- images = (images + 1) * (255 / 2)
-
- # Evaluate perceptual distance.
- img_e0, img_e1 = images[0::2], images[1::2]
- distance_measure = misc.load_pkl('https://drive.google.com/uc?id=1N2-m9qszOeVC9Tq77WxsLnuWwOedQiD2') # vgg16_zhang_perceptual.pkl
- distance_expr.append(distance_measure.get_output_for(img_e0, img_e1) * (1 / self.epsilon**2))
-
- # Sampling loop.
- all_distances = []
- for _ in range(0, self.num_samples, minibatch_size):
- all_distances += tflib.run(distance_expr)
- all_distances = np.concatenate(all_distances, axis=0)
-
- # Reject outliers.
- lo = np.percentile(all_distances, 1, interpolation='lower')
- hi = np.percentile(all_distances, 99, interpolation='higher')
- filtered_distances = np.extract(np.logical_and(lo <= all_distances, all_distances <= hi), all_distances)
- self._report_result(np.mean(filtered_distances))
-
-#----------------------------------------------------------------------------
diff --git a/spaces/silentchen/layout-guidance/README.md b/spaces/silentchen/layout-guidance/README.md
deleted file mode 100644
index 55c2887cbbb4eb3b1cf9cab9b0faba678876dc07..0000000000000000000000000000000000000000
--- a/spaces/silentchen/layout-guidance/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Layout Guidance
-emoji: 🐨
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/tf/shape_helpers.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/tf/shape_helpers.py
deleted file mode 100644
index be2926a63bce7ca5db3effe63d5264620aa1dcf8..0000000000000000000000000000000000000000
--- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/tf/shape_helpers.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Utilities for dealing with shapes of TensorFlow tensors."""
-import tensorflow.compat.v1 as tf
-
-
-def shape_list(x):
- """Return list of dimensions of a tensor, statically where possible.
-
- Like `x.shape.as_list()` but with tensors instead of `None`s.
-
- Args:
- x: A tensor.
- Returns:
- A list with length equal to the rank of the tensor. The n-th element of the
- list is an integer when that dimension is statically known otherwise it is
- the n-th element of `tf.shape(x)`.
- """
- x = tf.convert_to_tensor(x)
-
- # If unknown rank, return dynamic shape
- if x.get_shape().dims is None:
- return tf.shape(x)
-
- static = x.get_shape().as_list()
- shape = tf.shape(x)
-
- ret = []
- for i in range(len(static)):
- dim = static[i]
- if dim is None:
- dim = shape[i]
- ret.append(dim)
- return ret
-
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Big by Young M.A Free MP3 Download and Lyrics.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Big by Young M.A Free MP3 Download and Lyrics.md
deleted file mode 100644
index 849fbd264b8216fc099b266ed0a5aca87d72cb10..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Big by Young M.A Free MP3 Download and Lyrics.md
+++ /dev/null
@@ -1,188 +0,0 @@
-
-
How to Download Young M.A Big MP3
-
If you are a fan of hip-hop music, you might have heard of Young M.A, a talented rapper from Brooklyn, New York. She is known for her catchy songs, witty lyrics, and confident attitude. One of her most popular songs is Big, which was released in 2019. In this article, we will show you how to download Young M.A Big MP3 for free from different sources.
Young M.A is an acronym for Young Me Achieving. She was born as Katorah Marrero on April 3, 1992. She started rapping at the age of nine and released her first mixtape in 2014. She gained fame after her song OOOUUU went viral in 2016. Since then, she has released several singles and projects, such as Herstory in the Making (2019) and Off the Yak (2021). She is also an entrepreneur and philanthropist who founded her own record label and foundation.
-
What is Big?
-
Big is a song by Young M.A that was released on June 28, 2019. It is the lead single from her debut studio album Herstory in the Making. The song is produced by Mike Zombie and features Young M.A rapping about her success, wealth, and lifestyle. The song has a catchy hook that goes "Uh-oh/Big-big-big-big-big-big-big-big/Big-big-big-big-big-big-big-big". The song has over 93 million views on YouTube and peaked at number 73 on the Billboard Hot 100 chart.
-
Why download Big MP3?
-
There are many reasons why you might want to download Big MP3 for free. Here are some of them:
-
-
You can listen to the song offline without any interruptions or ads.
-
You can save data and storage space on your device.
-
You can transfer the song to other devices or platforms.
-
You can create your own playlist or mixtape with the song.
-
You can support your favorite artist by streaming or buying her music later.
-
-
Where to download Big MP3?
-
There are many websites that offer free MP3 downloads of Big MP3, but not all of them are safe, legal, or reliable. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also have low-quality audio, broken links, or misleading ads. To avoid these risks, you should only download Big MP3 from trusted and reputable sources. Here are some of the best ones that we recommend:
-
YouTube
-
YouTube is the most popular video-sharing platform in the world. It has millions of videos, including music videos, live performances, interviews, and more. You can find the official video of Big by Young M.A on her YouTube channel. However, YouTube does not allow you to download videos or audio directly from its website. You need to use a third-party tool or app to do so. Here is how to download Big MP3 from YouTube:
-
How to download from YouTube
-
-
Go to the YouTube website or app and search for Big by Young M.A.
-
Copy the URL of the video from the address bar or the share button.
-
Go to a YouTube to MP3 converter website or app, such as Y2mate, 4K Video Downloader, or Snappea.
-
Paste the URL of the video into the input box and click on convert or download.
-
Select the MP3 format and the quality that you want.
-
Click on download and save the file to your device.
-
-
Pros and cons of YouTube
-
YouTube has some advantages and disadvantages when it comes to downloading Big MP3. Here are some of them:
-
-
-
-
-
-
| Pros | Cons |
| --- | --- |
| You can find the official video and other versions of Big by Young M.A. | You need to use a third-party tool or app to download MP3 from YouTube. |
| You can choose the quality and format of the MP3 file. | Some YouTube to MP3 converters may have ads, pop-ups, or malware. |
| You can also download other videos or audio from YouTube. | Downloading MP3 from YouTube may violate its terms of service or copyright laws. |
-
-
-
Bazenation
-
Bazenation is a website that provides free downloads of music, videos, albums, mixtapes, and more. It has a large collection of hip-hop, rap, R&B, and other genres of music. You can find Big by Young M.A on Bazenation. Here is how to download Big MP3 from Bazenation:
-
How to download from Bazenation
-
-
Go to the Bazenation website and search for Big by Young M.A.
-
Click on the title of the song or the download button.
-
You will be redirected to another page with a countdown timer and some ads.
-
Wait for the timer to end and click on the download link that appears.
-
You will be redirected again to another page with a captcha and a final download link.
-
Solve the captcha and click on the final download link.
-
Save the file to your device.
-
-
Pros and cons of Bazenation
-
Bazenation has some advantages and disadvantages when it comes to downloading Big MP3. Here are some of them:
-
-
-
-
-
-
-
| Pros | Cons |
| --- | --- |
| You can find Big by Young M.A and other songs by her on Bazenation. | You have to go through multiple pages, ads, and captcha to download MP3 from Bazenation. |
| You can also find other music, videos, albums, mixtapes, and more on Bazenation. | Some of the links or files on Bazenation may be broken, corrupted, or infected. |
| You can download MP3 files directly from Bazenation without using a third-party tool or app. | Downloading MP3 from Bazenation may be illegal or unethical depending on the source and license of the music. |
-
-
Waploaded
-
Waploaded is another website that offers free downloads of music, videos, movies, TV shows, news, and more. It has a variety of content from different countries, languages, and genres. You can find Big by Young M.A on Waploaded. Here is how to download Big MP3 from Waploaded:
-
How to download from Waploaded
-
-
Go to the Waploaded website and search for Big by Young M.A.
-
Click on the title of the song or the download button.
-
You will be taken to a page with the song details, such as the artist, genre, duration, size, and quality.
-
Scroll down and click on the download link that matches your preference.
-
You will be asked to complete a short survey or offer to unlock the download link.
-
After completing the survey or offer, you will get the download link.
-
Click on the download link and save the file to your device.
-
-
Pros and cons of Waploaded
-
Waploaded has some advantages and disadvantages when it comes to downloading Big MP3. Here are some of them:
-
-
-
-
-
-
| Pros | Cons |
| --- | --- |
| You can find Big by Young M.A and other songs by her on Waploaded. | You have to complete a survey or offer to get the download link from Waploaded. |
| You can also find other music, videos, movies, TV shows, news, and more on Waploaded. | Some of the surveys or offers on Waploaded may be spammy, scammy, or risky. |
| You can download MP3 files directly from Waploaded without using a third-party tool or app. | Downloading MP3 from Waploaded may be illegal or unethical depending on the source and license of the music. |
-
-
Conclusion
-
In conclusion, Big by Young M.A is a great song that you can enjoy listening to anytime and anywhere. However, if you want to download Big MP3 for free, you need to be careful about the source and the method that you use. We have shown you three of the best websites that you can use to download Big MP3 safely and easily: YouTube, Bazenation, and Waploaded. Each of them has its own pros and cons that you should consider before choosing one. We hope that this article has helped you learn how to download Big MP3 for free from different sources. If you have any questions or feedback, please feel free to leave a comment below.
-
Summary
-
Here is a summary of the main points of this article:
-
-
Big by Young M.A is a popular hip-hop song that was released in 2019.
-
You can download Big MP3 for free from different websites, such as YouTube, Bazenation, and Waploaded.
-
You need to use a third-party tool or app to download MP3 from YouTube.
-
You need to go through multiple pages, ads, and captcha to download MP3 from Bazenation.
-
You need to complete a survey or offer to get the download link from Waploaded.
-
You should only download MP3 from trusted and reputable sources.
-
You should respect the rights and interests of the artist and the music industry.
-
-
FAQs
-
Here are some frequently asked questions about downloading Big MP3:
-
-
Q: Is downloading Big MP3 legal?
-
A: It depends on the source and the license of the music. Some websites may have permission or authorization from the artist or the music label to offer free downloads of Big MP3. Some websites may not have such permission or authorization and may be violating the law or infringing on the rights of the artist or the music label. You should always check the terms and conditions of the website before downloading Big MP3.
-
Q: Is downloading Big MP3 safe?
-
A: It depends on the website and the tool that you use. Some websites may have viruses, malware, or spyware that can harm your device or compromise your privacy. Some tools may have ads, pop-ups, or malware that can annoy you or infect your device. You should always use antivirus software and firewall protection on your device before downloading Big MP3. You should also avoid clicking on suspicious links or downloading unknown files.
-
Q: How can I support Young M.A?
-
A: If you like Big by Young M.A and want to support her, you can do so by streaming or buying her music from official platforms, such as Spotify, Apple Music, Amazon Music, T idal, YouTube Music, and more. You can also follow her on social media, such as Instagram, Twitter, Facebook, and TikTok. You can also visit her official website, where you can find her merchandise, tour dates, news, and more.
-
Q: What are some other songs by Young M.A that I can download?
-
A: Young M.A has many other songs that you can download for free from different websites. Some of her most popular songs are OOOUUU, PettyWap, Car Confessions, Stubborn Ass, and Off the Yak. You can also download her mixtapes and albums, such as Herstory in the Making and Off the Yak.
-
Q: How can I download Big MP3 faster?
-
A: There are some tips and tricks that you can use to download Big MP3 faster from different websites. Some of them are:
-
-
Use a fast and stable internet connection.
-
Use a browser that supports fast downloads, such as Chrome, Firefox, or Opera.
-
Use a download manager or accelerator that can boost your download speed, such as IDM, FDM, or EagleGet.
-
Choose a website that has a high-speed server and a low-traffic volume.
-
Choose a file format and quality that is suitable for your device and preference.
-
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Commons IO 2.6 Jar File from a Trusted Mirror Site.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Commons IO 2.6 Jar File from a Trusted Mirror Site.md
deleted file mode 100644
index 67684243165928010cb983eccf6db12203023f7d..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Commons IO 2.6 Jar File from a Trusted Mirror Site.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
How to Download and Use Commons IO 2.6 Jar
-
If you are looking for a library of utilities to assist with developing IO functionality in Java, you might want to check out Commons IO. In this article, we will show you how to download and use the Commons IO 2.6 jar file in your project.
-
What is Commons IO and Why Use It?
-
Commons IO is a library of utilities that provides various classes and methods for working with streams, readers, writers, files, file filters, file comparators, endian transformation classes, and much more. It is part of the Apache Commons project, which aims to provide reusable Java components for common tasks.
Using Commons IO can save you a lot of time and effort when dealing with IO operations in Java. Some of the benefits are:
-
-
You can avoid writing boilerplate code and rely on well-tested code.
-
You can use utility methods that are not available in the standard Java API, such as copying, deleting, moving, comparing, filtering, monitoring files.
-
You can use utility classes that provide additional functionality for streams, readers, writers, files, such as TeeInputStream, TeeOutputStream, LineIterator, FileCleaningTracker, etc.
-
You can use endian classes that allow you to swap the byte order of Java primitives and streams.
-
You can use file filters that implement both FileFilter and FilenameFilter interfaces.
-
You can use comparators that allow you to sort files by name, size, last modified date, etc.
-
You can use functional interfaces that are specific to IO operations.
-
You can use the serialization utilities that allow you to control which classes may be deserialized.
-
-
Alternatives to Commons IO
-
If you are looking for other libraries that provide similar or complementary functionality to Commons IO, you might want to check out these alternatives:
-
-
Google Guava: Guava is a suite of core and expanded libraries that include utility classes for collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and more.
-
Apache Commons Lang: Commons Lang provides a host of helper utilities for the java.lang API, notably String manipulation methods, basic numerical methods, object reflection, concurrency, creation and serialization and System properties.
-
Apache Commons Compress: Commons Compress defines an API for working with compression and archive formats. These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4, Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.
-
Apache Commons VFS: Commons VFS provides a single API for accessing various different file systems. It presents a uniform view of the files from various different sources, such as the files on local disk, on an HTTP server, or inside a Zip archive.
-
-
How to Download Commons IO 2.6 Jar
-
There are several ways to download the Commons IO 2.6 jar file. Here are some of the most common ones:
-
Using a Mirror Site
-
You can download the jar file directly from one of the mirror sites that host the Apache Commons project. You can choose the nearest mirror site to your location for faster download speed. You can also verify the integrity of the downloaded file using the provided checksums and signatures.
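-
For example, on a Linux or macOS machine you could check a download roughly like this (the archive name is illustrative; use the file you actually downloaded, compare the printed hash with the published .sha512 value, and import the Apache Commons KEYS file before verifying the signature):
-
sha512sum commons-io-2.6-bin.tar.gz
gpg --verify commons-io-2.6-bin.tar.gz.asc commons-io-2.6-bin.tar.gz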
-
Using Maven Dependency
-
If you are using Maven as your build tool, you can simply add the following dependency to your pom.xml file:
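-
The standard Maven coordinates for this release are commons-io:commons-io:2.6, so a minimal dependency entry looks like this:
-
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>
-
The same coordinates work in other build tools; Gradle users, for example, can declare implementation 'commons-io:commons-io:2.6' instead.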
-
Maven will automatically download and manage the jar file for you.
-
Using Java2s Site
-
You can also download the jar file from the Java2s site, which provides a collection of Java libraries and resources. You can browse through the categories or search for the library name to find the jar file. You can also view the source code and examples of using the library.
-
How to Use Commons IO 2.6 Jar in Your Project
-
Once you have downloaded the jar file, you can use it in your project by following these steps:
-
Adding the Jar File to the Classpath
-
You need to add the jar file to your classpath so that your Java compiler and runtime can find it. You can do this in different ways depending on your development environment and preferences. For example:
-
-
If you are using an IDE like Eclipse or IntelliJ IDEA, you can right-click on your project and select Properties or Project Structure. Then you can add the jar file as an external library or a module dependency.
-
If you are using a command-line tool like javac or java, you can use the -cp or -classpath option to specify the path to the jar file (a rough command-line sketch follows this list).
-
If you are using a build tool like Maven or Gradle, you can add the jar file as a dependency in your configuration file.
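-
As a rough command-line sketch (MyApp and the jar location are placeholders; on Windows, use ; instead of : as the classpath separator):
-
javac -cp commons-io-2.6.jar MyApp.java
java -cp .:commons-io-2.6.jar MyApp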
-
-
Importing the Relevant Classes
-
Next, you need to import the classes that you want to use from the Commons IO library. You can use either a single import statement for each class or a wildcard import statement for a whole package. For example:
-
// Import a single class
import org.apache.commons.io.FileUtils;
// Import a whole package
import org.apache.commons.io.*;
-
Using the Utility Classes and Methods
-
Finally, you can use the utility classes and methods from the Commons IO library to perform various IO operations in your code. For example:
-
// Copy a file
FileUtils.copyFile(new File("source.txt"), new File("destination.txt"));
// Delete a directory
FileUtils.deleteDirectory(new File("temp"));
// Read a file into a string
String content = FileUtils.readFileToString(new File("data.txt"), "UTF-8");
// Write a string to a file
FileUtils.writeStringToFile(new File("output.txt"), "Hello World", "UTF-8");
// Compare two files by content
boolean equal = FileUtils.contentEquals(new File("file1.txt"), new File("file2.txt"));
// List the files in a directory that match a filter
Collection<File> files = FileUtils.listFiles(new File("docs"), new WildcardFileFilter("*.pdf"), TrueFileFilter.INSTANCE);
// Monitor a directory for changes
FileAlterationObserver observer = new FileAlterationObserver(new File("logs"));
observer.addListener(new FileAlterationListenerAdaptor() {
    @Override
    public void onFileCreate(File file) {
        System.out.println("New file created: " + file.getName());
    }
    @Override
    public void onFileDelete(File file) {
        System.out.println("File deleted: " + file.getName());
    }
});
FileAlterationMonitor monitor = new FileAlterationMonitor(1000);
monitor.addObserver(observer);
monitor.start();
-
These are just some examples of using the Commons IO library. You can find more examples and documentation on the official website.
-
Conclusion
-
In this article, we have learned how to download and use the Commons IO 2.6 jar file in our Java projects. We have seen what Commons IO is, why use it, and what are some of the alternatives. We have also seen how to add the jar file to our classpath, import the relevant classes, and use the utility classes and methods. We hope that this article has helped you to understand and appreciate the power and convenience of Commons IO.
-
FAQs
-
Here are some of the frequently asked questions about Commons IO:
-
-
Q: What is the latest version of Commons IO?
-
A: The latest version of Commons IO is 2.11.0, which was released on June 7, 2021. You can download it from the download page.
-
Q: How can I contribute to Commons IO?
-
A: If you want to contribute to Commons IO, you can check out the contribution guide, which explains how to report issues, submit patches, and join the mailing list.
-
Q: How can I get support for Commons IO?
-
A: If you need support for Commons IO, you can use the user mailing list, where you can ask questions and get answers from other users and developers. You can also browse through the archive of previous messages.
-
Q: How can I learn more about Commons IO?
-
A: If you want to learn more about Commons IO, you can read the user guide, which provides a comprehensive overview of the library and its features. You can also check out the examples, which demonstrate how to use various classes and methods.
-
Q: Is Commons IO compatible with Android?
-
A: Yes, Commons IO is compatible with Android. However, some features may not work as expected due to differences in the Android platform. For example, file monitoring may not work on some devices or versions of Android.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Super Bear Adventure Cheat APK and Unlock All Levels for Free.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Super Bear Adventure Cheat APK and Unlock All Levels for Free.md
deleted file mode 100644
index f4de10a9adcfb312391b6730d2e13173bbc4a0dd..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Super Bear Adventure Cheat APK and Unlock All Levels for Free.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
Super Bear Adventure Cheat APK: How to Unlock All Levels and Skins
-
Do you love playing Super Bear Adventure, but find it hard to complete all the levels and unlock all the skins? If yes, then you might be interested in using a cheat apk that can help you achieve your goals. In this article, we will tell you everything you need to know about Super Bear Adventure cheat apk, including what it is, why you should use it, how to download and install it, and how to use it. Read on to find out more.
Super Bear Adventure is a fun and addictive platformer game that you can play on your Android device. It is developed by EarthKwak Games, a small indie studio that creates games with love and passion. The game has over 10 million downloads and a 4.5-star rating on Google Play Store.
-
A fun and addictive platformer game
-
Super Bear Adventure is a game that will remind you of the classic platformers of the 90s, such as Super Mario Bros, Sonic the Hedgehog, and Donkey Kong. You will control a cute bear named Teddy, who has to explore different worlds, collect coins and gems, fight enemies, solve puzzles, and find secrets. The game has over 60 levels across six different worlds, each with its own theme, music, and boss. You can also customize your bear with various skins and hats that you can buy with coins or gems.
-
The story and the gameplay
-
The game has a simple but engaging story that will keep you hooked. Teddy is a young bear who lives in a peaceful forest with his friends. One day, he finds out that an evil wizard named Crocus has stolen his grandfather's medallion, which is a powerful artifact that can control time. Teddy decides to go on an adventure to retrieve the medallion and stop Crocus from destroying the world. Along the way, he will meet new friends and foes, discover new places, and learn new skills.
-
The gameplay of Super Bear Adventure is easy to learn but hard to master. You will use the virtual buttons on the screen to move, jump, attack, and interact with objects. You will also have a health bar that will decrease if you get hit by enemies or traps. You can restore your health by collecting honey pots or hearts. You will also have a power bar that will fill up as you collect coins and gems. You can use this power bar to activate special abilities, such as flying, shooting fireballs, or freezing enemies.
-
The features and the graphics
-
Super Bear Adventure has many features that make it stand out from other platformer games. Some of these features are:
-
-
-
Achievements and leaderboards: You can unlock achievements by completing various tasks in the game, such as collecting all the coins in a level, defeating a boss without getting hit, or finding all the secrets. You can also compete with other players around the world on the leaderboards by scoring high points in each level.
-
Mini-games: You can play mini-games in between levels to earn extra coins and gems. These mini-games include fishing, whack-a-mole, slot machine, memory game, and more.
-
Cloud save: You can save your progress in the cloud and continue playing on any device.
-
Controller support: You can play the game with a compatible controller if you prefer.
-
-
The graphics of Super Bear Adventure are colorful and charming. The game has a pixel art style that gives it a
nostalgic and retro feel. The game also has smooth animations and sound effects that enhance the gameplay experience. The music is catchy and upbeat, and fits well with the mood of each world.
-
Why use Super Bear Adventure Cheat APK?
-
Super Bear Adventure is a fun and addictive game, but it can also be challenging and frustrating at times. Some levels are very hard to complete, and some skins are very expensive to buy. You might feel like giving up or spending real money to get more coins and gems. But what if there was a way to get unlimited coins and gems, unlock all levels and skins, and enjoy the game without any hassle? That's where Super Bear Adventure cheat apk comes in.
-
The benefits of using the cheat apk
-
Super Bear Adventure cheat apk is a modified version of the original game that gives you access to all the features and content that you normally have to pay for or work hard for. By using the cheat apk, you can:
-
-
Unlock all levels: You can play any level you want, without having to complete the previous ones. You can also skip the boss battles if you find them too hard.
-
Unlock all skins: You can customize your bear with any skin you like, without having to buy them with coins or gems. You can also mix and match different skins and hats to create your own unique look.
-
Get unlimited coins and gems: You can get as many coins and gems as you want, without having to collect them in the game or watch ads. You can use them to buy anything you want in the game, such as power-ups, extra lives, or mini-games.
-
Have more fun: You can enjoy the game without any stress or frustration. You can explore the worlds at your own pace, try different skills and abilities, and discover new secrets. You can also challenge yourself by playing on harder difficulties or trying to get higher scores.
-
-
The risks of using the cheat apk
-
Super Bear Adventure cheat apk might sound too good to be true, but it also comes with some risks that you should be aware of before using it. Some of these risks are:
-
-
Malware: The cheat apk might contain viruses or other malicious software that can harm your device or steal your personal information. You should always download the cheat apk from a trusted source and scan it with an antivirus before installing it.
-
Ban: The cheat apk might violate the terms of service of the game or Google Play Store, and result in your account being banned or suspended. You should always use the cheat apk at your own risk and discretion, and avoid using it online or with other players.
-
Bugs: The cheat apk might not work properly or cause errors or glitches in the game. You should always backup your data before using the cheat apk, and uninstall it if you encounter any problems.
-
Boredom: The cheat apk might make the game too easy or too boring for you. You might lose interest in the game or feel like cheating is not fun anymore. You should always use the cheat apk moderately and responsibly, and switch back to the original game if you want more challenge or variety.
-
-
How to download and install the cheat apk
-
If you decide to use Super Bear Adventure cheat apk, here are the steps you need to follow to download and install it on your device:
-
-
Go to a reliable website that offers Super Bear Adventure cheat apk, such as [APKPure] or [APKHome].
-
Download the latest version of Super Bear Adventure cheat apk on your device.
-
Go to your device settings and enable unknown sources. This will allow you to install apps from sources other than Google Play Store.
-
Locate the downloaded file on your device and tap on it to install it.
-
Wait for the installation to finish and launch the game.
-
-
How to use Super Bear Adventure Cheat APK?
-
Once you have installed Super Bear Adventure cheat apk on your device, you can start using it right away. Here are some tips on how to use it effectively:
-
How to unlock all levels
-
To unlock all levels in Super Bear Adventure cheat apk, you just need to go to the world map and tap on any level you want to play. You don't need to complete the previous levels or meet any requirements. You can also skip the boss battles by tapping on the next world icon.
-
How to unlock all skins
-
To unlock all skins in Super Bear Adventure cheat apk, you just need to go to the shop and tap on any skin you want to buy. You don't need to spend any coins or gems to buy them. You can also mix and match different skins and hats to create your own unique look.
-
How to get unlimited coins and gems
-
To get unlimited coins and gems in Super Bear Adventure cheat apk, use the built-in cheat menu: tap the pause button, then the cheat button, enter the amount of coins and gems you want to add, and tap confirm. You will also keep earning coins and gems the normal way by collecting them in levels, playing mini-games, or watching ads.
-
Conclusion
-
Super Bear Adventure is a fun and addictive platformer that you can play on your Android device, with plenty of features and content to keep you entertained for hours. If you want to unlock all levels and skins, get unlimited coins and gems, and have more fun, you might want to use Super Bear Adventure cheat apk, a modified version of the game that gives you access to everything at once. Be aware of the risks, though, such as malware, bans, bugs, or boredom: use the cheat apk at your own risk and discretion, download it only from a trusted source, back up your data before using it, and uninstall it if you run into any problems.
-
If you are interested in using Super Bear Adventure cheat apk, you can follow the steps we have provided in this article to download and install it on your device. You can also follow our tips on how to use it effectively to unlock all levels and skins, and get unlimited coins and gems. We hope you enjoy playing Super Bear Adventure with the cheat apk, and have a great time with your bear.
-
Call to action
-
If you liked this article, please share it with your friends who also love playing Super Bear Adventure. You can also leave a comment below and tell us what you think about the game and the cheat apk. We would love to hear from you.
-
FAQs
-
Here are some frequently asked questions about Super Bear Adventure cheat apk:
-
-
Q: Is Super Bear Adventure cheat apk safe to use?
-
A: Super Bear Adventure cheat apk is not an official version of the game, and it might contain viruses or other malicious software that can harm your device or steal your personal information. You should always download the cheat apk from a trusted source and scan it with an antivirus before installing it.
-
Q: Will I get banned for using Super Bear Adventure cheat apk?
-
A: Super Bear Adventure cheat apk might violate the terms of service of the game or Google Play Store, and result in your account being banned or suspended. You should always use the cheat apk at your own risk and discretion, and avoid using it online or with other players.
-
Q: How do I update Super Bear Adventure cheat apk?
-
A: Super Bear Adventure cheat apk might not work properly or cause errors or glitches in the game if it is not updated regularly. You should always check for updates on the website where you downloaded the cheat apk, and download and install the latest version when available.
-
Q: Can I use Super Bear Adventure cheat apk on iOS devices?
-
A: Super Bear Adventure cheat apk is only compatible with Android devices. You cannot use it on iOS devices such as iPhones or iPads.
-
Q: Can I use Super Bear Adventure cheat apk with a controller?
-
A: Super Bear Adventure cheat apk supports controller input, just like the original game. You can play the game with a compatible controller if you prefer.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py
deleted file mode 100644
index 6fff4faf44a92d42504559ecea8ec1047d2e5f14..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import csv
-import io
-import logging
-import os
-import os.path as op
-import sys
-
-from dump_hubert_feature import HubertFeatureReader
-from feature_utils import get_shard_range, dump_feature
-from fairseq.data.audio.audio_utils import get_waveform
-from fairseq.data.audio.speech_to_text_dataset import (
- read_from_uncompressed_zip,
-)
-
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("dump_hubert_feature_s2t")
-
-
-class HubertFeatureReaderS2T(HubertFeatureReader):
- def read_audio(self, path, ref_len=None):
- path, *extra = path.split(":")
- assert len(extra) == 2
- assert path.endswith(".zip")
-
- data = read_from_uncompressed_zip(path, int(extra[0]), int(extra[1]))
- f = io.BytesIO(data)
- wav, sr = get_waveform(f)
- assert sr == self.task.cfg.sample_rate, sr
- if wav.ndim == 2:
- wav = wav.mean(-1)
- assert wav.ndim == 1, wav.ndim
- if ref_len is not None and abs(ref_len - len(wav)) > 160:
- logging.warning(f"ref {ref_len} != read {len(wav)} ({path})")
- return wav
-
-
-def get_path_iterator(root, tsv, nshard, rank):
- with open(tsv) as f:
- reader = csv.DictReader(
- f,
- delimiter="\t",
- quotechar=None,
- doublequote=False,
- lineterminator="\n",
- quoting=csv.QUOTE_NONE,
- )
- subpaths = [op.join(root, e["audio"]) for e in reader]
- start, end = get_shard_range(len(subpaths), nshard, rank)
- subpaths = subpaths[start:end]
- def iterate():
- for subpath in subpaths:
- yield op.join(root, subpath), None
- return iterate, len(subpaths)
-
-
-def main(
- root, tsv_path, ckpt_path, layer, nshard, rank, feat_dir, split, max_chunk
-):
- reader = HubertFeatureReaderS2T(ckpt_path, layer, max_chunk)
- generator, num = get_path_iterator(root, tsv_path, nshard, rank)
- dump_feature(reader, generator, num, split, nshard, rank, feat_dir)
-
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("root")
- parser.add_argument("tsv_path")
- parser.add_argument("ckpt_path")
- parser.add_argument("layer", type=int)
- parser.add_argument("nshard", type=int)
- parser.add_argument("rank", type=int)
- parser.add_argument("feat_dir")
- parser.add_argument("split")
- parser.add_argument("--max_chunk", type=int, default=1600000)
- args = parser.parse_args()
- logger.info(args)
-
- main(**vars(args))
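-
-
-# Example invocation (a sketch with hypothetical paths, following the positional
-# argument order defined above: dump layer-9 features for shard 0 of 4):
-#
-#   python dump_hubert_feature_s2t.py /data/mustc /data/mustc/train_st.tsv \
-#       /checkpoints/hubert_base_ls960.pt 9 4 0 /features/train train \
-#       --max_chunk 1600000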
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py
deleted file mode 100644
index 6177239dc75f6937d036462a5a2379aaee202e7d..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py
+++ /dev/null
@@ -1,707 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Run inference for pre-processed data with a trained model.
-"""
-
-import ast
-from collections import namedtuple
-from dataclasses import dataclass, field
-from enum import Enum, auto
-import hydra
-from hydra.core.config_store import ConfigStore
-import logging
-import math
-import os
-from omegaconf import OmegaConf
-from typing import Optional
-import sys
-
-import editdistance
-import torch
-
-from hydra.core.hydra_config import HydraConfig
-
-from fairseq import checkpoint_utils, progress_bar, tasks, utils
-from fairseq.data.data_utils import post_process
-from fairseq.dataclass.configs import FairseqDataclass, FairseqConfig
-from fairseq.logging.meters import StopwatchMeter
-from omegaconf import open_dict
-
-from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoderConfig
-
-logging.root.setLevel(logging.INFO)
-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-class DecoderType(Enum):
- VITERBI = auto()
- KENLM = auto()
- FAIRSEQ = auto()
- KALDI = auto()
-
-
-@dataclass
-class UnsupGenerateConfig(FairseqDataclass):
- fairseq: FairseqConfig = FairseqConfig()
- lm_weight: float = field(
- default=2.0,
- metadata={"help": "language model weight"},
- )
- w2l_decoder: DecoderType = field(
- default=DecoderType.VITERBI,
- metadata={"help": "type of decoder to use"},
- )
- kaldi_decoder_config: Optional[KaldiDecoderConfig] = None
- lexicon: Optional[str] = field(
- default=None,
- metadata={
- "help": "path to lexicon. This is also used to 'phonemize' for unsupvised param tuning"
- },
- )
- lm_model: Optional[str] = field(
- default=None,
- metadata={"help": "path to language model (kenlm or fairseq)"},
- )
- unit_lm: bool = field(
- default=False,
- metadata={"help": "whether to use unit lm"},
- )
- beam_threshold: float = field(
- default=50.0,
- metadata={"help": "beam score threshold"},
- )
- beam_size_token: float = field(
- default=100.0,
- metadata={"help": "max tokens per beam"},
- )
- beam: int = field(
- default=5,
- metadata={"help": "decoder beam size"},
- )
- nbest: int = field(
- default=1,
- metadata={"help": "number of results to return"},
- )
- word_score: float = field(
- default=1.0,
- metadata={"help": "word score to add at end of word"},
- )
- unk_weight: float = field(
- default=-math.inf,
- metadata={"help": "unknown token weight"},
- )
- sil_weight: float = field(
- default=0.0,
- metadata={"help": "silence token weight"},
- )
- targets: Optional[str] = field(
- default=None,
- metadata={"help": "extension of ground truth labels to compute UER"},
- )
- results_path: Optional[str] = field(
- default=None,
- metadata={"help": "where to store results"},
- )
- post_process: Optional[str] = field(
- default=None,
- metadata={"help": "how to post process results"},
- )
- vocab_usage_power: float = field(
- default=2,
- metadata={"help": "for unsupervised param tuning"},
- )
-
- viterbi_transcript: Optional[str] = field(
- default=None,
- metadata={"help": "for unsupervised param tuning"},
- )
- min_lm_ppl: float = field(
- default=0,
- metadata={"help": "for unsupervised param tuning"},
- )
- min_vt_uer: float = field(
- default=0,
- metadata={"help": "for unsupervised param tuning"},
- )
-
- blank_weight: float = field(
- default=0,
- metadata={"help": "value to add or set for blank emission"},
- )
- blank_mode: str = field(
- default="set",
- metadata={
- "help": "can be add or set, how to modify blank emission with blank weight"
- },
- )
- sil_is_blank: bool = field(
- default=False,
- metadata={"help": "if true, token is same as blank token"},
- )
-
- unsupervised_tuning: bool = field(
- default=False,
- metadata={
- "help": "if true, returns a score based on unsupervised param selection metric instead of UER"
- },
- )
- is_ax: bool = field(
- default=False,
- metadata={
- "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume"
- },
- )
-
-
-def get_dataset_itr(cfg, task):
- return task.get_batch_iterator(
- dataset=task.dataset(cfg.fairseq.dataset.gen_subset),
- max_tokens=cfg.fairseq.dataset.max_tokens,
- max_sentences=cfg.fairseq.dataset.batch_size,
- max_positions=(sys.maxsize, sys.maxsize),
- ignore_invalid_inputs=cfg.fairseq.dataset.skip_invalid_size_inputs_valid_test,
- required_batch_size_multiple=cfg.fairseq.dataset.required_batch_size_multiple,
- num_shards=cfg.fairseq.dataset.num_shards,
- shard_id=cfg.fairseq.dataset.shard_id,
- num_workers=cfg.fairseq.dataset.num_workers,
- data_buffer_size=cfg.fairseq.dataset.data_buffer_size,
- ).next_epoch_itr(shuffle=False)
-
-
-def process_predictions(
- cfg: UnsupGenerateConfig,
- hypos,
- tgt_dict,
- target_tokens,
- res_files,
-):
- retval = []
- word_preds = []
- transcriptions = []
- dec_scores = []
-
- for i, hypo in enumerate(hypos[: min(len(hypos), cfg.nbest)]):
- if torch.is_tensor(hypo["tokens"]):
- tokens = hypo["tokens"].int().cpu()
- tokens = tokens[tokens >= tgt_dict.nspecial]
- hyp_pieces = tgt_dict.string(tokens)
- else:
- hyp_pieces = " ".join(hypo["tokens"])
-
- if "words" in hypo and len(hypo["words"]) > 0:
- hyp_words = " ".join(hypo["words"])
- else:
- hyp_words = post_process(hyp_pieces, cfg.post_process)
-
- to_write = {}
- if res_files is not None:
- to_write[res_files["hypo.units"]] = hyp_pieces
- to_write[res_files["hypo.words"]] = hyp_words
-
- tgt_words = ""
- if target_tokens is not None:
- if isinstance(target_tokens, str):
- tgt_pieces = tgt_words = target_tokens
- else:
- tgt_pieces = tgt_dict.string(target_tokens)
- tgt_words = post_process(tgt_pieces, cfg.post_process)
-
- if res_files is not None:
- to_write[res_files["ref.units"]] = tgt_pieces
- to_write[res_files["ref.words"]] = tgt_words
-
- if not cfg.fairseq.common_eval.quiet:
- logger.info(f"HYPO {i}:" + hyp_words)
- if tgt_words:
- logger.info("TARGET:" + tgt_words)
-
- if "am_score" in hypo and "lm_score" in hypo:
- logger.info(
- f"DECODER AM SCORE: {hypo['am_score']}, DECODER LM SCORE: {hypo['lm_score']}, DECODER SCORE: {hypo['score']}"
- )
- elif "score" in hypo:
- logger.info(f"DECODER SCORE: {hypo['score']}")
-
- logger.info("___________________")
-
- hyp_words_arr = hyp_words.split()
- tgt_words_arr = tgt_words.split()
-
- retval.append(
- (
- editdistance.eval(hyp_words_arr, tgt_words_arr),
- len(hyp_words_arr),
- len(tgt_words_arr),
- hyp_pieces,
- hyp_words,
- )
- )
- word_preds.append(hyp_words_arr)
- transcriptions.append(to_write)
- dec_scores.append(-hypo.get("score", 0)) # negate cuz kaldi returns NLL
-
- if len(retval) > 1:
- best = None
- for r, t in zip(retval, transcriptions):
- if best is None or r[0] < best[0][0]:
- best = r, t
- for dest, tran in best[1].items():
- print(tran, file=dest)
- dest.flush()
- return best[0]
-
- assert len(transcriptions) == 1
- for dest, tran in transcriptions[0].items():
- print(tran, file=dest)
-
- return retval[0]
-
-
-def prepare_result_files(cfg: UnsupGenerateConfig):
- def get_res_file(file_prefix):
- if cfg.fairseq.dataset.num_shards > 1:
- file_prefix = f"{cfg.fairseq.dataset.shard_id}_{file_prefix}"
- path = os.path.join(
- cfg.results_path,
- "{}{}.txt".format(
- cfg.fairseq.dataset.gen_subset,
- file_prefix,
- ),
- )
- return open(path, "w", buffering=1)
-
- if not cfg.results_path:
- return None
-
- return {
- "hypo.words": get_res_file(""),
- "hypo.units": get_res_file("_units"),
- "ref.words": get_res_file("_ref"),
- "ref.units": get_res_file("_ref_units"),
- "hypo.nbest.words": get_res_file("_nbest_words"),
- }
-
-
-def optimize_models(cfg: UnsupGenerateConfig, use_cuda, models):
- """Optimize ensemble for generation"""
- for model in models:
- model.eval()
- if cfg.fairseq.common.fp16:
- model.half()
- if use_cuda:
- model.cuda()
-
-
-GenResult = namedtuple(
- "GenResult",
- [
- "count",
- "errs_t",
- "gen_timer",
- "lengths_hyp_unit_t",
- "lengths_hyp_t",
- "lengths_t",
- "lm_score_t",
- "num_feats",
- "num_sentences",
- "num_symbols",
- "vt_err_t",
- "vt_length_t",
- ],
-)
-
-
-def generate(cfg: UnsupGenerateConfig, models, saved_cfg, use_cuda):
- task = tasks.setup_task(cfg.fairseq.task)
- saved_cfg.task.labels = cfg.fairseq.task.labels
- task.load_dataset(cfg.fairseq.dataset.gen_subset, task_cfg=saved_cfg.task)
- # Set dictionary
- tgt_dict = task.target_dictionary
- logger.info(
- "| {} {} {} examples".format(
- cfg.fairseq.task.data,
- cfg.fairseq.dataset.gen_subset,
- len(task.dataset(cfg.fairseq.dataset.gen_subset)),
- )
- )
- # Load dataset (possibly sharded)
- itr = get_dataset_itr(cfg, task)
- # Initialize generator
- gen_timer = StopwatchMeter()
-
- def build_generator(cfg: UnsupGenerateConfig):
- w2l_decoder = cfg.w2l_decoder
- if w2l_decoder == DecoderType.VITERBI:
- from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder
-
- return W2lViterbiDecoder(cfg, task.target_dictionary)
- elif w2l_decoder == DecoderType.KENLM:
- from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder
-
- return W2lKenLMDecoder(cfg, task.target_dictionary)
- elif w2l_decoder == DecoderType.FAIRSEQ:
- from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder
-
- return W2lFairseqLMDecoder(cfg, task.target_dictionary)
- elif w2l_decoder == DecoderType.KALDI:
- from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoder
-
- assert cfg.kaldi_decoder_config is not None
-
- return KaldiDecoder(
- cfg.kaldi_decoder_config,
- cfg.beam,
- )
- else:
- raise NotImplementedError(
- "only wav2letter decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment but found "
- + str(w2l_decoder)
- )
-
- generator = build_generator(cfg)
-
- kenlm = None
- fairseq_lm = None
- if cfg.lm_model is not None:
- import kenlm
-
- kenlm = kenlm.Model(cfg.lm_model)
-
- num_sentences = 0
- if cfg.results_path is not None and not os.path.exists(cfg.results_path):
- os.makedirs(cfg.results_path)
-
- res_files = prepare_result_files(cfg)
- errs_t = 0
- lengths_hyp_t = 0
- lengths_hyp_unit_t = 0
- lengths_t = 0
- count = 0
- num_feats = 0
- all_hyp_pieces = []
- all_hyp_words = []
-
- num_symbols = (
- len([s for s in tgt_dict.symbols if not s.startswith("madeup")])
- - tgt_dict.nspecial
- )
- targets = None
- if cfg.targets is not None:
- tgt_path = os.path.join(
- cfg.fairseq.task.data, cfg.fairseq.dataset.gen_subset + "." + cfg.targets
- )
- if os.path.exists(tgt_path):
- with open(tgt_path, "r") as f:
- targets = f.read().splitlines()
- viterbi_transcript = None
- if cfg.viterbi_transcript is not None and len(cfg.viterbi_transcript) > 0:
- logger.info(f"loading viterbi transcript from {cfg.viterbi_transcript}")
- with open(cfg.viterbi_transcript, "r") as vf:
- viterbi_transcript = vf.readlines()
- viterbi_transcript = [v.rstrip().split() for v in viterbi_transcript]
-
- gen_timer.start()
-
- start = 0
- end = len(itr)
-
- hypo_futures = None
- if cfg.w2l_decoder == DecoderType.KALDI:
- logger.info("Extracting features")
- hypo_futures = []
- samples = []
- with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t:
- for i, sample in enumerate(t):
- if "net_input" not in sample or i < start or i >= end:
- continue
- if "padding_mask" not in sample["net_input"]:
- sample["net_input"]["padding_mask"] = None
-
- hypos, num_feats = gen_hypos(
- generator, models, num_feats, sample, task, use_cuda
- )
- hypo_futures.append(hypos)
- samples.append(sample)
- itr = list(zip(hypo_futures, samples))
- start = 0
- end = len(itr)
- logger.info("Finished extracting features")
-
- with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t:
- for i, sample in enumerate(t):
- if i < start or i >= end:
- continue
-
- if hypo_futures is not None:
- hypos, sample = sample
- hypos = [h.result() for h in hypos]
- else:
- if "net_input" not in sample:
- continue
-
- hypos, num_feats = gen_hypos(
- generator, models, num_feats, sample, task, use_cuda
- )
-
- for i, sample_id in enumerate(sample["id"].tolist()):
- if targets is not None:
- target_tokens = targets[sample_id]
- elif "target" in sample or "target_label" in sample:
- toks = (
- sample["target"][i, :]
- if "target_label" not in sample
- else sample["target_label"][i, :]
- )
-
- target_tokens = utils.strip_pad(toks, tgt_dict.pad()).int().cpu()
- else:
- target_tokens = None
-
- # Process top predictions
- (
- errs,
- length_hyp,
- length,
- hyp_pieces,
- hyp_words,
- ) = process_predictions(
- cfg,
- hypos[i],
- tgt_dict,
- target_tokens,
- res_files,
- )
- errs_t += errs
- lengths_hyp_t += length_hyp
- lengths_hyp_unit_t += (
- len(hyp_pieces) if len(hyp_pieces) > 0 else len(hyp_words)
- )
- lengths_t += length
- count += 1
- all_hyp_pieces.append(hyp_pieces)
- all_hyp_words.append(hyp_words)
-
- num_sentences += (
- sample["nsentences"] if "nsentences" in sample else sample["id"].numel()
- )
-
- lm_score_sum = 0
- if kenlm is not None:
-
- if cfg.unit_lm:
- lm_score_sum = sum(kenlm.score(w) for w in all_hyp_pieces)
- else:
- lm_score_sum = sum(kenlm.score(w) for w in all_hyp_words)
- elif fairseq_lm is not None:
- lm_score_sum = sum(fairseq_lm.score([h.split() for h in all_hyp_words])[0])
-
- vt_err_t = 0
- vt_length_t = 0
- if viterbi_transcript is not None:
- unit_hyps = []
- if cfg.targets is not None and cfg.lexicon is not None:
- lex = {}
- with open(cfg.lexicon, "r") as lf:
- for line in lf:
- items = line.rstrip().split()
- lex[items[0]] = items[1:]
- for h in all_hyp_pieces:
- hyp_ws = []
- for w in h.split():
- assert w in lex, w
- hyp_ws.extend(lex[w])
- unit_hyps.append(hyp_ws)
-
- else:
- unit_hyps.extend([h.split() for h in all_hyp_words])
-
- vt_err_t = sum(
- editdistance.eval(vt, h) for vt, h in zip(viterbi_transcript, unit_hyps)
- )
-
- vt_length_t = sum(len(h) for h in viterbi_transcript)
-
- if res_files is not None:
- for r in res_files.values():
- r.close()
-
- gen_timer.stop(lengths_hyp_t)
-
- return GenResult(
- count,
- errs_t,
- gen_timer,
- lengths_hyp_unit_t,
- lengths_hyp_t,
- lengths_t,
- lm_score_sum,
- num_feats,
- num_sentences,
- num_symbols,
- vt_err_t,
- vt_length_t,
- )
-
-
-def gen_hypos(generator, models, num_feats, sample, task, use_cuda):
- sample = utils.move_to_cuda(sample) if use_cuda else sample
-
- if "features" in sample["net_input"]:
- sample["net_input"]["dense_x_only"] = True
- num_feats += (
- sample["net_input"]["features"].shape[0]
- * sample["net_input"]["features"].shape[1]
- )
- hypos = task.inference_step(generator, models, sample, None)
- return hypos, num_feats
-
-
-def main(cfg: UnsupGenerateConfig, model=None):
- if (
- cfg.fairseq.dataset.max_tokens is None
- and cfg.fairseq.dataset.batch_size is None
- ):
- cfg.fairseq.dataset.max_tokens = 1024000
-
- use_cuda = torch.cuda.is_available() and not cfg.fairseq.common.cpu
-
- task = tasks.setup_task(cfg.fairseq.task)
-
- overrides = ast.literal_eval(cfg.fairseq.common_eval.model_overrides)
-
- if cfg.fairseq.task._name == "unpaired_audio_text":
- overrides["model"] = {
- "blank_weight": cfg.blank_weight,
- "blank_mode": cfg.blank_mode,
- "blank_is_sil": cfg.sil_is_blank,
- "no_softmax": True,
- "segmentation": {
- "type": "NONE",
- },
- }
- else:
- overrides["model"] = {
- "blank_weight": cfg.blank_weight,
- "blank_mode": cfg.blank_mode,
- }
-
- if model is None:
- # Load ensemble
- logger.info("| loading model(s) from {}".format(cfg.fairseq.common_eval.path))
- models, saved_cfg = checkpoint_utils.load_model_ensemble(
- cfg.fairseq.common_eval.path.split("\\"),
- arg_overrides=overrides,
- task=task,
- suffix=cfg.fairseq.checkpoint.checkpoint_suffix,
- strict=(cfg.fairseq.checkpoint.checkpoint_shard_count == 1),
- num_shards=cfg.fairseq.checkpoint.checkpoint_shard_count,
- )
- optimize_models(cfg, use_cuda, models)
- else:
- models = [model]
- saved_cfg = cfg.fairseq
-
- with open_dict(saved_cfg.task):
- saved_cfg.task.shuffle = False
- saved_cfg.task.sort_by_length = False
-
- gen_result = generate(cfg, models, saved_cfg, use_cuda)
-
- wer = None
- if gen_result.lengths_t > 0:
- wer = gen_result.errs_t * 100.0 / gen_result.lengths_t
- logger.info(f"WER: {wer}")
-
- lm_ppl = float("inf")
-
- if gen_result.lm_score_t != 0 and gen_result.lengths_hyp_t > 0:
- hyp_len = gen_result.lengths_hyp_t
- lm_ppl = math.pow(
- 10, -gen_result.lm_score_t / (hyp_len + gen_result.num_sentences)
- )
- logger.info(f"LM PPL: {lm_ppl}")
-
- logger.info(
- "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}"
- " sentences/s, {:.2f} tokens/s)".format(
- gen_result.num_sentences,
- gen_result.gen_timer.n,
- gen_result.gen_timer.sum,
- gen_result.num_sentences / gen_result.gen_timer.sum,
- 1.0 / gen_result.gen_timer.avg,
- )
- )
-
- vt_diff = None
- if gen_result.vt_length_t > 0:
- vt_diff = gen_result.vt_err_t / gen_result.vt_length_t
- vt_diff = max(cfg.min_vt_uer, vt_diff)
-
- lm_ppl = max(cfg.min_lm_ppl, lm_ppl)
-
- if not cfg.unsupervised_tuning == 0:
- weighted_score = wer
- else:
- weighted_score = math.log(lm_ppl) * (vt_diff or 1.0)
-
- res = (
- f"| Generate {cfg.fairseq.dataset.gen_subset} with beam={cfg.beam}, "
- f"lm_weight={cfg.kaldi_decoder_config.acoustic_scale if cfg.kaldi_decoder_config else cfg.lm_weight}, "
- f"word_score={cfg.word_score}, sil_weight={cfg.sil_weight}, blank_weight={cfg.blank_weight}, "
- f"WER: {wer}, LM_PPL: {lm_ppl}, num feats: {gen_result.num_feats}, "
- f"length: {gen_result.lengths_hyp_t}, UER to viterbi: {(vt_diff or 0) * 100}, score: {weighted_score}"
- )
-
- logger.info(res)
- # print(res)
-
- return task, weighted_score
-
-
-@hydra.main(
- config_path=os.path.join("../../..", "fairseq", "config"), config_name="config"
-)
-def hydra_main(cfg):
- with open_dict(cfg):
-        # make hydra logging work with ddp (see https://github.com/facebookresearch/hydra/issues/1126)
- cfg.job_logging_cfg = OmegaConf.to_container(
- HydraConfig.get().job_logging, resolve=True
- )
-
- cfg = OmegaConf.create(
- OmegaConf.to_container(cfg, resolve=False, enum_to_str=False)
- )
- OmegaConf.set_struct(cfg, True)
- logger.info(cfg)
-
- utils.import_user_module(cfg.fairseq.common)
-
- _, score = main(cfg)
-
- if cfg.is_ax:
- return score, None
- return score
-
-
-def cli_main():
- try:
- from hydra._internal.utils import get_args
-
- cfg_name = get_args().config_name or "config"
- except:
- logger.warning("Failed to get config name from hydra args")
- cfg_name = "config"
-
- cs = ConfigStore.instance()
- cs.store(name=cfg_name, node=UnsupGenerateConfig)
- hydra_main()
-
-
-if __name__ == "__main__":
- cli_main()
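-
-
-# Example invocation (a sketch; the config directory, config name and all paths
-# below are assumptions and must be adapted to your setup. Any key of
-# UnsupGenerateConfig can be overridden on the command line through hydra):
-#
-#   python w2vu_generate.py --config-dir config/generate --config-name viterbi \
-#       fairseq.task.data=/data/unsup_features \
-#       fairseq.common_eval.path=/checkpoints/unsup_gan.pt \
-#       fairseq.dataset.gen_subset=valid results_path=/results/viterbi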
diff --git a/spaces/stamps-labs/stamp2vec/embedding_models/vae/constants.py b/spaces/stamps-labs/stamp2vec/embedding_models/vae/constants.py
deleted file mode 100644
index bda9b9ed2f8d5d46ff9072d8f8ae5b9f94c923cf..0000000000000000000000000000000000000000
--- a/spaces/stamps-labs/stamp2vec/embedding_models/vae/constants.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# dimension of image embedding
-Z_DIM = 128
-# hidden dimensions for encoder model
-ENC_HIDDEN_DIM = 16
-# hidden dimensions for decoder model
-DEC_HIDDEN_DIM = 64
\ No newline at end of file
diff --git a/spaces/starlit7/USPoliticsTTS/attentions.py b/spaces/starlit7/USPoliticsTTS/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/starlit7/USPoliticsTTS/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # padd along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
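-
-
-if __name__ == "__main__":
-    # Minimal shape-check sketch. The sizes below (hidden 192, filter 768,
-    # 2 heads, 6 layers, kernel 3) are example values, not taken from any
-    # particular checkpoint; the mask is all ones, i.e. no padded frames.
-    enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
-                  n_layers=6, kernel_size=3, p_dropout=0.1)
-    x = torch.randn(2, 192, 50)      # [batch, channels, time]
-    x_mask = torch.ones(2, 1, 50)    # 1 = valid frame, 0 = padding
-    y = enc(x, x_mask)               # -> [2, 192, 50]
-    print(y.shape)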
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Deep House Kits Vol. 1-3 Bundle WAV MIDi Presets.md b/spaces/stomexserde/gpt4-ui/Examples/Deep House Kits Vol. 1-3 Bundle WAV MIDi Presets.md
deleted file mode 100644
index 2dfd2cdf444412d023fe3552b7778054c0de90e1..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Deep House Kits Vol. 1-3 Bundle WAV MIDi Presets.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
Deep House Kits Vol. 1-3 Bundle WAV MIDi Presets: A Review
-
If you are looking for some high-quality and versatile construction kits for your deep house productions, you might want to check out the Deep House Kits Vol. 1-3 Bundle WAV MIDi Presets by Essential Audio Media. This bundle contains 18 construction kits inspired by some of the most popular deep house producers such as EDX, Calvin Harris, Calippo, MK, James Hype, Sigala and many more.
-
Each construction kit comes with a full mix and individual stems for drums, bass, synths, pads, vocals and FX. You also get MIDI files for each melodic element, as well as one-shot drum samples and synth presets for Spire, Sylenth1, Serum, Avenger and Massive. This gives you a lot of flexibility and control over your sound design and arrangement.
The bundle offers a total of 441 files in 24-bit WAV format, with a size of 1.73 GB (unzipped). The loops range from 120 to 126 BPM and are key-labeled for your convenience. The sound quality is excellent and the kits are well-structured and varied. You can easily mix and match different elements from different kits to create your own unique tracks.
-
The Deep House Kits Vol. 1-3 Bundle WAV MIDi Presets is a great resource for any deep house producer who wants to get some inspiration and fresh sounds for their projects. The bundle is currently available at a discounted price of $19.95 USD (regular price $23.99 USD) at Producer Sources website[^1^]. You can also listen to some demos and previews of the kits there.
-
Whether you are a beginner or an experienced producer, you will find something useful and enjoyable in this bundle. Don't miss this opportunity to grab this amazing deal and add some quality deep house sounds to your library.
-
-
-
But what if you want to take your deep house production to the next level? What are some tips and tricks that can help you create more original and professional sounding tracks? Here are some ideas that you can try out in your own projects.
-
Deep House Production Tips
-
-
Cut-up Vocals: If you're using vocal samples as part of your deep house tune, remember the option to slice, dice and shake things up. You can use a sampler or a slicer effect to chop up vocal phrases and rearrange them into new patterns. You can also apply effects such as filters, delays, reverbs, pitch-shifters and distortions to create more variations and textures. Cut-up vocals can add a lot of groove and interest to your tracks, especially if you sync them with your drums and bass.[^2^]
-
Sign of the Tines: One of the most iconic sounds of deep house is the electric piano, especially the Fender Rhodes. This instrument has a warm and smooth tone that works well with chords and melodies. You can use an electric piano emulation plug-in or a sample library to get this sound, or even record your own if you have access to one. To make your electric piano sound more authentic, you can add some effects such as chorus, phaser, tremolo and rotary speaker. You can also layer it with other sounds such as pads, strings or organs to create more depth and richness.[^2^]
-
Double Up on Chords: If you want to make your chords sound bigger and fuller, you can double them with another instrument. For example, you can layer your electric piano chords with a synth pad or a string section. You can also use different inversions or voicings of the same chord to create more harmonic variation. Doubling up on chords can add more body and definition to your tracks, as well as creating more contrast between different sections.[^2^]
-
A Bit of Humanity: One of the challenges of producing electronic music is to make it sound less robotic and more human. To achieve this, you can use some techniques such as swing quantization, groove quantization, velocity variation and automation. Swing quantization adds a slight delay to every other 16th note, creating a more groovy and funky feel. Groove quantization applies a predefined timing and velocity pattern to your notes, making them sound more natural and organic. Velocity variation changes the loudness of each note according to a random or predefined range, adding more expression and dynamics. Automation enables you to change any parameter over time, such as volume, filter cutoff, pan or pitch, creating more movement and interest.[^2^] (A short code sketch of the swing and velocity ideas follows this list.)
-
Exotic Drumming: While deep house drums are usually based on the classic 4/4 kick-snare-hat pattern, you can spice them up by adding some exotic percussion sounds such as congas, bongos, shakers, tambourines or cowbells. You can use a percussion sample pack or a drum machine plug-in to get these sounds, or even record your own if you have access to some instruments. You can also use some effects such as reverb, delay or distortion to create more space and character for your percussion sounds. Exotic drumming can add more flavor and diversity to your tracks, as well as creating more groove and syncopation.[^2^]
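To make the swing and velocity ideas above concrete, here is a minimal Python sketch. It assumes note events are simple (start_beat, velocity) pairs exported from your DAW or a MIDI tool; the hi-hat pattern and the 58% swing amount are made-up example values, not a preset from the bundle.

```python
import random

def humanize(notes, swing=0.58, vel_jitter=12, seed=1):
    """Apply 16th-note swing and velocity variation to (start_beat, velocity) pairs.

    swing: where the off-beat 16th lands inside each 8th note (0.5 = straight).
    vel_jitter: maximum amount added to or subtracted from each velocity.
    """
    rng = random.Random(seed)
    out = []
    for start, vel in notes:
        pos_in_eighth = (start % 0.5) / 0.5        # 0.0 = on the beat, 0.5 = off-beat 16th
        if abs(pos_in_eighth - 0.5) < 1e-6:        # delay every other 16th note
            start = (start - 0.25) + swing * 0.5
        vel = max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((round(start, 4), vel))
    return out

# A straight 16th-note hi-hat pattern over two beats (starts in beats, velocity 100).
hats = [(i * 0.25, 100) for i in range(8)]
print(humanize(hats))
```

In a real project you would apply the same arithmetic through your DAW's swing and humanize functions or a MIDI scripting tool, but the idea is identical: off-beat 16ths get pushed slightly later, and velocities get a small random spread.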
- 81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/studiobrn/SplitTrack/audiocraft/__init__.py b/spaces/studiobrn/SplitTrack/audiocraft/__init__.py
deleted file mode 100644
index 1759733cc109fa348c3f764c5939b5b609521cb3..0000000000000000000000000000000000000000
--- a/spaces/studiobrn/SplitTrack/audiocraft/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from . import data, modules, models
-
-__version__ = '0.0.1'
diff --git a/spaces/subhajitmaji/MusicGen/audiocraft/modules/seanet.py b/spaces/subhajitmaji/MusicGen/audiocraft/modules/seanet.py
deleted file mode 100644
index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000
--- a/spaces/subhajitmaji/MusicGen/audiocraft/modules/seanet.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch.nn as nn
-
-from .conv import StreamableConv1d, StreamableConvTranspose1d
-from .lstm import StreamableLSTM
-
-
-class SEANetResnetBlock(nn.Module):
- """Residual block from SEANet model.
-
- Args:
- dim (int): Dimension of the input/output.
- kernel_sizes (list): List of kernel sizes for the convolutions.
- dilations (list): List of dilations for the convolutions.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection.
- """
- def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
- activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
- pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
- super().__init__()
- assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
- act = getattr(nn, activation)
- hidden = dim // compress
- block = []
- for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
- in_chs = dim if i == 0 else hidden
- out_chs = dim if i == len(kernel_sizes) - 1 else hidden
- block += [
- act(**activation_params),
- StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
- norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- self.block = nn.Sequential(*block)
- self.shortcut: nn.Module
- if true_skip:
- self.shortcut = nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): nb of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
- that must match the decoder order. We use the decoder order as some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the final convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0):
- super().__init__()
- self.channels = channels
- self.dimension = dimension
- self.n_filters = n_filters
- self.ratios = list(reversed(ratios))
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = 1
- model: tp.List[nn.Module] = [
- StreamableConv1d(channels, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Downsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- norm=block_norm, norm_params=norm_params,
- activation=activation, activation_params=activation_params,
- causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- # Add downsampling layers
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, mult * n_filters * 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- mult *= 2
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- return self.model(x)
-
-
-class SEANetDecoder(nn.Module):
- """SEANet decoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): nb of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- final_activation (str): Final activation function after all convolutions.
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the final convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
-        true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
- super().__init__()
- self.dimension = dimension
- self.channels = channels
- self.n_filters = n_filters
- self.ratios = ratios
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = int(2 ** len(self.ratios))
- model: tp.List[nn.Module] = [
- StreamableConv1d(dimension, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- # Upsample to raw audio scale
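- # disable_norm_outer_blocks refers to the N last blocks of the decoder, so normalization is dropped first for the final upsampling stages and the last convolution.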
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
- # Add upsampling layers
- model += [
- act(**activation_params),
- StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, trim_right_ratio=trim_right_ratio),
- ]
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- activation=activation, activation_params=activation_params,
- norm=block_norm, norm_params=norm_params, causal=causal,
- pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- mult //= 2
-
- # Add final layers
- model += [
- act(**activation_params),
- StreamableConv1d(n_filters, channels, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Add optional final activation to decoder (eg. tanh)
- if final_activation is not None:
- final_act = getattr(nn, final_activation)
- final_activation_params = final_activation_params or {}
- model += [
- final_act(**final_activation_params)
- ]
- self.model = nn.Sequential(*model)
-
- def forward(self, z):
- y = self.model(z)
- return y
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/DownloadChittagongmovietorrent1080p TOP.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/DownloadChittagongmovietorrent1080p TOP.md
deleted file mode 100644
index 8aa1727b858b579641e043311a15de1baf5695f7..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/DownloadChittagongmovietorrent1080p TOP.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
https://coub.com/stories/3211027-downloadchittagongmovietorrent1080p. https://coub.com/stories/3211030-anak-sd-belajar-ngentot-sama-mbak-_verified_. https://coub.com/stories/3211028-chittagong-movie-torrent-1080p-soffquan. https://coub.com/stories/3486218-download-chittagong-movie-torrent-1080p-soffquan https://coub.com/stories/3486217-download-chittagong-movie-torrent-1080p-soffquan. Und er ist der oberste Totalsupporter bei der Besetzung der Chittagong Division.
-
downloadchittagongmovietorrent1080p - Evil," "bandit, wie ich mit Spezialisten in der Lehr- und Hilfswerkleitung für die Grenzkommandos und Wache- und Polizeikräfte vertraut bin, schreckte sich wegen seiner Zivilisation nicht davon ab, das ungestörte Leben, wie er es verkörpert, zu achten und dem Untergang aus dem Weg zu gehen, so dass er seine Vorfahren in der Wunderkammer des „Houses of Jadu b“ erblickte und bei den Schrecken des Untergangs „The Scarlet Dawn“ schrieb. mehr. https://download-chittagong-movie-torrent-1080p.info/downloadchittagongmovie-torrent-1080p https://download-chittagong-movie-torrent-1080p.info/downloadchittagongmovie-torrent-1080p. Deutschland.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Intercultural Business Communication Gibson Pdf Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Intercultural Business Communication Gibson Pdf Download.md
deleted file mode 100644
index a72a2872602bf12077b48f583dd37731bc118f96..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Intercultural Business Communication Gibson Pdf Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
The present study also considered the role of personality traits in the relationship between the frequency of intercultural contact and cultural intelligence. The results showed that extraversion and openness to experience were negatively associated with cultural intelligence. In other words, the more extraverted an individual is, the lower the levels of cultural intelligence tend to be. This negative relationship was also found in a study by Almeida, Bouchard, & Kivik (2009). The authors argue that, because extraversion is a predisposition to be sociable, it is possible that, when the frequency of intercultural contact decreases, so does the level of sociability and, thus, the degree of intercultural socialization and, consequently, the level of cultural intelligence (Almeida et al., 2009). It would be interesting to explore the possibility of undertaking this research from the perspective of collectivism and individualism (Cale & Thomas, 2018).
-
For the purpose of this book, the author defines intercultural business communication as the communication that occurs between individuals of different national cultures. Cross-cultural business communication allows managers to interact with people from different cultures and understand their cultural backgrounds. In addition, the book is divided into two parts. The first part provides an analysis of the basic components of intercultural communication, focusing on how to understand the cultural differences of individuals and how to learn about cultures. The second part presents the author's personal experience as a translator and trainer of intercultural communication. In the following sections, I will review the different aspects of the book in more detail.
-
intercultural business communication gibson pdf download
-
-ReiBoot.. tenorshare android data recovery keygen crackingpatching. Mon, 10 Dec 2018 ... 2 Jun 2017 . Free Any Data Recovery 5.5.5.8 Full ... 4d29de3e1b
-
-
-
diff --git a/spaces/tiagones/nitrosocke-spider-verse-diffusion/app.py b/spaces/tiagones/nitrosocke-spider-verse-diffusion/app.py
deleted file mode 100644
index 8cf6a4ef76ca6ef2a5f85da8103774194cb58825..0000000000000000000000000000000000000000
--- a/spaces/tiagones/nitrosocke-spider-verse-diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/nitrosocke/spider-verse-diffusion").launch()
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit [NEW].md b/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit [NEW].md
deleted file mode 100644
index 91fe23612228b4cdcf7837153644e8ce54b96bc7..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit [NEW].md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit: A Comprehensive Review
-
Introduction
-
If you are a civil engineer or a designer who is looking for a powerful and comprehensive software for your civil engineering projects, you might have heard of Autodesk AutoCAD Civil 3D. This software is one of the most popular and widely used solutions in the civil sector, as it provides you with the tools and features you need to design, document, visualize, and collaborate on your projects.
-
Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit
In this article, we will review Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit, which is the latest version of this software that was released in November 2018. We will cover the following topics:
-
-
What is Autodesk AutoCAD Civil 3D?
-
What are the features and benefits of Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit?
-
How to download and install Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit?
-
How to use Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit for civil engineering projects?
-
Conclusion
-
-
By the end of this article, you will have a clear understanding of what Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit can do for you, how to get it, and how to use it effectively.
-
Main body
-
What is Autodesk AutoCAD Civil 3D?
-
Autodesk AutoCAD Civil 3D is a software application that enables you to create and edit dynamic models of civil structures and objects, such as roads, bridges, tunnels, pipelines, landfills, dams, etc. It also allows you to work with local standards and data formats, exchange data with other users and software, manage and collaborate on design drawings, and visualize and present your design in 3D.
-
Autodesk AutoCAD Civil 3D is based on the AutoCAD platform, which means that it inherits all the features and functions of AutoCAD, such as drawing tools, commands, layers, blocks, etc. In addition, it also integrates with AutoCAD Map 3D, which means that you can access geospatial data and analysis tools within Autodesk AutoCAD Civil 3D.
-
Autodesk AutoCAD Civil 3D is a Building Information Modeling (BIM) solution, which means that it creates a coordinated data model of your project that contains all the information about your design elements, such as geometry, properties, materials, etc. This data model is intelligent and dynamic, which means that any change you make in one part of the model will automatically update the other parts of the model, as well as the documentation and reports. This ensures that your design is consistent, accurate, and up-to-date.
-
What are the features and benefits of Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit?
-
Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit is the latest version of this software that was released in November 2018. It includes several new features and enhancements that improve the performance, usability, and functionality of the software. Some of the main features and benefits of Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit are:
-
-
Improved performance and stability: Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit has been optimized to run faster and smoother on 64-bit systems, as well as on high-resolution monitors and devices. It also fixes some bugs and issues that were reported in the previous versions of the software.
-
Enhanced user interface and workflow: Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit has a more intuitive and user-friendly interface that makes it easier to access and use the tools and features of the software. It also has a more streamlined workflow that reduces the number of steps and clicks required to perform common tasks and operations.
-
New and improved tools and features: Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit introduces several new and improved tools and features that enhance the capabilities and functionality of the software. Some of these tools and features are:
-
-
Corridor Overlap Resolution: This tool allows you to automatically resolve overlapping corridor sections by creating a new region with a specified width, offset, or elevation.
-
Feature Line Elevation Editor: This tool allows you to edit the elevations of feature lines by using a table or a graph.
-
Pressure Pipe Content: This feature allows you to access more content for pressure pipe networks, such as fittings, valves, hydrants, etc.
-
Rail Turnouts and Crossings: This feature allows you to create rail turnouts and crossings by using predefined or custom templates.
-
Relative Feature Lines: This feature allows you to create feature lines that are relative to a surface or another feature line.
-
Section View Drafting Buffers: This feature allows you to create drafting buffers around section views that can be used to add annotations or details.
-
Subassembly Composer: This feature allows you to create custom subassemblies for corridors by using a graphical interface.
-
-
-
How to download and install Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit?
-
If you want to download and install Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit, go to the official Autodesk website and follow these steps:
Select the option "Download free trial" or "Buy now" depending on your preference.
-
Fill in the required information and create an account if you don't have one already.
-
Choose the version, language, and operating system of your choice.
-
Click on "Download now" or "Install now" depending on your preference.
-
Follow the instructions on the screen to complete the download or installation process.
-
-
Note: You need to have a valid license or subscription to use Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit after the trial period expires.
-
Conclusion
-
In conclusion, Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit is a powerful and comprehensive software for civil engineering design and documentation. It enables you to create and edit dynamic models of civil structures and objects, work with local standards and data formats, exchange data with other users and software, manage and collaborate on design drawings, and visualize and present your design in 3D.
-
-
Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit also includes several new features and enhancements that improve the performance, usability, and functionality of the software, such as corridor overlap resolution, feature line elevation editor, pressure pipe content, rail turnouts and crossings, relative feature lines, section view drafting buffers, and subassembly composer.
-
If you are interested in using Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit for your civil engineering projects, you can download and install it from the official website of Autodesk AutoCAD Civil 3D. You will need to have a valid license or subscription to use it after the trial period expires.
-
Here are some recommendations and tips for using Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit:
-
-
Make sure that your system meets the minimum requirements for running the software, such as processor, memory, disk space, graphics card, etc.
-
Check the online help and tutorials for learning how to use the tools and features of the software.
-
Use the data shortcuts and references to share data between drawings and users.
-
Use the styles and settings to customize the appearance and behavior of your design elements.
-
Use the labels and tables to annotate and document your design data.
-
Use the reports and analysis tools to check and verify your design data.
-
Use the layout and plot tools to create and print your design drawings.
-
-
FAQs
-
Here are some frequently asked questions about Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit:
-
-
What is the difference between Autodesk AutoCAD Civil 3D and Autodesk AutoCAD?
-
Autodesk AutoCAD Civil 3D is a software application that is based on Autodesk AutoCAD, but it has additional tools and features that are specific to civil engineering design and documentation. Autodesk AutoCAD is a more general application that can be used for various types of design and drafting.
-
What are the advantages of using Autodesk AutoCAD Civil 3D over other civil engineering software?
-
Autodesk AutoCAD Civil 3D has several advantages over other civil engineering software, such as:
-
-
It is a BIM solution that creates a coordinated data model of your project that is intelligent and dynamic.
-
It integrates with AutoCAD Map 3D, which allows you to access geospatial data and analysis tools within Autodesk AutoCAD Civil 3D.
-
It supports local standards and data formats, such as country kits, coordinate systems, landXML, etc.
-
It has a large user community and online resources that can help you learn and troubleshoot the software.
-
-
How can I get support and help for using Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit?
-
You can get support and help for using Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL 64 Bit by using the following methods:
-
-
You can access the online help and tutorials within the software or on the official website of Autodesk AutoCAD Civil 3D.
-
You can contact the technical support team of Autodesk by phone, email, or chat.
-
You can join the online forums and communities of Autodesk AutoCAD Civil 3D users and experts.
-
- b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Bahubali The Beginning Hd 1080p Online Movies.md b/spaces/tioseFevbu/cartoon-converter/scripts/Bahubali The Beginning Hd 1080p Online Movies.md
deleted file mode 100644
index 12e48c68393e9895109b036d73704aef40f2ebfd..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Bahubali The Beginning Hd 1080p Online Movies.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
How to Watch Baahubali: The Beginning in HD 1080p Online
-
Baahubali: The Beginning is a 2015 Indian epic action film directed by S.S. Rajamouli and starring Prabhas, Rana Daggubati, Anushka Shetty, and Tamannaah Bhatia. The film tells the story of Shivudu, a young man who learns his true identity as the heir of the Mahishmati kingdom and sets out to avenge his father's death and rescue his mother from the tyranny of his uncle Bhallaladeva.
-
The film was praised for its stunning visuals, grand scale, and thrilling action sequences. It became one of the highest-grossing Indian films of all time and received several awards and nominations. The film was also dubbed in Hindi, Tamil, Malayalam, and other languages and released worldwide.
If you are a fan of Baahubali: The Beginning or want to watch it for the first time, you might be wondering how to watch it in HD 1080p online. Here are some of the options you can try:
-
-
Netflix: Netflix is one of the most popular streaming platforms that offers a wide range of movies and shows in various genres and languages. You can watch Baahubali: The Beginning on Netflix with a subscription plan that suits your budget and preferences. You can also download the movie on your device and watch it offline.
-
Disney+ Hotstar: Disney+ Hotstar is another popular streaming platform that offers a variety of content from Disney, Marvel, Star Wars, National Geographic, and more. You can watch Baahubali: The Beginning on Disney+ Hotstar with a VIP or Premium subscription plan. You can also download the movie on your device and watch it offline.
-
Amazon Prime Video: Amazon Prime Video is another streaming platform that offers a lot of movies and shows in different languages and genres. You can watch Baahubali: The Beginning on Amazon Prime Video with a Prime membership or by renting or buying the movie individually. You can also download the movie on your device and watch it offline.
-
Google Play Movies & TV: Google Play Movies & TV is a service that allows you to rent or buy movies and shows from Google Play Store. You can watch Baahubali: The Beginning on Google Play Movies & TV by renting or buying the movie in HD quality. You can also download the movie on your device and watch it offline.
-
YouTube: YouTube is a platform that allows you to watch videos uploaded by users or official channels. You can watch Baahubali: The Beginning on YouTube by renting or buying the movie in HD quality. You can also download the movie on your device and watch it offline.
-
Apple TV: Apple TV is a service that allows you to rent or buy movies and shows from iTunes Store. You can watch Baahubali: The Beginning on Apple TV by renting or buying the movie in HD quality. You can also download the movie on your device and watch it offline.
-
-
These are some of the ways you can watch Baahubali: The Beginning in HD 1080p online. However, you should always check the availability and legality of the content in your region before accessing any of these platforms. Also, you should always use a reliable internet connection and a compatible device to enjoy the best viewing experience.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Hytran Software 11.md b/spaces/tioseFevbu/cartoon-converter/scripts/Hytran Software 11.md
deleted file mode 100644
index 0d5b6b6d43d23c2868915c1d457b880461573ded..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Hytran Software 11.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
Hytran software 11: A powerful tool for water hammer analysis
-
Water hammer is a phenomenon that occurs when a fluid in motion is suddenly stopped, or its velocity is abruptly changed, by a valve, pump, or other device. Water hammer can cause high pressures, vibrations, noise, and damage to pipes and equipment. To prevent or mitigate water hammer, engineers need to understand its causes and effects, and design pipelines and systems accordingly.
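As a rough back-of-the-envelope check of the pressures involved, the classic Joukowsky relation estimates the surge from an instantaneous velocity change as delta_p = rho * c * delta_v (fluid density times pressure-wave speed times the change in flow velocity). The short Python sketch below is purely illustrative and is not part of Hytran; the density, wave speed, and velocity figures are assumed example values.
# Joukowsky estimate of the surge pressure caused by a sudden velocity change (illustrative values only)
rho = 1000.0      # water density, kg/m^3
c = 1200.0        # pressure-wave speed in the pipe, m/s (depends on pipe material and wall thickness)
delta_v = 2.0     # sudden change in flow velocity, m/s (e.g. rapid valve closure)
delta_p = rho * c * delta_v
print(f"Surge pressure: {delta_p / 1e5:.1f} bar")  # 24.0 bar for these assumed values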
-
Hytran software 11 is a Windows-based software package that allows engineers to analyze hydraulic transients or water hammer in pipelines. Hytran software 11 is developed by Hytran Solutions, a company that specializes in water hammer software and consulting. Hytran software 11 is written in the object oriented C++ language for Windows environment, and supports Windows XP/7/8/10/11.
Hytran software 11 has an intuitive graphical user interface that enables users to draw, input data, edit, and analyze pipelines in minutes. Users can see real time transient graphics flashing across the screen as the transients propagate along a pipeline. Indicators show cavitation and flow direction, providing a full picture of the water hammer phenomenon. Transients at selected locations along the pipe network are plotted simultaneously on the screen.
-
Hytran software 11 can handle complex pipe networks with multiple branches, loops, junctions, valves, pumps, reservoirs, surge tanks, air vessels, and other devices. Hytran software 11 can model steady state and transient flow conditions, including friction losses, minor losses, variable speed pumps, pump start-up and shut-down, valve opening and closing, pressure relief valves, air valves, check valves, surge arresters, and more. Hytran software 11 can also perform frequency analysis, transient analysis with variable time step, transient analysis with variable pipe properties, transient analysis with fluid-structure interaction, and transient analysis with gas release.
-
Hytran software 11 is a powerful tool for water hammer analysis that can help engineers design safe and efficient pipelines and systems. Hytran software 11 is used by consultants, water authorities, educational institutions, and other organizations around the world. Hytran software 11 is available as a demo version for free download from the developer's website, or as a full version for purchase from Hytran Solutions or their authorized distributors.
-
-
Water hammer analysis is an important aspect of hydraulic engineering, as it can help engineers prevent or reduce the negative impacts of water hammer on pipelines and systems. Water hammer analysis can help engineers identify the sources and locations of water hammer, estimate the magnitude and duration of pressure surges, evaluate the risk of pipe failure or leakage, and design appropriate mitigation measures.
-
Water hammer analysis can also have practical applications in other fields, such as hydraulic fracturing. Hydraulic fracturing is a technique that involves injecting fluid at high pressure into a wellbore to create fractures in the rock formation and enhance oil and gas production. Water hammer can occur at the end of hydraulic fracturing treatments, when the fluid injection rate is rapidly reduced or terminated. Water hammer can cause oscillatory pressure behavior in the wellbore, which can affect the fracture geometry, fluid distribution, proppant placement, and well productivity.
-
-
Water hammer analysis can help engineers understand the dynamics of water hammer in hydraulic fracturing, and optimize the injection rate and shut-in time to achieve the desired fracture characteristics. Water hammer analysis can also help engineers monitor the well performance and detect any anomalies or problems during or after the treatment. Water hammer analysis can be performed using software tools such as Hytran software 11, which can simulate the transient flow conditions and pressure behavior in complex wellbore systems.
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/more_itertools/recipes.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/more_itertools/recipes.py
deleted file mode 100644
index a2596423a4c3dbd15a357241477a0af0a531f9ec..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/more_itertools/recipes.py
+++ /dev/null
@@ -1,698 +0,0 @@
-"""Imported from the recipes section of the itertools documentation.
-
-All functions taken from the recipes section of the itertools library docs
-[1]_.
-Some backward-compatible usability improvements have been made.
-
-.. [1] http://docs.python.org/library/itertools.html#recipes
-
-"""
-import warnings
-from collections import deque
-from itertools import (
- chain,
- combinations,
- count,
- cycle,
- groupby,
- islice,
- repeat,
- starmap,
- tee,
- zip_longest,
-)
-import operator
-from random import randrange, sample, choice
-
-__all__ = [
- 'all_equal',
- 'before_and_after',
- 'consume',
- 'convolve',
- 'dotproduct',
- 'first_true',
- 'flatten',
- 'grouper',
- 'iter_except',
- 'ncycles',
- 'nth',
- 'nth_combination',
- 'padnone',
- 'pad_none',
- 'pairwise',
- 'partition',
- 'powerset',
- 'prepend',
- 'quantify',
- 'random_combination_with_replacement',
- 'random_combination',
- 'random_permutation',
- 'random_product',
- 'repeatfunc',
- 'roundrobin',
- 'sliding_window',
- 'tabulate',
- 'tail',
- 'take',
- 'triplewise',
- 'unique_everseen',
- 'unique_justseen',
-]
-
-
-def take(n, iterable):
- """Return first *n* items of the iterable as a list.
-
- >>> take(3, range(10))
- [0, 1, 2]
-
- If there are fewer than *n* items in the iterable, all of them are
- returned.
-
- >>> take(10, range(3))
- [0, 1, 2]
-
- """
- return list(islice(iterable, n))
-
-
-def tabulate(function, start=0):
- """Return an iterator over the results of ``func(start)``,
- ``func(start + 1)``, ``func(start + 2)``...
-
- *func* should be a function that accepts one integer argument.
-
- If *start* is not specified it defaults to 0. It will be incremented each
- time the iterator is advanced.
-
- >>> square = lambda x: x ** 2
- >>> iterator = tabulate(square, -3)
- >>> take(4, iterator)
- [9, 4, 1, 0]
-
- """
- return map(function, count(start))
-
-
-def tail(n, iterable):
- """Return an iterator over the last *n* items of *iterable*.
-
- >>> t = tail(3, 'ABCDEFG')
- >>> list(t)
- ['E', 'F', 'G']
-
- """
- return iter(deque(iterable, maxlen=n))
-
-
-def consume(iterator, n=None):
- """Advance *iterable* by *n* steps. If *n* is ``None``, consume it
- entirely.
-
- Efficiently exhausts an iterator without returning values. Defaults to
- consuming the whole iterator, but an optional second argument may be
- provided to limit consumption.
-
- >>> i = (x for x in range(10))
- >>> next(i)
- 0
- >>> consume(i, 3)
- >>> next(i)
- 4
- >>> consume(i)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- If the iterator has fewer items remaining than the provided limit, the
- whole iterator will be consumed.
-
- >>> i = (x for x in range(3))
- >>> consume(i, 5)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- """
- # Use functions that consume iterators at C speed.
- if n is None:
- # feed the entire iterator into a zero-length deque
- deque(iterator, maxlen=0)
- else:
- # advance to the empty slice starting at position n
- next(islice(iterator, n, n), None)
-
-
-def nth(iterable, n, default=None):
- """Returns the nth item or a default value.
-
- >>> l = range(10)
- >>> nth(l, 3)
- 3
- >>> nth(l, 20, "zebra")
- 'zebra'
-
- """
- return next(islice(iterable, n, None), default)
-
-
-def all_equal(iterable):
- """
- Returns ``True`` if all the elements are equal to each other.
-
- >>> all_equal('aaaa')
- True
- >>> all_equal('aaab')
- False
-
- """
- g = groupby(iterable)
- return next(g, True) and not next(g, False)
-
-
-def quantify(iterable, pred=bool):
- """Return the how many times the predicate is true.
-
- >>> quantify([True, False, True])
- 2
-
- """
- return sum(map(pred, iterable))
-
-
-def pad_none(iterable):
- """Returns the sequence of elements and then returns ``None`` indefinitely.
-
- >>> take(5, pad_none(range(3)))
- [0, 1, 2, None, None]
-
- Useful for emulating the behavior of the built-in :func:`map` function.
-
- See also :func:`padded`.
-
- """
- return chain(iterable, repeat(None))
-
-
-padnone = pad_none
-
-
-def ncycles(iterable, n):
- """Returns the sequence elements *n* times
-
- >>> list(ncycles(["a", "b"], 3))
- ['a', 'b', 'a', 'b', 'a', 'b']
-
- """
- return chain.from_iterable(repeat(tuple(iterable), n))
-
-
-def dotproduct(vec1, vec2):
- """Returns the dot product of the two iterables.
-
- >>> dotproduct([10, 10], [20, 20])
- 400
-
- """
- return sum(map(operator.mul, vec1, vec2))
-
-
-def flatten(listOfLists):
- """Return an iterator flattening one level of nesting in a list of lists.
-
- >>> list(flatten([[0, 1], [2, 3]]))
- [0, 1, 2, 3]
-
- See also :func:`collapse`, which can flatten multiple levels of nesting.
-
- """
- return chain.from_iterable(listOfLists)
-
-
-def repeatfunc(func, times=None, *args):
- """Call *func* with *args* repeatedly, returning an iterable over the
- results.
-
- If *times* is specified, the iterable will terminate after that many
- repetitions:
-
- >>> from operator import add
- >>> times = 4
- >>> args = 3, 5
- >>> list(repeatfunc(add, times, *args))
- [8, 8, 8, 8]
-
- If *times* is ``None`` the iterable will not terminate:
-
- >>> from random import randrange
- >>> times = None
- >>> args = 1, 11
- >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP
- [2, 4, 8, 1, 8, 4]
-
- """
- if times is None:
- return starmap(func, repeat(args))
- return starmap(func, repeat(args, times))
-
-
-def _pairwise(iterable):
- """Returns an iterator of paired items, overlapping, from the original
-
- >>> take(4, pairwise(count()))
- [(0, 1), (1, 2), (2, 3), (3, 4)]
-
- On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`.
-
- """
- a, b = tee(iterable)
- next(b, None)
- yield from zip(a, b)
-
-
-try:
- from itertools import pairwise as itertools_pairwise
-except ImportError:
- pairwise = _pairwise
-else:
-
- def pairwise(iterable):
- yield from itertools_pairwise(iterable)
-
- pairwise.__doc__ = _pairwise.__doc__
-
-
-def grouper(iterable, n, fillvalue=None):
- """Collect data into fixed-length chunks or blocks.
-
- >>> list(grouper('ABCDEFG', 3, 'x'))
- [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
-
- """
- if isinstance(iterable, int):
- warnings.warn(
- "grouper expects iterable as first parameter", DeprecationWarning
- )
- n, iterable = iterable, n
- args = [iter(iterable)] * n
- return zip_longest(fillvalue=fillvalue, *args)
-
-
-def roundrobin(*iterables):
- """Yields an item from each iterable, alternating between them.
-
- >>> list(roundrobin('ABC', 'D', 'EF'))
- ['A', 'D', 'E', 'B', 'F', 'C']
-
- This function produces the same output as :func:`interleave_longest`, but
- may perform better for some inputs (in particular when the number of
- iterables is small).
-
- """
- # Recipe credited to George Sakkis
- pending = len(iterables)
- nexts = cycle(iter(it).__next__ for it in iterables)
- while pending:
- try:
- for next in nexts:
- yield next()
- except StopIteration:
- pending -= 1
- nexts = cycle(islice(nexts, pending))
-
-
-def partition(pred, iterable):
- """
- Returns a 2-tuple of iterables derived from the input iterable.
- The first yields the items that have ``pred(item) == False``.
- The second yields the items that have ``pred(item) == True``.
-
- >>> is_odd = lambda x: x % 2 != 0
- >>> iterable = range(10)
- >>> even_items, odd_items = partition(is_odd, iterable)
- >>> list(even_items), list(odd_items)
- ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9])
-
- If *pred* is None, :func:`bool` is used.
-
- >>> iterable = [0, 1, False, True, '', ' ']
- >>> false_items, true_items = partition(None, iterable)
- >>> list(false_items), list(true_items)
- ([0, False, ''], [1, True, ' '])
-
- """
- if pred is None:
- pred = bool
-
- evaluations = ((pred(x), x) for x in iterable)
- t1, t2 = tee(evaluations)
- return (
- (x for (cond, x) in t1 if not cond),
- (x for (cond, x) in t2 if cond),
- )
-
-
-def powerset(iterable):
- """Yields all possible subsets of the iterable.
-
- >>> list(powerset([1, 2, 3]))
- [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
-
- :func:`powerset` will operate on iterables that aren't :class:`set`
- instances, so repeated elements in the input will produce repeated elements
- in the output. Use :func:`unique_everseen` on the input to avoid generating
- duplicates:
-
- >>> seq = [1, 1, 0]
- >>> list(powerset(seq))
- [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)]
- >>> from more_itertools import unique_everseen
- >>> list(powerset(unique_everseen(seq)))
- [(), (1,), (0,), (1, 0)]
-
- """
- s = list(iterable)
- return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
-
-
-def unique_everseen(iterable, key=None):
- """
- Yield unique elements, preserving order.
-
- >>> list(unique_everseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D']
- >>> list(unique_everseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'D']
-
- Sequences with a mix of hashable and unhashable items can be used.
- The function will be slower (i.e., `O(n^2)`) for unhashable items.
-
- Remember that ``list`` objects are unhashable - you can use the *key*
- parameter to transform the list to a tuple (which is hashable) to
- avoid a slowdown.
-
- >>> iterable = ([1, 2], [2, 3], [1, 2])
- >>> list(unique_everseen(iterable)) # Slow
- [[1, 2], [2, 3]]
- >>> list(unique_everseen(iterable, key=tuple)) # Faster
- [[1, 2], [2, 3]]
-
- Similarly, you may want to convert unhashable ``set`` objects with
- ``key=frozenset``. For ``dict`` objects,
- ``key=lambda x: frozenset(x.items())`` can be used.
-
- """
- seenset = set()
- seenset_add = seenset.add
- seenlist = []
- seenlist_add = seenlist.append
- use_key = key is not None
-
- for element in iterable:
- k = key(element) if use_key else element
- try:
- if k not in seenset:
- seenset_add(k)
- yield element
- except TypeError:
- if k not in seenlist:
- seenlist_add(k)
- yield element
-
-
-def unique_justseen(iterable, key=None):
- """Yields elements in order, ignoring serial duplicates
-
- >>> list(unique_justseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D', 'A', 'B']
- >>> list(unique_justseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'A', 'D']
-
- """
- return map(next, map(operator.itemgetter(1), groupby(iterable, key)))
-
-
-def iter_except(func, exception, first=None):
- """Yields results from a function repeatedly until an exception is raised.
-
- Converts a call-until-exception interface to an iterator interface.
- Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel
- to end the loop.
-
- >>> l = [0, 1, 2]
- >>> list(iter_except(l.pop, IndexError))
- [2, 1, 0]
-
- Multiple exceptions can be specified as a stopping condition:
-
- >>> l = [1, 2, 3, '...', 4, 5, 6]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- [7, 6, 5]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- [4, 3, 2]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- []
-
- """
- try:
- if first is not None:
- yield first()
- while 1:
- yield func()
- except exception:
- pass
-
-
-def first_true(iterable, default=None, pred=None):
- """
- Returns the first true value in the iterable.
-
- If no true value is found, returns *default*
-
- If *pred* is not None, returns the first item for which
- ``pred(item) == True`` .
-
- >>> first_true(range(10))
- 1
- >>> first_true(range(10), pred=lambda x: x > 5)
- 6
- >>> first_true(range(10), default='missing', pred=lambda x: x > 9)
- 'missing'
-
- """
- return next(filter(pred, iterable), default)
-
-
-def random_product(*args, repeat=1):
- """Draw an item at random from each of the input iterables.
-
- >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP
- ('c', 3, 'Z')
-
- If *repeat* is provided as a keyword argument, that many items will be
- drawn from each iterable.
-
- >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP
- ('a', 2, 'd', 3)
-
- This is equivalent to taking a random selection from
- ``itertools.product(*args, **kwarg)``.
-
- """
- pools = [tuple(pool) for pool in args] * repeat
- return tuple(choice(pool) for pool in pools)
-
-
-def random_permutation(iterable, r=None):
- """Return a random *r* length permutation of the elements in *iterable*.
-
- If *r* is not specified or is ``None``, then *r* defaults to the length of
- *iterable*.
-
- >>> random_permutation(range(5)) # doctest:+SKIP
- (3, 4, 0, 1, 2)
-
- This is equivalent to taking a random selection from
- ``itertools.permutations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- r = len(pool) if r is None else r
- return tuple(sample(pool, r))
-
-
-def random_combination(iterable, r):
- """Return a random *r* length subsequence of the elements in *iterable*.
-
- >>> random_combination(range(5), 3) # doctest:+SKIP
- (2, 3, 4)
-
- This is equivalent to taking a random selection from
- ``itertools.combinations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(sample(range(n), r))
- return tuple(pool[i] for i in indices)
-
-
-def random_combination_with_replacement(iterable, r):
- """Return a random *r* length subsequence of elements in *iterable*,
- allowing individual elements to be repeated.
-
- >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP
- (0, 0, 1, 2, 2)
-
- This is equivalent to taking a random selection from
- ``itertools.combinations_with_replacement(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(randrange(n) for i in range(r))
- return tuple(pool[i] for i in indices)
-
-
-def nth_combination(iterable, r, index):
- """Equivalent to ``list(combinations(iterable, r))[index]``.
-
- The subsequences of *iterable* that are of length *r* can be ordered
- lexicographically. :func:`nth_combination` computes the subsequence at
- sort position *index* directly, without computing the previous
- subsequences.
-
- >>> nth_combination(range(5), 3, 5)
- (0, 3, 4)
-
- ``ValueError`` will be raised If *r* is negative or greater than the length
- of *iterable*.
- ``IndexError`` will be raised if the given *index* is invalid.
- """
- pool = tuple(iterable)
- n = len(pool)
- if (r < 0) or (r > n):
- raise ValueError
-
- c = 1
- k = min(r, n - r)
- for i in range(1, k + 1):
- c = c * (n - k + i) // i
-
- if index < 0:
- index += c
-
- if (index < 0) or (index >= c):
- raise IndexError
-
- result = []
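- # Decode the index in the combinatorial number system: at each step, skip whole blocks of combinations until the index falls inside one, then emit the corresponding element.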
- while r:
- c, n, r = c * r // n, n - 1, r - 1
- while index >= c:
- index -= c
- c, n = c * (n - r) // n, n - 1
- result.append(pool[-1 - n])
-
- return tuple(result)
-
-
-def prepend(value, iterator):
- """Yield *value*, followed by the elements in *iterator*.
-
- >>> value = '0'
- >>> iterator = ['1', '2', '3']
- >>> list(prepend(value, iterator))
- ['0', '1', '2', '3']
-
- To prepend multiple values, see :func:`itertools.chain`
- or :func:`value_chain`.
-
- """
- return chain([value], iterator)
-
-
-def convolve(signal, kernel):
- """Convolve the iterable *signal* with the iterable *kernel*.
-
- >>> signal = (1, 2, 3, 4, 5)
- >>> kernel = [3, 2, 1]
- >>> list(convolve(signal, kernel))
- [3, 8, 14, 20, 26, 14, 5]
-
- Note: the input arguments are not interchangeable, as the *kernel*
- is immediately consumed and stored.
-
- """
- kernel = tuple(kernel)[::-1]
- n = len(kernel)
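- # A fixed-length deque of zeros acts as the sliding window; padding the signal with n - 1 trailing zeros flushes out the final partial products.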
- window = deque([0], maxlen=n) * n
- for x in chain(signal, repeat(0, n - 1)):
- window.append(x)
- yield sum(map(operator.mul, kernel, window))
-
-
-def before_and_after(predicate, it):
- """A variant of :func:`takewhile` that allows complete access to the
- remainder of the iterator.
-
- >>> it = iter('ABCdEfGhI')
- >>> all_upper, remainder = before_and_after(str.isupper, it)
- >>> ''.join(all_upper)
- 'ABC'
- >>> ''.join(remainder) # takewhile() would lose the 'd'
- 'dEfGhI'
-
- Note that the first iterator must be fully consumed before the second
- iterator can generate valid results.
- """
- it = iter(it)
- transition = []
-
- def true_iterator():
- for elem in it:
- if predicate(elem):
- yield elem
- else:
- transition.append(elem)
- return
-
- def remainder_iterator():
- yield from transition
- yield from it
-
- return true_iterator(), remainder_iterator()
-
-
-def triplewise(iterable):
- """Return overlapping triplets from *iterable*.
-
- >>> list(triplewise('ABCDE'))
- [('A', 'B', 'C'), ('B', 'C', 'D'), ('C', 'D', 'E')]
-
- """
- for (a, _), (b, c) in pairwise(pairwise(iterable)):
- yield a, b, c
-
-
-def sliding_window(iterable, n):
- """Return a sliding window of width *n* over *iterable*.
-
- >>> list(sliding_window(range(6), 4))
- [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]
-
- If *iterable* has fewer than *n* items, then nothing is yielded:
-
- >>> list(sliding_window(range(3), 4))
- []
-
- For a variant with more features, see :func:`windowed`.
- """
- it = iter(iterable)
- window = deque(islice(it, n), maxlen=n)
- if len(window) == n:
- yield tuple(window)
- for x in it:
- window.append(x)
- yield tuple(window)
diff --git a/spaces/tomofi/MMOCR/docs/zh_cn/conf.py b/spaces/tomofi/MMOCR/docs/zh_cn/conf.py
deleted file mode 100644
index 5b2e21343250ffbebc4bac476614da28e09d2bdd..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/docs/zh_cn/conf.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-
-import os
-import subprocess
-import sys
-
-import pytorch_sphinx_theme
-
-sys.path.insert(0, os.path.abspath('../../'))
-
-# -- Project information -----------------------------------------------------
-
-project = 'MMOCR'
-copyright = '2020-2030, OpenMMLab'
-author = 'OpenMMLab'
-
-# The full version, including alpha/beta/rc tags
-version_file = '../../mmocr/version.py'
-with open(version_file, 'r') as f:
- exec(compile(f.read(), version_file, 'exec'))
-__version__ = locals()['__version__']
-release = __version__
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
- 'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
- 'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
-]
-
-autodoc_mock_imports = ['mmcv._ext']
-
-# Ignore >>> when copying code
-copybutton_prompt_text = r'>>> |\.\.\. '
-copybutton_prompt_is_regexp = True
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# The suffix(es) of source filenames.
-# You can specify multiple suffix as a list of string:
-#
-source_suffix = {
- '.rst': 'restructuredtext',
- '.md': 'markdown',
-}
-
-# The master toctree document.
-master_doc = 'index'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-# html_theme = 'sphinx_rtd_theme'
-html_theme = 'pytorch_sphinx_theme'
-html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
-html_theme_options = {
- 'logo_url':
- 'https://mmocr.readthedocs.io/zh_CN/latest',
- 'menu': [
- {
- 'name':
- '教程',
- 'url':
- 'https://colab.research.google.com/github/'
- 'open-mmlab/mmocr/blob/main/demo/MMOCR_Tutorial.ipynb'
- },
- {
- 'name': 'GitHub',
- 'url': 'https://github.com/open-mmlab/mmocr'
- },
- {
- 'name':
- '上游库',
- 'children': [
- {
- 'name': 'MMCV',
- 'url': 'https://github.com/open-mmlab/mmcv',
- 'description': '基础视觉库'
- },
- {
- 'name': 'MMDetection',
- 'url': 'https://github.com/open-mmlab/mmdetection',
- 'description': '目标检测工具箱'
- },
- ]
- },
- ],
- # Specify the language of shared menu
- 'menu_lang':
- 'cn',
-}
-
-language = 'zh_CN'
-
-master_doc = 'index'
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-html_css_files = ['css/readthedocs.css']
-
-# Enable ::: fences for myst_parser
-myst_enable_extensions = ['colon_fence']
-
-
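-# Run the helper scripts (copy original docs, merge docs, generate stats) once the Sphinx builder has been initialised.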
-def builder_inited_handler(app):
- subprocess.run(['./cp_origin_docs.sh'])
- subprocess.run(['./merge_docs.sh'])
- subprocess.run(['./stats.py'])
-
-
-def setup(app):
- app.connect('builder-inited', builder_inited_handler)
diff --git a/spaces/tomofi/MMOCR/mmocr/models/textrecog/recognizer/nrtr.py b/spaces/tomofi/MMOCR/mmocr/models/textrecog/recognizer/nrtr.py
deleted file mode 100644
index 36096bedc6f65d250a9af41b4970e5ccaea51301..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/mmocr/models/textrecog/recognizer/nrtr.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmocr.models.builder import RECOGNIZERS
-from .encode_decode_recognizer import EncodeDecodeRecognizer
-
-
-@RECOGNIZERS.register_module()
-class NRTR(EncodeDecodeRecognizer):
- """Implementation of `NRTR `_"""
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_3x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_3x_coco.py
deleted file mode 100644
index 66834f08ba398e7621aa8c5a3bfe12a646aecde2..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_3x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py'
-
-# learning policy
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/tonyassi/video-face-swap/DeepFakeAI/metadata.py b/spaces/tonyassi/video-face-swap/DeepFakeAI/metadata.py
deleted file mode 100644
index 39b16362cdd2cb5464ce32dcd270fc8e15f6251b..0000000000000000000000000000000000000000
--- a/spaces/tonyassi/video-face-swap/DeepFakeAI/metadata.py
+++ /dev/null
@@ -1,13 +0,0 @@
-METADATA =\
-{
- 'name': 'DeepFakeAI',
- 'description': 'Next generation face swapper and enhancer',
- 'version': '1.0.0',
- 'license': 'MIT',
- 'author': 'Ashiq Hussain Mir',
- 'url': 'https://codegenius.me'
-}
-
-
-def get(key : str) -> str:
- return METADATA[key]
diff --git a/spaces/trttung1610/musicgen/audiocraft/modules/lstm.py b/spaces/trttung1610/musicgen/audiocraft/modules/lstm.py
deleted file mode 100644
index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000
--- a/spaces/trttung1610/musicgen/audiocraft/modules/lstm.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from torch import nn
-
-
-class StreamableLSTM(nn.Module):
- """LSTM without worrying about the hidden state, nor the layout of the data.
- Expects input as convolutional layout.
- """
- def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True):
- super().__init__()
- self.skip = skip
- self.lstm = nn.LSTM(dimension, dimension, num_layers)
-
- def forward(self, x):
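- # Input uses the convolutional layout [B, C, T]; permute to [T, B, C] for nn.LSTM, then back to [B, C, T] at the end.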
- x = x.permute(2, 0, 1)
- y, _ = self.lstm(x)
- if self.skip:
- y = y + x
- y = y.permute(1, 2, 0)
- return y
diff --git a/spaces/tsi-org/LLaVA/llava/model/multimodal_encoder/builder.py b/spaces/tsi-org/LLaVA/llava/model/multimodal_encoder/builder.py
deleted file mode 100644
index 2b13589d4e55af529fe0838c4130c2033ac10478..0000000000000000000000000000000000000000
--- a/spaces/tsi-org/LLaVA/llava/model/multimodal_encoder/builder.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import os
-from .clip_encoder import CLIPVisionTower
-
-
-def build_vision_tower(vision_tower_cfg, **kwargs):
- vision_tower = getattr(vision_tower_cfg, 'mm_vision_tower', getattr(vision_tower_cfg, 'vision_tower', None))
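- # Accept either a local checkpoint path or a hub model name from the supported providers (openai/laion CLIP variants).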
- is_absolute_path_exists = os.path.exists(vision_tower)
- if is_absolute_path_exists or vision_tower.startswith("openai") or vision_tower.startswith("laion"):
- return CLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
-
- raise ValueError(f'Unknown vision tower: {vision_tower}')
diff --git a/spaces/tsi-org/LLaVA/scripts/sqa_eval_batch.sh b/spaces/tsi-org/LLaVA/scripts/sqa_eval_batch.sh
deleted file mode 100644
index adbf46ef7a6e86181b5927002597ef786add5bde..0000000000000000000000000000000000000000
--- a/spaces/tsi-org/LLaVA/scripts/sqa_eval_batch.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-
-CHUNKS=8
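-# Split the ScienceQA test set into $CHUNKS chunks and evaluate each chunk on its own GPU as a background process.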
-for IDX in {0..7}; do
- CUDA_VISIBLE_DEVICES=$IDX python -m llava.eval.model_vqa_science \
- --model-path liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3 \
- --question-file ~/haotian/datasets/ScienceQA/data/scienceqa/llava_test_QCM-LEA.json \
- --image-folder ~/haotian/datasets/ScienceQA/data/scienceqa/images/test \
- --answers-file ./test_llava-13b-chunk${CHUNKS}_${IDX}.jsonl \
- --num-chunks $CHUNKS \
- --chunk-idx $IDX \
- --conv-mode llava_v1 &
-done
diff --git a/spaces/tumuyan/vits-miki/attentions.py b/spaces/tumuyan/vits-miki/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/tumuyan/vits-miki/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
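- # Learned relative-position embeddings for keys and values, covering offsets in [-window_size, window_size]; shared across heads when heads_share is True.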
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
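- # Masked positions receive a large negative score (-1e4 rather than -inf so the value stays finite in fp16), which softmax turns into near-zero weight.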
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first, then slice, to avoid conditional ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
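- # slice out the 2*length-1 relative positions centered on offset zero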
- used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Pad extra elements so the flattened tensor reshapes to (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # prepend zeros so the reshape skews each row into relative-position indexing
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/ucalyptus/PTI/models/e4e/encoders/helpers.py b/spaces/ucalyptus/PTI/models/e4e/encoders/helpers.py
deleted file mode 100644
index cf31d3c16b1d2df4c34390d5aa1141398a4aa5cd..0000000000000000000000000000000000000000
--- a/spaces/ucalyptus/PTI/models/e4e/encoders/helpers.py
+++ /dev/null
@@ -1,140 +0,0 @@
-from collections import namedtuple
-import torch
-import torch.nn.functional as F
-from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module
-
-"""
-ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Flatten(Module):
- def forward(self, input):
- return input.view(input.size(0), -1)
-
-
-def l2_norm(input, axis=1):
- norm = torch.norm(input, 2, axis, True)
- output = torch.div(input, norm)
- return output
-
-
-class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])):
- """ A named tuple describing a ResNet block. """
-
-
-def get_block(in_channel, depth, num_units, stride=2):
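- # the first unit applies the given stride; the remaining units use stride 1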
- return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)]
-
-
-def get_blocks(num_layers):
- if num_layers == 50:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=4),
- get_block(in_channel=128, depth=256, num_units=14),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 100:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=13),
- get_block(in_channel=128, depth=256, num_units=30),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 152:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=8),
- get_block(in_channel=128, depth=256, num_units=36),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- else:
- raise ValueError("Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers))
- return blocks
-
-
-class SEModule(Module):
- def __init__(self, channels, reduction):
- super(SEModule, self).__init__()
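- # squeeze-and-excitation: global average pooling followed by a 1x1-conv bottleneck that gates the channels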
- self.avg_pool = AdaptiveAvgPool2d(1)
- self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False)
- self.relu = ReLU(inplace=True)
- self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False)
- self.sigmoid = Sigmoid()
-
- def forward(self, x):
- module_input = x
- x = self.avg_pool(x)
- x = self.fc1(x)
- x = self.relu(x)
- x = self.fc2(x)
- x = self.sigmoid(x)
- return module_input * x
-
-
-class bottleneck_IR(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR, self).__init__()
- if in_channel == depth:
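- # identity-like shortcut: a 1x1 max-pool only applies the stride when the channel count is unchanged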
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-class bottleneck_IR_SE(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR_SE, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False),
- PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False),
- BatchNorm2d(depth),
- SEModule(depth, 16)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-def _upsample_add(x, y):
- """Upsample and add two feature maps.
- Args:
- x: (Variable) top feature map to be upsampled.
- y: (Variable) lateral feature map.
- Returns:
- (Variable) added feature map.
- Note: in PyTorch, when the input size is odd, the feature map upsampled
- with `F.upsample(..., scale_factor=2, mode='nearest')`
- may not match the size of the lateral feature map.
- e.g.
- original input size: [N,_,15,15] ->
- conv2d feature map size: [N,_,8,8] ->
- upsampled feature map size: [N,_,16,16]
- So we choose bilinear upsample which supports arbitrary output sizes.
- """
- _, _, H, W = y.size()
- return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Black Mesa Announcement System Text To Speech.md b/spaces/usbethFlerru/sovits-modelsV2/example/Black Mesa Announcement System Text To Speech.md
deleted file mode 100644
index f280ec7a2594d70e5aa393793455dd283040de0c..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Black Mesa Announcement System Text To Speech.md
+++ /dev/null
@@ -1,6 +0,0 @@
-