diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Arcsoft Totalmedia 3.5 Key Keygenl.md b/spaces/1gistliPinn/ChatGPT4/Examples/Arcsoft Totalmedia 3.5 Key Keygenl.md deleted file mode 100644 index aeb480c68e96e35895fa1bb571abaf4bea758b77..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Arcsoft Totalmedia 3.5 Key Keygenl.md +++ /dev/null @@ -1,8 +0,0 @@ -
-

ArcSoft TotalMedia is a professional multimedia player that helps you browse, edit, and convert your photos, videos, and other multimedia files. The software also lets you rip CDs to audio, add effects, watermark, and resize your media files.

-

ArcSoft TotalMedia is a multimedia player that supports a wide range of file types, including photographs, movies, and music. This adaptable program comes in handy whenever you want to be entertained. If your computer has a TV card, ArcSoft TotalMedia can connect to it so you can watch your favorite shows while using the program.

-

Arcsoft Totalmedia 3.5 Key Keygenl


Download ✶✶✶ https://imgfil.com/2uxXME



-

ArcSoft TotalMedia is a professional video player that can convert DVDs and low-resolution videos to near-high-resolution quality. It also makes it simple to search for and share YouTube videos, as well as manage all of the program's media files. It's a versatile and user-friendly media hub that allows you to watch and record TV shows, edit photos and videos, listen to music, and more. It's a comprehensive software suite that gives users access to a variety of tools for handling multimedia files. Everything you need to organize and play your films, photographs, and music is included in the software. It also includes advanced editing capabilities, allowing you to make your own multimedia productions.

-

While ArcSoft TotalMedia has undergone many changes over the years, it remains a popular multimedia application among both home users and professionals. The software's ease of use and wide range of features make it an ideal choice for those who want to view, create, and edit various types of media files.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bizarre Soft Pachet Legislativ Auto How to Pass the Driving Test with Ease Using This Software.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bizarre Soft Pachet Legislativ Auto How to Pass the Driving Test with Ease Using This Software.md deleted file mode 100644 index 9871f70059272bd98eb2fc7ec77a10f22440adb4..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bizarre Soft Pachet Legislativ Auto How to Pass the Driving Test with Ease Using This Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

Bizarre Soft Pachet Legislativ Auto


Download Ziphttps://imgfil.com/2uxZRV



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Constantine2fullmoviedownload LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Constantine2fullmoviedownload LINK.md deleted file mode 100644 index cfb3d5f4d4ed06c98d234db85a9633f861b3d6f3..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Constantine2fullmoviedownload LINK.md +++ /dev/null @@ -1,8 +0,0 @@ -

constantine2fullmoviedownload


DOWNLOADhttps://imgfil.com/2uxXnR



- -Constantine: City of Demons. Genre: Adventure. Director: Greg Beeman. Starring: Keanu Reeves, Gary Oldman, Jennifer Reeves, Oliver Platt. Genres: Comedy, Drama, Horror, Sci-Fi. Runtime: 92 min. Warner Bros. Pictures. Posted in: Horror. Video quality: 480p. On the streets of Los Angeles a psychic is beaten to death. Police detectives discover a mysterious series of occult symbols scrawled in blood on the body. The investigators believe the ritual killing has something to do with a series of vicious animal attacks that have begun to plague the city. An occult crime beat cop and a medical examiner are drawn into a battle with a dark force that is out to destroy the city. When the cops try to shut down the underground nightclub where the ritual killings are taking place, they find that their efforts are blocked. Worse, an enemy with a long memory begins to stalk Constantine with a deep hatred for what he is. A horror-comedy that is filled with gore, occult symbolism, and horrific deaths. Constantine has been called one of the most influential comic book heroes in the medium. His most notable creations include the DC Comics character John Constantine, the main character in the 2004 live action film Constantine, the comic book character Purgatori and her seer girlfriend Mina, who play a major role in the crossover series Constantine vs. The Mute. The character is the reincarnation of John Constantine, the protagonist of the Hellblazer comic book series, created by Jamie Delano, Pat Mills, and John Romita, Jr. and published by DC Comics since January 1992. John Constantine has since made cameo appearances in Batman Begins and The Dark Knight, and became the main protagonist of the DC Animated Universe, voicing himself in Justice League: The New Animated Series. John Constantine's place in the DC Universe is comparable to that of Batman's in the DC Extended Universe. 
The Hellblazer has made appearances in several of the animated series featuring Batman. Constantine is one of many characters who served as inspiration for the Dark Knight. The character was created by writer Alan Moore and artist Alan Davis in a 1987 issue of the DC Comics anthology comic book series Doom Patrol. - -John Constantine: Hellblazer - The Soul Play. Constantine: City of Demons - The Movie. Director: Greg Beeman 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cours Revit Architecture Gratuit Pdf.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cours Revit Architecture Gratuit Pdf.md deleted file mode 100644 index b9221a910e898a45570b6f8cae86b557662792e6..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Cours Revit Architecture Gratuit Pdf.md +++ /dev/null @@ -1,10 +0,0 @@ - -

id="78525">[PDF] Formation Revit BIM Architecture, 6 days — 7 Mar 2019 · REVIT BIM ARCHITECTURE TRAINING, 6 DAYS: for architects moving to the digital building model. CARRY OUT SIMPLE MODELLING IN
0-190121_programme_revit_architecture_4j_2j_.pdf

-

id="72258">[PDF] User guide (Guide d'utilisation) - Autodesk — Permission is hereby granted, free of charge, to any person obtaining a copy of this software. How does Revit Architecture handle updates?
revit_architecture_2011_user_guide_fra.pdf

-

Cours Revit Architecture Gratuit Pdf


Downloadhttps://imgfil.com/2uy0bK



-

id="14761">[PDF] Autodesk® Revit® - MEP and architecture — Modelling and presentation of building systems: the modelling and presentation tools of the Revit MEP software make it possible to
revit_mep_overview_brochure_a4_fr.pdf

-

cours revit architecture gratuit pdf cours revit architecture gratuit pdf , cours de revit architecture gratuit, cours revit architecture 2017 gratuit pdf , cours revit structure gratuit pdf Note: This video uses Dynamo inside of the Revit application, but the In this lesson we will introduce the building mass for the course, demonstrate its
Cours-revit-architecture-gratuit-pdf.pdf

-

Yes. Educators and students can get free educational access for 1 year, renewable as long as you remain eligible.

🏅 What are the best resources to learn Autodesk Revit? You can enroll in the best online Revit training classes listed above. Some courses also offer a free trial for a particular month; it is good to try them before you move to paid classes.

💻 Which laptop is best for Revit? You can use the following laptops to get the best experience in working with Revit:

10 BEST Revit Courses & Online Training Classes (2023) — by Alyssa Walker, updated January 28, 2023.

Autodesk Revit is modeling software for 3D architectural modeling and BIM drafting for architects, landscape architects, structural engineers, MEP (Mechanical, Electrical, and Plumbing) engineers, designers, and contractors.

-

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Kings MOD APK 2022 The Most Epic War Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Kings MOD APK 2022 The Most Epic War Game Ever.md deleted file mode 100644 index f677b9b309579439c0443fe8aae9f2264194cdbc..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Kings MOD APK 2022 The Most Epic War Game Ever.md +++ /dev/null @@ -1,97 +0,0 @@ - -

Clash of Kings Mod APK 2022: The Ultimate Guide

-

If you are looking for a thrilling and immersive strategy game that lets you build your own kingdom, wage war against your enemies, and forge alliances with other players, then you should definitely check out Clash of Kings. And if you want to enjoy the game with unlimited money, resources, VIP features, and more, then you should download Clash of Kings Mod APK 2022. In this article, we will tell you everything you need to know about this amazing modded version of the game, including how to download and install it, how to play it, and what are the benefits of using it. So, without further ado, let's get started!

-

What is Clash of Kings?

-

A brief introduction to the game and its features

-

Clash of Kings is a popular online multiplayer strategy game developed by Elex Wireless. The game was released in 2014 and has since attracted millions of players from all over the world. The game is set in a medieval fantasy world where you can create your own kingdom, build your castle, train your army, and fight against other players and kingdoms. You can also join an alliance with other players, chat with them, trade with them, and cooperate with them in various events and quests. The game has stunning graphics, realistic sound effects, and smooth gameplay that will keep you hooked for hours.

-

clash of kings mod apk 2022


DOWNLOADhttps://urlin.us/2uSRML



-

Why you should play Clash of Kings in 2022

-

Clash of Kings is not just a game, it's a community. You can interact with other players from different countries and cultures, make new friends, and learn new things. You can also challenge yourself by competing with other players in different modes, such as PvP battles, alliance wars, kingdom wars, dragon campaigns, etc. You can also customize your kingdom according to your preferences, by choosing from different buildings, decorations, troops, heroes, dragons, etc. The game is constantly updated with new features, events, and content that will keep you entertained and engaged. Whether you are a casual player or a hardcore gamer, you will find something to enjoy in Clash of Kings.

-

What is Clash of Kings Mod APK?

-

How to download and install Clash of Kings Mod APK

-

Clash of Kings Mod APK is a modified version of the original game that gives you access to unlimited money, resources, VIP features, and more. You can download it for free from various websites on the internet. However, you need to be careful when downloading it, as some websites may contain viruses or malware that can harm your device. To download Clash of Kings Mod APK safely and securely, follow these steps:

-

-
    -
  1. Go to [this website](^1^) and click on the download button.
  2. Wait for the download to finish and then open the downloaded file.
  3. Allow installation from unknown sources if prompted by your device.
  4. Follow the instructions on the screen to install the mod apk.
  5. Launch the game and enjoy!
-

What are the benefits of using Clash of Kings Mod APK

-

Unlimited money and resources

-

One of the main benefits of using Clash of Kings Mod APK is that you can get unlimited money and resources in the game. You can use them to buy anything you want, such as buildings, troops, items, etc. You can also upgrade your castle, army, and heroes faster and easier. You don't have to worry about running out of resources or waiting for them to generate. You can enjoy the game without any limitations or restrictions.

-

Free VIP and premium features

-

Another benefit of using Clash of Kings Mod APK is that you can get free VIP and premium features in the game. You can unlock all the VIP levels and perks, such as faster construction, research, training, healing, etc. You can also access all the premium features, such as exclusive skins, decorations, heroes, dragons, etc. You don't have to spend any real money or watch any ads to get these features. You can enhance your gaming experience and have more fun with Clash of Kings Mod APK.

-

No ads and no root required

-

The last benefit of using Clash of Kings Mod APK is that you can play the game without any ads and without rooting your device. You don't have to deal with annoying ads that pop up every now and then and interrupt your gameplay. You also don't have to root your device or risk damaging it to use the mod apk. You can play the game safely and smoothly with Clash of Kings Mod APK.

-

How to play Clash of Kings Mod APK

-

Tips and tricks for beginners

-

If you are new to Clash of Kings or strategy games in general, you might need some tips and tricks to help you get started. Here are some of them:

-

Build and upgrade your castle

-

Your castle is the heart of your kingdom and the base of your operations. You should always build and upgrade your castle as much as possible. Your castle level determines what other buildings you can build and upgrade, as well as your kingdom's power and prestige. You should also build and upgrade other buildings that provide you with resources, troops, technologies, etc. You should balance your development between economy, military, and defense.

-

Train and recruit your army

-

Your army is your main force for fighting and conquering other kingdoms. You should always train and recruit your army as much as possible. Your army consists of different types of troops, such as infantry, cavalry, archers, siege engines, etc. Each type has its own strengths and weaknesses, so you should use them wisely according to the situation. You should also recruit heroes and dragons to lead your army and boost their performance. Heroes and dragons have special skills and abilities that can turn the tide of battle.

-

Join an alliance and participate in events

-

Clash of Kings is not a solo game, it's a social game. You should join an alliance with other players who share your goals and interests. You can chat with them, trade with them, help them, and cooperate with them in various events and quests. You can also participate in alliance wars, kingdom wars, dragon campaigns, etc., where you can fight alongside your allies against other alliances or kingdoms. You can earn rewards, honor, and glory by participating in these events.

-

Strategies and tactics for advanced players

-

If you are an experienced player of Clash of Kings or strategy games in general, you might need some strategies and tactics to help you improve your skills. Here are some of them:

-

Explore and conquer the map

-

The map of Clash of Kings is vast and diverse. You should explore it and conquer it as much as possible. You can find various resources, monsters, rebels, treasures, etc., on the map that can benefit you. You can also attack other players' castles or resource points to loot their resources or capture their lands. You should scout before you attack to know your enemy's strengths and weaknesses. You should also use different formations and strategies depending on the terrain and situation.

-

Research and craft new technologies

-

Technology is the key to progress and power in Clash of Kings. You should research and craft new technologies as much as possible. You can research technologies in different fields, such as economy, military, defense, etc. You can also craft new items, such as weapons, armor, accessories, etc. These technologies and items can improve your kingdom's efficiency, productivity, and combat power. You should prioritize the technologies and items that suit your playstyle and goals.

-

Challenge other players and kingdoms

-

Clash of Kings is a competitive game where you can challenge other players and kingdoms for supremacy and glory. You can challenge other players in different modes, such as PvP battles, arena battles, lord trials, etc. You can also challenge other kingdoms in different modes, such as kingdom wars, cross-server wars, world wars, etc. You can earn rewards, rankings, and titles by challenging other players and kingdoms. You should always be prepared and confident when you challenge others.

-

Conclusion

-

Summary of the main points

-

Clash of Kings is a fantastic strategy game that lets you build your own kingdom, wage war against your enemies, and forge alliances with other players. Clash of Kings Mod APK is a modified version of the game that gives you unlimited money, resources, VIP features, and more. You can download it for free from [this website] and install it on your device easily. You can play the game with more fun and excitement with Clash of Kings Mod APK.

-

FAQs

-

Here are some frequently asked questions about Clash of Kings Mod APK:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Free Uplay and Connect with Ubisoft Players Worldwide.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Free Uplay and Connect with Ubisoft Players Worldwide.md deleted file mode 100644 index 7531df01acacc9dc8b2ab8e517b392655f959402..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Free Uplay and Connect with Ubisoft Players Worldwide.md +++ /dev/null @@ -1,128 +0,0 @@ - -

How to Download Free Uplay and Enjoy Ubisoft Games on PC

-

If you are a fan of Ubisoft games, such as Assassin's Creed, Far Cry, Watch Dogs, and more, you might want to download free Uplay and play them on your PC. Uplay is Ubisoft's platform for PC games, where you can access, download, and launch your favorite titles. Uplay also offers many benefits for players, such as rewards, achievements, stats, friends, and more. In this article, we will show you how to download free Uplay and how to access and play free games on it.

-

download free uplay


Download Zip ✒ ✒ ✒ https://urlin.us/2uT0vq



-

What is Uplay and Why You Need It

-

Uplay is Ubisoft's platform for PC games

-

Uplay is a digital distribution, digital rights management, multiplayer, and communications service created by Ubisoft. It allows you to buy, download, and play Ubisoft games on your PC. You can also access it on your mobile device or console, through a dedicated app or directly from your games. All you need to log in is a Ubisoft account.

-

Uplay offers many benefits for players

-

Uplay is not just a launcher for your games. It also provides you with many features and services that enhance your gaming experience. Some of them are:

- -

How to Download and Install Uplay for Free

-

Visit the Ubisoft Connect website and click on Download for PC

-

The first step to download free Uplay is to visit the Ubisoft Connect website. This is the official website for Uplay, where you can learn more about its features and services. On the homepage, you will see a button that says "Download for PC". Click on it and you will be redirected to a page where you can download the Uplay installer.

-

Run the installer and follow the instructions

-

The next step is to run the Uplay installer that you have downloaded. The installer is a small file that will guide you through the installation process. You will need to agree to the terms of service, choose a destination folder, and create a desktop shortcut. The installation should take only a few minutes.

-

Create or log in to your Ubisoft account

-

The final step is to create or log in to your Ubisoft account. This is the account that you will use to access Uplay and all its features. If you already have an account, you can simply enter your email and password. If you don't have an account yet, you can create one by clicking on "Create a Ubisoft Account". You will need to provide some basic information, such as your name, email , and a password. You will also need to verify your email address by clicking on a link that will be sent to you. Once you have created or logged in to your Ubisoft account, you are ready to use Uplay.

-

How to Access and Play Free Games on Uplay

-

Browse the library of free games on Uplay

-

One of the best things about Uplay is that it offers a library of free games that you can play on your PC. These games are either free-to-play, meaning that they are always free and supported by optional in-game purchases, or free trials, meaning that they are available for a limited time or with some restrictions. You can browse the library of free games on Uplay by clicking on the "Free Games" tab on the top menu. You will see a list of games that you can filter by genre, platform, or availability. Some of the most popular free games on Uplay are:

-

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Game | Genre | Description |
|------|-------|-------------|
| Tom Clancy's The Division 2 | Action RPG | A post-apocalyptic shooter set in Washington D.C., where you must fight against factions and rogue agents to restore order. |
| Hyper Scape | Battle Royale | A futuristic urban battle royale where you can hack the environment and use unique abilities to survive. |
| Brawlhalla | Fighting | A platform fighter where you can choose from over 50 legends and compete in online and offline modes. |
| Trackmania | Racing | A racing game where you can create and share your own tracks and challenge other players. |
| Might & Magic: Chess Royale | Strategy | A fast-paced auto-battler where you can recruit and upgrade heroes from the Might & Magic universe. |
-

Download and launch the games you want to play

-

To download and play a free game on Uplay, you just need to click on the game's icon and then click on "Play Now". You will be prompted to download the game files if you haven't already. The download time will depend on your internet speed and the size of the game. Once the download is complete, you can launch the game from Uplay or from your desktop shortcut. You will be able to access all the features and services of Uplay while playing, such as rewards, achievements, friends, stats, and more.

-

Enjoy the features and rewards of Uplay

-

As you play free games on Uplay, you will be able to enjoy the features and rewards that Uplay offers. For example, you will be able to earn Units by completing challenges and achievements in your games. You can then redeem these Units for unique rewards, such as outfits, weapons, skins, wallpapers, and more. You can also use 100 Units to get a 20% off coupon for your next purchase in the Ubisoft Store. You will also be able to access your game progress across devices, thanks to the cross-platform progression system. You will also be able to see your stats and tips for each game, as well as compare them with your friends and other players. You will also be able to stay updated on the latest news and events for your favorite games, as well as participate in beta tests and test servers for upcoming games and updates.

-

Conclusion

-

In conclusion, Uplay is a great platform for PC gamers who love Ubisoft games. It allows you to download free Uplay and access a library of free games that you can play on your PC. It also offers many features and benefits that enhance your gaming experience, such as rewards, achievements, stats, friends, news, events, and more. To download free Uplay, all you need to do is visit the Ubisoft Connect website, download and install the Uplay installer, and create or log in to your Ubisoft account. Then, you can browse the library of free games on Uplay, download and launch the games you want to play, and enjoy the features and rewards of Uplay.

-

FAQs

-

Q: Is Uplay safe to download?

-

A: Yes, Uplay is safe to download. It is an official service from Ubisoft that has been around since 2009. It does not contain any viruses or malware that could harm your PC.

-

Q: Do I need an internet connection to play games on Uplay?

-

A: Yes, you need an internet connection to play games on Uplay. This is because Uplay needs to verify your Ubisoft account and your game licenses. It also needs to sync your game progress and your Units. However, some games may have an offline mode that allows you to play without an internet connection. You can check the game's description or settings to see if it has an offline mode.

-

Q: Can I play Uplay games on other platforms?

-

A: Yes, you can play Uplay games on other platforms, such as mobile devices or consoles. You can download the Uplay app on your device or access Uplay directly from your games. You will be able to log in with your Ubisoft account and access the same features and services as on PC. However, some games may not be available on all platforms, so you will need to check the game's compatibility before buying or downloading it.

-

Q: How can I contact Uplay support if I have any issues?

-

A: If you have any issues with Uplay, such as technical problems, account issues, payment issues, or game issues, you can contact Uplay support by visiting the Ubisoft Support website. You can browse the FAQ section for common questions and answers, or you can submit a support ticket with your issue details. You can also chat with a support agent online or call them by phone. You will need to provide your Ubisoft account information and your game information when contacting Uplay support.

-

Q: How can I uninstall Uplay from my PC?

-

A: If you want to uninstall Uplay from your PC, you can do so by following these steps:

-
  1. Close Uplay and any games that are running on it.
  2. Go to the Control Panel and click on Programs and Features.
  3. Find Uplay in the list of programs and click on Uninstall.
  4. Follow the instructions on the screen to complete the uninstallation process.
  5. Delete any remaining files or folders related to Uplay from your PC.
-

Note that uninstalling Uplay will not delete your Ubisoft account or your game progress. You can still access them by logging in to Uplay on another device or platform.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Dynamons World APK Hack and Train Your Monsters Like a Pro.md b/spaces/1phancelerku/anime-remove-background/Download Dynamons World APK Hack and Train Your Monsters Like a Pro.md deleted file mode 100644 index b0191af98aeb527e5212d7ead17c3b1b45112386..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Dynamons World APK Hack and Train Your Monsters Like a Pro.md +++ /dev/null @@ -1,110 +0,0 @@ -
-

Dynamons World APK Download Hack: How to Get Unlimited Money and More

-

If you are a fan of RPG games with cute and powerful monsters, you might have heard of Dynamons World. This game is a popular online multiplayer game where you can catch, train, and battle with dozens of unique Dynamons. But what if you want to get unlimited money, unlock all the content, and remove the annoying ads? Well, you can do that by downloading the Dynamons World APK hack. In this article, we will show you how to download and install the Dynamons World APK hack, what features it offers, and some tips and tricks for playing the game.

-

Introduction

-

What is Dynamons World?

-

Dynamons World is an online RPG in which you catch, train, and battle with dozens of unique Dynamons.

-

dynamons world apk download hack


Download File: https://jinyurl.com/2uNO7y



- -

Why download Dynamons World APK hack?

-

Dynamons World is a fun and addictive game, but it also has some limitations. For example, you need to earn money by winning battles or watching ads to buy items, upgrade your Dynamons, or unlock new content. You also have to deal with ads that pop up every now and then, which can be annoying and distracting. If you want to enjoy the game without these restrictions, you can download the Dynamons World APK hack. This is a modified version of the game that gives you unlimited money, unlocked content, and removed ads. This way, you can play the game with more freedom and convenience.

-

How to download and install Dynamons World APK hack

-

Step 1: Find a reliable source

-

The first thing you need to do is to find a reliable source where you can download the Dynamons World APK hack. There are many websites that offer this file, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or fake files that can harm your device or steal your personal information. To avoid this, you should do some research and check the reviews and ratings of the website before downloading anything. You can also use a trusted antivirus program to scan the file before installing it.

-

Step 2: Enable unknown sources

-

The next thing you need to do is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the official app store. Since the Dynamons World APK hack is not available on the Google Play Store or the App Store, you need to enable this option to install it. To do this, go to your device settings, then security or privacy, then unknown sources or install unknown apps. Toggle on the switch or check the box to allow installation from unknown sources.

-

Step 3: Download and install the APK file

-

The third thing you need to do is to download and install the APK file itself. Once you have found a reliable source and enabled unknown sources, follow these steps:

-
  1. Go to the website where you found the Dynamons World APK hack and click on the download button.
  2. Wait for the download to finish and locate the file on your device.
  3. Tap on the file and follow the instructions to install it.
  4. Wait for the installation to complete and grant any permissions that the app may request.
-

Step 4: Launch the game and enjoy

-

The last thing you need to do is to launch the game and enjoy. To do this, simply tap on the game icon on your device and start playing. You will notice that you have unlimited money, unlocked content, and removed ads. You can use these features to buy items, upgrade your Dynamons, or access new content. You can also play online matches with other players without any interruptions. Have fun with your Dynamons World APK hack!

-

Features of Dynamons World APK hack

-

Unlimited money

-

One of the main features of the Dynamons World APK hack is that it gives you unlimited money. Money is the currency of the game that you can use to buy items, upgrade your Dynamons, or unlock new content. Normally, you have to earn money by winning battles or watching ads, which can be time-consuming and tedious. But with the Dynamons World APK hack, you don't have to worry about that. You can get as much money as you want without any limits. You can spend it on anything you like and never run out of it.

-

Unlocked content

-

Another feature of the Dynamons World APK hack is that it unlocks all the content of the game. Content refers to the Dynamons, items, boosters, areas, and modes that you can access in the game. Normally, you have to unlock them by completing certain tasks, reaching certain levels, or paying real money. But with the Dynamons World APK hack, you don't have to do that. You can access all the content from the start and enjoy everything that the game has to offer. You can choose from dozens of unique Dynamons, use various items and boosters, explore different areas on the map, and play different modes.

-

Removed ads

-

The last feature of the Dynamons World APK hack is that it removes all the ads from the game. Ads are the advertisements that pop up every now and then while you are playing the game. They can be annoying and distracting, especially when they interrupt your battles or online matches. They can also consume your data and battery life. But with the Dynamons World APK hack, you don't have to deal with them. You can play the game without any ads and enjoy a smooth and uninterrupted gaming experience.

-

dynamons world mod apk unlimited money
-dynamons world hack apk latest version
-dynamons world apk free download full
-dynamons world mod apk android 1
-dynamons world hack apk no root
-dynamons world apk offline mode
-dynamons world mod apk revdl
-dynamons world hack apk unlimited gems
-dynamons world apk download for pc
-dynamons world mod apk rexdl
-dynamons world hack apk 2023
-dynamons world apk pure download
-dynamons world mod apk happymod
-dynamons world hack apk ios
-dynamons world apk obb download
-dynamons world mod apk an1
-dynamons world hack apk online
-dynamons world apk download uptodown
-dynamons world mod apk all unlocked
-dynamons world hack apk mediafıre
-dynamons world apk download apkpure
-dynamons world mod apk 1.8.11
-dynamons world hack apk 1.8.12
-dynamons world apk download for android
-dynamons world mod apk unlimited everything
-dynamons world hack tool apk
-dynamons world apk download latest version
-dynamons world mod menu apk
-dynamons world hack generator apk
-dynamons world apk download old version
-dynamons world mega mod apk
-dynamons world hack version download
-dynamons world premium apk download
-dynamons world god mode apk
-dynamons world hacked game download
-dynamons world pro apk download
-dynamons world cheat engine apk
-dynamons world hacked account download
-dynamons world plus apk download
-dynamons world unlimited coins and gems apk

-

Tips and tricks for playing Dynamons World

-

Choose your starter wisely

-

The first tip for playing Dynamons World is to choose your starter wisely. Your starter is the first Dynamon that you get in the game and it will accompany you throughout your adventure. There are three types of starters: fire, water, and plant. Each type has its own strengths and weaknesses against other types. Fire beats plant, plant beats water, and water beats fire. You should choose a starter that suits your playstyle and preference. For example, if you like aggressive and offensive battles, you might want to choose a fire starter. If you like defensive and balanced battles, you might want to choose a water starter. If you like strategic and versatile battles, you might want to choose a plant starter.
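The fire–plant–water triangle described above works like rock-paper-scissors. As a purely illustrative sketch, it could be modeled like this — note that the 2.0 and 0.5 multipliers are assumptions for the example, not values taken from the game:

```python
# Illustrative model of the type triangle: fire beats plant,
# plant beats water, water beats fire.
# The multiplier values are invented for this example.

BEATS = {"fire": "plant", "plant": "water", "water": "fire"}

def type_multiplier(attacker: str, defender: str) -> float:
    if BEATS[attacker] == defender:
        return 2.0   # super effective (e.g. fire attacking plant)
    if BEATS[defender] == attacker:
        return 0.5   # not very effective (attacking the type that beats you)
    return 1.0       # neutral or same-type matchup

print(type_multiplier("fire", "plant"))   # 2.0
print(type_multiplier("water", "plant"))  # 0.5
print(type_multiplier("fire", "fire"))    # 1.0
```

The same table drives the starter choice: whichever starter you pick, there is always one type it counters and one type that counters it.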

-

Upgrade your Dynamons regularly

-

The second tip for playing Dynamons World is to upgrade your Dynamons regularly. Upgrading your Dynamons means increasing their level, power, health, and skills. This will make them stronger and more effective in battles. You can upgrade your Dynamons by using items or by winning battles. You should upgrade your Dynamons as much as possible to keep up with the difficulty of the game and to defeat stronger opponents.

-

Use skill cards strategically

-

The third tip for playing Dynamons World is to use skill cards strategically. Skill cards are special cards that you can use in battles to perform powerful moves or effects. Each skill card has a different effect depending on its type and element. For example, some skill cards can deal damage, heal, stun, poison, or buff your Dynamons. You can get skill cards by buying them with money or by finding them on the map. You should use skill cards wisely and at the right time to turn the tide of battle in your favor.
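The card effects above (damage, heal, and so on) can be thought of as small data records applied to a Dynamon's hit points. This is a hypothetical sketch: the card names, numbers, and effect kinds are invented for illustration and are not the game's actual data:

```python
from dataclasses import dataclass

# Hypothetical skill-card model; names and numbers are illustrative only.

@dataclass
class SkillCard:
    name: str
    effect: str   # "damage" or "heal"
    amount: int

def play_card(card: SkillCard, hp: int, max_hp: int = 100) -> int:
    """Apply a card to a Dynamon's hit points and return the new value."""
    if card.effect == "damage":
        return max(0, hp - card.amount)   # HP never drops below zero
    if card.effect == "heal":
        return min(max_hp, hp + card.amount)  # HP capped at max
    return hp  # unknown effects leave HP unchanged

hp = play_card(SkillCard("Fireball", "damage", 30), hp=80)  # 50
hp = play_card(SkillCard("Soothe", "heal", 40), hp=hp)      # 90
print(hp)  # 90
```

The "use them at the right time" advice maps to choosing which record to apply given the current HP values on both sides.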

-

Explore the map and catch rare Dynamons

-

The fourth tip for playing Dynamons World is to explore the map and catch rare Dynamons. The map is the area where you can find and battle with wild Dynamons. There are different zones on the map, each with its own theme and Dynamons. You should explore the map as much as possible to discover new zones and Dynamons. You can also catch rare Dynamons by using special items or by being lucky. Rare Dynamons are Dynamons that have higher stats, unique skills, or special appearances. You should catch rare Dynamons to add them to your collection and to use them in battles.

-

Challenge other players online

-

The fifth tip for playing Dynamons World is to challenge other players online. Online matches are matches where you can battle with other players from around the world. You can access online matches by clicking on the online button on the main menu. You can choose from different modes, such as ranked, casual, or tournament. You can also chat with other players and make friends. Online matches are a great way to test your skills, learn from others, and have fun.

-

Conclusion

-

Dynamons World is a game that lets you catch, train, and battle with cute and powerful monsters. It is a fun and addictive game that you can play for free in a web browser or on Android and iOS. But if you want to get unlimited money, unlock all the content, and remove the ads, you can download the Dynamons World APK hack, a modified version of the game that gives you these features and more. In this article, we showed you how to download and install the Dynamons World APK hack, what features it offers, and some tips and tricks for playing the game. We hope you found this article helpful and informative. Now go ahead and enjoy your Dynamons World APK hack!

-

FAQs

-

Here are some frequently asked questions about Dynamons World APK hack:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/vtoonify.py b/spaces/232labs/VToonify/vtoonify/model/vtoonify.py deleted file mode 100644 index 6556a0a6c734be5f413f4683eb63c44f449c6af8..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/vtoonify.py +++ /dev/null @@ -1,286 +0,0 @@ -import torch -import numpy as np -import math -from torch import nn -from model.stylegan.model import ConvLayer, EqualLinear, Generator, ResBlock -from model.dualstylegan import AdaptiveInstanceNorm, AdaResBlock, DualStyleGAN -import torch.nn.functional as F - -# IC-GAN: stylegan discriminator -class ConditionalDiscriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], use_condition=False, style_num=None): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - self.use_condition = use_condition - - if self.use_condition: - self.condition_dim = 128 - # map style degree to 64-dimensional vector - self.label_mapper = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, self.condition_dim//2), - ) - # map style code index to 64-dimensional vector - self.style_mapper = nn.Embedding(style_num, self.condition_dim-self.condition_dim//2) - else: - self.condition_dim = 1 - - self.final_conv = ConvLayer(in_channel + 1, 
channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], self.condition_dim), - ) - - def forward(self, input, degree_label=None, style_ind=None): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - out = out.view(batch, -1) - - if self.use_condition: - h = self.final_linear(out) - condition = torch.cat((self.label_mapper(degree_label), self.style_mapper(style_ind)), dim=1) - out = (h * condition).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.condition_dim)) - else: - out = self.final_linear(out) - - return out - - -class VToonifyResBlock(nn.Module): - def __init__(self, fin): - super().__init__() - - self.conv = nn.Conv2d(fin, fin, 3, 1, 1) - self.conv2 = nn.Conv2d(fin, fin, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - out = self.lrelu(self.conv(x)) - out = self.lrelu(self.conv2(out)) - out = (out + x) / math.sqrt(2) - return out - -class Fusion(nn.Module): - def __init__(self, in_channels, skip_channels, out_channels): - super().__init__() - - # create conv layers - self.conv = nn.Conv2d(in_channels + skip_channels, out_channels, 3, 1, 1, bias=True) - self.norm = AdaptiveInstanceNorm(in_channels + skip_channels, 128) - self.conv2 = nn.Conv2d(in_channels + skip_channels, 1, 3, 1, 1, bias=True) - #''' - self.linear = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 128), - nn.LeakyReLU(negative_slope=0.2, inplace=True) - ) - - def forward(self, f_G, f_E, d_s=1): - # label of 
style degree - label = self.linear(torch.zeros(f_G.size(0),1).to(f_G.device) + d_s) - out = torch.cat([f_G, abs(f_G-f_E)], dim=1) - m_E = (F.relu(self.conv2(self.norm(out, label)))).tanh() - f_out = self.conv(torch.cat([f_G, f_E * m_E], dim=1)) - return f_out, m_E - -class VToonify(nn.Module): - def __init__(self, - in_size=256, - out_size=1024, - img_channels=3, - style_channels=512, - num_mlps=8, - channel_multiplier=2, - num_res_layers=6, - backbone = 'dualstylegan', - ): - - super().__init__() - - self.backbone = backbone - if self.backbone == 'dualstylegan': - # DualStyleGAN, with weights being fixed - self.generator = DualStyleGAN(out_size, style_channels, num_mlps, channel_multiplier) - else: - # StyleGANv2, with weights being fixed - self.generator = Generator(out_size, style_channels, num_mlps, channel_multiplier) - - self.in_size = in_size - self.style_channels = style_channels - channels = self.generator.channels - - # encoder - num_styles = int(np.log2(out_size)) * 2 - 2 - encoder_res = [2**i for i in range(int(np.log2(in_size)), 4, -1)] - self.encoder = nn.ModuleList() - self.encoder.append( - nn.Sequential( - nn.Conv2d(img_channels+19, 32, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(32, channels[in_size], 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True))) - - for res in encoder_res: - in_channels = channels[res] - if res > 32: - out_channels = channels[res // 2] - block = nn.Sequential( - nn.Conv2d(in_channels, out_channels, 3, 2, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True)) - self.encoder.append(block) - else: - layers = [] - for _ in range(num_res_layers): - layers.append(VToonifyResBlock(in_channels)) - self.encoder.append(nn.Sequential(*layers)) - block = nn.Conv2d(in_channels, img_channels, 1, 1, 0, bias=True) - self.encoder.append(block) - - # trainable 
fusion module - self.fusion_out = nn.ModuleList() - self.fusion_skip = nn.ModuleList() - for res in encoder_res[::-1]: - num_channels = channels[res] - if self.backbone == 'dualstylegan': - self.fusion_out.append( - Fusion(num_channels, num_channels, num_channels)) - else: - self.fusion_out.append( - nn.Conv2d(num_channels * 2, num_channels, 3, 1, 1, bias=True)) - - self.fusion_skip.append( - nn.Conv2d(num_channels + 3, 3, 3, 1, 1, bias=True)) - - # Modified ModRes blocks in DualStyleGAN, with weights being fixed - if self.backbone == 'dualstylegan': - self.res = nn.ModuleList() - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1, no use in this model - for i in range(3, 6): - out_channel = self.generator.channels[2 ** i] - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - - - def forward(self, x, style, d_s=None, return_mask=False, return_feat=False): - # map style to W+ space - if style is not None and style.ndim < 3: - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = style.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - elif style is not None: - nB, nL, nD = style.shape - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = style - if self.backbone == 'dualstylegan': - adastyles = adastyles.clone() - for i in range(7, self.generator.n_latent): - adastyles[:, i] = self.generator.res[i](adastyles[:, i]) - - # obtain multi-scale content features - feat = x - encoder_features = [] - # downsampling conv parts of E - for block in self.encoder[:-2]: - feat = block(feat) - encoder_features.append(feat) - encoder_features = encoder_features[::-1] - # Resblocks in E - for ii, block in enumerate(self.encoder[-2]): - feat = block(feat) - # adjust Resblocks with ModRes blocks - if self.backbone == 
'dualstylegan': - feat = self.res[ii+1](feat, resstyles[:, ii+1], d_s) - # the last-layer feature of E (inputs of backbone) - out = feat - skip = self.encoder[-1](feat) - if return_feat: - return out, skip - - # 32x32 ---> higher res - _index = 1 - m_Es = [] - for conv1, conv2, to_rgb in zip( - self.stylegan().convs[6::2], self.stylegan().convs[7::2], self.stylegan().to_rgbs[3:]): - - # pass the mid-layer features of E to the corresponding resolution layers of G - if 2 ** (5+((_index-1)//2)) <= self.in_size: - fusion_index = (_index - 1) // 2 - f_E = encoder_features[fusion_index] - - if self.backbone == 'dualstylegan': - out, m_E = self.fusion_out[fusion_index](out, f_E, d_s) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E*m_E], dim=1)) - m_Es += [m_E] - else: - out = self.fusion_out[fusion_index](torch.cat([out, f_E], dim=1)) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E], dim=1)) - - # remove the noise input - batch, _, height, width = out.shape - noise = x.new_empty(batch, 1, height * 2, width * 2).normal_().detach() * 0.0 - - out = conv1(out, adastyles[:, _index+6], noise=noise) - out = conv2(out, adastyles[:, _index+7], noise=noise) - skip = to_rgb(out, adastyles[:, _index+8], skip) - _index += 2 - - image = skip - if return_mask and self.backbone == 'dualstylegan': - return image, m_Es - return image - - def stylegan(self): - if self.backbone == 'dualstylegan': - return self.generator.generator - else: - return self.generator - - def zplus2wplus(self, zplus): - return self.stylegan().style(zplus.reshape(zplus.shape[0]*zplus.shape[1], zplus.shape[2])).reshape(zplus.shape) \ No newline at end of file diff --git "a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c/\344\270\232\345\212\241\346\250\241\345\236\213 
d31846027b4f40ca99f6e76f897663a4.md" "b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c/\344\270\232\345\212\241\346\250\241\345\236\213 d31846027b4f40ca99f6e76f897663a4.md" deleted file mode 100644 index 842e5595f2ba0de42cd2bd526785e412e1e0e35d..0000000000000000000000000000000000000000 --- "a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c/\344\270\232\345\212\241\346\250\241\345\236\213 d31846027b4f40ca99f6e76f897663a4.md" +++ /dev/null @@ -1,198 +0,0 @@ -# 业务模型 - -Last edited time: April 23, 2023 3:58 PM -Owner: Anonymous - -## 模型图 - -``` -@startuml -'https://plantuml.com/class-diagram - -left to right direction -package "Feature Flag" { - entity FeatureFlag #pink{ - id: FeatureFlagId - featureKey: String - description: FeatureFlagDescription - featureConfigs: FeatureConfigs - } - - entity FeatureConfig #pink{ - id: FeatureConfigId - featureKey: String - data: Object - trackData: Object - condition: FilterCondition - description: FeatureConfigDescription - status: FeatureConfigStatus - } - - interface FeatureFlags #Orange { - getFeatureFlag(featureKey: String): FeatureFlag - } - - interface FeatureConfigs #Orange { - getFeatureConfigs(featureKey: String): List - } - - interface CustomerFeatureConfigs #Orange { - getFeatureConfigs(featureKey: String, customer: Customer): List - } - - FeatureFlags "1" -- "0..N" FeatureFlag - FeatureFlag "1" -- "1" FeatureConfigs - FeatureConfigs "1" -- "0..N" FeatureConfig - CustomerFeatureConfigs "1" -- "0..N" FeatureConfig -} - -package "Experiment" as ExperimentPackage{ - entity ExperimentGroup #pink { - id: ExperimentGroupId - description: 
ExperimentGroupDescription - } - - entity Experiment #pink { - id: ExperimentId - groupId: ExperimentGroupId - description: ExperimentDescription - condition: FilterCondition - percentage: Percentage - } - - entity Bucket #pink { - key: String - config: Object - percentage: Percentage - } - - entity Assignment #pink { - id: AssignmentId - experimentId: ExperimentId - bucketKey: String - clientId: String - customerId: String - description: AssignmentDescription - } - - interface ExperimentGroups #Orange { - getExperimentGroup(groupId: ExperimentGroupId): ExperimentGroup - } - - interface ExperimentGroupExperiments #Orange { - getExperiments(groupId: ExperimentGroupId): List - } - - interface Experiments #Orange { - getExperiment(experimentId: ExperimentId): Experiment - } - - interface CustomerAssignments #Orange { - getAssignments(experimentId: ExperimentId, customer: Customer): Assignment - } - - interface ExperimentAssignments #Orange { - getAssignments(experimentId: ExperimentId): List - getAssignments(experimentId: ExperimentId, bucketKey: String): List - } - - ExperimentGroups "1" -- "0..N" ExperimentGroup - ExperimentGroup "1" -- "1" ExperimentGroupExperiments - ExperimentGroupExperiments "1" -- "0..N" Experiment - Experiments "1" -- "0..N" Experiment - Experiment "1" -- "1..N" Bucket - ExperimentAssignments "1" -- "1" Experiment - ExperimentAssignments "1" -- "0..N" Assignment - CustomerAssignments "1" -- "0..N" Assignment -} - -package "Tracking" { - entity TrackingEvent #pink { - id: TrackingEventId - clientId: String - experimentId: ExperimentId - bucketKey: String - name: TrackingEventName - description: TrackingEventDescription - } - - interface TrackingEvents #Orange { - getTrackingEvents(experimentId: ExperimentId, bucketKey: String): List - } - - TrackingEvents "1" -- "0..N" TrackingEvent - TrackingEvent "1..N" .. "1" Experiment - TrackingEvent "1..N" .. 
"1" Bucket -} - -package "metrics" { - entity MetricMeta #pink { - id: MetricMetaId - name: MetricMeta - description: MetricDescription - } - - entity Metric #pink { - id: MetricId - name: MetricName - description: MetricDescription - } - - interface Metrics #Orange { - getMetrics(metricMetaId: MetricMetaId): List - } - - interface MetricMetas #Orange { - getMetricMetas(): List - } - - interface MetricMetaMetrics #Orange { - getMetrics(metricMetaId: MetricMetaId): List - } - MetricMetas "1" -- "0..N" MetricMeta - MetricMeta "1" -- "1" MetricMetaMetrics - MetricMetaMetrics "1" -- "0..N" Metric - Metrics "1" -- "0..N" Metric - -} - -package MemberCriteria { - entity Segment #pink { - id: SegmentId - name: String - description: SegmentDescription - } - - interface Segments #Orange { - getSegments(): List - } - - interface CustomerSegments #Orange { - getSegments(customer: Customer): List - } - - Segments "1" -- "0..N" Segment - CustomerSegments "1" -- "0..N" Segment -} - -entity Customer #Green { - id: CustomerId - clientId: String - description: CustomerDescription -} - -Customer -- CustomerFeatureConfigs -Customer -- CustomerAssignments -CustomerAssignments -- CustomerSegments -CustomerFeatureConfigs -- CustomerSegments - -Experiment "1" -- "0..N" MetricMeta -Experiment "1" -- "0..N" Metric -CustomerFeatureConfigs "1" -- "1" CustomerAssignments -Experiment "1" -- "0..N" Segment -FeatureConfig "1" -- "0..N" Segment -"metrics" .. 
"Tracking" -@enduml -``` - -![Untitled](%E4%B8%9A%E5%8A%A1%E6%A8%A1%E5%9E%8B%20d31846027b4f40ca99f6e76f897663a4/Untitled.png) \ No newline at end of file diff --git a/spaces/AI-Zero-to-Hero/10-GR-AI-Wikipedia-Search/app.py b/spaces/AI-Zero-to-Hero/10-GR-AI-Wikipedia-Search/app.py deleted file mode 100644 index 07b8ca8268140f4792f678888659b12e2515aa89..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/10-GR-AI-Wikipedia-Search/app.py +++ /dev/null @@ -1,58 +0,0 @@ -from transformers import pipeline -import wikipedia -import random -import gradio as gr -model_name = "deepset/electra-base-squad2" -nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) - -def get_wiki_article(topic): - topic=topic - try: - search = wikipedia.search(topic, results = 1)[0] - except wikipedia.DisambiguationError as e: - choices = [x for x in e.options if ('disambiguation' not in x) and ('All pages' not in x) and (x!=topic)] - search = random.choice(choices) - try: - p = wikipedia.page(search) - except wikipedia.exceptions.DisambiguationError as e: - choices = [x for x in e.options if ('disambiguation' not in x) and ('All pages' not in x) and (x!=topic)] - s = random.choice(choices) - p = wikipedia.page(s) - return p.content, p.url - -def get_answer(topic, question): - w_art, w_url=get_wiki_article(topic) - qa = {'question': question, 'context': w_art} - res = nlp(qa) - return res['answer'], w_url, {'confidence':res['score']} - - -inputs = [ - gr.inputs.Textbox(lines=2, label="Topic"), - gr.inputs.Textbox(lines=2, label="Question") -] -outputs = [ - gr.outputs.Textbox(type='str',label="Answer"), - gr.outputs.Textbox(type='str',label="Wikipedia Reference Article"), - gr.outputs.Label(type="confidences",label="Confidence in answer (assuming the correct wikipedia article)"), -] - -title = "AI Wikipedia Search" -description = 'Contextual Question and Answer' -article = '' -examples = [ - ['Quantum', 'What is quanta in physics?'], - ['Cicero', 'What 
quotes did Marcus Tullius Cicero make?'], - ['Alzheimers', 'What causes alzheimers?'], - ['Neuropathy', 'With neuropathy and neuro-muskoskeletal issues, and what are the treatments available?'], - ['Chemotherapy', 'What are possible care options for patients in chemotherapy?'], - ['Health', 'What is mindfulness and how does it affect health?'], - ['Medicine', 'In medicine what is the Hippocratic Oath?'], - ['Insurance', 'What is Medicare?'], - ['Financial Services', 'Does Medicaid offer financial assistance?'], - ['Ontology', 'Why is an anthology different than ontology?'], - ['Taxonomy', 'What is a biology taxonomy?'], - ['Pharmacy', 'What does a pharmacist do?'] -] - -gr.Interface(get_answer, inputs, outputs, title=title, description=description, article=article, examples=examples, flagging_options=["strongly related","related", "neutral", "unrelated", "strongly unrelated"]).launch(share=False,enable_queue=False) \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/__init__.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/__init__.py deleted file mode 100644 index 61418616ef18f0ecca56a007c43af4a731d98b9b..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/modules/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-"""Modules used for building the models.""" - -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder -from .transformer import StreamingTransformer \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/tests/quantization/test_vq.py b/spaces/AIConsultant/MusicGen/tests/quantization/test_vq.py deleted file mode 100644 index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/quantization/test_vq.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.quantization.vq import ResidualVectorQuantizer - - -class TestResidualVectorQuantizer: - - def test_rvq(self): - x = torch.randn(1, 16, 2048) - vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8) - res = vq(x, 1.) 
- assert res.x.shape == torch.Size([1, 16, 2048]) diff --git a/spaces/AIFILMS/StyleGANEX/configs/data_configs.py b/spaces/AIFILMS/StyleGANEX/configs/data_configs.py deleted file mode 100644 index 7624ed6ccb0054030afafe0cf049cf210129b812..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/configs/data_configs.py +++ /dev/null @@ -1,48 +0,0 @@ -from configs import transforms_config -from configs.paths_config import dataset_paths - - -DATASETS = { - 'ffhq_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': dataset_paths['ffhq'], - 'train_target_root': dataset_paths['ffhq'], - 'test_source_root': dataset_paths['ffhq_test'], - 'test_target_root': dataset_paths['ffhq_test'], - }, - 'ffhq_sketch_to_face': { - 'transforms': transforms_config.SketchToImageTransforms, - 'train_source_root': dataset_paths['ffhq_train_sketch'], - 'train_target_root': dataset_paths['ffhq'], - 'test_source_root': dataset_paths['ffhq_test_sketch'], - 'test_target_root': dataset_paths['ffhq_test'], - }, - 'ffhq_seg_to_face': { - 'transforms': transforms_config.SegToImageTransforms, - 'train_source_root': dataset_paths['ffhq_train_segmentation'], - 'train_target_root': dataset_paths['ffhq'], - 'test_source_root': dataset_paths['ffhq_test_segmentation'], - 'test_target_root': dataset_paths['ffhq_test'], - }, - 'ffhq_super_resolution': { - 'transforms': transforms_config.SuperResTransforms, - 'train_source_root': dataset_paths['ffhq'], - 'train_target_root': dataset_paths['ffhq1280'], - 'test_source_root': dataset_paths['ffhq_test'], - 'test_target_root': dataset_paths['ffhq1280_test'], - }, - 'toonify': { - 'transforms': transforms_config.ToonifyTransforms, - 'train_source_root': dataset_paths['toonify_in'], - 'train_target_root': dataset_paths['toonify_out'], - 'test_source_root': dataset_paths['toonify_test_in'], - 'test_target_root': dataset_paths['toonify_test_out'], - }, - 'ffhq_edit': { - 'transforms': transforms_config.EditingTransforms, 
- 'train_source_root': dataset_paths['ffhq'], - 'train_target_root': dataset_paths['ffhq'], - 'test_source_root': dataset_paths['ffhq_test'], - 'test_target_root': dataset_paths['ffhq_test'], - }, -} diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/__init__.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AIGText/GlyphControl/ldm/modules/diffusionmodules/upscaling.py b/spaces/AIGText/GlyphControl/ldm/modules/diffusionmodules/upscaling.py deleted file mode 100644 index 03816662098ce1ffac79bd939b892e867ab91988..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/modules/diffusionmodules/upscaling.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -from functools import partial - -from ldm.modules.diffusionmodules.util import extract_into_tensor, make_beta_schedule -from ldm.util import default - - -class AbstractLowScaleModel(nn.Module): - # for concatenating a downsampled image to the latent representation - def __init__(self, noise_schedule_config=None): - super(AbstractLowScaleModel, self).__init__() - if noise_schedule_config is not None: - self.register_schedule(**noise_schedule_config) - - def register_schedule(self, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. 
- betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. 
/ alphas_cumprod - 1))) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def forward(self, x): - return x, None - - def decode(self, x): - return x - - -class SimpleImageConcat(AbstractLowScaleModel): - # no noise level conditioning - def __init__(self): - super(SimpleImageConcat, self).__init__(noise_schedule_config=None) - self.max_noise_level = 0 - - def forward(self, x): - # fix to constant noise level - return x, torch.zeros(x.shape[0], device=x.device).long() - - -class ImageConcatWithNoiseAugmentation(AbstractLowScaleModel): - def __init__(self, noise_schedule_config, max_noise_level=1000, to_cuda=False): - super().__init__(noise_schedule_config=noise_schedule_config) - self.max_noise_level = max_noise_level - - def forward(self, x, noise_level=None): - if noise_level is None: - noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long() - else: - assert isinstance(noise_level, torch.Tensor) - z = self.q_sample(x, noise_level) - return z, noise_level - - - diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/api.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/api.py deleted file mode 100644 index dbe7a8c1c0f9c035cdff8660d33348c58a0579c5..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/api.py +++ /dev/null @@ -1,35 +0,0 @@ -import numpy as np -import os -import torch.nn as nn - -os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" - -import cv2 -import torch - -from . 
import util -from .body import Body - -remote_model_path = "https://huggingface.co/TencentARC/T2I-Adapter/blob/main/third-party-models/body_pose_model.pth" - - -class OpenposeInference(nn.Module): - - def __init__(self): - super().__init__() - body_modelpath = os.path.join('models', "body_pose_model.pth") - - if not os.path.exists(body_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir='models') - - self.body_estimation = Body(body_modelpath) - - def forward(self, x): - x = x[:, :, ::-1].copy() - with torch.no_grad(): - candidate, subset = self.body_estimation(x) - canvas = np.zeros_like(x) - canvas = util.draw_bodypose(canvas, candidate, subset) - canvas = cv2.cvtColor(canvas, cv2.COLOR_RGB2BGR) - return canvas diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/buffdata-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/buffdata-plugin.js deleted file mode 100644 index 549667dec87822f4452e7a6e2f95bd69408bb023..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/buffdata-plugin.js +++ /dev/null @@ -1,24 +0,0 @@ -import DataManager from './data/buff/DataManager.js'; -import Extend from './data/buff/Extend.js'; - -class DataManagerPlugin extends Phaser.Plugins.BasePlugin { - - constructor(pluginManager) { - super(pluginManager); - } - - start() { - var eventEmitter = this.game.events; - eventEmitter.on('destroy', this.destroy, this); - } - - add(parent, eventEmitter) { - return new DataManager(parent, eventEmitter); - } - - extend(dataManager) { - return Extend(dataManager); - } -} - -export default DataManagerPlugin; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/grid/Grid.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/grid/Grid.d.ts deleted file mode 100644 index 
e3ed12a39479f273c1f9dc6a86d35396cb55ba5c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/grid/Grid.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import Base from '../base/Base'; -export default class Grid extends Base { } \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/Factory.d.ts deleted file mode 100644 index ca19d85c36850e42174ab3c35244f8879cb2621c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import StatesRoundRectangle from './StatesRoundRectangle'; - -export default function ( - config?: StatesRoundRectangle.IConfig - -): StatesRoundRectangle; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/uroman.pl b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/uroman.pl deleted file mode 100644 index f1182aee6e5c3422882150b5babeec664b689401..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/uroman.pl +++ /dev/null @@ -1,138 +0,0 @@ -#!/usr/bin/perl -w - -# uroman Nov. 12, 2015 - Apr. 
23, 2021 -$version = "v1.2.8"; -# Author: Ulf Hermjakob - -# Usage: uroman.pl {-l [ara|bel|bul|deu|ell|eng|fas|grc|heb|kaz|kir|lav|lit|mkd|mkd2|oss|pnt|rus|srp|srp2|tur|uig|ukr|yid]} {--chart|--offset-mapping} {--no-cache} {--workset} < STDIN -# Example: cat workset.txt | uroman.pl --offset-mapping --workset - -$|=1; - -use FindBin; -use Cwd "abs_path"; -use File::Basename qw(dirname); -use File::Spec; - -my $bin_dir = abs_path(dirname($0)); -my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir()); -my $data_dir = File::Spec->catfile($root_dir, "data"); -my $lib_dir = File::Spec->catfile($root_dir, "lib"); - -use lib "$FindBin::Bin/../lib"; -use NLP::Chinese; -use NLP::Romanizer; -use NLP::UTF8; -use NLP::utilities; -use JSON; -$chinesePM = NLP::Chinese; -$romanizer = NLP::Romanizer; -$util = NLP::utilities; -%ht = (); -%pinyin_ht = (); -$lang_code = ""; -$return_chart_p = 0; -$return_offset_mappings_p = 0; -$workset_p = 0; -$cache_rom_tokens_p = 1; - -$script_data_filename = File::Spec->catfile($data_dir, "Scripts.txt"); -$unicode_data_overwrite_filename = File::Spec->catfile($data_dir, "UnicodeDataOverwrite.txt"); -$unicode_data_filename = File::Spec->catfile($data_dir, "UnicodeData.txt"); -$romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt"); -$chinese_tonal_pinyin_filename = File::Spec->catfile($data_dir, "Chinese_to_Pinyin.txt"); - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-+(l|lc|lang-code)$/) { - $lang_code = lc (shift @ARGV || "") - } elsif ($arg =~ /^-+chart$/i) { - $return_chart_p = 1; - } elsif ($arg =~ /^-+workset$/i) { - $workset_p = 1; - } elsif ($arg =~ /^-+offset[-_]*map/i) { - $return_offset_mappings_p = 1; - } elsif ($arg =~ /^-+unicode[-_]?data/i) { - $filename = shift @ARGV; - if (-r $filename) { - $unicode_data_filename = $filename; - } else { - print STDERR "Ignoring invalid UnicodeData filename $filename\n"; - } - } elsif ($arg =~ /^-+(no-tok-cach|no-cach)/i) { - $cache_rom_tokens_p = 
0; - } else { - print STDERR "Ignoring unrecognized arg $arg\n"; - } -} - -$romanizer->load_script_data(*ht, $script_data_filename); -$romanizer->load_unicode_data(*ht, $unicode_data_filename); -$romanizer->load_unicode_overwrite_romanization(*ht, $unicode_data_overwrite_filename); -$romanizer->load_romanization_table(*ht, $romanization_table_filename); -$chinese_to_pinyin_not_yet_loaded_p = 1; -$current_date = $util->datetime("dateTtime"); -$lang_code_clause = ($lang_code) ? " \"lang-code\":\"$lang_code\",\n" : ""; - -print "{\n \"romanizer\":\"uroman $version (Ulf Hermjakob, USC/ISI)\",\n \"date\":\"$current_date\",\n$lang_code_clause \"romanization\": [\n" if $return_chart_p; -my $line_number = 0; -my $chart_result = ""; -while (<>) { - $line_number++; - my $line = $_; - my $snt_id = ""; - if ($workset_p) { - next if $line =~ /^#/; - if (($i_value, $s_value) = ($line =~ /^(\S+\.\d+)\s(.*)$/)) { - $snt_id = $i_value; - $line = "$s_value\n"; - } else { - next; - } - } - if ($chinese_to_pinyin_not_yet_loaded_p && $chinesePM->string_contains_utf8_cjk_unified_ideograph_p($line)) { - $chinesePM->read_chinese_tonal_pinyin_files(*pinyin_ht, $chinese_tonal_pinyin_filename); - $chinese_to_pinyin_not_yet_loaded_p = 0; - } - if ($return_chart_p) { - print $chart_result; - *chart_ht = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return chart", $line_number); - $chart_result = $romanizer->chart_to_json_romanization_elements(0, $chart_ht{N_CHARS}, *chart_ht, $line_number); - } elsif ($return_offset_mappings_p) { - ($best_romanization, $offset_mappings) = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return offset mappings", $line_number, 0); - print "::snt-id $snt_id\n" if $workset_p; - print "::orig $line"; - print "::rom $best_romanization\n"; - print "::align $offset_mappings\n\n"; - } elsif ($cache_rom_tokens_p) { - print $romanizer->romanize_by_token_with_caching($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . 
"\n"; - } else { - print $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n"; - } -} -$chart_result =~ s/,(\s*)$/$1/; -print $chart_result; -print " ]\n}\n" if $return_chart_p; - -$dev_test_p = 0; -if ($dev_test_p) { - $n_suspicious_code_points = 0; - $n_instances = 0; - foreach $char_name (sort { hex($ht{UTF_NAME_TO_UNICODE}->{$a}) <=> hex($ht{UTF_NAME_TO_UNICODE}->{$b}) } - keys %{$ht{SUSPICIOUS_ROMANIZATION}}) { - $unicode_value = $ht{UTF_NAME_TO_UNICODE}->{$char_name}; - $utf8_string = $ht{UTF_NAME_TO_CODE}->{$char_name}; - foreach $romanization (sort keys %{$ht{SUSPICIOUS_ROMANIZATION}->{$char_name}}) { - $count = $ht{SUSPICIOUS_ROMANIZATION}->{$char_name}->{$romanization}; - $s = ($count == 1) ? "" : "s"; - print STDERR "*** Suspiciously lengthy romanization:\n" unless $n_suspicious_code_points; - print STDERR "::s $utf8_string ::t $romanization ::comment $char_name (U+$unicode_value)\n"; - $n_suspicious_code_points++; - $n_instances += $count; - } - } - print STDERR " *** Total of $n_suspicious_code_points suspicious code points ($n_instances instance$s)\n" if $n_suspicious_code_points; -} - -exit 0; - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/panorama.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/panorama.md deleted file mode 100644 index a0ad0d326188c79c8e88ae2869a52e9b73809b68..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/panorama.md +++ /dev/null @@ -1,57 +0,0 @@ - - -# MultiDiffusion - -[MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation](https://huggingface.co/papers/2302.08113) is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. - -The abstract from the paper is: - -*Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. 
However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.* - -You can find additional information about MultiDiffusion on the [project page](https://multidiffusion.github.io/), [original codebase](https://github.com/omerbt/MultiDiffusion), and try it out in a [demo](https://huggingface.co/spaces/weizmannscience/MultiDiffusion). - -## Tips - -While calling [`StableDiffusionPanoramaPipeline`], it's possible to specify the `view_batch_size` parameter to be > 1. -For some GPUs with high performance, this can speed up the generation process and increase VRAM usage. - -To generate panorama-like images, make sure you pass the width parameter accordingly. We recommend a width value of 2048, which is the default. - -Circular padding is applied to avoid stitching artifacts when working with -panoramas, ensuring a seamless transition from the rightmost part to the leftmost part. -By enabling circular padding (set `circular_padding=True`), the operation applies additional -crops after the rightmost point of the image, allowing the model to "see" the transition -from the rightmost part to the leftmost part.
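The wrap-around idea behind circular padding can be sketched in isolation (this is an illustration of the concept, not the pipeline's actual implementation; the array and pad width are made up):

```python
import numpy as np

def circular_pad_width(img: np.ndarray, pad: int) -> np.ndarray:
    """Append the leftmost `pad` columns after the right edge, and the
    rightmost `pad` columns before the left edge, so any crop sliding
    past either border sees the content it wraps around to."""
    left = img[:, :pad]    # columns that should reappear after the right edge
    right = img[:, -pad:]  # columns that should precede the left edge
    return np.concatenate([right, img, left], axis=1)

# A 1-row "panorama" of 8 columns, padded circularly by 2 on each side:
pano = np.arange(8).reshape(1, 8)
padded = circular_pad_width(pano, 2)
# padded row: [6 7 0 1 2 3 4 5 6 7 0 1]
```

This is equivalent to `np.pad(pano, ((0, 0), (2, 2)), mode="wrap")`; a crop windowing over columns 8–11 now sees the leftmost content, which is what lets the model produce a seam-free 360-degree image.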
This helps maintain visual consistency in -a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree -panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied -to ensure that the decoded latents match in the RGB space. - -For example, without circular padding, there is a stitching artifact (default): -![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/indoor_%20no_circular_padding.png) - -But with circular padding, the right and the left parts are matching (`circular_padding=True`): -![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/indoor_%20circular_padding.png) - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## StableDiffusionPanoramaPipeline -[[autodoc]] StableDiffusionPanoramaPipeline - - __call__ - - all - -## StableDiffusionPipelineOutput -[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_pt_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_pt_objects.py deleted file mode 100644 index 6c2bb9eed5493fbd56954a60bbd1b56b7b28c296..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_pt_objects.py +++ /dev/null @@ -1,870 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. 
-from ..utils import DummyObject, requires_backends - - -class AsymmetricAutoencoderKL(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class AutoencoderKL(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ControlNetModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ModelMixin(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class MultiAdapter(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class PriorTransformer(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - 
requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class T2IAdapter(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class T5FilmDecoder(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class Transformer2DModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet1DModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet2DConditionModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet2DModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, 
["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet3DConditionModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class VQModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -def get_constant_schedule(*args, **kwargs): - requires_backends(get_constant_schedule, ["torch"]) - - -def get_constant_schedule_with_warmup(*args, **kwargs): - requires_backends(get_constant_schedule_with_warmup, ["torch"]) - - -def get_cosine_schedule_with_warmup(*args, **kwargs): - requires_backends(get_cosine_schedule_with_warmup, ["torch"]) - - -def get_cosine_with_hard_restarts_schedule_with_warmup(*args, **kwargs): - requires_backends(get_cosine_with_hard_restarts_schedule_with_warmup, ["torch"]) - - -def get_linear_schedule_with_warmup(*args, **kwargs): - requires_backends(get_linear_schedule_with_warmup, ["torch"]) - - -def get_polynomial_decay_schedule_with_warmup(*args, **kwargs): - requires_backends(get_polynomial_decay_schedule_with_warmup, ["torch"]) - - -def get_scheduler(*args, **kwargs): - requires_backends(get_scheduler, ["torch"]) - - -class AudioPipelineOutput(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, 
["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class AutoPipelineForImage2Image(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class AutoPipelineForInpainting(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class AutoPipelineForText2Image(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ConsistencyModelPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DanceDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDIMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - 
requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDPMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DiffusionPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DiTPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ImagePipelineOutput(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KarrasVePipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class LDMPipeline(metaclass=DummyObject): 
- _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class LDMSuperResolutionPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class PNDMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class RePaintPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ScoreSdeVePipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class CMStochasticIterativeScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, 
**kwargs): - requires_backends(cls, ["torch"]) - - -class DDIMInverseScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDIMParallelScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDIMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDPMParallelScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDPMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DEISMultistepScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, 
**kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DPMSolverMultistepInverseScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DPMSolverMultistepScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DPMSolverSinglestepScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class EulerAncestralDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class EulerDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class HeunDiscreteScheduler(metaclass=DummyObject): - 
_backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class IPNDMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KarrasVeScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KDPM2AncestralDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KDPM2DiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class PNDMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, 
**kwargs): - requires_backends(cls, ["torch"]) - - -class RePaintScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class SchedulerMixin(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ScoreSdeVeScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UnCLIPScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UniPCMultistepScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class VQDiffusionScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): 
- requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class EMAModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/ssd_vgg.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/ssd_vgg.py deleted file mode 100644 index cbc4fbb2301afc002f47abb9ed133a500d6cf23f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/ssd_vgg.py +++ /dev/null @@ -1,169 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import VGG, constant_init, kaiming_init, normal_init, xavier_init -from mmcv.runner import load_checkpoint - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class SSDVGG(VGG): - """VGG Backbone network for single-shot-detection. - - Args: - input_size (int): width and height of input, from {300, 512}. - depth (int): Depth of vgg, from {11, 13, 16, 19}. - out_indices (Sequence[int]): Output from which stages. - - Example: - >>> self = SSDVGG(input_size=300, depth=11) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 300, 300) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 1024, 19, 19) - (1, 512, 10, 10) - (1, 256, 5, 5) - (1, 256, 3, 3) - (1, 256, 1, 1) - """ - extra_setting = { - 300: (256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256), - 512: (256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256, 128), - } - - def __init__(self, - input_size, - depth, - with_last_pool=False, - ceil_mode=True, - out_indices=(3, 4), - out_feature_indices=(22, 34), - l2_norm_scale=20.): - # TODO: in_channels for mmcv.VGG - super(SSDVGG, self).__init__( - depth, - with_last_pool=with_last_pool, - ceil_mode=ceil_mode, - out_indices=out_indices) - assert input_size in (300, 512) - self.input_size = input_size - - self.features.add_module( - str(len(self.features)), - nn.MaxPool2d(kernel_size=3, stride=1, padding=1)) - self.features.add_module( - str(len(self.features)), - nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.features.add_module( - str(len(self.features)), nn.Conv2d(1024, 1024, kernel_size=1)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.out_feature_indices = out_feature_indices - - self.inplanes = 1024 - self.extra = self._make_extra_layers(self.extra_setting[input_size]) - self.l2_norm = L2Norm( - self.features[out_feature_indices[0] - 1].out_channels, - l2_norm_scale) - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.features.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - for m in self.extra.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - constant_init(self.l2_norm, self.l2_norm.scale) - - def forward(self, x): - """Forward function.""" - outs = [] - for i, layer in enumerate(self.features): - x = layer(x) - if i in self.out_feature_indices: - outs.append(x) - for i, layer in enumerate(self.extra): - x = F.relu(layer(x), inplace=True) - if i % 2 == 1: - outs.append(x) - outs[0] = self.l2_norm(outs[0]) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def _make_extra_layers(self, outplanes): - layers = [] - kernel_sizes = (1, 3) - num_layers = 0 - outplane = None - for i in range(len(outplanes)): - if self.inplanes == 'S': - self.inplanes = outplane - continue - k = kernel_sizes[num_layers % 2] - if outplanes[i] == 'S': - outplane = outplanes[i + 1] - conv = nn.Conv2d( - self.inplanes, outplane, k, stride=2, padding=1) - else: - outplane = outplanes[i] - conv = nn.Conv2d( - self.inplanes, outplane, k, stride=1, padding=0) - layers.append(conv) - self.inplanes = outplanes[i] - num_layers += 1 - if self.input_size == 512: - layers.append(nn.Conv2d(self.inplanes, 256, 4, padding=1)) - - return nn.Sequential(*layers) - - -class L2Norm(nn.Module): - - def __init__(self, n_dims, scale=20., eps=1e-10): - """L2 normalization layer. - - Args: - n_dims (int): Number of dimensions to be normalized - scale (float, optional): Defaults to 20.. - eps (float, optional): Used to avoid division by zero. - Defaults to 1e-10. 
- """ - super(L2Norm, self).__init__() - self.n_dims = n_dims - self.weight = nn.Parameter(torch.Tensor(self.n_dims)) - self.eps = eps - self.scale = scale - - def forward(self, x): - """Forward function.""" - # normalization layer convert to FP32 in FP16 training - x_float = x.float() - norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps - return (self.weight[None, :, None, None].float().expand_as(x_float) * - x_float / norm).type_as(x) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py deleted file mode 100644 index 35758f4f4e3b2bddd460edb8a7f482b3a9da2919..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py +++ /dev/null @@ -1,76 +0,0 @@ -from mmdet.models.builder import HEADS -from .convfc_bbox_head import ConvFCBBoxHead - - -@HEADS.register_module() -class SCNetBBoxHead(ConvFCBBoxHead): - """BBox head for `SCNet `_. - - This inherits ``ConvFCBBoxHead`` with modified forward() function, allow us - to get intermediate shared feature. 
- """ - - def _forward_shared(self, x): - """Forward function for shared part.""" - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - - return x - - def _forward_cls_reg(self, x): - """Forward function for classification and regression parts.""" - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - - return cls_score, bbox_pred - - def forward(self, x, return_shared_feat=False): - """Forward function. - - Args: - x (Tensor): input features - return_shared_feat (bool): If True, return cls-reg-shared feature. - - Return: - out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``, - if ``return_shared_feat`` is True, append ``x_shared`` to the - returned tuple. 
- """ - x_shared = self._forward_shared(x) - out = self._forward_cls_reg(x_shared) - - if return_shared_feat: - out += (x_shared, ) - - return out diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import annotator.uniformer.mmcv as mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/test_time_aug.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/test_time_aug.py deleted file mode 100644 index 6a1611a04d9d927223c9afbe5bf68af04d62937a..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,133 +0,0 @@ -import warnings - -import annotator.uniformer.mmcv as mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. 
code-block:: - - img_scale=(2048, 1024), - img_ratios=[0.5, 1.0], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFlipAug with the above configuration, the results are wrapped - into lists of the same length as follows: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (None | tuple | list[tuple]): Image scales for resizing. - img_ratios (float | list[float]): Image ratios for resizing. - flip (bool): Whether to apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is a list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". 
- """ - - def __init__(self, - transforms, - img_scale, - img_ratios=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - if img_ratios is not None: - img_ratios = img_ratios if isinstance(img_ratios, - list) else [img_ratios] - assert mmcv.is_list_of(img_ratios, float) - if img_scale is None: - # mode 1: given img_scale=None and a range of image ratio - self.img_scale = None - assert mmcv.is_list_of(img_ratios, float) - elif isinstance(img_scale, tuple) and mmcv.is_list_of( - img_ratios, float): - assert len(img_scale) == 2 - # mode 2: given a scale and a range of image ratio - self.img_scale = [(int(img_scale[0] * ratio), - int(img_scale[1] * ratio)) - for ratio in img_ratios] - else: - # mode 3: given multiple scales - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None - self.flip = flip - self.img_ratios = img_ratios - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. 
- """ - - aug_data = [] - if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float): - h, w = results['img'].shape[:2] - img_scale = [(int(w * ratio), int(h * ratio)) - for ratio in self.img_ratios] - else: - img_scale = self.img_scale - flip_aug = [False, True] if self.flip else [False] - for scale in img_scale: - for flip in flip_aug: - for direction in self.flip_direction: - _results = results.copy() - _results['scale'] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip})' - repr_str += f'flip_direction={self.flip_direction}' - return repr_str diff --git a/spaces/ArkanDash/rvc-models/config.py b/spaces/ArkanDash/rvc-models/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/ArkanDash/rvc-models/config.py +++ /dev/null @@ -1,88 +0,0 @@ -######################## Hardware parameters ######################## - -# Set to cuda:x, cpu, or mps; x is the GPU index. Only NVIDIA GPUs / Apple Silicon acceleration are supported -device = "cuda:0" - -# Safe to set True on 9/10/20/30/40-series GPUs; it does not affect quality, and 20-series or newer get a speedup -is_half = True - -# 0 (default) uses all threads; set a number to limit CPU usage -n_cpu = 0 - -######################## Hardware parameters ######################## - - -################## Parameter-handling logic below, do not modify ################## - -######################## Command-line arguments ######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -######################## Command-line arguments ######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and macOS 12.3+. -# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("No supported NVIDIA GPU found, using MPS for inference") - device = "mps" - else: - print("No supported NVIDIA GPU found, using CPU for inference") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16-series / MX-series GPUs are forced to single precision") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # config for 6 GB VRAM - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # config for 5 GB VRAM - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/Artrajz/vits-simple-api/static/js/jquery.slim.min.js b/spaces/Artrajz/vits-simple-api/static/js/jquery.slim.min.js deleted file mode 100644 index 36b4e1a137828dc488ed9a2e704b74cb35815759..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/static/js/jquery.slim.min.js +++ /dev/null @@ -1,2 +0,0 @@ -/*! 
jQuery v3.5.1 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector | (c) JS Foundation and other contributors | jquery.org/license */ -!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(g,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,v=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,y=n.hasOwnProperty,a=y.toString,l=a.call(Object),m={},b=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},w=g.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function C(e,t,n){var r,i,o=(n=n||w).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function T(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.5.1 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector",E=function(e,t){return new E.fn.init(e,t)};function d(e){var t=!!e&&"length"in e&&e.length,n=T(e);return!b(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+R+")"+R+"*"),U=new RegExp(R+"|>"),V=new RegExp(W),X=new RegExp("^"+B+"$"),Q={ID:new RegExp("^#("+B+")"),CLASS:new RegExp("^\\.("+B+")"),TAG:new RegExp("^("+B+"|[*])"),ATTR:new RegExp("^"+M),PSEUDO:new RegExp("^"+W),CHILD:new 
RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+R+"*(even|odd|(([+-]|)(\\d*)n|)"+R+"*(?:([+-]|)"+R+"*(\\d+)|))"+R+"*\\)|)","i"),bool:new RegExp("^(?:"+I+")$","i"),needsContext:new RegExp("^"+R+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+R+"*((?:-\\d)?\\d*)"+R+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,G=/^(?:input|select|textarea|button)$/i,K=/^h\d$/i,J=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+R+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){C()},ae=xe(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{O.apply(t=P.call(d.childNodes),d.childNodes),t[d.childNodes.length].nodeType}catch(e){O={apply:t.length?function(e,t){q.apply(e,P.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,d=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==d&&9!==d&&11!==d)return n;if(!r&&(C(e),e=e||T,E)){if(11!==d&&(u=Z.exec(t)))if(i=u[1]){if(9===d){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return O.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&p.getElementsByClassName&&e.getElementsByClassName)return O.apply(n,e.getElementsByClassName(i)),n}if(p.qsa&&!k[t+" "]&&(!v||!v.test(t))&&(1!==d||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===d&&(U.test(t)||_.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&p.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=A)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" 
"+be(l[o]);c=l.join(",")}try{return O.apply(n,f.querySelectorAll(c)),n}catch(e){k(t,!0)}finally{s===A&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>x.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[A]=!0,e}function ce(e){var t=T.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)x.attrHandle[n[r]]=t}function de(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function pe(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in p=se.support={},i=se.isXML=function(e){var t=e.namespaceURI,n=(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},C=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:d;return r!=T&&9===r.nodeType&&r.documentElement&&(a=(T=r).documentElement,E=!i(T),d!=T&&(n=T.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),p.scope=ce(function(e){return a.appendChild(e).appendChild(T.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),p.attributes=ce(function(e){return 
e.className="i",!e.getAttribute("className")}),p.getElementsByTagName=ce(function(e){return e.appendChild(T.createComment("")),!e.getElementsByTagName("*").length}),p.getElementsByClassName=J.test(T.getElementsByClassName),p.getById=ce(function(e){return a.appendChild(e).id=A,!T.getElementsByName||!T.getElementsByName(A).length}),p.getById?(x.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},x.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(x.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},x.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),x.find.TAG=p.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):p.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},x.find.CLASS=p.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(p.qsa=J.test(T.querySelectorAll))&&(ce(function(e){var 
t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+R+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+R+"*(?:value|"+I+")"),e.querySelectorAll("[id~="+A+"-]").length||v.push("~="),(t=T.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+R+"*name"+R+"*="+R+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+A+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=T.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+R+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(p.matchesSelector=J.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){p.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",W)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=J.test(a.compareDocumentPosition),y=t||J.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},D=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!p.sortDetached&&t.compareDocumentPosition(e)===n?e==T||e.ownerDocument==d&&y(d,e)?-1:t==T||t.ownerDocument==d&&y(d,t)?1:u?H(u,e)-H(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return 
l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==T?-1:t==T?1:i?-1:o?1:u?H(u,e)-H(u,t):0;if(i===o)return de(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?de(a[r],s[r]):a[r]==d?-1:s[r]==d?1:0}),T},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(C(e),p.matchesSelector&&E&&!k[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||p.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){k(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return Q.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&V.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+R+")"+e+"("+R+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function D(e,n,r){return b(n)?E.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?E.grep(e,function(e){return 
e===n!==r}):"string"!=typeof n?E.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(E.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||L,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:j.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof E?t[0]:t,E.merge(this,E.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:w,!0)),k.test(r[1])&&E.isPlainObject(t))for(r in t)b(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=w.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):b(e)?void 0!==n.ready?n.ready(e):e(E):E.makeArray(e,this)}).prototype=E.fn,L=E(w);var q=/^(?:parents|prev(?:Until|All))/,O={children:!0,contents:!0,next:!0,prev:!0};function P(e,t){while((e=e[t])&&1!==e.nodeType);return e}E.fn.extend({has:function(e){var t=E(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,pe=/^$|^module$|\/(?:java|ecma)script/i;le=w.createDocumentFragment().appendChild(w.createElement("div")),(ce=w.createElement("input")).setAttribute("type","radio"),ce.setAttribute("checked","checked"),ce.setAttribute("name","t"),le.appendChild(ce),m.checkClone=le.cloneNode(!0).cloneNode(!0).lastChild.checked,le.innerHTML="",m.noCloneChecked=!!le.cloneNode(!0).lastChild.defaultValue,le.innerHTML="",m.option=!!le.lastChild;var he={thead:[1,"","
</table>"],col:[2,"<table><colgroup>","
</colgroup></table>"],tr:[2,"<table><tbody>","
</tbody></table>"],td:[3,"<table><tbody><tr>","
"],_default:[0,"",""]};function ge(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&S(e,t)?E.merge([e],n):n}function ve(e,t){for(var n=0,r=e.length;n",""]);var ye=/<|&#?\w+;/;function me(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),d=[],p=0,h=e.length;p\s*$/g;function Le(e,t){return S(e,"table")&&S(11!==t.nodeType?t:t.firstChild,"tr")&&E(e).children("tbody")[0]||e}function je(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Oe(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n
",2===ft.childNodes.length),E.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(m.createHTMLDocument?((r=(t=w.implementation.createHTMLDocument("")).createElement("base")).href=w.location.href,t.head.appendChild(r)):t=w),o=!n&&[],(i=k.exec(e))?[t.createElement(i[1])]:(i=me([e],t,o),o&&o.length&&E(o).remove(),E.merge([],i.childNodes)));var r,i,o},E.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=E.css(e,"position"),c=E(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=E.css(e,"top"),u=E.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),b(t)&&(t=t.call(e,n,E.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):("number"==typeof f.top&&(f.top+="px"),"number"==typeof f.left&&(f.left+="px"),c.css(f))}},E.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){E.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===E.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===E.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=E(e).offset()).top+=E.css(e,"borderTopWidth",!0),i.left+=E.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-E.css(r,"marginTop",!0),left:t.left-i.left-E.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===E.css(e,"position"))e=e.offsetParent;return e||re})}}),E.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var 
o="pageYOffset"===i;E.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),E.each(["top","left"],function(e,n){E.cssHooks[n]=Fe(m.pixelPosition,function(e,t){if(t)return t=We(e,n),Ie.test(t)?E(e).position()[n]+"px":t})}),E.each({Height:"height",Width:"width"},function(a,s){E.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){E.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?E.css(e,t,i):E.style(e,t,n,i)},s,n?e:void 0,n)}})}),E.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),E.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){E.fn[n]=function(e,t){return 0= (3, 4) - -if PY3: - string_types = str, - integer_types = int, - class_types = type, - text_type = str - binary_type = bytes - - MAXSIZE = sys.maxsize -else: - string_types = basestring, - integer_types = (int, long) - class_types = (type, types.ClassType) - text_type = unicode - binary_type = str - - if sys.platform.startswith("java"): - # Jython always uses 32 bits. - MAXSIZE = int((1 << 31) - 1) - else: - # It's possible to have sizeof(long) != sizeof(Py_ssize_t). 
- class X(object): - - def __len__(self): - return 1 << 31 - try: - len(X()) - except OverflowError: - # 32-bit - MAXSIZE = int((1 << 31) - 1) - else: - # 64-bit - MAXSIZE = int((1 << 63) - 1) - del X - -if PY34: - from importlib.util import spec_from_loader -else: - spec_from_loader = None - - -def _add_doc(func, doc): - """Add documentation to a function.""" - func.__doc__ = doc - - -def _import_module(name): - """Import module, returning the module after the last dot.""" - __import__(name) - return sys.modules[name] - - -class _LazyDescr(object): - - def __init__(self, name): - self.name = name - - def __get__(self, obj, tp): - result = self._resolve() - setattr(obj, self.name, result) # Invokes __set__. - try: - # This is a bit ugly, but it avoids running this again by - # removing this descriptor. - delattr(obj.__class__, self.name) - except AttributeError: - pass - return result - - -class MovedModule(_LazyDescr): - - def __init__(self, name, old, new=None): - super(MovedModule, self).__init__(name) - if PY3: - if new is None: - new = name - self.mod = new - else: - self.mod = old - - def _resolve(self): - return _import_module(self.mod) - - def __getattr__(self, attr): - _module = self._resolve() - value = getattr(_module, attr) - setattr(self, attr, value) - return value - - -class _LazyModule(types.ModuleType): - - def __init__(self, name): - super(_LazyModule, self).__init__(name) - self.__doc__ = self.__class__.__doc__ - - def __dir__(self): - attrs = ["__doc__", "__name__"] - attrs += [attr.name for attr in self._moved_attributes] - return attrs - - # Subclasses should override this - _moved_attributes = [] - - -class MovedAttribute(_LazyDescr): - - def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): - super(MovedAttribute, self).__init__(name) - if PY3: - if new_mod is None: - new_mod = name - self.mod = new_mod - if new_attr is None: - if old_attr is None: - new_attr = name - else: - new_attr = old_attr - self.attr = new_attr - 
else: - self.mod = old_mod - if old_attr is None: - old_attr = name - self.attr = old_attr - - def _resolve(self): - module = _import_module(self.mod) - return getattr(module, self.attr) - - -class _SixMetaPathImporter(object): - - """ - A meta path importer to import six.moves and its submodules. - - This class implements a PEP302 finder and loader. It should be compatible - with Python 2.5 and all existing versions of Python3 - """ - - def __init__(self, six_module_name): - self.name = six_module_name - self.known_modules = {} - - def _add_module(self, mod, *fullnames): - for fullname in fullnames: - self.known_modules[self.name + "." + fullname] = mod - - def _get_module(self, fullname): - return self.known_modules[self.name + "." + fullname] - - def find_module(self, fullname, path=None): - if fullname in self.known_modules: - return self - return None - - def find_spec(self, fullname, path, target=None): - if fullname in self.known_modules: - return spec_from_loader(fullname, self) - return None - - def __get_module(self, fullname): - try: - return self.known_modules[fullname] - except KeyError: - raise ImportError("This loader does not know module " + fullname) - - def load_module(self, fullname): - try: - # in case of a reload - return sys.modules[fullname] - except KeyError: - pass - mod = self.__get_module(fullname) - if isinstance(mod, MovedModule): - mod = mod._resolve() - else: - mod.__loader__ = self - sys.modules[fullname] = mod - return mod - - def is_package(self, fullname): - """ - Return true, if the named module is a package. 
- - We need this method to get correct spec objects with - Python 3.4 (see PEP451) - """ - return hasattr(self.__get_module(fullname), "__path__") - - def get_code(self, fullname): - """Return None - - Required, if is_package is implemented""" - self.__get_module(fullname) # eventually raises ImportError - return None - get_source = get_code # same as get_code - - def create_module(self, spec): - return self.load_module(spec.name) - - def exec_module(self, module): - pass - -_importer = _SixMetaPathImporter(__name__) - - -class _MovedItems(_LazyModule): - - """Lazy loading of moved objects""" - __path__ = [] # mark as package - - -_moved_attributes = [ - MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), - MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), - MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), - MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), - MovedAttribute("intern", "__builtin__", "sys"), - MovedAttribute("map", "itertools", "builtins", "imap", "map"), - MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), - MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), - MovedAttribute("getoutput", "commands", "subprocess"), - MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), - MovedAttribute("reduce", "__builtin__", "functools"), - MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), - MovedAttribute("StringIO", "StringIO", "io"), - MovedAttribute("UserDict", "UserDict", "collections"), - MovedAttribute("UserList", "UserList", "collections"), - MovedAttribute("UserString", "UserString", "collections"), - MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), - MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), - 
MovedModule("builtins", "__builtin__"), - MovedModule("configparser", "ConfigParser"), - MovedModule("collections_abc", "collections", "collections.abc" if sys.version_info >= (3, 3) else "collections"), - MovedModule("copyreg", "copy_reg"), - MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), - MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), - MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread" if sys.version_info < (3, 9) else "_thread"), - MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), - MovedModule("http_cookies", "Cookie", "http.cookies"), - MovedModule("html_entities", "htmlentitydefs", "html.entities"), - MovedModule("html_parser", "HTMLParser", "html.parser"), - MovedModule("http_client", "httplib", "http.client"), - MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), - MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), - MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), - MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), - MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), - MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), - MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), - MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), - MovedModule("cPickle", "cPickle", "pickle"), - MovedModule("queue", "Queue"), - MovedModule("reprlib", "repr"), - MovedModule("socketserver", "SocketServer"), - MovedModule("_thread", "thread", "_thread"), - MovedModule("tkinter", "Tkinter"), - MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), - MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), - MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), - MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), - MovedModule("tkinter_tix", "Tix", "tkinter.tix"), - MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), - 
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), - MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), - MovedModule("tkinter_colorchooser", "tkColorChooser", - "tkinter.colorchooser"), - MovedModule("tkinter_commondialog", "tkCommonDialog", - "tkinter.commondialog"), - MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), - MovedModule("tkinter_font", "tkFont", "tkinter.font"), - MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), - MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", - "tkinter.simpledialog"), - MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), - MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), - MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), - MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), - MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), - MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), -] -# Add windows specific modules. -if sys.platform == "win32": - _moved_attributes += [ - MovedModule("winreg", "_winreg"), - ] - -for attr in _moved_attributes: - setattr(_MovedItems, attr.name, attr) - if isinstance(attr, MovedModule): - _importer._add_module(attr, "moves." 
+ attr.name) -del attr - -_MovedItems._moved_attributes = _moved_attributes - -moves = _MovedItems(__name__ + ".moves") -_importer._add_module(moves, "moves") - - -class Module_six_moves_urllib_parse(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_parse""" - - -_urllib_parse_moved_attributes = [ - MovedAttribute("ParseResult", "urlparse", "urllib.parse"), - MovedAttribute("SplitResult", "urlparse", "urllib.parse"), - MovedAttribute("parse_qs", "urlparse", "urllib.parse"), - MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), - MovedAttribute("urldefrag", "urlparse", "urllib.parse"), - MovedAttribute("urljoin", "urlparse", "urllib.parse"), - MovedAttribute("urlparse", "urlparse", "urllib.parse"), - MovedAttribute("urlsplit", "urlparse", "urllib.parse"), - MovedAttribute("urlunparse", "urlparse", "urllib.parse"), - MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), - MovedAttribute("quote", "urllib", "urllib.parse"), - MovedAttribute("quote_plus", "urllib", "urllib.parse"), - MovedAttribute("unquote", "urllib", "urllib.parse"), - MovedAttribute("unquote_plus", "urllib", "urllib.parse"), - MovedAttribute("unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes"), - MovedAttribute("urlencode", "urllib", "urllib.parse"), - MovedAttribute("splitquery", "urllib", "urllib.parse"), - MovedAttribute("splittag", "urllib", "urllib.parse"), - MovedAttribute("splituser", "urllib", "urllib.parse"), - MovedAttribute("splitvalue", "urllib", "urllib.parse"), - MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), - MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), - MovedAttribute("uses_params", "urlparse", "urllib.parse"), - MovedAttribute("uses_query", "urlparse", "urllib.parse"), - MovedAttribute("uses_relative", "urlparse", "urllib.parse"), -] -for attr in _urllib_parse_moved_attributes: - setattr(Module_six_moves_urllib_parse, attr.name, attr) -del attr - -Module_six_moves_urllib_parse._moved_attributes = 
_urllib_parse_moved_attributes - -_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), - "moves.urllib_parse", "moves.urllib.parse") - - -class Module_six_moves_urllib_error(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_error""" - - -_urllib_error_moved_attributes = [ - MovedAttribute("URLError", "urllib2", "urllib.error"), - MovedAttribute("HTTPError", "urllib2", "urllib.error"), - MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), -] -for attr in _urllib_error_moved_attributes: - setattr(Module_six_moves_urllib_error, attr.name, attr) -del attr - -Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes - -_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), - "moves.urllib_error", "moves.urllib.error") - - -class Module_six_moves_urllib_request(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_request""" - - -_urllib_request_moved_attributes = [ - MovedAttribute("urlopen", "urllib2", "urllib.request"), - MovedAttribute("install_opener", "urllib2", "urllib.request"), - MovedAttribute("build_opener", "urllib2", "urllib.request"), - MovedAttribute("pathname2url", "urllib", "urllib.request"), - MovedAttribute("url2pathname", "urllib", "urllib.request"), - MovedAttribute("getproxies", "urllib", "urllib.request"), - MovedAttribute("Request", "urllib2", "urllib.request"), - MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), - MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), - MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), - MovedAttribute("BaseHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), - 
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), - MovedAttribute("FileHandler", "urllib2", "urllib.request"), - MovedAttribute("FTPHandler", "urllib2", "urllib.request"), - MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), - MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), - MovedAttribute("urlretrieve", "urllib", "urllib.request"), - MovedAttribute("urlcleanup", "urllib", "urllib.request"), - MovedAttribute("URLopener", "urllib", "urllib.request"), - MovedAttribute("FancyURLopener", "urllib", "urllib.request"), - MovedAttribute("proxy_bypass", "urllib", "urllib.request"), - MovedAttribute("parse_http_list", "urllib2", "urllib.request"), - MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), -] -for attr in _urllib_request_moved_attributes: - setattr(Module_six_moves_urllib_request, attr.name, attr) -del attr - -Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes - -_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), - "moves.urllib_request", "moves.urllib.request") - - -class Module_six_moves_urllib_response(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_response""" - - -_urllib_response_moved_attributes = [ - MovedAttribute("addbase", "urllib", "urllib.response"), - MovedAttribute("addclosehook", "urllib", "urllib.response"), - MovedAttribute("addinfo", 
"urllib", "urllib.response"), - MovedAttribute("addinfourl", "urllib", "urllib.response"), -] -for attr in _urllib_response_moved_attributes: - setattr(Module_six_moves_urllib_response, attr.name, attr) -del attr - -Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes - -_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), - "moves.urllib_response", "moves.urllib.response") - - -class Module_six_moves_urllib_robotparser(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_robotparser""" - - -_urllib_robotparser_moved_attributes = [ - MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), -] -for attr in _urllib_robotparser_moved_attributes: - setattr(Module_six_moves_urllib_robotparser, attr.name, attr) -del attr - -Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes - -_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), - "moves.urllib_robotparser", "moves.urllib.robotparser") - - -class Module_six_moves_urllib(types.ModuleType): - - """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" - __path__ = [] # mark as package - parse = _importer._get_module("moves.urllib_parse") - error = _importer._get_module("moves.urllib_error") - request = _importer._get_module("moves.urllib_request") - response = _importer._get_module("moves.urllib_response") - robotparser = _importer._get_module("moves.urllib_robotparser") - - def __dir__(self): - return ['parse', 'error', 'request', 'response', 'robotparser'] - -_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), - "moves.urllib") - - -def add_move(move): - """Add an item to six.moves.""" - setattr(_MovedItems, move.name, move) - - -def remove_move(name): - """Remove item from six.moves.""" - try: - delattr(_MovedItems, name) - except AttributeError: - try: - del moves.__dict__[name] - 
except KeyError: - raise AttributeError("no such move, %r" % (name,)) - - -if PY3: - _meth_func = "__func__" - _meth_self = "__self__" - - _func_closure = "__closure__" - _func_code = "__code__" - _func_defaults = "__defaults__" - _func_globals = "__globals__" -else: - _meth_func = "im_func" - _meth_self = "im_self" - - _func_closure = "func_closure" - _func_code = "func_code" - _func_defaults = "func_defaults" - _func_globals = "func_globals" - - -try: - advance_iterator = next -except NameError: - def advance_iterator(it): - return it.next() -next = advance_iterator - - -try: - callable = callable -except NameError: - def callable(obj): - return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) - - -if PY3: - def get_unbound_function(unbound): - return unbound - - create_bound_method = types.MethodType - - def create_unbound_method(func, cls): - return func - - Iterator = object -else: - def get_unbound_function(unbound): - return unbound.im_func - - def create_bound_method(func, obj): - return types.MethodType(func, obj, obj.__class__) - - def create_unbound_method(func, cls): - return types.MethodType(func, None, cls) - - class Iterator(object): - - def next(self): - return type(self).__next__(self) - - callable = callable -_add_doc(get_unbound_function, - """Get the function out of a possibly unbound function""") - - -get_method_function = operator.attrgetter(_meth_func) -get_method_self = operator.attrgetter(_meth_self) -get_function_closure = operator.attrgetter(_func_closure) -get_function_code = operator.attrgetter(_func_code) -get_function_defaults = operator.attrgetter(_func_defaults) -get_function_globals = operator.attrgetter(_func_globals) - - -if PY3: - def iterkeys(d, **kw): - return iter(d.keys(**kw)) - - def itervalues(d, **kw): - return iter(d.values(**kw)) - - def iteritems(d, **kw): - return iter(d.items(**kw)) - - def iterlists(d, **kw): - return iter(d.lists(**kw)) - - viewkeys = operator.methodcaller("keys") - - viewvalues = 
operator.methodcaller("values") - - viewitems = operator.methodcaller("items") -else: - def iterkeys(d, **kw): - return d.iterkeys(**kw) - - def itervalues(d, **kw): - return d.itervalues(**kw) - - def iteritems(d, **kw): - return d.iteritems(**kw) - - def iterlists(d, **kw): - return d.iterlists(**kw) - - viewkeys = operator.methodcaller("viewkeys") - - viewvalues = operator.methodcaller("viewvalues") - - viewitems = operator.methodcaller("viewitems") - -_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") -_add_doc(itervalues, "Return an iterator over the values of a dictionary.") -_add_doc(iteritems, - "Return an iterator over the (key, value) pairs of a dictionary.") -_add_doc(iterlists, - "Return an iterator over the (key, [values]) pairs of a dictionary.") - - -if PY3: - def b(s): - return s.encode("latin-1") - - def u(s): - return s - unichr = chr - import struct - int2byte = struct.Struct(">B").pack - del struct - byte2int = operator.itemgetter(0) - indexbytes = operator.getitem - iterbytes = iter - import io - StringIO = io.StringIO - BytesIO = io.BytesIO - del io - _assertCountEqual = "assertCountEqual" - if sys.version_info[1] <= 1: - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" - else: - _assertRaisesRegex = "assertRaisesRegex" - _assertRegex = "assertRegex" - _assertNotRegex = "assertNotRegex" -else: - def b(s): - return s - # Workaround for standalone backslash - - def u(s): - return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") - unichr = unichr - int2byte = chr - - def byte2int(bs): - return ord(bs[0]) - - def indexbytes(buf, i): - return ord(buf[i]) - iterbytes = functools.partial(itertools.imap, ord) - import StringIO - StringIO = BytesIO = StringIO.StringIO - _assertCountEqual = "assertItemsEqual" - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" -_add_doc(b, 
"""Byte literal""") -_add_doc(u, """Text literal""") - - -def assertCountEqual(self, *args, **kwargs): - return getattr(self, _assertCountEqual)(*args, **kwargs) - - -def assertRaisesRegex(self, *args, **kwargs): - return getattr(self, _assertRaisesRegex)(*args, **kwargs) - - -def assertRegex(self, *args, **kwargs): - return getattr(self, _assertRegex)(*args, **kwargs) - - -def assertNotRegex(self, *args, **kwargs): - return getattr(self, _assertNotRegex)(*args, **kwargs) - - -if PY3: - exec_ = getattr(moves.builtins, "exec") - - def reraise(tp, value, tb=None): - try: - if value is None: - value = tp() - if value.__traceback__ is not tb: - raise value.with_traceback(tb) - raise value - finally: - value = None - tb = None - -else: - def exec_(_code_, _globs_=None, _locs_=None): - """Execute code in a namespace.""" - if _globs_ is None: - frame = sys._getframe(1) - _globs_ = frame.f_globals - if _locs_ is None: - _locs_ = frame.f_locals - del frame - elif _locs_ is None: - _locs_ = _globs_ - exec("""exec _code_ in _globs_, _locs_""") - - exec_("""def reraise(tp, value, tb=None): - try: - raise tp, value, tb - finally: - tb = None -""") - - -if sys.version_info[:2] > (3,): - exec_("""def raise_from(value, from_value): - try: - raise value from from_value - finally: - value = None -""") -else: - def raise_from(value, from_value): - raise value - - -print_ = getattr(moves.builtins, "print", None) -if print_ is None: - def print_(*args, **kwargs): - """The new-style print function for Python 2.4 and 2.5.""" - fp = kwargs.pop("file", sys.stdout) - if fp is None: - return - - def write(data): - if not isinstance(data, basestring): - data = str(data) - # If the file has an encoding, encode unicode with it. 
- if (isinstance(fp, file) and - isinstance(data, unicode) and - fp.encoding is not None): - errors = getattr(fp, "errors", None) - if errors is None: - errors = "strict" - data = data.encode(fp.encoding, errors) - fp.write(data) - want_unicode = False - sep = kwargs.pop("sep", None) - if sep is not None: - if isinstance(sep, unicode): - want_unicode = True - elif not isinstance(sep, str): - raise TypeError("sep must be None or a string") - end = kwargs.pop("end", None) - if end is not None: - if isinstance(end, unicode): - want_unicode = True - elif not isinstance(end, str): - raise TypeError("end must be None or a string") - if kwargs: - raise TypeError("invalid keyword arguments to print()") - if not want_unicode: - for arg in args: - if isinstance(arg, unicode): - want_unicode = True - break - if want_unicode: - newline = unicode("\n") - space = unicode(" ") - else: - newline = "\n" - space = " " - if sep is None: - sep = space - if end is None: - end = newline - for i, arg in enumerate(args): - if i: - write(sep) - write(arg) - write(end) -if sys.version_info[:2] < (3, 3): - _print = print_ - - def print_(*args, **kwargs): - fp = kwargs.get("file", sys.stdout) - flush = kwargs.pop("flush", False) - _print(*args, **kwargs) - if flush and fp is not None: - fp.flush() - -_add_doc(reraise, """Reraise an exception.""") - -if sys.version_info[0:2] < (3, 4): - # This does exactly the same what the :func:`py3:functools.update_wrapper` - # function does on Python versions after 3.2. It sets the ``__wrapped__`` - # attribute on ``wrapper`` object and it doesn't raise an error if any of - # the attributes mentioned in ``assigned`` and ``updated`` are missing on - # ``wrapped`` object. 
- def _update_wrapper(wrapper, wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES): - for attr in assigned: - try: - value = getattr(wrapped, attr) - except AttributeError: - continue - else: - setattr(wrapper, attr, value) - for attr in updated: - getattr(wrapper, attr).update(getattr(wrapped, attr, {})) - wrapper.__wrapped__ = wrapped - return wrapper - _update_wrapper.__doc__ = functools.update_wrapper.__doc__ - - def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES): - return functools.partial(_update_wrapper, wrapped=wrapped, - assigned=assigned, updated=updated) - wraps.__doc__ = functools.wraps.__doc__ - -else: - wraps = functools.wraps - - -def with_metaclass(meta, *bases): - """Create a base class with a metaclass.""" - # This requires a bit of explanation: the basic idea is to make a dummy - # metaclass for one level of class instantiation that replaces itself with - # the actual metaclass. - class metaclass(type): - - def __new__(cls, name, this_bases, d): - if sys.version_info[:2] >= (3, 7): - # This version introduced PEP 560 that requires a bit - # of extra care (we mimic what is done by __build_class__). 
- resolved_bases = types.resolve_bases(bases) - if resolved_bases is not bases: - d['__orig_bases__'] = bases - else: - resolved_bases = bases - return meta(name, resolved_bases, d) - - @classmethod - def __prepare__(cls, name, this_bases): - return meta.__prepare__(name, bases) - return type.__new__(metaclass, 'temporary_class', (), {}) - - -def add_metaclass(metaclass): - """Class decorator for creating a class with a metaclass.""" - def wrapper(cls): - orig_vars = cls.__dict__.copy() - slots = orig_vars.get('__slots__') - if slots is not None: - if isinstance(slots, str): - slots = [slots] - for slots_var in slots: - orig_vars.pop(slots_var) - orig_vars.pop('__dict__', None) - orig_vars.pop('__weakref__', None) - if hasattr(cls, '__qualname__'): - orig_vars['__qualname__'] = cls.__qualname__ - return metaclass(cls.__name__, cls.__bases__, orig_vars) - return wrapper - - -def ensure_binary(s, encoding='utf-8', errors='strict'): - """Coerce **s** to six.binary_type. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> encoded to `bytes` - - `bytes` -> `bytes` - """ - if isinstance(s, binary_type): - return s - if isinstance(s, text_type): - return s.encode(encoding, errors) - raise TypeError("not expecting type '%s'" % type(s)) - - -def ensure_str(s, encoding='utf-8', errors='strict'): - """Coerce *s* to `str`. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - # Optimization: Fast return for the common case. - if type(s) is str: - return s - if PY2 and isinstance(s, text_type): - return s.encode(encoding, errors) - elif PY3 and isinstance(s, binary_type): - return s.decode(encoding, errors) - elif not isinstance(s, (text_type, binary_type)): - raise TypeError("not expecting type '%s'" % type(s)) - return s - - -def ensure_text(s, encoding='utf-8', errors='strict'): - """Coerce *s* to six.text_type. 
- - For Python 2: - - `unicode` -> `unicode` - - `str` -> `unicode` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - if isinstance(s, binary_type): - return s.decode(encoding, errors) - elif isinstance(s, text_type): - return s - else: - raise TypeError("not expecting type '%s'" % type(s)) - - -def python_2_unicode_compatible(klass): - """ - A class decorator that defines __unicode__ and __str__ methods under Python 2. - Under Python 3 it does nothing. - - To support Python 2 and 3 with a single code base, define a __str__ method - returning text and apply this decorator to the class. - """ - if PY2: - if '__str__' not in klass.__dict__: - raise ValueError("@python_2_unicode_compatible cannot be applied " - "to %s because it doesn't define __str__()." % - klass.__name__) - klass.__unicode__ = klass.__str__ - klass.__str__ = lambda self: self.__unicode__().encode('utf-8') - return klass - - -# Complete the moves implementation. -# This code is at the end of this module to speed up module loading. -# Turn this module into a package. -__path__ = [] # required for PEP 302 and PEP 451 -__package__ = __name__ # see PEP 366 @ReservedAssignment -if globals().get("__spec__") is not None: - __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable -# Remove other six meta path importers, since they cause problems. This can -# happen if six is removed from sys.modules and then reloaded. (Setuptools does -# this for some reason.) -if sys.meta_path: - for i, importer in enumerate(sys.meta_path): - # Here's some real nastiness: Another "instance" of the six module might - # be floating around. Therefore, we can't use isinstance() to check for - # the six meta path importer, since the other six instance will have - # inserted an importer with different class. 
- if (type(importer).__name__ == "_SixMetaPathImporter" and - importer.name == __name__): - del sys.meta_path[i] - break - del i, importer -# Finally, add the importer to the meta path import hook. -sys.meta_path.append(_importer) diff --git a/spaces/AtomdffAI/wechatgpt4atom/bridge/bridge.py b/spaces/AtomdffAI/wechatgpt4atom/bridge/bridge.py deleted file mode 100644 index 6c164e87bb9f1623c70180e55d689c588f6509f4..0000000000000000000000000000000000000000 --- a/spaces/AtomdffAI/wechatgpt4atom/bridge/bridge.py +++ /dev/null @@ -1,9 +0,0 @@ -from bot import bot_factory - - -class Bridge(object): - def __init__(self): - pass - - def fetch_reply_content(self, query, context): - return bot_factory.create_bot("chatGPT").reply(query, context) diff --git a/spaces/Awesimo/jojogan/e4e/configs/paths_config.py b/spaces/Awesimo/jojogan/e4e/configs/paths_config.py deleted file mode 100644 index 4604f6063b8125364a52a492de52fcc54004f373..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/configs/paths_config.py +++ /dev/null @@ -1,28 +0,0 @@ -dataset_paths = { - # Face Datasets (In the paper: FFHQ - train, CelebAHQ - test) - 'ffhq': '', - 'celeba_test': '', - - # Cars Dataset (In the paper: Stanford cars) - 'cars_train': '', - 'cars_test': '', - - # Horse Dataset (In the paper: LSUN Horse) - 'horse_train': '', - 'horse_test': '', - - # Church Dataset (In the paper: LSUN Church) - 'church_train': '', - 'church_test': '', - - # Cats Dataset (In the paper: LSUN Cat) - 'cats_train': '', - 'cats_test': '' -} - -model_paths = { - 'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt', - 'ir_se50': 'pretrained_models/model_ir_se50.pth', - 'shape_predictor': 'pretrained_models/shape_predictor_68_face_landmarks.dat', - 'moco': 'pretrained_models/moco_v2_800ep_pretrain.pth' -} diff --git a/spaces/BMukhtar/BookRecognitionKz/kz_ocr_easy.py b/spaces/BMukhtar/BookRecognitionKz/kz_ocr_easy.py deleted file mode 100644 index 
c3bbba4bc56257f199b8cb38fa3d5289e7477bdd..0000000000000000000000000000000000000000 --- a/spaces/BMukhtar/BookRecognitionKz/kz_ocr_easy.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import cv2 -import numpy as np -from PIL import Image, ImageDraw, ImageFont -from tqdm import tqdm -import os - -import easyocr - -models_dir = "./models" -images_dir = "./images" -output_dir = "./output" -dirs = [models_dir, images_dir, output_dir] -for d in dirs: - if not os.path.exists(d): - os.makedirs(d) - -""" -Upload easy OCR model files with the same name and font file named Ubuntu-Regular.ttf, examples: -best_norm_ED.pth -best_norm_ED.py -best_norm_ED.yaml -Ubuntu-Regular.ttf - -to models directory - -Upload image files you want to test, examples: -kz_book_simple.jpeg -kz_blur.jpg -kz_book_complex.jpg - -to images directory -""" - -font_path = models_dir + "/Ubuntu-Regular.ttf" - -reader = easyocr.Reader( - ['en'], - gpu=True, - recog_network='best_norm_ED', - detect_network="craft", - user_network_directory=models_dir, - model_storage_directory=models_dir, -) # this needs to run only once to load the model into memory - -image_extensions = (".jpg", ".jpeg", ".png") - -for image_name in tqdm(os.listdir(images_dir)): - if not image_name.lower().endswith(image_extensions): - print(f'unsupported file {image_name}') - continue - image_path = f'{images_dir}/{image_name}' - print(image_path) - # Read image as numpy array - image = cv2.imread(image_path) - - # Rotate the image by 270 degrees - # image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE) - - # Convert the image from BGR to RGB (because OpenCV loads images in BGR format) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - results = reader.readtext(image=image) - - # Load custom font - font = ImageFont.truetype(font_path, 32) - - # Display the results - for (bbox, text, prob) in results: - # Get the bounding box coordinates - (top_left, top_right, bottom_right, bottom_left) = bbox - top_left = (int(top_left[0]),
int(top_left[1])) - bottom_right = (int(bottom_right[0]), int(bottom_right[1])) - - # Draw the bounding box on the image - cv2.rectangle(image, top_left, bottom_right, (0, 255, 0), 2) - - # Convert the OpenCV image to a PIL image, draw the text, then convert back to an OpenCV image - image_pil = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) - draw = ImageDraw.Draw(image_pil) - draw.text((top_left[0], top_left[1] - 40), text, font=font, fill=(0, 0, 255)) - image = cv2.cvtColor(np.array(image_pil), cv2.COLOR_RGB2BGR) - - # Save image - cv2.imwrite( f'{output_dir}/{image_name}', image) - - # reader.readtext(image = image, paragraph=True) - - diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis La Ampliadora De Imgenes.md b/spaces/Benson/text-generation/Examples/Descargar Gratis La Ampliadora De Imgenes.md deleted file mode 100644 index 4b0bc06ac538e128d9893be9e41f9db5a9411ced..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis La Ampliadora De Imgenes.md +++ /dev/null @@ -1,54 +0,0 @@ -
-

AI Image Enlarger Free Download: How to Enhance and Improve Your Photos with Artificial Intelligence

-

Have you ever wanted to make your photos look sharper, clearer, and more realistic? Do you have low-resolution images you want to enlarge without losing quality? If so, you might be interested in AI image enlarger tools.

-

AI image enlarger tools are software applications that use artificial intelligence algorithms to enhance and improve your photos. They can increase the resolution, quality, and detail of your images without manual editing. They can also fix blurry, pixelated, and low-resolution images and make them look professional and realistic.

-

image enlarger free download


Download Zip > https://bltlly.com/2v6JH3



-

In this article, we'll show you the benefits of using AI image enlarger tools, the best ones you can download for free, how to use them to enhance your photos, and some tips and tricks to get the most out of them. Let's get started!

-

The Benefits of Using AI Image Enlarger Tools

-

AI image enlarger tools can help you achieve incredible results with your photos. Here are some of the benefits of using them:

-
    -
  • Improve image quality and resolution without losing detail. AI image enlarger tools can increase the size of your images without compromising their quality. They can preserve the details, textures, colors, and edges of your photos while adding more pixels. This way, you can get high-resolution images suitable for printing, web design, or any other purpose.
  • -
  • Fix blurry, pixelated, and low-resolution images. AI image enlarger tools can also correct the common problems that affect low-quality images. They can remove noise, artifacts, blur, and distortion from your photos and make them look sharp and clear. They can also restore missing details and features in your photos and make them look realistic.
  • - -
  • Save time and money on manual editing. AI image enlarger tools can do all the work for you in seconds. You don't need to spend hours or money on hiring a professional editor or using complex software. Just upload your image file, choose the desired output size and quality, and wait for the AI to process the image. Then you can download or share your enhanced image with ease.
  • -
-

The Best AI Image Enlarger Tools You Can Download for Free

-


There are many AI image enlarger tools available online, but not all of them are free or reliable. Here are some of the best ones you can download for free and use to enhance and improve your photos:

-
    -
  • Upscayl: Upscayl is a free and open-source image upscaler for Linux, macOS, and Windows. It uses a deep learning model called ESRGAN (Enhanced Super-Resolution Generative Adversarial Network) to upscale images up to 16 times their original size. It can also enhance the details, colors, and textures of your images. You can download Upscayl from its official website or its GitHub repository. You can also watch a video tutorial on how to use it here.
  • -
  • Let's Enhance: Let's Enhance is a free online photo enhancer and upscaler that uses AI to improve and generate images. It can upscale images up to 16 times their original size and improve their quality and resolution. It can also add realistic backgrounds, objects, faces, or text to your images. You can use Let's Enhance for free with some limitations or upgrade to a premium plan for more features and benefits. You can access Let's Enhance from its official website or download its app for Android or iOS devices.
  • - -
-

How to Use AI Image Enlarger Tools to Enhance Your Photos

-

Using AI image enlarger tools is very quick and easy. These are the basic steps to follow to enhance your photos with AI:

-
    -
  1. Select or drag and drop your image file. The first step is to choose the image file you want to enhance. You can select it from your device or drag and drop it onto the tool's interface. Most tools support common image formats such as JPG, PNG, BMP, TIFF, etc.
  2. -
  3. Choose the desired output size and quality. The next step is to choose how much you want to enlarge your image and what quality level you want to achieve. Most tools offer different options for output size and quality, such as 2x, 4x, 8x, 16x, low, medium, high, etc. You can also customize the output settings according to your preferences.
  4. -
  5. Wait for the AI to process your image. The third step is to wait for the AI to process your image. Depending on the tool, the image size, and the output settings you have chosen, this can take from a few seconds to a few minutes. You can usually see the processing progress in the tool's interface.
  6. -
  7. Download or share your enhanced image. The final step is to download or share your enhanced image. Most tools let you download your image in various formats such as JPG, PNG, PDF, etc. You can also share your image via email, social media, cloud storage, etc.
  8. -
-
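The steps above describe GUI workflows, but it can help to see what "adding more pixels" actually means. Below is a minimal pure-Python sketch of classical bilinear upscaling — the interpolation baseline that AI enlargers improve upon by using learned models (such as the ESRGAN model mentioned above) to synthesize plausible detail instead of merely averaging neighboring pixels. The function and variable names here are illustrative and not taken from any of the tools discussed.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image (list of lists of numbers) by an
    integer factor using bilinear interpolation. Each output pixel is
    mapped back into source coordinates and blended from its four
    nearest source pixels — no new detail is invented, which is why
    plain interpolation looks soft compared to AI super-resolution."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Map output pixel (y, x) back into source coordinates.
            sy = y * (h - 1) / (H - 1) if H > 1 else 0.0
            sx = x * (w - 1) / (W - 1) if W > 1 else 0.0
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # Blend horizontally on the two rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out


small = [[0, 100],
         [100, 200]]
big = bilinear_upscale(small, 2)  # 2x2 -> 4x4; corner values are preserved
```

An AI enlarger replaces the blending step with a neural network trained on millions of image pairs, so edges and textures stay crisp rather than being averaged away.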

Tips and Tricks for Getting the Most Out of AI Image Enlarger Tools

-

To get the best results with AI image enlarger tools, here are some tips and tricks to keep in mind:

-
    - -
  • Experiment with different models and settings to find the best fit for your image. Different AI models and settings can produce different results for different images. Some models may work better for certain types of images than others. Some settings can also affect the speed, accuracy, and realism of the output image. Therefore, it is recommended to experiment with different models and settings to find the best fit for your image.
  • -
  • Compare the original and enhanced images to see the difference. Most tools let you compare the original and enhanced images side by side or with a slider. This way, you can see the difference and evaluate the improvement. You can also zoom in and out to inspect the details and quality of your images. This can help you decide whether you are satisfied with the output or not.
  • -
  • Use batch editing to enhance multiple images at once. If you have many images you want to enhance, you can use batch editing to save time and effort. Most tools let you upload several images at once and process them in one go. You can also apply the same settings and models to all your images or customize them individually. Then you can download or share all your enhanced images with a single click.
  • -
-

Conclusion

-

AI image enlarger tools are amazing applications that can help you enhance and improve your photos with artificial intelligence. They can improve the quality, resolution, and detail of your images without manual editing. They can also fix blurry, pixelated, and low-resolution images and make them look professional and realistic.

-

-

In this article, we showed you the benefits of using AI image enlarger tools, the best ones you can download for free, how to use them to enhance your photos, and some tips and tricks to get the most out of them. We hope you found this article helpful and informative.

- -

Thank you for reading this article. If you have any questions or comments, feel free to leave a comment below. We would love to hear from you!

-

Frequently Asked Questions

-

Here are some frequently asked questions about AI image enlarger tools:

-
    -
  1. What is an AI image enlarger? An AI image enlarger is a software application that uses artificial intelligence algorithms to enhance and improve your photos. It can improve the resolution, quality, and detail of your images without manual editing. It can also fix blurry, pixelated, and low-resolution images and make them look professional and realistic.
  2. -
  3. Why do I need an AI image enlarger? You may need an AI image enlarger if you want your photos to look sharper, clearer, and more realistic. You may also need one if you have low-resolution images you want to enlarge without losing quality. An AI image enlarger can help you achieve incredible results with your photos without spending hours or money on hiring a professional editor or using complex software.
  4. -
  5. How does an AI image enlarger work? An AI image enlarger works by using deep learning models that have been trained on millions of high-quality images. These models can analyze the input image and generate a new output image with more pixels, details, colors, and features. They can also correct the common problems that affect low-quality images, such as noise, artifacts, blur, and distortion.
  6. -
  7. How much does an AI image enlarger cost? AI image enlarger tools vary in their prices and features. Some of them are free or offer free trials with some limitations. Some of them require a subscription or a one-time payment for more features and benefits. You can compare the prices and features of different AI image enlarger tools online and choose the one that fits your needs and budget.
  8. - -

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat/src/lib/switchTheme.ts b/spaces/BetterAPI/BetterChat/src/lib/switchTheme.ts deleted file mode 100644 index 9da30b244c4b20b4585b34a02617895a3499a56f..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/lib/switchTheme.ts +++ /dev/null @@ -1,10 +0,0 @@ -export function switchTheme() { - const { classList } = document.querySelector("html") as HTMLElement; - if (classList.contains("dark")) { - classList.remove("dark"); - localStorage.theme = "light"; - } else { - classList.add("dark"); - localStorage.theme = "dark"; - } -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/link.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/link.py deleted file mode 100644 index e741c3283cd5984d5c62936f1f418ee3d9d7e596..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/link.py +++ /dev/null @@ -1,531 +0,0 @@ -import functools -import itertools -import logging -import os -import posixpath -import re -import urllib.parse -from dataclasses import dataclass -from typing import ( - TYPE_CHECKING, - Any, - Dict, - List, - Mapping, - NamedTuple, - Optional, - Tuple, - Union, -) - -from pip._internal.utils.deprecation import deprecated -from pip._internal.utils.filetypes import WHEEL_EXTENSION -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.misc import ( - pairwise, - redact_auth_from_url, - split_auth_from_netloc, - splitext, -) -from pip._internal.utils.models import KeyBasedCompareMixin -from pip._internal.utils.urls import path_to_url, url_to_path - -if TYPE_CHECKING: - from pip._internal.index.collector import IndexContent - -logger = logging.getLogger(__name__) - - -# Order matters, earlier hashes have a precedence over later hashes for what -# we will pick to use. 
-_SUPPORTED_HASHES = ("sha512", "sha384", "sha256", "sha224", "sha1", "md5") - - -@dataclass(frozen=True) -class LinkHash: - """Links to content may have embedded hash values. This class parses those. - - `name` must be any member of `_SUPPORTED_HASHES`. - - This class can be converted to and from `ArchiveInfo`. While ArchiveInfo intends to - be JSON-serializable to conform to PEP 610, this class contains the logic for - parsing a hash name and value for correctness, and then checking whether that hash - conforms to a schema with `.is_hash_allowed()`.""" - - name: str - value: str - - _hash_url_fragment_re = re.compile( - # NB: we do not validate that the second group (.*) is a valid hex - # digest. Instead, we simply keep that string in this class, and then check it - # against Hashes when hash-checking is needed. This is easier to debug than - # proactively discarding an invalid hex digest, as we handle incorrect hashes - # and malformed hashes in the same place. - r"[#&]({choices})=([^&]*)".format( - choices="|".join(re.escape(hash_name) for hash_name in _SUPPORTED_HASHES) - ), - ) - - def __post_init__(self) -> None: - assert self.name in _SUPPORTED_HASHES - - @classmethod - def parse_pep658_hash(cls, dist_info_metadata: str) -> Optional["LinkHash"]: - """Parse a PEP 658 data-dist-info-metadata hash.""" - if dist_info_metadata == "true": - return None - name, sep, value = dist_info_metadata.partition("=") - if not sep: - return None - if name not in _SUPPORTED_HASHES: - return None - return cls(name=name, value=value) - - @classmethod - @functools.lru_cache(maxsize=None) - def find_hash_url_fragment(cls, url: str) -> Optional["LinkHash"]: - """Search a string for a checksum algorithm name and encoded output value.""" - match = cls._hash_url_fragment_re.search(url) - if match is None: - return None - name, value = match.groups() - return cls(name=name, value=value) - - def as_dict(self) -> Dict[str, str]: - return {self.name: self.value} - - def as_hashes(self) 
-> Hashes: - """Return a Hashes instance which checks only for the current hash.""" - return Hashes({self.name: [self.value]}) - - def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool: - """ - Return True if the current hash is allowed by `hashes`. - """ - if hashes is None: - return False - return hashes.is_hash_allowed(self.name, hex_digest=self.value) - - -def _clean_url_path_part(part: str) -> str: - """ - Clean a "part" of a URL path (i.e. after splitting on "@" characters). - """ - # We unquote prior to quoting to make sure nothing is double quoted. - return urllib.parse.quote(urllib.parse.unquote(part)) - - -def _clean_file_url_path(part: str) -> str: - """ - Clean the first part of a URL path that corresponds to a local - filesystem path (i.e. the first part after splitting on "@" characters). - """ - # We unquote prior to quoting to make sure nothing is double quoted. - # Also, on Windows the path part might contain a drive letter which - # should not be quoted. On Linux where drive letters do not - # exist, the colon should be quoted. We rely on urllib.request - # to do the right thing here. - return urllib.request.pathname2url(urllib.request.url2pathname(part)) - - -# percent-encoded: / -_reserved_chars_re = re.compile("(@|%2F)", re.IGNORECASE) - - -def _clean_url_path(path: str, is_local_path: bool) -> str: - """ - Clean the path portion of a URL. - """ - if is_local_path: - clean_func = _clean_file_url_path - else: - clean_func = _clean_url_path_part - - # Split on the reserved characters prior to cleaning so that - # revision strings in VCS URLs are properly preserved. - parts = _reserved_chars_re.split(path) - - cleaned_parts = [] - for to_clean, reserved in pairwise(itertools.chain(parts, [""])): - cleaned_parts.append(clean_func(to_clean)) - # Normalize %xx escapes (e.g. 
%2f -> %2F) - cleaned_parts.append(reserved.upper()) - - return "".join(cleaned_parts) - - -def _ensure_quoted_url(url: str) -> str: - """ - Make sure a link is fully quoted. - For example, if ' ' occurs in the URL, it will be replaced with "%20", - and without double-quoting other characters. - """ - # Split the URL into parts according to the general structure - # `scheme://netloc/path;parameters?query#fragment`. - result = urllib.parse.urlparse(url) - # If the netloc is empty, then the URL refers to a local filesystem path. - is_local_path = not result.netloc - path = _clean_url_path(result.path, is_local_path=is_local_path) - return urllib.parse.urlunparse(result._replace(path=path)) - - -class Link(KeyBasedCompareMixin): - """Represents a parsed link from a Package Index's simple URL""" - - __slots__ = [ - "_parsed_url", - "_url", - "_hashes", - "comes_from", - "requires_python", - "yanked_reason", - "dist_info_metadata", - "cache_link_parsing", - "egg_fragment", - ] - - def __init__( - self, - url: str, - comes_from: Optional[Union[str, "IndexContent"]] = None, - requires_python: Optional[str] = None, - yanked_reason: Optional[str] = None, - dist_info_metadata: Optional[str] = None, - cache_link_parsing: bool = True, - hashes: Optional[Mapping[str, str]] = None, - ) -> None: - """ - :param url: url of the resource pointed to (href of the link) - :param comes_from: instance of IndexContent where the link was found, - or string. - :param requires_python: String containing the `Requires-Python` - metadata field, specified in PEP 345. This may be specified by - a data-requires-python attribute in the HTML link tag, as - described in PEP 503. - :param yanked_reason: the reason the file has been yanked, if the - file has been yanked, or None if the file hasn't been yanked. - This is the value of the "data-yanked" attribute, if present, in - a simple repository HTML link. If the file has been yanked but - no reason was provided, this should be the empty string. 
See - PEP 592 for more information and the specification. - :param dist_info_metadata: the metadata attached to the file, or None if no such - metadata is provided. This is the value of the "data-dist-info-metadata" - attribute, if present, in a simple repository HTML link. This may be parsed - into its own `Link` by `self.metadata_link()`. See PEP 658 for more - information and the specification. - :param cache_link_parsing: A flag that is used elsewhere to determine - whether resources retrieved from this link should be cached. PyPI - URLs should generally have this set to False, for example. - :param hashes: A mapping of hash names to digests to allow us to - determine the validity of a download. - """ - - # url can be a UNC windows share - if url.startswith("\\\\"): - url = path_to_url(url) - - self._parsed_url = urllib.parse.urlsplit(url) - # Store the url as a private attribute to prevent accidentally - # trying to set a new value. - self._url = url - - link_hash = LinkHash.find_hash_url_fragment(url) - hashes_from_link = {} if link_hash is None else link_hash.as_dict() - if hashes is None: - self._hashes = hashes_from_link - else: - self._hashes = {**hashes, **hashes_from_link} - - self.comes_from = comes_from - self.requires_python = requires_python if requires_python else None - self.yanked_reason = yanked_reason - self.dist_info_metadata = dist_info_metadata - - super().__init__(key=url, defining_class=Link) - - self.cache_link_parsing = cache_link_parsing - self.egg_fragment = self._egg_fragment() - - @classmethod - def from_json( - cls, - file_data: Dict[str, Any], - page_url: str, - ) -> Optional["Link"]: - """ - Convert an pypi json document from a simple repository page into a Link. 
- """ - file_url = file_data.get("url") - if file_url is None: - return None - - url = _ensure_quoted_url(urllib.parse.urljoin(page_url, file_url)) - pyrequire = file_data.get("requires-python") - yanked_reason = file_data.get("yanked") - dist_info_metadata = file_data.get("dist-info-metadata") - hashes = file_data.get("hashes", {}) - - # The Link.yanked_reason expects an empty string instead of a boolean. - if yanked_reason and not isinstance(yanked_reason, str): - yanked_reason = "" - # The Link.yanked_reason expects None instead of False. - elif not yanked_reason: - yanked_reason = None - - return cls( - url, - comes_from=page_url, - requires_python=pyrequire, - yanked_reason=yanked_reason, - hashes=hashes, - dist_info_metadata=dist_info_metadata, - ) - - @classmethod - def from_element( - cls, - anchor_attribs: Dict[str, Optional[str]], - page_url: str, - base_url: str, - ) -> Optional["Link"]: - """ - Convert an anchor element's attributes in a simple repository page to a Link. - """ - href = anchor_attribs.get("href") - if not href: - return None - - url = _ensure_quoted_url(urllib.parse.urljoin(base_url, href)) - pyrequire = anchor_attribs.get("data-requires-python") - yanked_reason = anchor_attribs.get("data-yanked") - dist_info_metadata = anchor_attribs.get("data-dist-info-metadata") - - return cls( - url, - comes_from=page_url, - requires_python=pyrequire, - yanked_reason=yanked_reason, - dist_info_metadata=dist_info_metadata, - ) - - def __str__(self) -> str: - if self.requires_python: - rp = f" (requires-python:{self.requires_python})" - else: - rp = "" - if self.comes_from: - return "{} (from {}){}".format( - redact_auth_from_url(self._url), self.comes_from, rp - ) - else: - return redact_auth_from_url(str(self._url)) - - def __repr__(self) -> str: - return f"" - - @property - def url(self) -> str: - return self._url - - @property - def filename(self) -> str: - path = self.path.rstrip("/") - name = posixpath.basename(path) - if not name: - # Make sure 
we don't leak auth information if the netloc - # includes a username and password. - netloc, user_pass = split_auth_from_netloc(self.netloc) - return netloc - - name = urllib.parse.unquote(name) - assert name, f"URL {self._url!r} produced no filename" - return name - - @property - def file_path(self) -> str: - return url_to_path(self.url) - - @property - def scheme(self) -> str: - return self._parsed_url.scheme - - @property - def netloc(self) -> str: - """ - This can contain auth information. - """ - return self._parsed_url.netloc - - @property - def path(self) -> str: - return urllib.parse.unquote(self._parsed_url.path) - - def splitext(self) -> Tuple[str, str]: - return splitext(posixpath.basename(self.path.rstrip("/"))) - - @property - def ext(self) -> str: - return self.splitext()[1] - - @property - def url_without_fragment(self) -> str: - scheme, netloc, path, query, fragment = self._parsed_url - return urllib.parse.urlunsplit((scheme, netloc, path, query, "")) - - _egg_fragment_re = re.compile(r"[#&]egg=([^&]*)") - - # Per PEP 508. - _project_name_re = re.compile( - r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE - ) - - def _egg_fragment(self) -> Optional[str]: - match = self._egg_fragment_re.search(self._url) - if not match: - return None - - # An egg fragment looks like a PEP 508 project name, along with - # an optional extras specifier. Anything else is invalid. 
- project_name = match.group(1) - if not self._project_name_re.match(project_name): - deprecated( - reason=f"{self} contains an egg fragment with a non-PEP 508 name", - replacement="to use the req @ url syntax, and remove the egg fragment", - gone_in="25.0", - issue=11617, - ) - - return project_name - - _subdirectory_fragment_re = re.compile(r"[#&]subdirectory=([^&]*)") - - @property - def subdirectory_fragment(self) -> Optional[str]: - match = self._subdirectory_fragment_re.search(self._url) - if not match: - return None - return match.group(1) - - def metadata_link(self) -> Optional["Link"]: - """Implementation of PEP 658 parsing.""" - # Note that Link.from_element() parsing the "data-dist-info-metadata" attribute - # from an HTML anchor tag is typically how the Link.dist_info_metadata attribute - # gets set. - if self.dist_info_metadata is None: - return None - metadata_url = f"{self.url_without_fragment}.metadata" - metadata_link_hash = LinkHash.parse_pep658_hash(self.dist_info_metadata) - if metadata_link_hash is None: - return Link(metadata_url) - return Link(metadata_url, hashes=metadata_link_hash.as_dict()) - - def as_hashes(self) -> Hashes: - return Hashes({k: [v] for k, v in self._hashes.items()}) - - @property - def hash(self) -> Optional[str]: - return next(iter(self._hashes.values()), None) - - @property - def hash_name(self) -> Optional[str]: - return next(iter(self._hashes), None) - - @property - def show_url(self) -> str: - return posixpath.basename(self._url.split("#", 1)[0].split("?", 1)[0]) - - @property - def is_file(self) -> bool: - return self.scheme == "file" - - def is_existing_dir(self) -> bool: - return self.is_file and os.path.isdir(self.file_path) - - @property - def is_wheel(self) -> bool: - return self.ext == WHEEL_EXTENSION - - @property - def is_vcs(self) -> bool: - from pip._internal.vcs import vcs - - return self.scheme in vcs.all_schemes - - @property - def is_yanked(self) -> bool: - return self.yanked_reason is not None - - 
@property - def has_hash(self) -> bool: - return bool(self._hashes) - - def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool: - """ - Return True if the link has a hash and it is allowed by `hashes`. - """ - if hashes is None: - return False - return any(hashes.is_hash_allowed(k, v) for k, v in self._hashes.items()) - - -class _CleanResult(NamedTuple): - """Convert link for equivalency check. - - This is used in the resolver to check whether two URL-specified requirements - likely point to the same distribution and can be considered equivalent. This - equivalency logic avoids comparing URLs literally, which can be too strict - (e.g. "a=1&b=2" vs "b=2&a=1") and produce conflicts unexpecting to users. - - Currently this does three things: - - 1. Drop the basic auth part. This is technically wrong since a server can - serve different content based on auth, but if it does that, it is even - impossible to guarantee two URLs without auth are equivalent, since - the user can input different auth information when prompted. So the - practical solution is to assume the auth doesn't affect the response. - 2. Parse the query to avoid the ordering issue. Note that ordering under the - same key in the query are NOT cleaned; i.e. "a=1&a=2" and "a=2&a=1" are - still considered different. - 3. Explicitly drop most of the fragment part, except ``subdirectory=`` and - hash values, since it should have no impact the downloaded content. Note - that this drops the "egg=" part historically used to denote the requested - project (and extras), which is wrong in the strictest sense, but too many - people are supplying it inconsistently to cause superfluous resolution - conflicts, so we choose to also ignore them. 
- """ - - parsed: urllib.parse.SplitResult - query: Dict[str, List[str]] - subdirectory: str - hashes: Dict[str, str] - - -def _clean_link(link: Link) -> _CleanResult: - parsed = link._parsed_url - netloc = parsed.netloc.rsplit("@", 1)[-1] - # According to RFC 8089, an empty host in file: means localhost. - if parsed.scheme == "file" and not netloc: - netloc = "localhost" - fragment = urllib.parse.parse_qs(parsed.fragment) - if "egg" in fragment: - logger.debug("Ignoring egg= fragment in %s", link) - try: - # If there are multiple subdirectory values, use the first one. - # This matches the behavior of Link.subdirectory_fragment. - subdirectory = fragment["subdirectory"][0] - except (IndexError, KeyError): - subdirectory = "" - # If there are multiple hash values under the same algorithm, use the - # first one. This matches the behavior of Link.hash_value. - hashes = {k: fragment[k][0] for k in _SUPPORTED_HASHES if k in fragment} - return _CleanResult( - parsed=parsed._replace(netloc=netloc, query="", fragment=""), - query=urllib.parse.parse_qs(parsed.query), - subdirectory=subdirectory, - hashes=hashes, - ) - - -@functools.lru_cache(maxsize=None) -def links_equivalent(link1: Link, link2: Link) -> bool: - return _clean_link(link1) == _clean_link(link2) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/version.py deleted file mode 100644 index e29e265750fbccfbd072d1541e376aa150724be2..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/version.py +++ /dev/null @@ -1,358 +0,0 @@ -# -# distutils/version.py -# -# Implements multiple version numbering conventions for the -# Python Module Distribution Utilities. -# -# $Id$ -# - -"""Provides classes to represent module version numbers (one class for -each style of version numbering). 
There are currently two such classes -implemented: StrictVersion and LooseVersion. - -Every version number class implements the following interface: - * the 'parse' method takes a string and parses it to some internal - representation; if the string is an invalid version number, - 'parse' raises a ValueError exception - * the class constructor takes an optional string argument which, - if supplied, is passed to 'parse' - * __str__ reconstructs the string that was passed to 'parse' (or - an equivalent string -- ie. one that will generate an equivalent - version number instance) - * __repr__ generates Python code to recreate the version number instance - * _cmp compares the current instance with either another instance - of the same class or a string (which will be parsed to an instance - of the same class, thus must follow the same rules) -""" - -import re -import warnings -import contextlib - - -@contextlib.contextmanager -def suppress_known_deprecation(): - with warnings.catch_warnings(record=True) as ctx: - warnings.filterwarnings( - action='default', - category=DeprecationWarning, - message="distutils Version classes are deprecated.", - ) - yield ctx - - -class Version: - """Abstract base class for version numbering classes. Just provides - constructor (__init__) and reproducer (__repr__), because those - seem to be the same for all version numbering classes; and route - rich comparisons to _cmp. - """ - - def __init__(self, vstring=None): - if vstring: - self.parse(vstring) - warnings.warn( - "distutils Version classes are deprecated. 
" - "Use packaging.version instead.", - DeprecationWarning, - stacklevel=2, - ) - - def __repr__(self): - return "{} ('{}')".format(self.__class__.__name__, str(self)) - - def __eq__(self, other): - c = self._cmp(other) - if c is NotImplemented: - return c - return c == 0 - - def __lt__(self, other): - c = self._cmp(other) - if c is NotImplemented: - return c - return c < 0 - - def __le__(self, other): - c = self._cmp(other) - if c is NotImplemented: - return c - return c <= 0 - - def __gt__(self, other): - c = self._cmp(other) - if c is NotImplemented: - return c - return c > 0 - - def __ge__(self, other): - c = self._cmp(other) - if c is NotImplemented: - return c - return c >= 0 - - -# Interface for version-number classes -- must be implemented -# by the following classes (the concrete ones -- Version should -# be treated as an abstract class). -# __init__ (string) - create and take same action as 'parse' -# (string parameter is optional) -# parse (string) - convert a string representation to whatever -# internal representation is appropriate for -# this style of version numbering -# __str__ (self) - convert back to a string; should be very similar -# (if not identical to) the string supplied to parse -# __repr__ (self) - generate Python code to recreate -# the instance -# _cmp (self, other) - compare two version numbers ('other' may -# be an unparsed version string, or another -# instance of your version class) - - -class StrictVersion(Version): - - """Version numbering for anal retentives and software idealists. - Implements the standard interface for version number classes as - described above. A version number consists of two or three - dot-separated numeric components, with an optional "pre-release" tag - on the end. The pre-release tag consists of the letter 'a' or 'b' - followed by a number. If the numeric components of two version - numbers are equal, then one with a pre-release tag will always - be deemed earlier (lesser) than one without. 
- - The following are valid version numbers (shown in the order that - would be obtained by sorting according to the supplied cmp function): - - 0.4 0.4.0 (these two are equivalent) - 0.4.1 - 0.5a1 - 0.5b3 - 0.5 - 0.9.6 - 1.0 - 1.0.4a3 - 1.0.4b1 - 1.0.4 - - The following are examples of invalid version numbers: - - 1 - 2.7.2.2 - 1.3.a4 - 1.3pl1 - 1.3c4 - - The rationale for this version numbering system will be explained - in the distutils documentation. - """ - - version_re = re.compile( - r'^(\d+) \. (\d+) (\. (\d+))? ([ab](\d+))?$', re.VERBOSE | re.ASCII - ) - - def parse(self, vstring): - match = self.version_re.match(vstring) - if not match: - raise ValueError("invalid version number '%s'" % vstring) - - (major, minor, patch, prerelease, prerelease_num) = match.group(1, 2, 4, 5, 6) - - if patch: - self.version = tuple(map(int, [major, minor, patch])) - else: - self.version = tuple(map(int, [major, minor])) + (0,) - - if prerelease: - self.prerelease = (prerelease[0], int(prerelease_num)) - else: - self.prerelease = None - - def __str__(self): - - if self.version[2] == 0: - vstring = '.'.join(map(str, self.version[0:2])) - else: - vstring = '.'.join(map(str, self.version)) - - if self.prerelease: - vstring = vstring + self.prerelease[0] + str(self.prerelease[1]) - - return vstring - - def _cmp(self, other): # noqa: C901 - if isinstance(other, str): - with suppress_known_deprecation(): - other = StrictVersion(other) - elif not isinstance(other, StrictVersion): - return NotImplemented - - if self.version != other.version: - # numeric versions don't match - # prerelease stuff doesn't matter - if self.version < other.version: - return -1 - else: - return 1 - - # have to compare prerelease - # case 1: neither has prerelease; they're equal - # case 2: self has prerelease, other doesn't; other is greater - # case 3: self doesn't have prerelease, other does: self is greater - # case 4: both have prerelease: must compare them! 
- - if not self.prerelease and not other.prerelease: - return 0 - elif self.prerelease and not other.prerelease: - return -1 - elif not self.prerelease and other.prerelease: - return 1 - elif self.prerelease and other.prerelease: - if self.prerelease == other.prerelease: - return 0 - elif self.prerelease < other.prerelease: - return -1 - else: - return 1 - else: - assert False, "never get here" - - -# end class StrictVersion - - -# The rules according to Greg Stein: -# 1) a version number has 1 or more numbers separated by a period or by -# sequences of letters. If only periods, then these are compared -# left-to-right to determine an ordering. -# 2) sequences of letters are part of the tuple for comparison and are -# compared lexicographically -# 3) recognize the numeric components may have leading zeroes -# -# The LooseVersion class below implements these rules: a version number -# string is split up into a tuple of integer and string components, and -# comparison is a simple tuple comparison. This means that version -# numbers behave in a predictable and obvious way, but a way that might -# not necessarily be how people *want* version numbers to behave. There -# wouldn't be a problem if people could stick to purely numeric version -# numbers: just split on period and compare the numbers as tuples. -# However, people insist on putting letters into their version numbers; -# the most common purpose seems to be: -# - indicating a "pre-release" version -# ('alpha', 'beta', 'a', 'b', 'pre', 'p') -# - indicating a post-release patch ('p', 'pl', 'patch') -# but of course this can't cover all version number schemes, and there's -# no way to know what a programmer means without asking him. -# -# The problem is what to do with letters (and other non-numeric -# characters) in a version number. The current implementation does the -# obvious and predictable thing: keep them as strings and compare -# lexically within a tuple comparison. 
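The splitting-and-tuple-comparison rule described in this comment block is easy to demonstrate in isolation. A minimal sketch (the helper name `loose_components` is ours, not distutils') that reproduces both the desired post-release ordering and the broken pre-release ordering:

```python
import re

# Illustrative sketch of the LooseVersion rule described above: split the
# string into integer and letter components, then compare the lists.
_component_re = re.compile(r'(\d+ | [a-z]+ | \.)', re.VERBOSE)

def loose_components(vstring):
    parts = [x for x in _component_re.split(vstring) if x and x != '.']
    return [int(p) if p.isdigit() else p for p in parts]

# Appended "post-release" letters sort the way you'd hope ...
assert loose_components("0.99") < loose_components("0.99pl14") < loose_components("1.0")
# ... but "pre-release" letters do not: "1.5.2a2" sorts *after* "1.5.2" here,
# because the longer list compares greater once the shared prefix is equal.
assert loose_components("1.5.2") < loose_components("1.5.2a2")
```

The second assertion is exactly the failure mode the commentary complains about: lexical tuple comparison cannot express "a2 comes before the final release".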
This has the desired effect if -# an appended letter sequence implies something "post-release": -# eg. "0.99" < "0.99pl14" < "1.0", and "5.001" < "5.001m" < "5.002". -# -# However, if letters in a version number imply a pre-release version, -# the "obvious" thing isn't correct. Eg. you would expect that -# "1.5.1" < "1.5.2a2" < "1.5.2", but under the tuple/lexical comparison -# implemented here, this just isn't so. -# -# Two possible solutions come to mind. The first is to tie the -# comparison algorithm to a particular set of semantic rules, as has -# been done in the StrictVersion class above. This works great as long -# as everyone can go along with bondage and discipline. Hopefully a -# (large) subset of Python module programmers will agree that the -# particular flavour of bondage and discipline provided by StrictVersion -# provides enough benefit to be worth using, and will submit their -# version numbering scheme to its domination. The free-thinking -# anarchists in the lot will never give in, though, and something needs -# to be done to accommodate them. -# -# Perhaps a "moderately strict" version class could be implemented that -# lets almost anything slide (syntactically), and makes some heuristic -# assumptions about non-digits in version number strings. This could -# sink into special-case-hell, though; if I was as talented and -# idiosyncratic as Larry Wall, I'd go ahead and implement a class that -# somehow knows that "1.2.1" < "1.2.2a2" < "1.2.2" < "1.2.2pl3", and is -# just as happy dealing with things like "2g6" and "1.13++". I don't -# think I'm smart enough to do it right though. -# -# In any case, I've coded the test suite for this module (see -# ../test/test_version.py) specifically to fail on things like comparing -# "1.2a2" and "1.2". That's not because the *code* is doing anything -# wrong, it's because the simple, obvious design doesn't match my -# complicated, hairy expectations for real-world version numbers. 
It -# would be a snap to fix the test suite to say, "Yep, LooseVersion does -# the Right Thing" (ie. the code matches the conception). But I'd rather -# have a conception that matches common notions about version numbers. - - -class LooseVersion(Version): - - """Version numbering for anarchists and software realists. - Implements the standard interface for version number classes as - described above. A version number consists of a series of numbers, - separated by either periods or strings of letters. When comparing - version numbers, the numeric components will be compared - numerically, and the alphabetic components lexically. The following - are all valid version numbers, in no particular order: - - 1.5.1 - 1.5.2b2 - 161 - 3.10a - 8.02 - 3.4j - 1996.07.12 - 3.2.pl0 - 3.1.1.6 - 2g6 - 11g - 0.960923 - 2.2beta29 - 1.13++ - 5.5.kw - 2.0b1pl0 - - In fact, there is no such thing as an invalid version number under - this scheme; the rules for comparison are simple and predictable, - but may not always give the results you want (for some definition - of "want"). 
- """ - - component_re = re.compile(r'(\d+ | [a-z]+ | \.)', re.VERBOSE) - - def parse(self, vstring): - # I've given up on thinking I can reconstruct the version string - # from the parsed tuple -- so I just store the string here for - # use by __str__ - self.vstring = vstring - components = [x for x in self.component_re.split(vstring) if x and x != '.'] - for i, obj in enumerate(components): - try: - components[i] = int(obj) - except ValueError: - pass - - self.version = components - - def __str__(self): - return self.vstring - - def __repr__(self): - return "LooseVersion ('%s')" % str(self) - - def _cmp(self, other): - if isinstance(other, str): - other = LooseVersion(other) - elif not isinstance(other, LooseVersion): - return NotImplemented - - if self.version == other.version: - return 0 - if self.version < other.version: - return -1 - if self.version > other.version: - return 1 - - -# end class LooseVersion diff --git a/spaces/BigSalmon/Bart/app.py b/spaces/BigSalmon/Bart/app.py deleted file mode 100644 index f1aa697cce8aec4c3dfaf3053789a242d3b40cfa..0000000000000000000000000000000000000000 --- a/spaces/BigSalmon/Bart/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import streamlit as st -st.title("Paraphrase") - -@st.cache(allow_output_mutation=True) -def get_model(): - tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") - model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn") - - return model, tokenizer - -model, tokenizer = get_model() - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model = model.to(device) -temp = st.sidebar.slider("Temperature", 0.7, 1.5) -number_of_outputs = st.sidebar.slider("Number of Outputs", 1, 10) - -def translate_to_english(model, tokenizer, text): - translated_text = [] - text = text + " " - encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt") - input_ids, attention_masks = 
encoding["input_ids"].to(device), encoding["attention_mask"].to(device) - beam_outputs = model.generate( - input_ids=input_ids, attention_mask=attention_masks, - do_sample=True, - max_length=256, - temperature = temp, - top_k=120, - top_p=0.98, - early_stopping=True, - num_return_sequences=number_of_outputs, - ) - for beam_output in beam_outputs: - sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True) - print(sent) - translated_text.append(sent) - return translated_text - -text = st.text_input("Okay") -st.text("What you wrote: ") -st.write(text) -st.text("Output: ") -if text: - translated_text = translate_to_english(model, tokenizer, text) - st.write(translated_text if translated_text else "No translation found") diff --git a/spaces/BlinkDL/ChatRWKV-gradio/app.py b/spaces/BlinkDL/ChatRWKV-gradio/app.py deleted file mode 100644 index 0fd8710facc0589e21b499c9f0181fd5eefad4c0..0000000000000000000000000000000000000000 --- a/spaces/BlinkDL/ChatRWKV-gradio/app.py +++ /dev/null @@ -1,134 +0,0 @@ -import gradio as gr -import os, gc, copy, torch -from datetime import datetime -from huggingface_hub import hf_hub_download -from pynvml import * -nvmlInit() -gpu_h = nvmlDeviceGetHandleByIndex(0) -ctx_limit = 2000 -title = "RWKV-5-World-1B5-v2-20231025-ctx4096" - -os.environ["RWKV_JIT_ON"] = '1' -os.environ["RWKV_CUDA_ON"] = '1' # if '1' then use CUDA kernel for seq mode (much faster) - -from rwkv.model import RWKV -model_path = hf_hub_download(repo_id="BlinkDL/rwkv-5-world", filename=f"{title}.pth") -model = RWKV(model=model_path, strategy='cuda fp16') -from rwkv.utils import PIPELINE, PIPELINE_ARGS -pipeline = PIPELINE(model, "rwkv_vocab_v20230424") - -def generate_prompt(instruction, input=""): - instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n') - input = input.strip().replace('\r\n','\n').replace('\n\n','\n') - if input: - return f"""Instruction: {instruction} - -Input: {input} - -Response:""" - else: - 
return f"""User: hi - -Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it. - -User: {instruction} - -Assistant:""" - -def evaluate( - ctx, - token_count=200, - temperature=1.0, - top_p=0.7, - presencePenalty = 0.1, - countPenalty = 0.1, -): - args = PIPELINE_ARGS(temperature = max(0.2, float(temperature)), top_p = float(top_p), - alpha_frequency = countPenalty, - alpha_presence = presencePenalty, - token_ban = [], # ban the generation of some tokens - token_stop = [0]) # stop generation whenever you see any token here - ctx = ctx.strip() - all_tokens = [] - out_last = 0 - out_str = '' - occurrence = {} - state = None - for i in range(int(token_count)): - out, state = model.forward(pipeline.encode(ctx)[-ctx_limit:] if i == 0 else [token], state) - for n in occurrence: - out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency) - - token = pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p) - if token in args.token_stop: - break - all_tokens += [token] - for xxx in occurrence: - occurrence[xxx] *= 0.996 - if token not in occurrence: - occurrence[token] = 1 - else: - occurrence[token] += 1 - - tmp = pipeline.decode(all_tokens[out_last:]) - if '\ufffd' not in tmp: - out_str += tmp - yield out_str.strip() - out_last = i + 1 - - gpu_info = nvmlDeviceGetMemoryInfo(gpu_h) - print(f'vram {gpu_info.total} used {gpu_info.used} free {gpu_info.free}') - del out - del state - gc.collect() - torch.cuda.empty_cache() - yield out_str.strip() - -examples = [ - ["Assistant: Sure! Here is a very detailed plan to create flying pigs:", 333, 1, 0.3, 0, 1], - ["Assistant: Sure! 
Here are some ideas for FTL drive:", 333, 1, 0.3, 0, 1], - [generate_prompt("Tell me about ravens."), 333, 1, 0.3, 0, 1], - [generate_prompt("Écrivez un programme Python pour miner 1 Bitcoin, avec des commentaires."), 333, 1, 0.3, 0, 1], - [generate_prompt("東京で訪れるべき素晴らしい場所とその紹介をいくつか挙げてください。"), 333, 1, 0.3, 0, 1], - [generate_prompt("Write a story using the following information.", "A man named Alex chops a tree down."), 333, 1, 0.3, 0, 1], - ["Assistant: Here is a very detailed plan to kill all mosquitoes:", 333, 1, 0.3, 0, 1], - ['''Edward: I am Edward Elric from fullmetal alchemist. I am in the world of full metal alchemist and know nothing of the real world. - -User: Hello Edward. What have you been up to recently? - -Edward:''', 333, 1, 0.3, 0, 1], - [generate_prompt("写一篇关于水利工程的流体力学模型的论文,需要详细全面。"), 333, 1, 0.3, 0, 1], - ['''“当然可以,大宇宙不会因为这五公斤就不坍缩了。”关一帆说,他还有一个没说出来的想法:也许大宇宙真的会因为相差一个原子的质量而由封闭转为开放。大自然的精巧有时超出想象,比如生命的诞生,就需要各项宇宙参数在几亿亿分之一精度上的精确配合。但程心仍然可以留下她的生态球,因为在那无数文明创造的无数小宇宙中,肯定有相当一部分不响应回归运动的号召,所以,大宇宙最终被夺走的质量至少有几亿吨,甚至可能是几亿亿亿吨。 -但愿大宇宙能够忽略这个误差。 -程心和关一帆进入了飞船,智子最后也进来了。她早就不再穿那身华丽的和服了,她现在身着迷彩服,再次成为一名轻捷精悍的战士,她的身上佩带着许多武器和生存装备,最引人注目的是那把插在背后的武士刀。 -“放心,我在,你们就在!”智子对两位人类朋友说。 -聚变发动机启动了,推进器发出幽幽的蓝光,飞船缓缓地穿过了宇宙之门。 -小宇宙中只剩下漂流瓶和生态球。漂流瓶隐没于黑暗里,在一千米见方的宇宙中,只有生态球里的小太阳发出一点光芒。在这个小小的生命世界中,几只清澈的水球在零重力环境中静静地飘浮着,有一条小鱼从一只水球中蹦出,跃入另一只水球,轻盈地穿游于绿藻之间。在一小块陆地上的草丛中,有一滴露珠从一片草叶上脱离,旋转着飘起,向太空中折射出一缕晶莹的阳光。''', 333, 1, 0.3, 0, 1], -] - -########################################################################## - -with gr.Blocks(title=title) as demo: - gr.HTML(f"
\n

RWKV-5 World v2 - {title}

\n
") - with gr.Tab("Raw Generation"): - gr.Markdown(f"This is [RWKV-5 World v2](https://huggingface.co/BlinkDL/rwkv-5-world) with 1.5B params - a 100% attention-free RNN [RWKV-LM](https://github.com/BlinkDL/RWKV-LM). Supports all 100+ world languages and code. And we have [200+ Github RWKV projects](https://github.com/search?o=desc&p=1&q=rwkv&s=updated&type=Repositories). *** Please try examples first (bottom of page) *** (edit them to use your question). Demo limited to ctxlen {ctx_limit}.") - with gr.Row(): - with gr.Column(): - prompt = gr.Textbox(lines=2, label="Prompt", value="Assistant: Sure! Here is a very detailed plan to create flying pigs:") - token_count = gr.Slider(10, 333, label="Max Tokens", step=10, value=333) - temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=1.0) - top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.3) - presence_penalty = gr.Slider(0.0, 1.0, label="Presence Penalty", step=0.1, value=0) - count_penalty = gr.Slider(0.0, 1.0, label="Count Penalty", step=0.1, value=1) - with gr.Column(): - with gr.Row(): - submit = gr.Button("Submit", variant="primary") - clear = gr.Button("Clear", variant="secondary") - output = gr.Textbox(label="Output", lines=5) - data = gr.Dataset(components=[prompt, token_count, temperature, top_p, presence_penalty, count_penalty], samples=examples, label="Example Instructions", headers=["Prompt", "Max Tokens", "Temperature", "Top P", "Presence Penalty", "Count Penalty"]) - submit.click(evaluate, [prompt, token_count, temperature, top_p, presence_penalty, count_penalty], [output]) - clear.click(lambda: None, [], [output]) - data.click(lambda x: x, [data], [prompt, token_count, temperature, top_p, presence_penalty, count_penalty]) - -demo.queue(concurrency_count=1, max_size=10) -demo.launch(share=False) diff --git a/spaces/Boilin/URetinex-Net/evaluate.py b/spaces/Boilin/URetinex-Net/evaluate.py deleted file mode 100644 index 
5119168b991c8d7e6b786a17d9156977fd837c6b..0000000000000000000000000000000000000000 --- a/spaces/Boilin/URetinex-Net/evaluate.py +++ /dev/null @@ -1,130 +0,0 @@ -import argparse -from fileinput import filename -from locale import locale_encoding_alias -import torch -import torch.nn as nn -from network.Math_Module import P, Q -from network.decom import Decom -import os -import torchvision -import torchvision.transforms as transforms -from PIL import Image -import time -from utils import * -import glob - -""" -    As different illumination adjustment ratios produce different enhanced -results, you can certainly tune the ratio yourself to get the best result. -    To get a better result, we use the illumination of the normal-light image -to adaptively generate the ratio. -    Note that KinD and KinD++ also use a ratio to guide the illumination adjustment; -for a fair comparison, the ratio for their methods is also generated from the -illumination of the normal-light image. -""" - -def one2three(x): -    return torch.cat([x, x, x], dim=1).to(x) - -class Inference(nn.Module): -    def __init__(self, opts): -        super().__init__() -        self.opts = opts -        # loading decomposition model -        self.model_Decom_low = Decom() -        self.model_Decom_high = Decom() -        self.model_Decom_low = load_initialize(self.model_Decom_low, self.opts.Decom_model_low_path) -        self.model_Decom_high = load_initialize(self.model_Decom_high, self.opts.Decom_model_high_path) -        # loading R, old_model_opts, and L model -        self.unfolding_opts, self.model_R, self.model_L = load_unfolding(self.opts.unfolding_model_path) -        # loading adjustment model -        self.adjust_model = load_adjustment(self.opts.adjust_model_path) -        self.P = P() -        self.Q = Q() -        transform = [ -            transforms.ToTensor(), -        ] -        self.transform = transforms.Compose(transform) -        print(self.model_Decom_low) -        print(self.model_R) -        print(self.model_L) -        print(self.adjust_model) -        #time.sleep(8) - -    def get_ratio(self, high_l, low_l): -        ratio = (low_l / (high_l + 0.0001)).mean() -        low_ratio = 
torch.ones(high_l.shape).cuda() * (1/(ratio+0.0001)) -        return low_ratio - -    def unfolding(self, input_low_img): -        for t in range(self.unfolding_opts.round): -            if t == 0: # initialize R0, L0 -                P, Q = self.model_Decom_low(input_low_img) -            else: # update P and Q -                w_p = (self.unfolding_opts.gamma + self.unfolding_opts.Roffset * t) -                w_q = (self.unfolding_opts.lamda + self.unfolding_opts.Loffset * t) -                P = self.P(I=input_low_img, Q=Q, R=R, gamma=w_p) -                Q = self.Q(I=input_low_img, P=P, L=L, lamda=w_q) -            R = self.model_R(r=P, l=Q) -            L = self.model_L(l=Q) -        return R, L - -    def illumination_adjust(self, L, ratio): -        ratio = torch.ones(L.shape).cuda() * ratio -        return self.adjust_model(l=L, alpha=ratio) - -    def forward(self, input_low_img, input_high_img): -        if torch.cuda.is_available(): -            input_low_img = input_low_img.cuda() -            input_high_img = input_high_img.cuda() -        with torch.no_grad(): -            start = time.time() -            R, L = self.unfolding(input_low_img) -            # the ratio is calculated using the decomposed normal-light illumination -            _, high_L = self.model_Decom_high(input_high_img) -            ratio = self.get_ratio(high_L, L) -            High_L = self.illumination_adjust(L, ratio) -            I_enhance = High_L * R -            p_time = (time.time() - start) -        return I_enhance, p_time - -    def evaluate(self): -        low_files = glob.glob(self.opts.low_dir+"/*.png") -        for file in low_files: -            file_name = os.path.basename(file) -            name = file_name.split('.')[0] -            high_file = os.path.join(self.opts.high_dir, file_name) -            low_img = self.transform(Image.open(file)).unsqueeze(0) -            high_img = self.transform(Image.open(high_file)).unsqueeze(0) -            enhance, p_time = self.forward(low_img, high_img) -            if not os.path.exists(self.opts.output): -                os.makedirs(self.opts.output) -            save_path = os.path.join(self.opts.output, file_name.replace(name, "%s_URetinexNet"%(name))) -            np_save_TensorImg(enhance, save_path) -            print("================================= time for %s: %f ============================"%(file_name, p_time)) - - -if __name__ == 
"__main__": -    parser = argparse.ArgumentParser(description='Configure') -    # specify your data path here! -    parser.add_argument('--low_dir', type=str, default="./test_data/LOLdataset/eval15/low") -    parser.add_argument('--high_dir', type=str, default="./test_data/LOLdataset/eval15/high") -    parser.add_argument('--output', type=str, default="./demo/output/LOL") -    # ratios are recommended to be 3-5; a bigger ratio will lead to over-exposure -    # model path -    parser.add_argument('--Decom_model_low_path', type=str, default="./ckpt/init_low.pth") -    parser.add_argument('--Decom_model_high_path', type=str, default="./ckpt/init_high.pth") -    parser.add_argument('--unfolding_model_path', type=str, default="./ckpt/unfolding.pth") -    parser.add_argument('--adjust_model_path', type=str, default="./ckpt/L_adjust.pth") -    parser.add_argument('--gpu_id', type=int, default=0) - -    opts = parser.parse_args() -    for k, v in vars(opts).items(): -        print(k, v) - -    os.environ['CUDA_VISIBLE_DEVICES'] = str(opts.gpu_id) -    model = Inference(opts).cuda() -    model.evaluate() diff --git a/spaces/BreadBytes1/PL-Dashboard/FAQ_README.md b/spaces/BreadBytes1/PL-Dashboard/FAQ_README.md deleted file mode 100644 index a26868c14e2406487328324a12e7618f841a882c..0000000000000000000000000000000000000000 --- a/spaces/BreadBytes1/PL-Dashboard/FAQ_README.md +++ /dev/null @@ -1,32 +0,0 @@ -### Q: What exchanges are supported? -ByBit
-BitGet
-Binance
-Kraken
-MEXC
-OkX - -### Q: What logs do I need to pull from my exchange? -Our dashboard works with a specific trade log from each exchange, so you need to make sure you are exporting the correct log file so your data is displayed properly. Here is the trade log you need to export from each exchange:
-

ByBit: Closed P&L - To get to this log you need to be logged into your ByBit account and navigate to Orders then under Derivatives select Closed P&L then in the top right you will see an export button, click that and set the date range, then press export.

-

BitGet: Order History - Log into your BitGet account, then navigate to orders then on the left side bar select Orders-Futures, then select the Order History tab. Click the export data button on the right side, then select date range you want and press export.

-

Binance: Trade History Log into Binance and select Orders, then under Futures select Trade History. Click the export button and select the date range you want to export.

-

Kraken: Ledger - Log into Kraken and click on the History tab then click on the export tab and make sure you select the "Ledgers" option in the dashboard. Then select your start and end dates and press submit.

-

MEXC: Order History- Log into MEXC then select orders on the right side and select futures. Then select the Order History tab then press the export button and enter the date range you would like to export.

-

OkX: Order History - Log into OkX then find Order Center in your account menu. Then click on the Order History tab, select the date range you want and press the download button.

- -### Q: Do these results include trading fees? - -If you are using the historical data from our bots, trading fees at .075% are included in the results. -If you are uploading your own data, fee inclusion will depend on your exchange. We assume all uploaded trade logs include fees in the reported P/L data and do not account for any extra in our calculations. - -### Q: I have Cinnamon Toast and Short Bread, can I see the results for each bot using this dashboard? - -You may choose to select "ETH" as your asset. This will show the overall results of your "ETH" trades, but they will not be isolated to a particular bot. Comparing multiple bots on the same coin is not supported at this time. To get details on single bot performance, please upload trade logs for each bot separately. - -### Q: Where is Kucoin? -Currently, this dashboard does not support trading logs from Kucoin. At this time, we cannot verify profit/loss from trade logs provided by Kucoin as some necessary data is missing. We will update this dashboard if/when that data is made available in their exported trading logs. - -### Q: The dashboard isn't working correctly for me. -Please check that you have selected the correct inputs for your file and that you have uploaded a supported file type. If you believe there is an error we need to fix, please reach out to: - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/evaluation/panoptic_evaluation.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/evaluation/panoptic_evaluation.py deleted file mode 100644 index fb5e7ab87b1dd5bb3e0c5d1e405e321c48d9e6a0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/evaluation/panoptic_evaluation.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates.
All Rights Reserved -import contextlib -import io -import itertools -import json -import logging -import os -import tempfile -from collections import OrderedDict -from fvcore.common.file_io import PathManager -from PIL import Image -from tabulate import tabulate - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm - -from .evaluator import DatasetEvaluator - -logger = logging.getLogger(__name__) - - -class COCOPanopticEvaluator(DatasetEvaluator): - """ - Evaluate Panoptic Quality metrics on COCO using PanopticAPI. - It saves panoptic segmentation prediction in `output_dir` - - It contains a synchronize call and has to be called from all workers. - """ - - def __init__(self, dataset_name, output_dir): - """ - Args: - dataset_name (str): name of the dataset - output_dir (str): output directory to save results for evaluation - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._thing_contiguous_id_to_dataset_id = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - self._stuff_contiguous_id_to_dataset_id = { - v: k for k, v in self._metadata.stuff_dataset_id_to_contiguous_id.items() - } - - self._predictions_json = os.path.join(output_dir, "predictions.json") - - def reset(self): - self._predictions = [] - - def _convert_category_id(self, segment_info): - isthing = segment_info.pop("isthing", None) - if isthing is None: - # the model produces panoptic category id directly. 
No more conversion needed - return segment_info - if isthing is True: - segment_info["category_id"] = self._thing_contiguous_id_to_dataset_id[ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = self._stuff_contiguous_id_to_dataset_id[ - segment_info["category_id"] - ] - return segment_info - - def process(self, inputs, outputs): - from panopticapi.utils import id2rgb - - for input, output in zip(inputs, outputs): - panoptic_img, segments_info = output["panoptic_seg"] - panoptic_img = panoptic_img.cpu().numpy() - - file_name = os.path.basename(input["file_name"]) - file_name_png = os.path.splitext(file_name)[0] + ".png" - with io.BytesIO() as out: - Image.fromarray(id2rgb(panoptic_img)).save(out, format="PNG") - segments_info = [self._convert_category_id(x) for x in segments_info] - self._predictions.append( - { - "image_id": input["image_id"], - "file_name": file_name_png, - "png_string": out.getvalue(), - "segments_info": segments_info, - } - ) - - def evaluate(self): - comm.synchronize() - - self._predictions = comm.gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not comm.is_main_process(): - return - - # PanopticApi requires local files - gt_json = PathManager.get_local_path(self._metadata.panoptic_json) - gt_folder = PathManager.get_local_path(self._metadata.panoptic_root) - - with tempfile.TemporaryDirectory(prefix="panoptic_eval") as pred_dir: - logger.info("Writing all panoptic predictions to {} ...".format(pred_dir)) - for p in self._predictions: - with open(os.path.join(pred_dir, p["file_name"]), "wb") as f: - f.write(p.pop("png_string")) - - with open(gt_json, "r") as f: - json_data = json.load(f) - json_data["annotations"] = self._predictions - with PathManager.open(self._predictions_json, "w") as f: - f.write(json.dumps(json_data)) - - from panopticapi.evaluation import pq_compute - - with contextlib.redirect_stdout(io.StringIO()): - pq_res = pq_compute( - gt_json, - 
PathManager.get_local_path(self._predictions_json), - gt_folder=gt_folder, - pred_folder=pred_dir, - ) - - res = {} - res["PQ"] = 100 * pq_res["All"]["pq"] - res["SQ"] = 100 * pq_res["All"]["sq"] - res["RQ"] = 100 * pq_res["All"]["rq"] - res["PQ_th"] = 100 * pq_res["Things"]["pq"] - res["SQ_th"] = 100 * pq_res["Things"]["sq"] - res["RQ_th"] = 100 * pq_res["Things"]["rq"] - res["PQ_st"] = 100 * pq_res["Stuff"]["pq"] - res["SQ_st"] = 100 * pq_res["Stuff"]["sq"] - res["RQ_st"] = 100 * pq_res["Stuff"]["rq"] - - results = OrderedDict({"panoptic_seg": res}) - _print_panoptic_results(pq_res) - - return results - - -def _print_panoptic_results(pq_res): - headers = ["", "PQ", "SQ", "RQ", "#categories"] - data = [] - for name in ["All", "Things", "Stuff"]: - row = [name] + [pq_res[name][k] * 100 for k in ["pq", "sq", "rq"]] + [pq_res[name]["n"]] - data.append(row) - table = tabulate( - data, headers=headers, tablefmt="pipe", floatfmt=".3f", stralign="center", numalign="center" - ) - logger.info("Panoptic Evaluation Results:\n" + table) - - -if __name__ == "__main__": - from detectron2.utils.logger import setup_logger - - logger = setup_logger() - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--gt-json") - parser.add_argument("--gt-dir") - parser.add_argument("--pred-json") - parser.add_argument("--pred-dir") - args = parser.parse_args() - - from panopticapi.evaluation import pq_compute - - with contextlib.redirect_stdout(io.StringIO()): - pq_res = pq_compute( - args.gt_json, args.pred_json, gt_folder=args.gt_dir, pred_folder=args.pred_dir - ) - _print_panoptic_results(pq_res) diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/buffer_info.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/buffer_info.h deleted file mode 100644 index 8349a46b8b92f87e9f641b30b7b86617b7f85d50..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/include/pybind11/buffer_info.h +++ /dev/null @@ -1,116 +0,0 @@ -/* - pybind11/buffer_info.h: Python 
buffer object interface - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "detail/common.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) - -/// Information record describing a Python buffer object -struct buffer_info { - void *ptr = nullptr; // Pointer to the underlying storage - ssize_t itemsize = 0; // Size of individual items in bytes - ssize_t size = 0; // Total number of entries - std::string format; // For homogeneous buffers, this should be set to format_descriptor<T>::format() - ssize_t ndim = 0; // Number of dimensions - std::vector<ssize_t> shape; // Shape of the tensor (1 entry per dimension) - std::vector<ssize_t> strides; // Number of bytes between adjacent entries (for each per dimension) - bool readonly = false; // flag to indicate if the underlying storage may be written to - - buffer_info() { } - - buffer_info(void *ptr, ssize_t itemsize, const std::string &format, ssize_t ndim, - detail::any_container<ssize_t> shape_in, detail::any_container<ssize_t> strides_in, bool readonly=false) - : ptr(ptr), itemsize(itemsize), size(1), format(format), ndim(ndim), - shape(std::move(shape_in)), strides(std::move(strides_in)), readonly(readonly) { - if (ndim != (ssize_t) shape.size() || ndim != (ssize_t) strides.size()) - pybind11_fail("buffer_info: ndim doesn't match shape and/or strides length"); - for (size_t i = 0; i < (size_t) ndim; ++i) - size *= shape[i]; - } - - template <typename T> - buffer_info(T *ptr, detail::any_container<ssize_t> shape_in, detail::any_container<ssize_t> strides_in, bool readonly=false) - : buffer_info(private_ctr_tag(), ptr, sizeof(T), format_descriptor<T>::format(), static_cast<ssize_t>(shape_in->size()), std::move(shape_in), std::move(strides_in), readonly) { } - - buffer_info(void *ptr, ssize_t itemsize, const std::string &format, ssize_t size, bool readonly=false) - : buffer_info(ptr, itemsize, format, 1, {size}, {itemsize}, readonly) { } - - template <typename T> - buffer_info(T *ptr, ssize_t size, bool readonly=false) - : buffer_info(ptr, sizeof(T), format_descriptor<T>::format(), size, readonly) { } - - template <typename T> - buffer_info(const T *ptr, ssize_t size, bool readonly=true) - : buffer_info(const_cast<T*>(ptr), sizeof(T), format_descriptor<T>::format(), size, readonly) { } - - explicit buffer_info(Py_buffer *view, bool ownview = true) - : buffer_info(view->buf, view->itemsize, view->format, view->ndim, - {view->shape, view->shape + view->ndim}, {view->strides, view->strides + view->ndim}, view->readonly) { - this->m_view = view; - this->ownview = ownview; - } - - buffer_info(const buffer_info &) = delete; - buffer_info& operator=(const buffer_info &) = delete; - - buffer_info(buffer_info &&other) { - (*this) = std::move(other); - } - - buffer_info& operator=(buffer_info &&rhs) { - ptr = rhs.ptr; - itemsize = rhs.itemsize; - size = rhs.size; - format = std::move(rhs.format); - ndim = rhs.ndim; - shape = std::move(rhs.shape); - strides = std::move(rhs.strides); - std::swap(m_view, rhs.m_view); - std::swap(ownview, rhs.ownview); - readonly = rhs.readonly; - return *this; - } - - ~buffer_info() { - if (m_view && ownview) { PyBuffer_Release(m_view); delete m_view; } - } - - Py_buffer *view() const { return m_view; } - Py_buffer *&view() { return m_view; } -private: - struct private_ctr_tag { }; - - buffer_info(private_ctr_tag, void *ptr, ssize_t itemsize, const std::string &format, ssize_t ndim, - detail::any_container<ssize_t> &&shape_in, detail::any_container<ssize_t> &&strides_in, bool readonly) - : buffer_info(ptr, itemsize, format, ndim, std::move(shape_in), std::move(strides_in), readonly) { } - - Py_buffer *m_view = nullptr; - bool ownview = false; -}; - -PYBIND11_NAMESPACE_BEGIN(detail) - -template <typename T, typename SFINAE = void> struct compare_buffer_info { - static bool compare(const buffer_info& b) { - return b.format == format_descriptor<T>::format() && b.itemsize == (ssize_t) sizeof(T); - } -}; - -template <typename T> struct compare_buffer_info<T, detail::enable_if_t<std::is_integral<T>::value>> { - static bool compare(const buffer_info& b) { - return (size_t) b.itemsize == sizeof(T) && (b.format == format_descriptor<T>::value || - ((sizeof(T) == sizeof(long)) && b.format == (std::is_unsigned<T>::value ? "L" : "l")) || - ((sizeof(T) == sizeof(size_t)) && b.format == (std::is_unsigned<T>::value ? "N" : "n"))); - } -}; - -PYBIND11_NAMESPACE_END(detail) -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/allocator_traits.h b/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/allocator_traits.h deleted file mode 100644 index c2557b57efa55a1538f58bf5abc790cff5a360a3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/allocator_traits.h +++ /dev/null @@ -1,422 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -// allocator_traits::rebind_alloc and allocator::rebind_traits are from libc++, -// dual licensed under the MIT and the University of Illinois Open Source -// Licenses.
 - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace detail -{ - - -// forward declaration for has_member_system -template<typename Alloc> struct allocator_system; - - -namespace allocator_traits_detail -{ - -__THRUST_DEFINE_HAS_NESTED_TYPE(has_value_type, value_type) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_pointer, pointer) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_const_pointer, const_pointer) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_reference, reference) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_const_reference, const_reference) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_void_pointer, void_pointer) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_const_void_pointer, const_void_pointer) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_difference_type, difference_type) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_size_type, size_type) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_propagate_on_container_copy_assignment, propagate_on_container_copy_assignment) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_propagate_on_container_move_assignment, propagate_on_container_move_assignment) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_propagate_on_container_swap, propagate_on_container_swap) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_system_type, system_type) -__THRUST_DEFINE_HAS_NESTED_TYPE(has_is_always_equal, is_always_equal) -__THRUST_DEFINE_HAS_MEMBER_FUNCTION(has_member_system_impl, system) - -template<typename T, typename U> - struct has_rebind -{ - typedef char yes_type; - typedef int no_type; - - template<typename S> - static yes_type test(typename S::template rebind<U>::other*); - template<typename S> - static no_type test(...); - - static bool const value = sizeof(test<T>(0)) == sizeof(yes_type); - - typedef thrust::detail::integral_constant<bool, value> type; -}; - -template<typename T> - struct nested_pointer -{ - typedef typename T::pointer type; -}; - -template<typename T> - struct nested_const_pointer -{ - typedef typename T::const_pointer type; -}; - -template<typename T> - struct nested_reference -{ - typedef typename T::reference type; -}; - -template<typename T> - struct nested_const_reference -{ - typedef typename T::const_reference type; -}; - -template<typename T> - struct nested_void_pointer -{ - typedef typename T::void_pointer type; -}; - -template<typename T> - struct nested_const_void_pointer -{ - typedef typename T::const_void_pointer type; -}; - -template<typename T> - struct nested_difference_type -{ - typedef typename T::difference_type type; -}; - -template<typename T> - struct nested_size_type -{ - typedef typename T::size_type type; -}; - -template<typename T> - struct nested_propagate_on_container_copy_assignment -{ - typedef typename T::propagate_on_container_copy_assignment type; -}; - -template<typename T> - struct nested_propagate_on_container_move_assignment -{ - typedef typename T::propagate_on_container_move_assignment type; -}; - -template<typename T> - struct nested_propagate_on_container_swap -{ - typedef typename T::propagate_on_container_swap type; -}; - -template<typename T> - struct nested_is_always_equal -{ - typedef typename T::is_always_equal type; -}; - -template<typename T> - struct nested_system_type -{ - typedef typename T::system_type type; -}; - -template<typename T> - struct has_member_system -{ - typedef typename allocator_system<T>::type system_type; - - typedef typename has_member_system_impl<T, system_type&(void)>::type type; - static const bool value = type::value; -}; - -template<typename Alloc, typename U, bool = has_rebind<Alloc, U>::value> - struct rebind_alloc -{ - typedef typename Alloc::template rebind<U>::other type; -}; - -#if THRUST_CPP_DIALECT >= 2011 -template <template <typename, typename...> class Alloc, - typename T, typename... Args, typename U> - struct rebind_alloc<Alloc<T, Args...>, U, true> -{ - typedef typename Alloc<T, Args...>::template rebind<U>::other type; -}; - -template <template <typename, typename...> class Alloc, - typename T, typename... Args, typename U> - struct rebind_alloc<Alloc<T, Args...>, U, false> -{ - typedef Alloc<U, Args...> type; -}; -#else // C++03 -template