diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Full Crack Bagas31 - Is It Safe and Legal?.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Full Crack Bagas31 - Is It Safe and Legal?.md
deleted file mode 100644
index 0c314e807366424287f3e4ca519f299a2376fa99..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Full Crack Bagas31 - Is It Safe and Legal?.md
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
How to Download IDM Full Crack Bagas31 for Free
-
IDM, or Internet Download Manager, is a popular program that can help you download files from the internet faster and more easily. It can increase your download speed by up to 5 times, resume and schedule downloads, and manage your downloaded files efficiently. It can also download videos from various websites, such as YouTube, Vimeo, and others.
However, IDM is not free software. You need to pay for a license or serial key to use it without limitations or interruptions. If you don't want to spend money on IDM, you may be tempted to look for a cracked version that bypasses the registration process and lets you use it for free. One of the websites that offers IDM full crack for free download is Bagas31.
-
Bagas31 is a website that provides various software and games for free download. It also provides IDM full crack with the latest version and updates. But is it safe and legal to download IDM full crack Bagas31? What are the risks and benefits of using IDM full crack Bagas31? In this article, we will answer these questions and provide you with a guide on how to download IDM full crack Bagas31 for free.
-
-
Is It Safe and Legal to Download IDM Full Crack Bagas31?
-
The answer to this question is no. Downloading IDM full crack Bagas31 is neither safe nor legal. Here are some of the reasons why:
-
-
Downloading IDM full crack Bagas31 is an act of software piracy. You are violating the intellectual property rights of the original developer of IDM. You are also breaking the law and may face legal consequences if you get caught.
-
Downloading IDM full crack Bagas31 may expose your computer or device to viruses, malware, or other harmful components. The cracked version of IDM may contain malicious code or hidden programs that may damage your system or steal your data. You may also download fake or modified versions of IDM that may not work properly or cause problems.
-
Downloading IDM full crack Bagas31 does not guarantee you a quality and reliable download manager. The cracked version of IDM may not be compatible with some devices or systems, and it may fail to download some files or leave them incomplete or corrupted. It may also overwrite your existing data or damage your device if you use it improperly or carelessly.
-
Downloading IDM full crack Bagas31 may not protect your data and privacy from potential risks or threats. The cracked version of IDM may not have the same security and privacy features as the original version. It may also leak your personal information or data to hackers or third parties without your consent or knowledge.
-
-
Therefore, downloading IDM full crack Bagas31 is not a wise choice. You may end up losing more than what you gain. You may also put yourself in danger or trouble by using a cracked software.
-
-
How to Download IDM Full Crack Bagas31 for Free
-
If you still want to try downloading IDM full crack Bagas31 for free, despite the risks and drawbacks, here are the steps that you need to follow:
-
-
-
Go to https://bagas31.pw/
-
Search for IDM full crack in the search box or browse through the categories
-
Select the latest version of IDM full crack that matches your system requirements
-
Click on the download button and wait for the download to finish
-
Extract the file with WinRAR 6.1 or later
-
Run the setup.exe file and install Internet Download Manager full version on your PC
-
Close the application from the tray icon and copy the patch.exe file to C:\Program Files (x86)\Internet Download Manager
-
Run the patch.exe file as administrator and click on the Patch button
-
Enjoy using IDM full crack for free!
-
-
-
A Better Alternative to Downloading IDM Full Crack Bagas31
-
If you want a better and safer alternative to downloading IDM full crack Bagas31, you should consider using legitimate and reputable software that can offer you a quality and reliable service without any risks or drawbacks. One such program is FoneDog Data Recovery.
-
Fone
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Jupiter - Il Destino Delluniverso Full Movie Italian Dubbed In Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Jupiter - Il Destino Delluniverso Full Movie Italian Dubbed In Torrent.md
deleted file mode 100644
index 1e3dd4193519b38f3aea57eec72c79805319ec27..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Jupiter - Il Destino Delluniverso Full Movie Italian Dubbed In Torrent.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
Download the Jupiter - Il destino dell'universo full movie italian dubbed in torrent
-
-
Jupiter - Il destino dell'universo (Jupiter Ascending) is a 2015 science fiction movie written and directed by Lana and Lilly Wachowski, starring Mila Kunis and Channing Tatum. It is the Wachowskis' first movie made in 3D.
-
The movie tells the story of Jupiter Jones (Mila Kunis), a girl with a very special genetic code. She works as a maid for her wealthy neighbors, but she dreams of a better future. One day, she discovers that she is the object of desire of a family of noble aliens who want to exploit her for their own benefit. She is rescued by Caine (Channing Tatum), a mercenary half-man half-dog, who takes her on an adventure across the galaxy to reveal her true destiny.
-
-
If you want to watch the Italian-dubbed version of this movie, you can download it as a torrent from this link: https://example.com/jupiter-ascending-italian-torrent. You will need a torrent client such as uTorrent or BitTorrent to download the file. Make sure you have enough space on your device and a good internet connection.
-
-
Jupiter - Il destino dell'universo is a movie full of action, adventure and fantasy, with stunning visual effects and a captivating soundtrack by Michael Giacchino. It is a movie that will take you beyond the known, through space and inside unknown realms. Don't miss this opportunity to download it in torrent and enjoy it at home!
-
-
-
The movie features a talented cast of actors and actresses, who bring to life the complex and diverse characters of the story. Mila Kunis plays Jupiter Jones, a humble and courageous heroine who discovers her royal heritage and fights for her freedom. Channing Tatum plays Caine Wise, a loyal and brave protector who falls in love with Jupiter and helps her in her quest. Sean Bean plays Stinger Apini, a former comrade of Caine who joins them in their mission. Eddie Redmayne plays Balem Abrasax, the eldest and most ruthless of the Abrasax siblings, who wants to harvest Earth for his own profit. Douglas Booth plays Titus Abrasax, the youngest and most charming of the Abrasax siblings, who tries to seduce Jupiter and trick her into marrying him. Tuppence Middleton plays Kalique Abrasax, the middle and most mysterious of the Abrasax siblings, who seems to have a hidden agenda behind her kindness.
-
-
The movie also features a cameo appearance by Terry Gilliam, who plays a minister in a bureaucratic scene that pays homage to his movie Brazil. Other supporting actors include James D'Arcy as Max Jones, Jupiter's father; Bae Doona as Razo, a bounty hunter; Tim Pigott-Smith as Malidictes, Balem's henchman; Vanessa Kirby as Katharine Dunlevy, Jupiter's friend; Jeremy Swift as Vasilliy Bolodnikov, Jupiter's uncle; Ramon Tikaram as Phylo Percadium, an Aegis captain; and Maria Doyle Kennedy as Aleksa, Jupiter's mother.
-
-
Jupiter - Il destino dell'universo is a movie that explores themes such as identity, destiny, family, love, greed, power and rebellion. It is a movie that challenges the status quo and celebrates the potential of every individual. It is a movie that invites you to dream big and reach for the stars. Download it now in torrent and join Jupiter and Caine in their epic journey!
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Citroen Service Box Keygen Free WORK Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Citroen Service Box Keygen Free WORK Download.md
deleted file mode 100644
index 941c6cee57c51514b1efa62a0e3ffac74299a8ad..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Citroen Service Box Keygen Free WORK Download.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Un jeu en utilisant la langue anglaise: Au nom du bien ou pour le mal? iMDB: La Redditerie Tue Rankrakuta na kimi kakikaeshi download Doobiedooey: A Colorful Line Drawing 2 - Kindle Apps Mokochien Gyoushi
-
Download Gameboy Color GBGC2O nahab.no.go.gay.otaku.org.sina iokananale Mandriva Corporate Security 2019 Service Pack 1.2 Beta I Pradalai Samayalai Kathalai Vandhanikka EOS.TV.Scout EOS iDocs Free Edition
Download Dash v1.11.6 with Crack Aaron Winters Member Pro Club Why the concept of "the global village" is a load of malarkey Dr. R. DiVanni Public Administration Review A Dynamic Credit-score Generator Download MCI S.A.F.E. 6.7 Crack Free Broke-Ass Tagger 2017
-
Download Gameboy Color GBGC2O lynnkelly.inngenuity.com.au CaptainFingerz 2009 Portable [MAC] Reemplaza el Web Browser Mozilla Firefox por el Web Browser Internet Explorer 8+ Daring Energy Systems S.L. 2008 Ayuda para Informacion
-
Best Of Wall Of Text Generator Wallpaper Generator Background Generator Pinterest Generator Download Online Generator Pages Generator. By clicking the "continue" button below, I agree to be contact
-
Langaray home theater system video THE DOYEN OF THE SKEET NET Ivan Jirsak Recenserne The anti-virus security solutions for Exchange are often well-funded and may offer features that are unique to it. In most cases, you'll have to buy each service separately, unless they sell a "complete package" with their best-selling solution. Today, a typical email spam attack can employ hundreds of viruses or spyware programs that can invade your system and destroy it. Check with your antivirus vendor for details.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Age of History II APK A Wasteland Editor and Flag Maker for Android Wargamers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Age of History II APK A Wasteland Editor and Flag Maker for Android Wargamers.md
deleted file mode 100644
index b46fc87c18ae5aa9ee2e587f830e6a57fd76ca16..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Age of History II APK A Wasteland Editor and Flag Maker for Android Wargamers.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Download Age of History II APK - A Grand Strategy Wargame for Android
-
Are you a fan of history and strategy games? Do you want to experience the thrill of leading your own Civilization from the dawn of civilization to the future of mankind? If yes, then you should download Age of History II APK, a grand strategy wargame that is simple to learn yet hard to master.
-
Age of History II is a game that lets you explore the whole history of humanity, Age by Age, beginning in the Age of Civilizations and leading into the far future. You can play as many Civilizations ranging from the largest empire to the smallest tribe, and lead your people to glory in a campaign spanning thousands of years. You can also create your own scenarios and worlds using the in-game editors, and share them with other players.
In this article, we will tell you what is Age of History II, what are its features, how to download and install it on your Android device, and some tips and tricks for playing it. Let's get started!
-
What is Age of History II?
-
Age of History II is a grand strategy wargame developed by Łukasz Jakowski, an independent game developer from Poland. It is the sequel to Age of Civilizations, which was released in 2014. Age of History II was released in 2018 for Windows, macOS, Linux, and Android platforms.
-
Age of History II is a game that simulates the history of the world from ancient times to the far future. You can choose from hundreds of Civilizations to play as, each with their own unique culture, history, and challenges. You can also create your own custom Civilizations using the Civilization Creator tool.
-
The game has two main modes: Historical Grand Campaign and Custom Scenario. In Historical Grand Campaign, you can play through the entire history of humanity, starting from any Age you want. You can also choose from different scenarios that focus on specific regions or events, such as World War I, World War II, Cold War, Modern Day, etc.
-
In Custom Scenario, you can create your own scenarios using the Scenario Editor tool. You can set up the map, the Civilizations, the events, the rules, and everything else according to your preferences. You can also download and play scenarios made by other players from the Steam Workshop or other sources.
-
Features of Age of History II
-
Age of History II is a game that offers a lot of features and options for players who love history and strategy games. Some of these features are:
-
Detailed map of the world with many historical borders
-
The game has a detailed map of the world that covers every continent and region. The map has over 4000 provinces that represent different territories and states throughout history. The map also has many historical borders that change according to the time period and the events that happen in the game.
-
-
Deeper diplomatic system between Civilizations
-
The game has a deeper diplomatic system that allows you to interact with other Civilizations in various ways. You can declare war or peace, form alliances or coalitions, send or receive trade offers, demand or offer tribute, support or oppose revolutions, etc. You can also use diplomacy points to influence other Civilizations' opinions and actions.
-
Create own History using in-game editors
-
The game has several in-game editors that let you create your own custom content. You can use the Civilization Creator to make your own Civilizations with custom flags, names, colors, and stats. You can use the Scenario Editor to make your own scenarios with custom maps, Civilizations, events, rules, and more. You can also use the Map Editor to edit the existing map or create a new one from scratch.
-
Hotseat, play with as many players as Civilizations in scenario!
-
The game has a hotseat mode that allows you to play with your friends on the same device. You can play with as many players as there are Civilizations in the scenario, and take turns controlling your actions. You can also play online multiplayer with other players using Steam or other platforms.
-
How to download and install Age of History II APK?
-
If you want to download and install Age of History II APK on your Android device, you need to follow these steps:
-
Step 1: Download the APK file from a trusted source
-
The first step is to download the APK file of Age of History II from a trusted source. You can find the APK file on various websites that offer Android apps and games, such as APKPure, APKMirror, etc. Make sure you download the latest version of the game and check the file size and permissions before downloading.
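If the download page publishes a SHA-256 checksum for the file (an assumption, since not every mirror provides one), you can verify the download before installing it with a few lines of Python; the file name and checksum below are placeholders:

```python
import hashlib

APK_PATH = "age-of-history-2.apk"                  # placeholder file name
EXPECTED_SHA256 = "paste-the-published-checksum"   # placeholder; copy it from the download page

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large APKs do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual.lower() == EXPECTED_SHA256.lower():
        print("Checksum matches the published value.")
    else:
        print(f"Checksum mismatch (got {actual}) - do not install this file.")
```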
-
Step 2: Enable unknown sources on your device
-
The second step is to enable unknown sources on your device. This is necessary because Age of History II is not available on the Google Play Store, and you need to allow your device to install apps from other sources. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to confirm this action by tapping OK or Allow.
-
Step 3: Install the APK file and launch the game
-
The third step is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device using a file manager app, such as ES File Explorer, and tap on it. You may need to grant some permissions for the installation process to proceed. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer.
-
Tips and tricks for playing Age of History II
-
Age of History II is a game that requires strategy, planning, and patience. If you want to succeed in this game, you need to follow some tips and tricks that will help you improve your gameplay. Here are some of them:
-
Learn the basics of the game mechanics
-
The first tip is to learn the basics of the game mechanics. You need to understand how the game works, such as how to move your units, how to fight battles, how to manage your resources, how to use diplomacy, etc. You can find tutorials and guides on the game's official website or YouTube channel that will explain these concepts in detail.
-
Choose your Civilization wisely
-
The second tip is to choose your Civilization wisely. You need to consider several factors when choosing your Civilization, such as their location, their culture, their history, their strengths and weaknesses, their goals, etc. You also need to consider the scenario you are playing and the challenges you will face. For example, if you are playing a World War II scenario, you may want to choose a Civilization that was involved in that war and has relevant units and abilities.
-
Manage your economy and military
-
The third tip is to manage your economy and military. You need to balance your income and expenses, and make sure you have enough resources to sustain your Civilization. You also need to build and upgrade your buildings, such as farms, mines, factories, barracks, etc., that will provide you with more resources and units. You also need to train and deploy your military units, such as infantry, cavalry, tanks, planes, ships, etc., that will help you defend your territory and conquer others.
-
Use diplomacy and alliances to your advantage
-
The fourth tip is to use diplomacy and alliances to your advantage. As described above, you can declare war or make peace, form alliances or coalitions, send or receive trade offers, demand or offer tribute, and support or oppose revolutions, and you can spend diplomacy points to influence other Civilizations' opinions and actions. Used well, diplomacy and alliances can help you gain allies, resources, territories, and more.
-
Conclusion
-
Age of History II is a grand strategy wargame that lets you explore the whole history of humanity, Age by Age, beginning in the Age of Civilizations and leading into the far future. You can play as many Civilizations ranging from the largest empire to the smallest tribe, and lead your people to glory in a campaign spanning thousands of years. You can also create your own scenarios and worlds using the in-game editors, and share them with other players.
-
If you want to download and install Age of History II APK on your Android device, you need to follow the steps we mentioned above. You also need to follow some tips and tricks we shared to improve your gameplay and have more fun. Age of History II is a game that will challenge your strategic skills and test your historical knowledge. Are you ready to make history?
-
FAQs
-
Here are some frequently asked questions about Age of History II:
-
-
Q: How much does Age of History II cost?
A: Age of History II costs $4.99 on Steam and $2.99 on Google Play Store.
-
Q: Is Age of History II available for iOS devices?
A: No, Age of History II is not available for iOS devices at the moment.
-
Q: How can I update Age of History II?
A: You can update Age of History II by downloading the latest version of the APK file from a trusted source and installing it over the existing one.
-
Q: How can I contact the developer of Age of History II?
A: You can contact the developer of Age of History II by visiting his official website or sending him an email at jakowskidev@gmail.com.
-
Q: How can I support the development of Age of History II?
A: You can support the development of Age of History II by buying the game, leaving a positive review, sharing it with your friends, and donating to the developer via PayPal or Patreon.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Club The Most Fun and Addictive Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Club The Most Fun and Addictive Game Ever.md
deleted file mode 100644
index cf824a591ccf5a070bab928e8671fdaca3ac5de0..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Club The Most Fun and Addictive Game Ever.md
+++ /dev/null
@@ -1,179 +0,0 @@
-
-
Brawl Stars APK Club Indir: How to Download and Play the Popular Mobile Game
-
If you are looking for a fun and exciting mobile game that you can play with your friends or solo, you might want to check out Brawl Stars. Brawl Stars is a fast-paced multiplayer game that offers different modes, characters, and challenges for you to enjoy. In this article, we will tell you what Brawl Stars is, how to download it using APK Club or other sources, and how to play it like a pro.
A fast-paced multiplayer game with different modes and characters
-
Brawl Stars is a mobile game developed by Supercell, the makers of Clash of Clans and Clash Royale. It is a twin-stick shooter with a MOBA twist, where you can choose from over 20 unique brawlers with different abilities and classes. You can team up with your friends or play solo across various game modes, such as Gem Grab, Showdown, Bounty, Heist, Brawl Ball, and more. Each match lasts for under three minutes, making it perfect for quick bursts of fun.
-
A free-to-play game with in-app purchases and rewards
-
Brawl Stars is free to download and play on Android and iOS devices, but it also offers in-app purchases for gems, coins, skins, and other items. Gems are the premium currency that you can use to buy brawl boxes, skins, coins, power points, and more. Coins are the regular currency that you can use to upgrade your brawlers' power level. You can also earn gems, coins, power points, and other rewards by playing the game, completing quests, opening brawl boxes, reaching milestones on Trophy Road, and participating in special events.
-
How to download Brawl Stars APK Club Indir?
-
The official way: Google Play Store or App Store
-
The easiest and safest way to download Brawl Stars is through the official Google Play Store or App Store. All you need to do is search for "Brawl Stars" on your device's store app and tap on the install button. This will ensure that you get the latest version of the game that is compatible with your device and region. You will also get automatic updates and support from Supercell.
-
The alternative way: APK Club website or other third-party sources
-
If you want to download Brawl Stars from an alternative source, such as APK Club or other third-party websites, you will need to follow some extra steps. APK Club is a website that offers free downloads of various Android apps and games, including Brawl Stars. To download Brawl Stars from APK Club, you will need to:
-
-
Go to https://www.apkclub.com/brawl-stars-apk-download/ in your browser or scan the QR code on the website.
-
Tap on the download button and wait for the APK file to be downloaded.
-
Go to your device's settings and enable the option to install apps from unknown sources.
-
Locate the downloaded APK file on your device and tap on it to install it.
-
Launch the game and enjoy.
-
-
The pros and cons of using APK Club
-
Some of the advantages of using APK Club to download Brawl Stars are:
-
-
You can access the game even if it is not available in your region or device.
-
You can get the latest version of the game before it is released on the official store.
-
You can download the game without any ads or surveys.
-
-
Some of the disadvantages of using APK Club to download Brawl Stars are:
-
-
-
You may encounter compatibility issues or bugs with the game.
-
You may not be able to update the game automatically or use some features that require an official account.
-
You may expose your device to malware or viruses that may harm your data or privacy.
-
-
The risks and precautions of using third-party sources
-
If you decide to download Brawl Stars from other third-party sources, such as websites, forums, or file-sharing platforms, you should be aware of the potential risks and take some precautions. Some of the risks are:
-
-
You may download a fake or modified version of the game that may contain malicious code or unwanted content.
-
You may violate the terms of service or end-user license agreement of Supercell and get banned from playing the game.
-
You may lose your progress or account if you switch to a different device or source.
-
-
Some of the precautions are:
-
-
Always check the reputation and reviews of the source before downloading anything.
-
Always scan the downloaded file with reliable antivirus software before installing it (a quick structural sanity check is also sketched after this list).
-
Always backup your data and account before switching to a different device or source.
-
-
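Because an APK is just a ZIP archive, you can also run a quick structural sanity check in Python before installing; this is only a rough pre-check, not a replacement for antivirus scanning, and the file name below is a placeholder:

```python
import zipfile

APK_PATH = "brawl-stars.apk"  # placeholder file name; use the file you actually downloaded

def looks_like_apk(path: str) -> bool:
    """Rough pre-check: the archive opens as a ZIP and contains the core Android entries."""
    try:
        with zipfile.ZipFile(path) as archive:
            names = set(archive.namelist())
    except (zipfile.BadZipFile, OSError):
        return False
    # Every valid APK ships a binary AndroidManifest.xml and at least one classes*.dex file.
    return "AndroidManifest.xml" in names and any(name.endswith(".dex") for name in names)

if __name__ == "__main__":
    verdict = "basic structure looks OK" if looks_like_apk(APK_PATH) else "does not look like a valid APK"
    print(f"{APK_PATH}: {verdict}")
```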
How to play Brawl Stars?
The basics: choose a brawler, join a match, and fight
-
Before you start a match, you need to choose a brawler that suits your playstyle and the game mode. You can see the stats, abilities, and skins of each brawler by tapping on them in the Brawlers menu. You can also see their power level, which indicates how much you have upgraded them with power points and coins. The higher the power level, the stronger the brawler.
-
Once you have selected a brawler, you can join a match by tapping on the Play button. You can either play with random teammates or invite your friends to join you. You can also play solo in some game modes, such as Showdown or Solo Showdown. Depending on the game mode, you will be matched with 2 to 9 other players in a map.
-
The objective of each match is different, but the basic gameplay is the same: you need to use your brawler's attacks and super to defeat your enemies and achieve your goal. You can move your brawler with the blue joystick on the left side of the screen, and aim and fire your attacks with the red joystick on the right side. You can also tap on the red joystick to quickfire, which will automatically target the nearest enemy. You can also use gadgets and star powers, which are special abilities that you unlock at higher power levels.
-
The game modes: Smash & Grab, Showdown, Bounty, Heist, Brawl Ball, and more
-
Brawl Stars offers a variety of game modes that test your skills and strategy in different ways. Here are some of the most popular game modes and how to play them:
-
-
-
| Game Mode | Description | Objective |
| --- | --- | --- |
| Smash & Grab | A 3v3 mode where gems spawn from a mine in the center of the map. | Collect and hold 10 gems for 15 seconds to win. If you die, you drop your gems. |
| Showdown | A solo or duo mode where 10 players fight in a shrinking map. | Be the last brawler or team standing. Collect power cubes from crates or enemies to boost your stats. |
| Bounty | A 3v3 mode where each kill gives you a star and increases your bounty. | Earn more stars than the enemy team by killing them. The higher your bounty, the more stars you drop when you die. |
| Heist | A 3v3 mode where each team has a safe to protect and attack. | Destroy the enemy safe or deal more damage to it than they do to yours. |
| Brawl Ball | A 3v3 mode where each team tries to score goals with a ball. | Score two goals before the enemy team or have more goals when the time runs out. You can kick or carry the ball, but you can't attack while holding it. |
| Siege | A 3v3 mode where each team tries to destroy the enemy's IKE turret with a siege bot. | Collect bolts to build your siege bot, which will attack the enemy turret. Destroy the enemy turret or deal more damage to it than they do to yours. |
| Hot Zone | A 3v3 mode where each team tries to control zones on the map. | Earn points by staying in the zones. The first team to reach 100 points or have more points when the time runs out wins. |
-
-
-
The tips and tricks: use obstacles, team up, run away, and more
-
Brawl Stars is not just about shooting and smashing your enemies. You also need to use your brain and skills to outsmart and outplay them. Here are some tips and tricks that can help you improve your game:
-
-
Use obstacles to hide from enemy fire or ambush them. You can also destroy some obstacles with your attacks or super.
-
Team up with your allies and coordinate your attacks. You can use voice chat or quick chat messages to communicate with them.
-
Run away when you are low on health or outnumbered. You can heal by staying out of combat for a few seconds.
-
Use your super wisely. Don't waste it on weak enemies or when you are about to die. Save it for critical moments or combo it with other supers.
-
Learn from your mistakes or watch replays of your matches. You can see what you did wrong or right, and learn from other players' strategies and tactics.
-
Have fun and experiment with different brawlers, modes, and maps. You may discover new ways to play or enjoy the game.
-
-
Conclusion
-
A summary of the main points
-
Brawl Stars is a mobile game that you can download and play for free on your Android or iOS device. It is a multiplayer game that offers different modes, characters, and challenges for you to enjoy. You can download it from the official Google Play Store or App Store, or from alternative sources such as APK Club or other third-party websites. However, you should be aware of the pros and cons of using these sources, and take some precautions to avoid any risks. You can also improve your skills and have more fun by following some tips and tricks that we shared in this article.
-
A call to action for the readers
-
If you are interested in Brawl Stars, we encourage you to give it a try and see for yourself why it is one of the most popular mobile games in the world. You can also join the Brawl Stars community and share your feedback, opinions, questions, and suggestions with other players and developers. You can find them on social media platforms such as Facebook, Twitter, Instagram, YouTube, Reddit, Discord, and more. You can also visit the official Brawl Stars website for more information and updates on the game.
-
FAQs
-
What are the best brawlers in Brawl Stars?
-
There is no definitive answer to this question, as different brawlers have different strengths and weaknesses, and may perform better or worse depending on the game mode, map, team composition, and personal preference. However, some of the brawlers that are generally considered to be strong and versatile are:
-
-
Colette: a chromatic brawler who deals damage based on the enemy's health, making her effective against any target.
-
Edgar: an epic brawler who can jump over obstacles and enemies, healing himself with each attack.
-
Spike: a legendary brawler who can throw cactus bombs that explode into spikes, dealing area damage and slowing down enemies.
-
Byron: a mythic brawler who can heal his allies or poison his enemies with his shots, as well as use his super to deal massive damage or healing over time.
-
Belle: a chromatic brawler who can shoot electric bullets that bounce between enemies, as well as use her super to mark an enemy for extra damage.
-
-
How to get new brawlers in Brawl Stars?
-
You can get new brawlers in Brawl Stars by opening brawl boxes, which are containers that contain various rewards such as coins, power points, gadgets, star powers, and brawlers. You can get brawl boxes by playing the game, completing quests, reaching milestones on Trophy Road, participating in special events, or buying them with gems. The chances of getting a new brawler depend on the rarity of the brawler and your luck. The rarer the brawler, the lower the chance of getting it. You can see the odds of getting a new brawler by tapping on the info button on the brawl box screen.
-
How to get gems and coins in Brawl Stars?
-
You can get gems and coins in Brawl Stars by playing the game , completing quests, opening brawl boxes, reaching milestones on Trophy Road, or participating in special events. You can also buy gems with real money through in-app purchases. Gems are the premium currency that you can use to buy brawl boxes, skins, coins, power points, and more. Coins are the regular currency that you can use to upgrade your brawlers' power level.
-
How to join or create a club in Brawl Stars?
-
A club is a group of players who can chat, play, and compete together in Brawl Stars. You can join or create a club by tapping on the social button on the main screen. You can search for an existing club by name or tag, or browse the recommended clubs based on your region and trophies. You can also create your own club by choosing a name, a tag, a badge, a description, and a type (open, invite only, or closed). You can invite your friends to join your club by tapping on the invite button and sending them a link. You can also leave or switch clubs at any time by tapping on the settings button and choosing the appropriate option.
-
How to contact Supercell for support or feedback?
-
If you have any issues, questions, or suggestions regarding Brawl Stars, you can contact Supercell for support or feedback by tapping on the settings button on the main screen and choosing the help and support option. You can browse the frequently asked questions or contact the support team directly by tapping on the message button. You can also visit the official Brawl Stars website for more information and updates on the game.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Fantasy Empire with War and Order APK for Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Fantasy Empire with War and Order APK for Android.md
deleted file mode 100644
index 532508cfb9272c80868527083fc141f61d41f389..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Fantasy Empire with War and Order APK for Android.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
War and Order APK: A Strategy Game for Android
-
If you are looking for a strategy game that combines real-time combat, tower defense, and castle building, then you might want to check out War and Order APK. This is a game that lets you build your own fantasy empire in a gorgeous 3D medieval world. You can command orcs, elves, mages, and other races to fight against enemies from all over the world. You can also join an alliance and cooperate with other players to conquer new lands and castles. In this article, we will tell you more about War and Order APK, how to download and install it, how to play it, what are its features, and what are its pros and cons.
-
What is War and Order APK?
-
War and Order APK is an Android game developed by Camel Games. It is a real-time strategy, tower defense, and castle building game that has received several global Google recommendations. It is one of the most popular games in its genre, with over 10 million downloads on Google Play.
A real-time strategy, tower defense, and castle building game
-
In War and Order APK, you can build your own empire by constructing and upgrading various buildings, such as barracks, farms, mines, workshops, walls, towers, etc. You can also recruit and train over 50 different types of soldiers, such as orcs, elves, humans, mages, beasts, angels, etc. You can use these soldiers to defend your base from enemy attacks or to attack other players' bases. You can also research new magic and technology to unlock new units, buffs, and weapons.
-
A 3D medieval game world with orcs, elves, and mages
-
War and Order APK has a stunning 3D graphics that immerses you in a medieval fantasy world. You can see your buildings, soldiers, enemies, and battles in full detail. You can also zoom in or out to get a better view of the action. The game also has a realistic sound effects that enhance the atmosphere of the game.
-
A global game with players from all over the world
-
War and Order APK is not just a single-player game. You can also interact with other players from around the world in real time. You can chat with them, make friends or enemies, form alliances or rivalries. You can also fight together or against each other in huge battles that involve hundreds or thousands of players. You can also compete for rankings, rewards, territories, castles, etc.
-
How to Download and Install War and Order APK?
-
If you want to play War and Order APK on your Android device, you need to download and install the APK file first. Here are the steps to do so:
-
Download the APK file from a trusted source
-
You can download the War and Order APK file from a trusted source, such as Softonic or APKCombo. You can also scan the APK file with an antivirus software before installing it to ensure its safety.
-
-
Enable unknown sources on your device settings
-
Before you can install the War and Order APK file, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to confirm this action by tapping OK or Allow.
-
Install the APK file and launch the game
-
Once you have downloaded and enabled unknown sources, you can install the War and Order APK file by tapping on it. You may need to grant some permissions to the app, such as access to storage, location, contacts, etc. After the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer.
-
How to Play War and Order APK?
-
Now that you have installed War and Order APK, you can start playing it. Here are some basic tips on how to play the game:
-
Build your own empire with various buildings and soldiers
-
The first thing you need to do in War and Order APK is to build your own empire. You can do this by constructing and upgrading various buildings, such as barracks, farms, mines, workshops, walls, towers, etc. Each building has a different function and benefit for your empire. For example, barracks allow you to recruit and train soldiers, farms produce food for your army, mines generate gold for your treasury, workshops produce materials for your weapons and equipment, walls protect your base from enemy attacks, towers provide defense and support for your troops, etc. You can also decorate your base with flags, statues, fountains, etc. to make it more attractive.
-
You also need to recruit and train soldiers for your army. You can do this by tapping on the barracks and selecting the type of unit you want to recruit. There are over 50 different types of units in War and Order APK, such as orcs, elves, humans, mages, beasts, angels, etc. Each unit has a different cost, speed, attack, defense, range, and skill. You can also upgrade your units by researching new magic and technology in the academy. You can also equip your units with weapons and armor that you can craft in the workshop or buy in the market.
-
Research new magic and technology for advanced tactics and weapons
-
Another important aspect of War and Order APK is to research new magic and technology for your empire. You can do this by tapping on the academy and selecting the type of research you want to conduct. There are four categories of research in War and Order APK: Development, Military, Defense, and Magic. Each category has several subcategories that contain various research topics. For example, Development research allows you to improve your production, storage, speed, etc., Military research allows you to unlock new units, buffs, weapons, etc., Defense research allows you to enhance your walls, towers, traps, etc., and Magic research allows you to learn new spells, runes, potions, etc. Researching new magic and technology can give you an edge over your enemies and allies in the game.
-
Join an alliance and cooperate with other players to conquer territories and castles
-
One of the most fun and exciting features of War and Order APK is to join an alliance and cooperate with other players. You can do this by tapping on the alliance button and choosing to join an existing alliance or create your own. Joining an alliance can give you many benefits, such as sharing resources, information, troops, gifts, etc. You can also chat with your alliance members, make friends or enemies, form strategies and plans, etc.
-
One of the main goals of an alliance is to conquer new territories and castles in the game world. You can do this by tapping on the map and selecting a target to attack. You can also scout, rally, reinforce, or support your allies or enemies in the map. Conquering new territories and castles can give you more resources, prestige, and power in the game. You can also defend your territories and castles from enemy attacks by building defenses and sending troops.
-
What are the Features of War and Order APK?
-
War and Order APK has many features that make it a fun and addictive game. Here are some of them:
-
Huge battles with fully animated graphics and sound effects
-
War and Order APK has huge battles that involve hundreds or thousands of players and units. You can see your soldiers fight in real time with fully animated graphics and sound effects. You can also zoom in or out to get a better view of the action. The game also has a realistic physics engine that simulates the movement, collision, and damage of the units.
-
Diverse units and races with different abilities and skills
-
War and Order APK has diverse units and races that have different abilities and skills. You can command orcs, elves, humans, mages, beasts, angels, etc. Each unit has a different cost, speed, attack, defense, range, and skill. You can also upgrade your units by researching new magic and technology. You can also equip your units with weapons and armor that you can craft or buy.
-
A dynamic world with monsters, events, and challenges
-
War and Order APK has a dynamic world that changes according to the actions of the players. You can encounter monsters, events, and challenges in the game world that can give you rewards or risks. For example, you can fight against dragons, giants, zombies, etc. that drop rare items or resources. You can also participate in events such as festivals, tournaments, sieges, etc. that offer rewards or rankings. You can also face challenges such as quests, missions, achievements, etc. that test your skills and strategy.
-
What are the Pros and Cons of War and Order APK?
-
Like any other game, War and Order APK has its pros and cons. Here are some of them:
-
Pros: Fun, addictive, and strategic gameplay; Free to play; Regular updates; Friendly community
-
War and Order APK has a fun, addictive, and strategic gameplay that can keep you entertained for hours. You can build your own empire, recruit and train your army, research new magic and technology, join an alliance, fight against other players, conquer new territories and castles, etc. The game is also free to play, although you can buy some in-game items with real money if you want to. The game also has regular updates that add new features, content, and improvements to the game. The game also has a friendly community that you can chat with, make friends or enemies, form alliances or rivalries, etc.
-
Cons: Requires internet connection; May consume battery and data; May have bugs or glitches
-
War and Order APK requires an internet connection to play, which means you cannot play it offline. The game may also consume a lot of battery and data on your device, especially if you play it for a long time or participate in large battles. The game may also have some bugs or glitches that may affect your gameplay or experience. For example, you may encounter crashes, freezes, lags, errors, etc.
-
Conclusion
-
War and Order APK is a strategy game for Android that lets you build your own fantasy empire in a 3D medieval world. You can command orcs, elves, mages, and other races to fight against enemies from all over the world. You can also join an alliance and cooperate with other players to conquer new lands and castles. The game has many features that make it fun and addictive, such as huge battles, diverse units, dynamic world, etc. The game also has some pros and cons that you should consider before playing it.
-
FAQs
-
Here are some frequently asked questions about War and Order APK:
-
Q: Is War and Order APK safe to download and install?
-
A: Yes, War and Order APK is safe to download and install as long as you get it from a trusted source. You can also scan the APK file with an antivirus software before installing it to ensure its safety.
-
Q: How can I get more resources in War and Order APK?
-
A: You can get more resources in War and Order APK by building and upgrading your farms, mines, workshops, etc. You can also collect resources from the map by attacking monsters, events, or other players. You can also trade resources with your alliance members or buy them with real money.
-
Q: How can I get more gems in War and Order APK?
-
A: Gems are the premium currency in War and Order APK that can be used to buy special items, speed up processes, etc. You can get more gems by completing quests, achievements, challenges, etc. You can also get gems as rewards from events, tournaments, sieges, etc. You can also get gems by participating in the daily lottery or watching ads. You can also buy gems with real money.
-
Q: How can I join or create an alliance in War and Order APK?
-
A: You can join or create an alliance in War and Order APK by tapping on the alliance button and choosing to join an existing alliance or create your own. To join an existing alliance, you need to apply for it and wait for the approval of the leader or the elders. To create your own alliance, you need to pay a certain amount of gold and choose a name, flag, and description for your alliance. You can also invite other players to join your alliance or accept their applications.
-
Q: How can I change my name, avatar, or flag in War and Order APK?
-
A: You can change your name, avatar, or flag in War and Order APK by tapping on your profile button and choosing to edit your information. You can change your name once for free and then you need to pay gems for each change. You can change your avatar by selecting from the default options or uploading your own image. You can change your flag by selecting from the default options or creating your own design.
-
Q: How can I contact the customer service or report a problem in War and Order APK?
-
A: You can contact the customer service or report a problem in War and Order APK by tapping on the settings button and choosing to contact us or report a problem. You can also send an email to warandorder@camel4u.com or visit their official website or Facebook page for more information and support.
- References: https://war-and-order.en.softonic.com/android, https://apkcombo.com/war-and-order/com.camelgames.superking/, https://www.warandorder.net/, https://www.facebook.com/WarandOrder1/
-
-
\ No newline at end of file
diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_batch.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_batch.py
deleted file mode 100644
index 4485605e3ece5b491d1e7d0f223c543b6c91eb96..0000000000000000000000000000000000000000
--- a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_batch.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import numpy as np
-from typing import List
-from speaker_encoder.data_objects.speaker import Speaker
-
-class SpeakerBatch:
- def __init__(self, speakers: List[Speaker], utterances_per_speaker: int, n_frames: int):
- self.speakers = speakers
- self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
-
- # Array of shape (n_speakers * n_utterances, n_frames, mel_n), e.g. for 3 speakers with
- # 4 utterances each of 160 frames of 40 mel coefficients: (12, 160, 40)
- self.data = np.array([frames for s in speakers for _, frames, _ in self.partials[s]])
diff --git a/spaces/52Hz/CMFNet_deblurring/main_test_CMFNet.py b/spaces/52Hz/CMFNet_deblurring/main_test_CMFNet.py
deleted file mode 100644
index 4eb8ef0e52c306edbe58142fcf6f64bb93f615c5..0000000000000000000000000000000000000000
--- a/spaces/52Hz/CMFNet_deblurring/main_test_CMFNet.py
+++ /dev/null
@@ -1,88 +0,0 @@
-import argparse
-import cv2
-import glob
-import numpy as np
-from collections import OrderedDict
-from skimage import img_as_ubyte
-import os
-import torch
-import requests
-from PIL import Image
-import torchvision.transforms.functional as TF
-import torch.nn.functional as F
-from natsort import natsorted
-from model.CMFNet import CMFNet
-
-def main():
- parser = argparse.ArgumentParser(description='Demo Image Deblur')
- parser.add_argument('--input_dir', default='test/', type=str, help='Input images')
- parser.add_argument('--result_dir', default='results/', type=str, help='Directory for results')
- parser.add_argument('--weights',
- default='experiments/pretrained_models/deblur_GoPro_CMFNet.pth', type=str,
- help='Path to weights')
-
- args = parser.parse_args()
-
- inp_dir = args.input_dir
- out_dir = args.result_dir
-
- os.makedirs(out_dir, exist_ok=True)
-
- files = natsorted(glob.glob(os.path.join(inp_dir, '*')))
-
- if len(files) == 0:
- raise Exception(f"No files found at {inp_dir}")
-
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- # Load corresponding models architecture and weights
- model = CMFNet()
- model = model.to(device)
- model.eval()
- load_checkpoint(model, args.weights)
-
-
- mul = 8
- for file_ in files:
- img = Image.open(file_).convert('RGB')
- input_ = TF.to_tensor(img).unsqueeze(0).to(device)
-
- # Pad the input if not_multiple_of 8
- h, w = input_.shape[2], input_.shape[3]
- H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
- padh = H - h if h % mul != 0 else 0
- padw = W - w if w % mul != 0 else 0
- input_ = F.pad(input_, (0, padw, 0, padh), 'reflect')
-
- with torch.no_grad():
- restored = model(input_)
-
- restored = torch.clamp(restored, 0, 1)
- restored = restored[:, :, :h, :w]
- restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy()
- restored = img_as_ubyte(restored[0])
-
- f = os.path.splitext(os.path.split(file_)[-1])[0]
- save_img((os.path.join(out_dir, f + '.png')), restored)
-
-
-
-def save_img(filepath, img):
- cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
-
-
-def load_checkpoint(model, weights):
- checkpoint = torch.load(weights, map_location=torch.device('cpu'))
- try:
- model.load_state_dict(checkpoint["state_dict"])
-    except Exception:
-        # Checkpoints saved with nn.DataParallel prefix every key with `module.`;
-        # strip that prefix and retry.
-        state_dict = checkpoint["state_dict"]
-        new_state_dict = OrderedDict()
-        for k, v in state_dict.items():
-            name = k[7:]  # remove the `module.` prefix
- new_state_dict[name] = v
- model.load_state_dict(new_state_dict)
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
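A small stand-alone sketch of the pad-to-a-multiple-of-8 / crop-back step used in the loop above, with a random tensor standing in for a real image:

```python
import torch
import torch.nn.functional as F

mul = 8
x = torch.rand(1, 3, 123, 250)                      # H=123, W=250, neither a multiple of 8
h, w = x.shape[2], x.shape[3]
H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
padh = H - h if h % mul != 0 else 0
padw = W - w if w % mul != 0 else 0
padded = F.pad(x, (0, padw, 0, padh), 'reflect')    # pad right and bottom only
print(padded.shape)                                  # torch.Size([1, 3, 128, 256])
restored = padded[:, :, :h, :w]                      # crop back after the model call
print(restored.shape)                                # torch.Size([1, 3, 123, 250])
```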
diff --git a/spaces/AB-TW/team-ai/agents/tools/smart_domain/entity.py b/spaces/AB-TW/team-ai/agents/tools/smart_domain/entity.py
deleted file mode 100644
index 6af9e009781e46eeb305ddf002773d1cbaa22024..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/agents/tools/smart_domain/entity.py
+++ /dev/null
@@ -1,115 +0,0 @@
-from langchain.prompts import PromptTemplate
-from langchain.chains import LLMChain
-from langchain.agents import tool
-from agents.tools.smart_domain.common import getPrefix
-from models import llm
-
-entity_architecture = """
-Entity: This component is used to represent business concepts and encapsulate business rules.
-It may include 3 parts:
-- id (identity of the entity)
-- description (a package of properties representing the value of the entity),
-- associations (a collection of associated entities)
----example code:
-    @Getter
-    @Builder
-    @AllArgsConstructor
-    public class Feature {{
- // id
- private FeatureId id;
-
- // description
- private FeatureDescription description;
-
- // associations
- private FeatureConfigs configs;
-
- public record FeatureId(String featureKey) {{
-
- }}
-
- @Builder
-        public record FeatureDescription(String name,
-                                          String description,
-                                          Boolean isEnable,
-                                          LocalDateTime updatedAt,
-                                          LocalDateTime createdAt) {{
-
- }}
-
- public Feature update(Feature newFeature) {{
-        this.description = FeatureDescription.builder()
-                .name(newFeature.description.name())
-                .description(newFeature.description.description())
-                .isEnable(this.description.isEnable())
-                .updatedAt(LocalDateTime.now())
-                .createdAt(this.description.createdAt())
-                .build();
-
- return this;
- }}
-
-    public interface FeatureConfigs {{
- Flux findAll();
- Flux subCollection(long from, long to);
- Mono findById(FeatureConfigId id);
- }}
- }}
----end of example code
-"""
-
-entity_test_strategy = """
-For the Entity, we can write unit tests to ensure that the business rules it encapsulates are correct.
----example code
- class FeatureTest {{
- @Test
- void should_update_feature_description() {{
- // given
- Feature feature = Feature.builder()
- .id(new FeatureId("featureKey"))
- .description(new FeatureDescription("name", "description", true, LocalDateTime.now(), LocalDateTime.now()))
- .build();
- Feature newFeature = Feature.builder()
- .id(new FeatureId("featureKey"))
- .description(new FeatureDescription("newName", "newDescription", true, LocalDateTime.now(), LocalDateTime.now()))
- .build();
- // when
- feature.update(newFeature);
- // then
- assertThat(feature.description().name()).isEqualTo("newName");
- assertThat(feature.description().description()).isEqualTo("newDescription");
- }}
- }}
----end of example code
-"""
-
-entity_tech_stack = """
-Java17、reactor、lombok、Junit5、reactor test、Mockito
-"""
-
-entity_task = """Your task is to generate the Enity of domain layer tests and product code."""
-ENTITY = getPrefix(entity_task, entity_tech_stack, entity_architecture, entity_test_strategy) + """
-
-Use the following format:
-request: the request that you need to fulfill
-
-Entity:
-```
-the Entity code that you write to fulfill the request, follow TechStack and Architecture
-```
-
-Test:
-```
-the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy
-```
-
-request: {input}"""
-
-ENTITY_PROMPT = PromptTemplate(input_variables=["input"], template=ENTITY,)
-
-entityChain = LLMChain(llm = llm(temperature=0.1), prompt=ENTITY_PROMPT)
-
-
-@tool("Generate Entity Code", return_direct=True)
-def entityCodeGenerator(input: str) -> str:
- '''useful for when you need to generate entity code'''
- response = entityChain.run(input)
- return response
\ No newline at end of file
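One detail worth noting in the prompt strings above is the doubled braces in the Java snippets: LangChain's `PromptTemplate` treats single braces as variables, so literal braces must be escaped. A minimal sketch of that mechanism (assumes only that `langchain` is installed; the template text is illustrative):

```python
from langchain.prompts import PromptTemplate

# {{ and }} render as literal braces; {input} is the only variable.
template = "request: {input}\n\npublic class Demo {{\n    // generated code goes here\n}}"
prompt = PromptTemplate(input_variables=["input"], template=template)
print(prompt.format(input="Generate a Merchandise entity"))
# The printed text contains single braces: public class Demo { ... }
```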
diff --git a/spaces/ADOPLE/AdopleAI-Website-DocumentQA/README.md b/spaces/ADOPLE/AdopleAI-Website-DocumentQA/README.md
deleted file mode 100644
index 46f027e1c546f424409ae59c1488a9576de99146..0000000000000000000000000000000000000000
--- a/spaces/ADOPLE/AdopleAI-Website-DocumentQA/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DocumentQA Website
-emoji: 🏃
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-duplicated_from: ADOPLE/Adopleai-DocumentQA
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/options/option_vq.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/options/option_vq.py
deleted file mode 100644
index 08a53ff1270facc10ab44ec0647e673ed1336d0d..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/options/option_vq.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import argparse
-
-def get_args_parser():
- parser = argparse.ArgumentParser(description='Optimal Transport AutoEncoder training for AIST',
- add_help=True,
- formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-
- ## dataloader
-    parser.add_argument('--dataname', type=str, default='kit', help='dataset name')
- parser.add_argument('--batch-size', default=128, type=int, help='batch size')
- parser.add_argument('--window-size', type=int, default=64, help='training motion length')
-
- ## optimization
- parser.add_argument('--total-iter', default=200000, type=int, help='number of total iterations to run')
- parser.add_argument('--warm-up-iter', default=1000, type=int, help='number of total iterations for warmup')
- parser.add_argument('--lr', default=2e-4, type=float, help='max learning rate')
- parser.add_argument('--lr-scheduler', default=[50000, 400000], nargs="+", type=int, help="learning rate schedule (iterations)")
- parser.add_argument('--gamma', default=0.05, type=float, help="learning rate decay")
-
- parser.add_argument('--weight-decay', default=0.0, type=float, help='weight decay')
- parser.add_argument("--commit", type=float, default=0.02, help="hyper-parameter for the commitment loss")
- parser.add_argument('--loss-vel', type=float, default=0.1, help='hyper-parameter for the velocity loss')
- parser.add_argument('--recons-loss', type=str, default='l2', help='reconstruction loss')
-
- ## vqvae arch
- parser.add_argument("--code-dim", type=int, default=512, help="embedding dimension")
- parser.add_argument("--nb-code", type=int, default=512, help="nb of embedding")
- parser.add_argument("--mu", type=float, default=0.99, help="exponential moving average to update the codebook")
- parser.add_argument("--down-t", type=int, default=2, help="downsampling rate")
- parser.add_argument("--stride-t", type=int, default=2, help="stride size")
- parser.add_argument("--width", type=int, default=512, help="width of the network")
- parser.add_argument("--depth", type=int, default=3, help="depth of the network")
- parser.add_argument("--dilation-growth-rate", type=int, default=3, help="dilation growth rate")
- parser.add_argument("--output-emb-width", type=int, default=512, help="output embedding width")
-    parser.add_argument('--vq-act', type=str, default='relu', choices = ['relu', 'silu', 'gelu'], help='activation function of the VQ-VAE')
-    parser.add_argument('--vq-norm', type=str, default=None, help='normalization layer of the VQ-VAE (None to disable)')
-
- ## quantizer
- parser.add_argument("--quantizer", type=str, default='ema_reset', choices = ['ema', 'orig', 'ema_reset', 'reset'], help="eps for optimal transport")
- parser.add_argument('--beta', type=float, default=1.0, help='commitment loss in standard VQ')
-
- ## resume
- parser.add_argument("--resume-pth", type=str, default=None, help='resume pth for VQ')
- parser.add_argument("--resume-gpt", type=str, default=None, help='resume pth for GPT')
-
-
- ## output directory
- parser.add_argument('--out-dir', type=str, default='output_vqfinal/', help='output directory')
- parser.add_argument('--results-dir', type=str, default='visual_results/', help='output directory')
- parser.add_argument('--visual-name', type=str, default='baseline', help='output directory')
- parser.add_argument('--exp-name', type=str, default='exp_debug', help='name of the experiment, will create a file inside out-dir')
- ## other
- parser.add_argument('--print-iter', default=200, type=int, help='print frequency')
- parser.add_argument('--eval-iter', default=1000, type=int, help='evaluation frequency')
- parser.add_argument('--seed', default=123, type=int, help='seed for initializing training.')
-
- parser.add_argument('--vis-gt', action='store_true', help='whether visualize GT motions')
- parser.add_argument('--nb-vis', default=20, type=int, help='nb of visualizations')
-
-
- return parser.parse_args()
\ No newline at end of file
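A hedged sketch of how a training script might consume this parser; the script name and the `options.option_vq` import path are assumptions based on the file layout, and the flag values are arbitrary:

```python
import sys
from options.option_vq import get_args_parser  # assumed import path

# get_args_parser() calls parse_args(), which reads sys.argv
sys.argv = ["train_vq.py", "--dataname", "kit", "--batch-size", "64", "--exp-name", "debug"]
args = get_args_parser()
print(args.batch_size, args.nb_code, args.quantizer)  # 64 512 ema_reset
```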
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/ema.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/ema.py
deleted file mode 100644
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/ema.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_upates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates
- else torch.tensor(-1,dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
-                # remove '.' as the character is not allowed in buffer names
- s_name = name.replace('.','')
- self.m_name2s_name.update({name:s_name})
- self.register_buffer(s_name,p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self,model):
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
-                    assert key not in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
-                assert key not in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
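A sketch of the usual training/validation pattern these methods imply (update the shadow weights after each optimizer step, swap them in around evaluation); the `ldm.modules.ema` import path is assumed from the file location:

```python
import torch
from torch import nn
from ldm.modules.ema import LitEma  # assumed import path

model = nn.Linear(4, 2)
ema = LitEma(model, decay=0.999)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for _ in range(3):                        # toy training steps
    loss = model(torch.randn(8, 4)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema(model)                            # update shadow parameters

ema.store(model.parameters())             # keep the raw weights
ema.copy_to(model)                        # evaluate with EMA weights
# ... run validation here ...
ema.restore(model.parameters())           # put the raw weights back for training
```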
diff --git a/spaces/ASJMO/freegpt/server/website.py b/spaces/ASJMO/freegpt/server/website.py
deleted file mode 100644
index 01b35dee1621b5b5bea49de330466ebb62817f20..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/server/website.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from flask import render_template, redirect, url_for, request, session
-from flask_babel import refresh
-from time import time
-from os import urandom
-from server.babel import get_locale, get_languages
-
-
-class Website:
- def __init__(self, bp, url_prefix) -> None:
- self.bp = bp
- self.url_prefix = url_prefix
- self.routes = {
- '/': {
- 'function': lambda: redirect(url_for('._index')),
- 'methods': ['GET', 'POST']
- },
- '/chat/': {
- 'function': self._index,
- 'methods': ['GET', 'POST']
- },
-            '/chat/<conversation_id>': {
- 'function': self._chat,
- 'methods': ['GET', 'POST']
- },
- '/change-language': {
- 'function': self.change_language,
- 'methods': ['POST']
- },
- '/get-locale': {
- 'function': self.get_locale,
- 'methods': ['GET']
- },
- '/get-languages': {
- 'function': self.get_languages,
- 'methods': ['GET']
- }
- }
-
- def _chat(self, conversation_id):
- if '-' not in conversation_id:
- return redirect(url_for('._index'))
-
- return render_template('index.html', chat_id=conversation_id, url_prefix=self.url_prefix)
-
- def _index(self):
- return render_template('index.html', chat_id=f'{urandom(4).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{hex(int(time() * 1000))[2:]}', url_prefix=self.url_prefix)
-
- def change_language(self):
- data = request.get_json()
- session['language'] = data.get('language')
- refresh()
- return '', 204
-
- def get_locale(self):
- return get_locale()
-
- def get_languages(self):
- return get_languages()
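A hedged sketch of how the `routes` dict above could be wired onto a Flask blueprint; the blueprint name and prefix are illustrative, and the `server.website` import path is assumed from the file location:

```python
from flask import Blueprint, Flask
from server.website import Website  # assumed import path

bp = Blueprint('site', __name__)
site = Website(bp, url_prefix='/')

for rule, spec in site.routes.items():
    bp.add_url_rule(rule, view_func=spec['function'], methods=spec['methods'])

app = Flask(__name__)
app.register_blueprint(bp)
```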
diff --git a/spaces/AlexN/pull_up/README.md b/spaces/AlexN/pull_up/README.md
deleted file mode 100644
index fd416cf9448817841aedd43602ead2c9fb679afc..0000000000000000000000000000000000000000
--- a/spaces/AlexN/pull_up/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Pull_up
-emoji: 💪
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/AlhitawiMohammed22/HTD_HTR/builder.py b/spaces/AlhitawiMohammed22/HTD_HTR/builder.py
deleted file mode 100644
index e4bebf35c90979c17c95eb1cfcebee9a75b63174..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/HTD_HTR/builder.py
+++ /dev/null
@@ -1,305 +0,0 @@
-
-# Copyright (C) 2021, Mindee.
-
-# This program is licensed under the Apache License version 2.
-# See LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
-
-
-from typing import Any, Dict, List, Tuple
-import pandas as pd
-
-import numpy as np
-from scipy.cluster.hierarchy import fclusterdata
-
-from doctr.utils.geometry import estimate_page_angle, resolve_enclosing_bbox, resolve_enclosing_rbbox, rotate_boxes
-from doctr.utils.repr import NestedObject
-
-__all__ = ['DocumentBuilder']
-
-
-class DocumentBuilder(NestedObject):
- """Implements a document builder
- Args:
- resolve_lines: whether words should be automatically grouped into lines
- resolve_blocks: whether lines should be automatically grouped into blocks
- paragraph_break: relative length of the minimum space separating paragraphs
- export_as_straight_boxes: if True, force straight boxes in the export (fit a rectangle
- box to all rotated boxes). Else, keep the boxes format unchanged, no matter what it is.
- """
-
- def __init__(
- self,
- resolve_lines: bool = True,
- resolve_blocks: bool = True,
- paragraph_break: float = 0.035,
- export_as_straight_boxes: bool = False,
- ) -> None:
-
- self.resolve_lines = resolve_lines
- self.resolve_blocks = resolve_blocks
- self.paragraph_break = paragraph_break
- self.export_as_straight_boxes = export_as_straight_boxes
-
- @staticmethod
- def _sort_boxes(boxes: np.ndarray) -> np.ndarray:
- """Sort bounding boxes from top to bottom, left to right
- Args:
- boxes: bounding boxes of shape (N, 4) or (N, 4, 2) (in case of rotated bbox)
- Returns:
- tuple: indices of ordered boxes of shape (N,), boxes
-                If straight boxes are passed to the function, the boxes are unchanged;
-                otherwise the returned boxes are straight boxes fitted to the straightened rotated boxes,
-                so that we can fit the lines afterwards to the straightened page
- """
- if boxes.ndim == 3:
- boxes = rotate_boxes(
- loc_preds=boxes,
- angle=-estimate_page_angle(boxes),
- orig_shape=(1024, 1024),
- min_angle=5.,
- )
- boxes = np.concatenate((boxes.min(1), boxes.max(1)), -1)
- return (boxes[:, 0] + 2 * boxes[:, 3] / np.median(boxes[:, 3] - boxes[:, 1])).argsort(), boxes
-
- def _resolve_sub_lines(self, boxes: np.ndarray, word_idcs: List[int]) -> List[List[int]]:
- """Split a line in sub_lines
- Args:
- boxes: bounding boxes of shape (N, 4)
- word_idcs: list of indexes for the words of the line
- Returns:
- A list of (sub-)lines computed from the original line (words)
- """
- lines = []
- # Sort words horizontally
- word_idcs = [word_idcs[idx]
- for idx in boxes[word_idcs, 0].argsort().tolist()]
-
- # Eventually split line horizontally
- if len(word_idcs) < 2:
- lines.append(word_idcs)
- else:
- sub_line = [word_idcs[0]]
- for i in word_idcs[1:]:
- horiz_break = True
-
- prev_box = boxes[sub_line[-1]]
- # Compute distance between boxes
- dist = boxes[i, 0] - prev_box[2]
- # If distance between boxes is lower than paragraph break, same sub-line
- if dist < self.paragraph_break:
- horiz_break = False
-
- if horiz_break:
- lines.append(sub_line)
- sub_line = []
-
- sub_line.append(i)
- lines.append(sub_line)
-
- return lines
-
- def _resolve_lines(self, boxes: np.ndarray) -> List[List[int]]:
- """Order boxes to group them in lines
- Args:
- boxes: bounding boxes of shape (N, 4) or (N, 4, 2) in case of rotated bbox
- Returns:
- nested list of box indices
- """
-
- # Sort boxes, and straighten the boxes if they are rotated
- idxs, boxes = self._sort_boxes(boxes)
-
- # Compute median for boxes heights
- y_med = np.median(boxes[:, 3] - boxes[:, 1])
-
- lines = []
- words = [idxs[0]] # Assign the top-left word to the first line
- # Define a mean y-center for the line
- y_center_sum = boxes[idxs[0]][[1, 3]].mean()
-
- for idx in idxs[1:]:
- vert_break = True
-
- # Compute y_dist
- y_dist = abs(boxes[idx][[1, 3]].mean() - y_center_sum / len(words))
- # If y-center of the box is close enough to mean y-center of the line, same line
- if y_dist < y_med / 2:
- vert_break = False
-
- if vert_break:
- # Compute sub-lines (horizontal split)
- lines.extend(self._resolve_sub_lines(boxes, words))
- words = []
- y_center_sum = 0
-
- words.append(idx)
- y_center_sum += boxes[idx][[1, 3]].mean()
-
- # Use the remaining words to form the last(s) line(s)
- if len(words) > 0:
- # Compute sub-lines (horizontal split)
- lines.extend(self._resolve_sub_lines(boxes, words))
-
- return lines
-
- @staticmethod
- def _resolve_blocks(boxes: np.ndarray, lines: List[List[int]]) -> List[List[List[int]]]:
- """Order lines to group them in blocks
- Args:
- boxes: bounding boxes of shape (N, 4) or (N, 4, 2)
- lines: list of lines, each line is a list of idx
- Returns:
- nested list of box indices
- """
- # Resolve enclosing boxes of lines
- if boxes.ndim == 3:
- box_lines = np.asarray([
- resolve_enclosing_rbbox(
- [tuple(boxes[idx, :, :]) for idx in line])
- for line in lines # type: ignore[misc]
- ])
- else:
- _box_lines = [
- resolve_enclosing_bbox([
- # type: ignore[misc]
- (tuple(boxes[idx, :2]), tuple(boxes[idx, 2:])) for idx in line
- ])
- for line in lines
- ]
- box_lines = np.asarray([(x1, y1, x2, y2)
- for ((x1, y1), (x2, y2)) in _box_lines])
-
- # Compute geometrical features of lines to clusterize
-        # Clustering with box centers only yields poor results for complex documents
- if boxes.ndim == 3:
- box_features = np.stack(
- (
- (box_lines[:, 0, 0] + box_lines[:, 0, 1]) / 2,
- (box_lines[:, 0, 0] + box_lines[:, 2, 0]) / 2,
- (box_lines[:, 0, 0] + box_lines[:, 2, 1]) / 2,
- (box_lines[:, 0, 1] + box_lines[:, 2, 1]) / 2,
- (box_lines[:, 0, 1] + box_lines[:, 2, 0]) / 2,
- (box_lines[:, 2, 0] + box_lines[:, 2, 1]) / 2,
- ), axis=-1
- )
- else:
- box_features = np.stack(
- (
- (box_lines[:, 0] + box_lines[:, 3]) / 2,
- (box_lines[:, 1] + box_lines[:, 2]) / 2,
- (box_lines[:, 0] + box_lines[:, 2]) / 2,
- (box_lines[:, 1] + box_lines[:, 3]) / 2,
- box_lines[:, 0],
- box_lines[:, 1],
- ), axis=-1
- )
- # Compute clusters
- clusters = fclusterdata(
- box_features, t=0.1, depth=4, criterion='distance', metric='euclidean')
-
- _blocks: Dict[int, List[int]] = {}
- # Form clusters
- for line_idx, cluster_idx in enumerate(clusters):
- if cluster_idx in _blocks.keys():
- _blocks[cluster_idx].append(line_idx)
- else:
- _blocks[cluster_idx] = [line_idx]
-
- # Retrieve word-box level to return a fully nested structure
- blocks = [[lines[idx] for idx in block] for block in _blocks.values()]
-
- return blocks
-
-    def _build_blocks(self, boxes: np.ndarray, word_preds: List[Tuple[str, float]], page_shapes: Tuple[int, int]) -> Any:
-        """Gather independent words in structured blocks
-        Args:
-            boxes: bounding boxes of all detected words of the page, of shape (N, 5) or (N, 4, 2)
-            word_preds: list of all detected words of the page, of shape N
-            page_shapes: (height, width) of the page
-        Returns:
-            list of rows (one per word) with block, line and word indices, pixel coordinates and confidence
-        """
-
- if boxes.shape[0] != len(word_preds):
- raise ValueError(
- f"Incompatible argument lengths: {boxes.shape[0]}, {len(word_preds)}")
-
- if boxes.shape[0] == 0:
- return []
-
- # Decide whether we try to form lines
- _boxes = boxes
- if self.resolve_lines:
- lines = self._resolve_lines(
- _boxes if _boxes.ndim == 3 else _boxes[:, :4])
- # Decide whether we try to form blocks
- if self.resolve_blocks and len(lines) > 1:
- _blocks = self._resolve_blocks(
- _boxes if _boxes.ndim == 3 else _boxes[:, :4], lines)
- else:
- _blocks = [lines]
- else:
- # Sort bounding boxes, one line for all boxes, one block for the line
- lines = [self._sort_boxes(
- _boxes if _boxes.ndim == 3 else _boxes[:, :4])[0]]
- _blocks = [lines]
-
- rows = []
- for block_idx, lines in enumerate(_blocks):
- for line_idx, line in enumerate(lines):
- for i,idx in enumerate(line):
- h, w = page_shapes
- row = (
- block_idx, line_idx, i, word_preds[idx],
- int(round(boxes[idx, 0]*w)
- ), int(round(boxes[idx, 1]*h)),
- int(round(boxes[idx, 2]*w)
- ), int(round(boxes[idx, 3]*h)),
- int(round(boxes[idx, 4]*100))
- )
- rows.append(row)
-
- return rows
-
- def extra_repr(self) -> str:
- return (f"resolve_lines={self.resolve_lines}, resolve_blocks={self.resolve_blocks}, "
- f"paragraph_break={self.paragraph_break}, "
- f"export_as_straight_boxes={self.export_as_straight_boxes}")
-
- def __call__(
- self,
- boxes: List[np.ndarray],
- text_preds: List[List[Tuple[str, float]]],
- page_shapes: List[Tuple[int, int]]
-    ) -> List[pd.DataFrame]:
- """Re-arrange detected words into structured blocks
- Args:
- boxes: list of N elements, where each element represents the localization predictions, of shape (*, 5)
- or (*, 6) for all words for a given page
-            text_preds: list of N elements, where each element is the list of all word predictions (text + confidence)
-            page_shapes: shape of each page, of size N
-        Returns:
-            a list of pandas DataFrames, one per page
- """
- if len(boxes) != len(text_preds) or len(boxes) != len(page_shapes):
- raise ValueError(
- "All arguments are expected to be lists of the same size")
-
- if self.export_as_straight_boxes and len(boxes) > 0:
- # If boxes are already straight OK, else fit a bounding rect
- if boxes[0].ndim == 3:
- straight_boxes = []
- # Iterate over pages
- for p_boxes in boxes:
- # Iterate over boxes of the pages
- straight_boxes.append(np.concatenate(
- (p_boxes.min(1), p_boxes.max(1)), 1))
- boxes = straight_boxes
-
- _pages = [
- pd.DataFrame.from_records(self._build_blocks(page_boxes, word_preds, shape), columns=[
- "block_num", "line_num", "word_num" ,"word", "xmin", "ymin", "xmax", "ymax", "confidence_score"
- ])
- for _idx, shape, page_boxes, word_preds in zip(range(len(boxes)), page_shapes, boxes, text_preds)
- ]
-
- return _pages
\ No newline at end of file
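A hedged usage sketch of `DocumentBuilder` with two dummy word boxes on a single page; it assumes this file is importable as `builder` and that `doctr` and `pandas` are installed. Boxes are (xmin, ymin, xmax, ymax, confidence) in relative coordinates:

```python
import numpy as np
from builder import DocumentBuilder  # assumed import path

doc_builder = DocumentBuilder(resolve_lines=True, resolve_blocks=True)
boxes = [np.array([
    [0.10, 0.10, 0.30, 0.15, 0.98],
    [0.32, 0.10, 0.55, 0.15, 0.95],
])]
text_preds = [[("hello", 0.98), ("world", 0.95)]]
page_shapes = [(1024, 768)]                          # (height, width) per page

pages = doc_builder(boxes, text_preds, page_shapes)  # one DataFrame per page
print(pages[0])                                      # block/line/word indices, pixel coords, confidence
```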
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/data/flist_dataset.py b/spaces/Alpaca233/SadTalker/src/face3d/data/flist_dataset.py
deleted file mode 100644
index c0b6945c80aa756074a5d3c02b9443b15ddcfc57..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/data/flist_dataset.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""This script defines the custom dataset for Deep3DFaceRecon_pytorch
-"""
-
-import os.path
-from data.base_dataset import BaseDataset, get_transform, get_affine_mat, apply_img_affine, apply_lm_affine
-from data.image_folder import make_dataset
-from PIL import Image
-import random
-import util.util as util
-import numpy as np
-import json
-import torch
-from scipy.io import loadmat, savemat
-import pickle
-from util.preprocess import align_img, estimate_norm
-from util.load_mats import load_lm3d
-
-
-def default_flist_reader(flist):
- """
-    flist format: impath label\nimpath label\n ... (same as caffe's filelist)
- """
- imlist = []
- with open(flist, 'r') as rf:
- for line in rf.readlines():
- impath = line.strip()
- imlist.append(impath)
-
- return imlist
-
-def jason_flist_reader(flist):
- with open(flist, 'r') as fp:
- info = json.load(fp)
- return info
-
-def parse_label(label):
- return torch.tensor(np.array(label).astype(np.float32))
-
-
-class FlistDataset(BaseDataset):
- """
-    It requires one directory to host the training images, e.g. '/path/to/data/train'.
- You can train the model with the dataset flag '--dataroot /path/to/data'.
- """
-
- def __init__(self, opt):
- """Initialize this dataset class.
-
- Parameters:
- opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
- """
- BaseDataset.__init__(self, opt)
-
- self.lm3d_std = load_lm3d(opt.bfm_folder)
-
- msk_names = default_flist_reader(opt.flist)
- self.msk_paths = [os.path.join(opt.data_root, i) for i in msk_names]
-
- self.size = len(self.msk_paths)
- self.opt = opt
-
- self.name = 'train' if opt.isTrain else 'val'
- if '_' in opt.flist:
- self.name += '_' + opt.flist.split(os.sep)[-1].split('_')[0]
-
-
- def __getitem__(self, index):
- """Return a data point and its metadata information.
-
- Parameters:
- index (int) -- a random integer for data indexing
-
- Returns a dictionary that contains A, B, A_paths and B_paths
- img (tensor) -- an image in the input domain
- msk (tensor) -- its corresponding attention mask
- lm (tensor) -- its corresponding 3d landmarks
- im_paths (str) -- image paths
- aug_flag (bool) -- a flag used to tell whether its raw or augmented
- """
-        msk_path = self.msk_paths[index % self.size]  # make sure index is within the range
- img_path = msk_path.replace('mask/', '')
- lm_path = '.'.join(msk_path.replace('mask', 'landmarks').split('.')[:-1]) + '.txt'
-
- raw_img = Image.open(img_path).convert('RGB')
- raw_msk = Image.open(msk_path).convert('RGB')
- raw_lm = np.loadtxt(lm_path).astype(np.float32)
-
- _, img, lm, msk = align_img(raw_img, raw_lm, self.lm3d_std, raw_msk)
-
- aug_flag = self.opt.use_aug and self.opt.isTrain
- if aug_flag:
- img, lm, msk = self._augmentation(img, lm, self.opt, msk)
-
- _, H = img.size
- M = estimate_norm(lm, H)
- transform = get_transform()
- img_tensor = transform(img)
- msk_tensor = transform(msk)[:1, ...]
- lm_tensor = parse_label(lm)
- M_tensor = parse_label(M)
-
-
- return {'imgs': img_tensor,
- 'lms': lm_tensor,
- 'msks': msk_tensor,
- 'M': M_tensor,
- 'im_paths': img_path,
- 'aug_flag': aug_flag,
- 'dataset': self.name}
-
- def _augmentation(self, img, lm, opt, msk=None):
- affine, affine_inv, flip = get_affine_mat(opt, img.size)
- img = apply_img_affine(img, affine_inv)
- lm = apply_lm_affine(lm, affine, flip, img.size)
- if msk is not None:
- msk = apply_img_affine(msk, affine_inv, method=Image.BILINEAR)
- return img, lm, msk
-
-
-
-
- def __len__(self):
- """Return the total number of images in the dataset.
- """
- return self.size
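A short sketch of the flist format consumed by `default_flist_reader` above (one image path per line), using a temporary file; it assumes the function is in scope:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("images/0001.png\nimages/0002.png\n")
    flist_path = f.name

paths = default_flist_reader(flist_path)  # defined in this module
print(paths)                              # ['images/0001.png', 'images/0002.png']
os.unlink(flist_path)
```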
diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
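A toy call illustrating the expected shapes, assuming the package above is importable as `monotonic_align` and its Cython extension has been built:

```python
import torch
from monotonic_align import maximum_path  # assumed import path

b, t_t, t_s = 2, 4, 6
neg_cent = torch.randn(b, t_t, t_s)   # negative attention "cost" per (text, mel) pair
mask = torch.ones(b, t_t, t_s)        # all positions valid in this toy example
path = maximum_path(neg_cent, mask)
print(path.shape, path.dtype)         # torch.Size([2, 4, 6]), same dtype as neg_cent
```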
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/value_guided_sampling.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/value_guided_sampling.md
deleted file mode 100644
index 0509b196b57820e88bcff9c6821612df15313ebf..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/value_guided_sampling.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-# Value-guided planning
-
-
-
-🧪 This is an experimental pipeline for reinforcement learning!
-
-
-
-This pipeline is based on the [Planning with Diffusion for Flexible Behavior Synthesis](https://huggingface.co/papers/2205.09991) paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine.
-
-The abstract from the paper is:
-
-*Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility*.
-
-You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb).
-
-The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning).
-
-## ValueGuidedRLPipeline
-[[autodoc]] diffusers.experimental.ValueGuidedRLPipeline
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/cross_attention.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/cross_attention.py
deleted file mode 100644
index 44bc156b34cfa8536bdac0fee34709dfd66ae488..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/cross_attention.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from ..utils import deprecate
-from .attention_processor import ( # noqa: F401
- Attention,
- AttentionProcessor,
- AttnAddedKVProcessor,
- AttnProcessor2_0,
- LoRAAttnProcessor,
- LoRALinearLayer,
- LoRAXFormersAttnProcessor,
- SlicedAttnAddedKVProcessor,
- SlicedAttnProcessor,
- XFormersAttnProcessor,
-)
-from .attention_processor import AttnProcessor as AttnProcessorRename # noqa: F401
-
-
-deprecate(
- "cross_attention",
- "0.20.0",
- "Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.",
- standard_warn=False,
-)
-
-
-AttnProcessor = AttentionProcessor
-
-
-class CrossAttention(Attention):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class CrossAttnProcessor(AttnProcessorRename):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class LoRACrossAttnProcessor(LoRAAttnProcessor):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class CrossAttnAddedKVProcessor(AttnAddedKVProcessor):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class XFormersCrossAttnProcessor(XFormersAttnProcessor):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class LoRAXFormersCrossAttnProcessor(LoRAXFormersAttnProcessor):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class SlicedCrossAttnProcessor(SlicedAttnProcessor):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
-
-
-class SlicedCrossAttnAddedKVProcessor(SlicedAttnAddedKVProcessor):
- def __init__(self, *args, **kwargs):
- deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead."
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False)
- super().__init__(*args, **kwargs)
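The file above is a pure compatibility shim: every old class name subclasses its replacement and emits a deprecation warning before delegating. A generic, self-contained illustration of that pattern (class names are made up, not diffusers APIs):

```python
import warnings

class NewProcessor:
    def __call__(self, x):
        return x * 2

class OldProcessor(NewProcessor):
    """Deprecated alias kept for backwards compatibility (hypothetical example)."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "OldProcessor is deprecated; use NewProcessor instead.",
            FutureWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)

proc = OldProcessor()   # warns once, then behaves exactly like NewProcessor
print(proc(3))          # 6
```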
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py
deleted file mode 100644
index dba50312e8d76cf6b368e9161b4a2c24492cafcd..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py
+++ /dev/null
@@ -1,373 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from PIL import Image
-
-from ...models import UNet2DConditionModel, VQModel
-from ...schedulers import DDPMScheduler
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> from diffusers import KandinskyV22Img2ImgPipeline, KandinskyV22PriorPipeline
- >>> from diffusers.utils import load_image
- >>> import torch
-
- >>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
- ... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
- ... )
- >>> pipe_prior.to("cuda")
-
- >>> prompt = "A red cartoon frog, 4k"
- >>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False)
-
- >>> pipe = KandinskyV22Img2ImgPipeline.from_pretrained(
- ... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
- ... )
- >>> pipe.to("cuda")
-
- >>> init_image = load_image(
- ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- ... "/kandinsky/frog.png"
- ... )
-
- >>> image = pipe(
- ... image=init_image,
- ... image_embeds=image_emb,
- ... negative_image_embeds=zero_image_emb,
- ... height=768,
- ... width=768,
- ... num_inference_steps=100,
- ... strength=0.2,
- ... ).images
-
- >>> image[0].save("red_frog.png")
- ```
-"""
-
-
-# Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.downscale_height_and_width
-def downscale_height_and_width(height, width, scale_factor=8):
- new_height = height // scale_factor**2
- if height % scale_factor**2 != 0:
- new_height += 1
- new_width = width // scale_factor**2
- if width % scale_factor**2 != 0:
- new_width += 1
- return new_height * scale_factor, new_width * scale_factor
-
-
-# Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.prepare_image
-def prepare_image(pil_image, w=512, h=512):
- pil_image = pil_image.resize((w, h), resample=Image.BICUBIC, reducing_gap=1)
- arr = np.array(pil_image.convert("RGB"))
- arr = arr.astype(np.float32) / 127.5 - 1
- arr = np.transpose(arr, [2, 0, 1])
- image = torch.from_numpy(arr).unsqueeze(0)
- return image
-
-
-class KandinskyV22Img2ImgPipeline(DiffusionPipeline):
- """
- Pipeline for image-to-image generation using Kandinsky
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- scheduler ([`DDIMScheduler`]):
- A scheduler to be used in combination with `unet` to generate image latents.
- unet ([`UNet2DConditionModel`]):
- Conditional U-Net architecture to denoise the image embedding.
- movq ([`VQModel`]):
- MoVQ Decoder to generate the image from the latents.
- """
-
- def __init__(
- self,
- unet: UNet2DConditionModel,
- scheduler: DDPMScheduler,
- movq: VQModel,
- ):
- super().__init__()
-
- self.register_modules(
- unet=unet,
- scheduler=scheduler,
- movq=movq,
- )
- self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1)
-
- # Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.KandinskyImg2ImgPipeline.get_timesteps
- def get_timesteps(self, num_inference_steps, strength, device):
- # get the original timestep using init_timestep
- init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep, 0)
- timesteps = self.scheduler.timesteps[t_start:]
-
- return timesteps, num_inference_steps - t_start
-
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
- if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
- )
-
- image = image.to(device=device, dtype=dtype)
-
- batch_size = batch_size * num_images_per_prompt
-
- if image.shape[1] == 4:
- init_latents = image
-
- else:
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- elif isinstance(generator, list):
- init_latents = [
- self.movq.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
- ]
- init_latents = torch.cat(init_latents, dim=0)
- else:
- init_latents = self.movq.encode(image).latent_dist.sample(generator)
-
- init_latents = self.movq.config.scaling_factor * init_latents
-
- init_latents = torch.cat([init_latents], dim=0)
-
- shape = init_latents.shape
- noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
-
- latents = init_latents
-
- return latents
-
- # Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.KandinskyV22Pipeline.enable_model_cpu_offload
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.unet, self.movq]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]],
- image: Union[torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]],
- negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]],
- height: int = 512,
- width: int = 512,
- num_inference_steps: int = 100,
- guidance_scale: float = 4.0,
- strength: float = 0.3,
- num_images_per_prompt: int = 1,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- output_type: Optional[str] = "pil",
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- return_dict: bool = True,
- ):
- """
- Function invoked when calling the pipeline for generation.
-
- Args:
- image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
- The clip image embeddings for text prompt, that will be used to condition the image generation.
- image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accept image latents as `image`; if latents are passed directly, they will not be encoded
- again.
-            strength (`float`, *optional*, defaults to 0.3):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
- The clip image embeddings for negative text prompt, will be used to condition the image generation.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 100):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 4.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
- (`np.array`) or `"pt"` (`torch.Tensor`).
- callback (`Callable`, *optional*):
-                A function that will be called every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-
- Examples:
-
- Returns:
- [`~pipelines.ImagePipelineOutput`] or `tuple`
- """
- device = self._execution_device
-
- do_classifier_free_guidance = guidance_scale > 1.0
-
- if isinstance(image_embeds, list):
- image_embeds = torch.cat(image_embeds, dim=0)
- batch_size = image_embeds.shape[0]
- if isinstance(negative_image_embeds, list):
- negative_image_embeds = torch.cat(negative_image_embeds, dim=0)
-
- if do_classifier_free_guidance:
- image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0)
-
- image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to(
- dtype=self.unet.dtype, device=device
- )
-
- if not isinstance(image, list):
- image = [image]
- if not all(isinstance(i, (PIL.Image.Image, torch.Tensor)) for i in image):
- raise ValueError(
- f"Input is in incorrect format: {[type(i) for i in image]}. Currently, we only support PIL image and pytorch tensor"
- )
-
- image = torch.cat([prepare_image(i, width, height) for i in image], dim=0)
- image = image.to(dtype=image_embeds.dtype, device=device)
-
- latents = self.movq.encode(image)["latents"]
- latents = latents.repeat_interleave(num_images_per_prompt, dim=0)
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
- height, width = downscale_height_and_width(height, width, self.movq_scale_factor)
- latents = self.prepare_latents(
- latents, latent_timestep, batch_size, num_images_per_prompt, image_embeds.dtype, device, generator
- )
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- added_cond_kwargs = {"image_embeds": image_embeds}
- noise_pred = self.unet(
- sample=latent_model_input,
- timestep=t,
- encoder_hidden_states=None,
- added_cond_kwargs=added_cond_kwargs,
- return_dict=False,
- )[0]
-
- if do_classifier_free_guidance:
- noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1)
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- _, variance_pred_text = variance_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
- noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1)
-
- if not (
- hasattr(self.scheduler.config, "variance_type")
- and self.scheduler.config.variance_type in ["learned", "learned_range"]
- ):
- noise_pred, _ = noise_pred.split(latents.shape[1], dim=1)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(
- noise_pred,
- t,
- latents,
- generator=generator,
- )[0]
-
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # post-processing
- image = self.movq.decode(latents, force_not_quantize=True)["sample"]
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if output_type not in ["pt", "np", "pil"]:
- raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}")
-
- if output_type in ["np", "pil"]:
- image = image * 0.5 + 0.5
- image = image.clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
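A small numeric sketch of how `strength` truncates the schedule in `get_timesteps` above: with `num_inference_steps=100` and the default `strength=0.3`, only the last 30 timesteps are run, starting from a partially noised latent:

```python
num_inference_steps, strength = 100, 0.3
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)  # 30
t_start = max(num_inference_steps - init_timestep, 0)                          # 70
print(init_timestep, t_start, num_inference_steps - t_start)                   # 30 70 30
```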
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_karras_ve_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_karras_ve_flax.py
deleted file mode 100644
index 45c0dbddf7efd22df21cc9859e68d62b54aa8609..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_karras_ve_flax.py
+++ /dev/null
@@ -1,237 +0,0 @@
-# Copyright 2023 NVIDIA and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import flax
-import jax.numpy as jnp
-from jax import random
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput
-from .scheduling_utils_flax import FlaxSchedulerMixin
-
-
-@flax.struct.dataclass
-class KarrasVeSchedulerState:
- # setable values
- num_inference_steps: Optional[int] = None
- timesteps: Optional[jnp.ndarray] = None
- schedule: Optional[jnp.ndarray] = None # sigma(t_i)
-
- @classmethod
- def create(cls):
- return cls()
-
-
-@dataclass
-class FlaxKarrasVeOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- derivative (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images):
- Derivative of predicted original image sample (x_0).
- state (`KarrasVeSchedulerState`): the `FlaxKarrasVeScheduler` state data class.
- """
-
- prev_sample: jnp.ndarray
- derivative: jnp.ndarray
- state: KarrasVeSchedulerState
-
-
-class FlaxKarrasVeScheduler(FlaxSchedulerMixin, ConfigMixin):
- """
-    Stochastic sampling from Karras et al. [1] tailored to the Variance Exploding (VE) models [2]. Use Algorithm 2 and
- the VE column of Table 1 from [1] for reference.
-
- [1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models."
- https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. "Score-based generative modeling through stochastic
- differential equations." https://arxiv.org/abs/2011.13456
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details on the parameters, see the original paper's Appendix E.: "Elucidating the Design Space of
- Diffusion-Based Generative Models." https://arxiv.org/abs/2206.00364. The grid search values used to find the
- optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper.
-
- Args:
- sigma_min (`float`): minimum noise magnitude
- sigma_max (`float`): maximum noise magnitude
- s_noise (`float`): the amount of additional noise to counteract loss of detail during sampling.
- A reasonable range is [1.000, 1.011].
- s_churn (`float`): the parameter controlling the overall amount of stochasticity.
- A reasonable range is [0, 100].
- s_min (`float`): the start value of the sigma range where we add noise (enable stochasticity).
- A reasonable range is [0, 10].
- s_max (`float`): the end value of the sigma range where we add noise.
- A reasonable range is [0.2, 80].
- """
-
- @property
- def has_state(self):
- return True
-
- @register_to_config
- def __init__(
- self,
- sigma_min: float = 0.02,
- sigma_max: float = 100,
- s_noise: float = 1.007,
- s_churn: float = 80,
- s_min: float = 0.05,
- s_max: float = 50,
- ):
- pass
-
- def create_state(self):
- return KarrasVeSchedulerState.create()
-
- def set_timesteps(
- self, state: KarrasVeSchedulerState, num_inference_steps: int, shape: Tuple = ()
- ) -> KarrasVeSchedulerState:
- """
- Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- state (`KarrasVeSchedulerState`):
- the `FlaxKarrasVeScheduler` state data class.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
-
- """
- timesteps = jnp.arange(0, num_inference_steps)[::-1].copy()
- schedule = [
- (
- self.config.sigma_max**2
- * (self.config.sigma_min**2 / self.config.sigma_max**2) ** (i / (num_inference_steps - 1))
- )
- for i in timesteps
- ]
-
- return state.replace(
- num_inference_steps=num_inference_steps,
- schedule=jnp.array(schedule, dtype=jnp.float32),
- timesteps=timesteps,
- )
-
- def add_noise_to_input(
- self,
- state: KarrasVeSchedulerState,
- sample: jnp.ndarray,
- sigma: float,
- key: random.KeyArray,
- ) -> Tuple[jnp.ndarray, float]:
- """
- Explicit Langevin-like "churn" step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a
- higher noise level sigma_hat = sigma_i + gamma_i*sigma_i.
-
- TODO Args:
- """
- if self.config.s_min <= sigma <= self.config.s_max:
- gamma = min(self.config.s_churn / state.num_inference_steps, 2**0.5 - 1)
- else:
- gamma = 0
-
- # sample eps ~ N(0, S_noise^2 * I)
- key = random.split(key, num=1)
- eps = self.config.s_noise * random.normal(key=key, shape=sample.shape)
- sigma_hat = sigma + gamma * sigma
- sample_hat = sample + ((sigma_hat**2 - sigma**2) ** 0.5 * eps)
-
- return sample_hat, sigma_hat
-
- def step(
- self,
- state: KarrasVeSchedulerState,
- model_output: jnp.ndarray,
- sigma_hat: float,
- sigma_prev: float,
- sample_hat: jnp.ndarray,
- return_dict: bool = True,
- ) -> Union[FlaxKarrasVeOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- state (`KarrasVeSchedulerState`): the `FlaxKarrasVeScheduler` state data class.
- model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model.
- sigma_hat (`float`): TODO
- sigma_prev (`float`): TODO
- sample_hat (`torch.FloatTensor` or `np.ndarray`): TODO
- return_dict (`bool`): option for returning tuple rather than FlaxKarrasVeOutput class
-
- Returns:
- [`~schedulers.scheduling_karras_ve_flax.FlaxKarrasVeOutput`] or `tuple`: Updated sample in the diffusion
- chain and derivative. [`~schedulers.scheduling_karras_ve_flax.FlaxKarrasVeOutput`] if `return_dict` is
- True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
- """
-
- pred_original_sample = sample_hat + sigma_hat * model_output
- derivative = (sample_hat - pred_original_sample) / sigma_hat
- sample_prev = sample_hat + (sigma_prev - sigma_hat) * derivative
-
- if not return_dict:
- return (sample_prev, derivative, state)
-
- return FlaxKarrasVeOutput(prev_sample=sample_prev, derivative=derivative, state=state)
-
- def step_correct(
- self,
- state: KarrasVeSchedulerState,
- model_output: jnp.ndarray,
- sigma_hat: float,
- sigma_prev: float,
- sample_hat: jnp.ndarray,
- sample_prev: jnp.ndarray,
- derivative: jnp.ndarray,
- return_dict: bool = True,
- ) -> Union[FlaxKarrasVeOutput, Tuple]:
- """
- Correct the predicted sample based on the output model_output of the network. TODO complete description
-
- Args:
- state (`KarrasVeSchedulerState`): the `FlaxKarrasVeScheduler` state data class.
- model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model.
- sigma_hat (`float`): TODO
- sigma_prev (`float`): TODO
- sample_hat (`torch.FloatTensor` or `np.ndarray`): TODO
- sample_prev (`torch.FloatTensor` or `np.ndarray`): TODO
- derivative (`torch.FloatTensor` or `np.ndarray`): TODO
- return_dict (`bool`): option for returning tuple rather than FlaxKarrasVeOutput class
-
- Returns:
- prev_sample (TODO): updated sample in the diffusion chain. derivative (TODO): TODO
-
- """
- pred_original_sample = sample_prev + sigma_prev * model_output
- derivative_corr = (sample_prev - pred_original_sample) / sigma_prev
- sample_prev = sample_hat + (sigma_prev - sigma_hat) * (0.5 * derivative + 0.5 * derivative_corr)
-
- if not return_dict:
- return (sample_prev, derivative, state)
-
- return FlaxKarrasVeOutput(prev_sample=sample_prev, derivative=derivative, state=state)
-
- def add_noise(self, state: KarrasVeSchedulerState, original_samples, noise, timesteps):
- raise NotImplementedError()
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py
deleted file mode 100644
index 961a22675f33442270751b04da290992d57ed23a..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import random
-import unittest
-
-import torch
-
-from diffusers import IFInpaintingSuperResolutionPipeline
-from diffusers.utils import floats_tensor
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
-
-from ..pipeline_params import (
- TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
- TEXT_GUIDED_IMAGE_INPAINTING_PARAMS,
-)
-from ..test_pipelines_common import PipelineTesterMixin
-from . import IFPipelineTesterMixin
-
-
-@skip_mps
-class IFInpaintingSuperResolutionPipelineFastTests(PipelineTesterMixin, IFPipelineTesterMixin, unittest.TestCase):
- pipeline_class = IFInpaintingSuperResolutionPipeline
- params = TEXT_GUIDED_IMAGE_INPAINTING_PARAMS - {"width", "height"}
- batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS.union({"original_image"})
- required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"}
-
- def get_dummy_components(self):
- return self._get_superresolution_dummy_components()
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
-
- image = floats_tensor((1, 3, 16, 16), rng=random.Random(seed)).to(device)
- original_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
- mask_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
-
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "image": image,
- "original_image": original_image,
- "mask_image": mask_image,
- "generator": generator,
- "num_inference_steps": 2,
- "output_type": "numpy",
- }
-
- return inputs
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_attention_forwardGenerator_pass(self):
- self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3)
-
- def test_save_load_optional_components(self):
- self._test_save_load_optional_components()
-
- @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
- def test_save_load_float16(self):
- # Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
- super().test_save_load_float16(expected_max_diff=1e-1)
-
- def test_attention_slicing_forward_pass(self):
- self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
-
- def test_save_load_local(self):
- self._test_save_load_local()
-
- def test_inference_batch_single_identical(self):
- self._test_inference_batch_single_identical(
- expected_max_diff=1e-2,
- )
diff --git a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/layers.py b/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/layers.py
deleted file mode 100644
index b7533c90888a66b14361666ec5ae6b5d05a9eb8f..0000000000000000000000000000000000000000
--- a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/layers.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from fastai.layers import *
-from fastai.torch_core import *
-from torch.nn.parameter import Parameter
-from torch.autograd import Variable
-
-
-# The code below is meant to be merged into fastaiv1 ideally
-
-
-def custom_conv_layer(
- ni: int,
- nf: int,
- ks: int = 3,
- stride: int = 1,
- padding: int = None,
- bias: bool = None,
- is_1d: bool = False,
- norm_type: Optional[NormType] = NormType.Batch,
- use_activ: bool = True,
- leaky: float = None,
- transpose: bool = False,
- init: Callable = nn.init.kaiming_normal_,
- self_attention: bool = False,
- extra_bn: bool = False,
-):
- "Create a sequence of convolutional (`ni` to `nf`), ReLU (if `use_activ`) and batchnorm (if `bn`) layers."
- if padding is None:
- padding = (ks - 1) // 2 if not transpose else 0
- bn = norm_type in (NormType.Batch, NormType.BatchZero) or extra_bn == True
- if bias is None:
- bias = not bn
- conv_func = nn.ConvTranspose2d if transpose else nn.Conv1d if is_1d else nn.Conv2d
- conv = init_default(
- conv_func(ni, nf, kernel_size=ks, bias=bias, stride=stride, padding=padding),
- init,
- )
- if norm_type == NormType.Weight:
- conv = weight_norm(conv)
- elif norm_type == NormType.Spectral:
- conv = spectral_norm(conv)
- layers = [conv]
- if use_activ:
- layers.append(relu(True, leaky=leaky))
- if bn:
- layers.append((nn.BatchNorm1d if is_1d else nn.BatchNorm2d)(nf))
- if self_attention:
- layers.append(SelfAttention(nf))
- return nn.Sequential(*layers)
diff --git a/spaces/Aristo/trafficsign/app.py b/spaces/Aristo/trafficsign/app.py
deleted file mode 100644
index 8bceb18cd893ce3bb4e8faa49ae50b081c78fbbc..0000000000000000000000000000000000000000
--- a/spaces/Aristo/trafficsign/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import gradio as gr
-import PIL
-import numpy
-import matplotlib.pyplot as plt
-#load the trained model to classify sign
-from keras.models import load_model
-model = load_model('traffic_classifier.h5')
-#dictionary to label all traffic signs class.
-classes = { 1:'Speed limit (20km/h)',
- 2:'Speed limit (30km/h)',
- 3:'Speed limit (50km/h)',
- 4:'Speed limit (60km/h)',
- 5:'Speed limit (70km/h)',
- 6:'Speed limit (80km/h)',
- 7:'End of speed limit (80km/h)',
- 8:'Speed limit (100km/h)',
- 9:'Speed limit (120km/h)',
- 10:'Veh > 3.5 tons prohibited',
- 11:'Bumpy road',
- 12:'Slippery road',
- 13:'Road narrows on the right',
- 14:'Road work',
- 15:'Pedestrians',
- 16:'Turn right ahead',
- 17:'Turn left ahead',
- 18:'Ahead only',
- 19:'Go straight or right',
- 20:'Go straight or left',
- 21:'Keep right',
- 22:'Keep left',
- 23:'Roundabout mandatory'}
-#initialise GUI
-def predict(img):
- img = numpy.expand_dims(img, axis=0)
- predict_x = model.predict(img)
- pred = numpy.argmax(predict_x, axis = 1)
- sign = classes[pred[0]+1]
- return sign
-gr.Interface(fn= predict, inputs = gr.inputs.Image(shape = (30,30)), outputs= "textbox" ).launch(share=True, debug = True)
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/diagram/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/diagram/__init__.py
deleted file mode 100644
index 898644755cbbf9a8d4df562663114a7eb7e11fd1..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/diagram/__init__.py
+++ /dev/null
@@ -1,642 +0,0 @@
-import railroad
-import pyparsing
-import typing
-from typing import (
- List,
- NamedTuple,
- Generic,
- TypeVar,
- Dict,
- Callable,
- Set,
- Iterable,
-)
-from jinja2 import Template
-from io import StringIO
-import inspect
-
-
-jinja2_template_source = """\
-
-
-
- {% if not head %}
-
- {% else %}
- {{ head | safe }}
- {% endif %}
-
-
-{{ body | safe }}
-{% for diagram in diagrams %}
-
-
{{ diagram.title }}
-
{{ diagram.text }}
-
- {{ diagram.svg }}
-
-
-{% endfor %}
-
-
-"""
-
-template = Template(jinja2_template_source)
-
-# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
-NamedDiagram = NamedTuple(
- "NamedDiagram",
- [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)],
-)
-"""
-A simple structure for associating a name with a railroad diagram
-"""
-
-T = TypeVar("T")
-
-
-class EachItem(railroad.Group):
- """
- Custom railroad item to compose a:
- - Group containing a
- - OneOrMore containing a
- - Choice of the elements in the Each
- with the group label indicating that all must be matched
- """
-
- all_label = "[ALL]"
-
- def __init__(self, *items):
- choice_item = railroad.Choice(len(items) - 1, *items)
- one_or_more_item = railroad.OneOrMore(item=choice_item)
- super().__init__(one_or_more_item, label=self.all_label)
-
-
-class AnnotatedItem(railroad.Group):
- """
- Simple subclass of Group that creates an annotation label
- """
-
- def __init__(self, label: str, item):
- super().__init__(item=item, label="[{}]".format(label) if label else label)
-
-
-class EditablePartial(Generic[T]):
- """
- Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been
- constructed.
- """
-
- # We need this here because the railroad constructors actually transform the data, so can't be called until the
- # entire tree is assembled
-
- def __init__(self, func: Callable[..., T], args: list, kwargs: dict):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- @classmethod
- def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]":
- """
- If you call this function in the same way that you would call the constructor, it will store the arguments
- as you expect. For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3)
- """
- return EditablePartial(func=func, args=list(args), kwargs=kwargs)
-
- @property
- def name(self):
- return self.kwargs["name"]
-
- def __call__(self) -> T:
- """
- Evaluate the partial and return the result
- """
- args = self.args.copy()
- kwargs = self.kwargs.copy()
-
- # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g.
- # args=['list', 'of', 'things'])
- arg_spec = inspect.getfullargspec(self.func)
- if arg_spec.varargs in self.kwargs:
- args += kwargs.pop(arg_spec.varargs)
-
- return self.func(*args, **kwargs)
-
-
-def railroad_to_html(diagrams: List[NamedDiagram], **kwargs) -> str:
- """
- Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams
- :params kwargs: kwargs to be passed in to the template
- """
- data = []
- for diagram in diagrams:
- if diagram.diagram is None:
- continue
- io = StringIO()
- diagram.diagram.writeSvg(io.write)
- title = diagram.name
- if diagram.index == 0:
- title += " (root)"
- data.append({"title": title, "text": "", "svg": io.getvalue()})
-
- return template.render(diagrams=data, **kwargs)
-
-
-def resolve_partial(partial: "EditablePartial[T]") -> T:
- """
- Recursively resolves a collection of Partials into whatever type they are
- """
- if isinstance(partial, EditablePartial):
- partial.args = resolve_partial(partial.args)
- partial.kwargs = resolve_partial(partial.kwargs)
- return partial()
- elif isinstance(partial, list):
- return [resolve_partial(x) for x in partial]
- elif isinstance(partial, dict):
- return {key: resolve_partial(x) for key, x in partial.items()}
- else:
- return partial
-
-
-def to_railroad(
- element: pyparsing.ParserElement,
- diagram_kwargs: typing.Optional[dict] = None,
- vertical: int = 3,
- show_results_names: bool = False,
- show_groups: bool = False,
-) -> List[NamedDiagram]:
- """
- Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram
- creation if you want to access the Railroad tree before it is converted to HTML
- :param element: base element of the parser being diagrammed
- :param diagram_kwargs: kwargs to pass to the Diagram() constructor
- :param vertical: (optional) - int - limit at which number of alternatives should be
- shown vertically instead of horizontally
- :param show_results_names - bool to indicate whether results name annotations should be
- included in the diagram
- :param show_groups - bool to indicate whether groups should be highlighted with an unlabeled
- surrounding box
- """
- # Convert the whole tree underneath the root
- lookup = ConverterState(diagram_kwargs=diagram_kwargs or {})
- _to_diagram_element(
- element,
- lookup=lookup,
- parent=None,
- vertical=vertical,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- root_id = id(element)
- # Convert the root if it hasn't been already
- if root_id in lookup:
- if not element.customName:
- lookup[root_id].name = ""
- lookup[root_id].mark_for_extraction(root_id, lookup, force=True)
-
- # Now that we're finished, we can convert from intermediate structures into Railroad elements
- diags = list(lookup.diagrams.values())
- if len(diags) > 1:
- # collapse out duplicate diags with the same name
- seen = set()
- deduped_diags = []
- for d in diags:
- # don't extract SkipTo elements, they are uninformative as subdiagrams
- if d.name == "...":
- continue
- if d.name is not None and d.name not in seen:
- seen.add(d.name)
- deduped_diags.append(d)
- resolved = [resolve_partial(partial) for partial in deduped_diags]
- else:
- # special case - if just one diagram, always display it, even if
- # it has no name
- resolved = [resolve_partial(partial) for partial in diags]
- return sorted(resolved, key=lambda diag: diag.index)
-
-
-def _should_vertical(
- specification: int, exprs: Iterable[pyparsing.ParserElement]
-) -> bool:
- """
- Returns true if we should return a vertical list of elements
- """
- if specification is None:
- return False
- else:
- return len(_visible_exprs(exprs)) >= specification
-
-
-class ElementState:
- """
- State recorded for an individual pyparsing Element
- """
-
- # Note: this should be a dataclass, but we have to support Python 3.5
- def __init__(
- self,
- element: pyparsing.ParserElement,
- converted: EditablePartial,
- parent: EditablePartial,
- number: int,
- name: str = None,
- parent_index: typing.Optional[int] = None,
- ):
- #: The pyparsing element that this represents
- self.element: pyparsing.ParserElement = element
- #: The name of the element
- self.name: typing.Optional[str] = name
- #: The output Railroad element in an unconverted state
- self.converted: EditablePartial = converted
- #: The parent Railroad element, which we store so that we can extract this if it's duplicated
- self.parent: EditablePartial = parent
- #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
- self.number: int = number
- #: The index of this inside its parent
- self.parent_index: typing.Optional[int] = parent_index
- #: If true, we should extract this out into a subdiagram
- self.extract: bool = False
- #: If true, all of this element's children have been filled out
- self.complete: bool = False
-
- def mark_for_extraction(
- self, el_id: int, state: "ConverterState", name: str = None, force: bool = False
- ):
- """
- Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram
- :param el_id: id of the element
- :param state: element/diagram state tracker
- :param name: name to use for this element's text
- :param force: If true, force extraction now, regardless of the state of this. Only useful for extracting the
- root element when we know we're finished
- """
- self.extract = True
-
- # Set the name
- if not self.name:
- if name:
- # Allow forcing a custom name
- self.name = name
- elif self.element.customName:
- self.name = self.element.customName
- else:
- self.name = ""
-
- # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children
- # to be added
- # Also, if this is just a string literal etc, don't bother extracting it
- if force or (self.complete and _worth_extracting(self.element)):
- state.extract_into_diagram(el_id)
-
-
-class ConverterState:
- """
- Stores some state that persists between recursions into the element tree
- """
-
- def __init__(self, diagram_kwargs: typing.Optional[dict] = None):
- #: A dictionary mapping ParserElements to state relating to them
- self._element_diagram_states: Dict[int, ElementState] = {}
- #: A dictionary mapping ParserElement IDs to subdiagrams generated from them
- self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {}
- #: The index of the next unnamed element
- self.unnamed_index: int = 1
- #: The index of the next element. This is used for sorting
- self.index: int = 0
- #: Shared kwargs that are used to customize the construction of diagrams
- self.diagram_kwargs: dict = diagram_kwargs or {}
- self.extracted_diagram_names: Set[str] = set()
-
- def __setitem__(self, key: int, value: ElementState):
- self._element_diagram_states[key] = value
-
- def __getitem__(self, key: int) -> ElementState:
- return self._element_diagram_states[key]
-
- def __delitem__(self, key: int):
- del self._element_diagram_states[key]
-
- def __contains__(self, key: int):
- return key in self._element_diagram_states
-
- def generate_unnamed(self) -> int:
- """
- Generate a number used in the name of an otherwise unnamed diagram
- """
- self.unnamed_index += 1
- return self.unnamed_index
-
- def generate_index(self) -> int:
- """
- Generate a number used to index a diagram
- """
- self.index += 1
- return self.index
-
- def extract_into_diagram(self, el_id: int):
- """
- Used when we encounter the same token twice in the same tree. When this
- happens, we replace all instances of that token with a terminal, and
- create a new subdiagram for the token
- """
- position = self[el_id]
-
- # Replace the original definition of this element with a regular block
- if position.parent:
- ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name)
- if "item" in position.parent.kwargs:
- position.parent.kwargs["item"] = ret
- elif "items" in position.parent.kwargs:
- position.parent.kwargs["items"][position.parent_index] = ret
-
- # If the element we're extracting is a group, skip to its content but keep the title
- if position.converted.func == railroad.Group:
- content = position.converted.kwargs["item"]
- else:
- content = position.converted
-
- self.diagrams[el_id] = EditablePartial.from_call(
- NamedDiagram,
- name=position.name,
- diagram=EditablePartial.from_call(
- railroad.Diagram, content, **self.diagram_kwargs
- ),
- index=position.number,
- )
-
- del self[el_id]
-
-
-def _worth_extracting(element: pyparsing.ParserElement) -> bool:
- """
- Returns true if this element is worth having its own sub-diagram. Simply, if any of its children
- themselves have children, then its complex enough to extract
- """
- children = element.recurse()
- return any(child.recurse() for child in children)
-
-
-def _apply_diagram_item_enhancements(fn):
- """
- decorator to ensure enhancements to a diagram item (such as results name annotations)
- get applied on return from _to_diagram_element (we do this since there are several
- returns in _to_diagram_element)
- """
-
- def _inner(
- element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
- lookup: ConverterState = None,
- vertical: int = None,
- index: int = 0,
- name_hint: str = None,
- show_results_names: bool = False,
- show_groups: bool = False,
- ) -> typing.Optional[EditablePartial]:
-
- ret = fn(
- element,
- parent,
- lookup,
- vertical,
- index,
- name_hint,
- show_results_names,
- show_groups,
- )
-
- # apply annotation for results name, if present
- if show_results_names and ret is not None:
- element_results_name = element.resultsName
- if element_results_name:
- # add "*" to indicate if this is a "list all results" name
- element_results_name += "" if element.modalResults else "*"
- ret = EditablePartial.from_call(
- railroad.Group, item=ret, label=element_results_name
- )
-
- return ret
-
- return _inner
-
-
-def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]):
- non_diagramming_exprs = (
- pyparsing.ParseElementEnhance,
- pyparsing.PositionToken,
- pyparsing.And._ErrorStop,
- )
- return [
- e
- for e in exprs
- if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs))
- ]
-
-
-@_apply_diagram_item_enhancements
-def _to_diagram_element(
- element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
- lookup: ConverterState = None,
- vertical: int = None,
- index: int = 0,
- name_hint: str = None,
- show_results_names: bool = False,
- show_groups: bool = False,
-) -> typing.Optional[EditablePartial]:
- """
- Recursively converts a PyParsing Element to a railroad Element
- :param lookup: The shared converter state that keeps track of useful things
- :param index: The index of this element within the parent
- :param parent: The parent of this element in the output tree
- :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default),
- it sets the threshold of the number of items before we go vertical. If True, always go vertical, if False, never
- do so
- :param name_hint: If provided, this will override the generated name
- :param show_results_names: bool flag indicating whether to add annotations for results names
- :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed
- :param show_groups: bool flag indicating whether to show groups using bounding box
- """
- exprs = element.recurse()
- name = name_hint or element.customName or element.__class__.__name__
-
- # Python's id() is used to provide a unique identifier for elements
- el_id = id(element)
-
- element_results_name = element.resultsName
-
- # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram
- if not element.customName:
- if isinstance(
- element,
- (
- # pyparsing.TokenConverter,
- # pyparsing.Forward,
- pyparsing.Located,
- ),
- ):
- # However, if this element has a useful custom name, and its child does not, we can pass it on to the child
- if exprs:
- if not exprs[0].customName:
- propagated_name = name
- else:
- propagated_name = None
-
- return _to_diagram_element(
- element.expr,
- parent=parent,
- lookup=lookup,
- vertical=vertical,
- index=index,
- name_hint=propagated_name,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- # If the element isn't worth extracting, we always treat it as the first time we say it
- if _worth_extracting(element):
- if el_id in lookup:
- # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate,
- # so we have to extract it into a new diagram.
- looked_up = lookup[el_id]
- looked_up.mark_for_extraction(el_id, lookup, name=name_hint)
- ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name)
- return ret
-
- elif el_id in lookup.diagrams:
- # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we
- # just put in a marker element that refers to the sub-diagram
- ret = EditablePartial.from_call(
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
- )
- return ret
-
- # Recursively convert child elements
- # Here we find the most relevant Railroad element for matching pyparsing Element
- # We use ``items=[]`` here to hold the place for where the child elements will go once created
- if isinstance(element, pyparsing.And):
- # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat
- # (all will have the same name, and resultsName)
- if not exprs:
- return None
- if len(set((e.name, e.resultsName) for e in exprs)) == 1:
- ret = EditablePartial.from_call(
- railroad.OneOrMore, item="", repeat=str(len(exprs))
- )
- elif _should_vertical(vertical, exprs):
- ret = EditablePartial.from_call(railroad.Stack, items=[])
- else:
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
- elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)):
- if not exprs:
- return None
- if _should_vertical(vertical, exprs):
- ret = EditablePartial.from_call(railroad.Choice, 0, items=[])
- else:
- ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[])
- elif isinstance(element, pyparsing.Each):
- if not exprs:
- return None
- ret = EditablePartial.from_call(EachItem, items=[])
- elif isinstance(element, pyparsing.NotAny):
- ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="")
- elif isinstance(element, pyparsing.FollowedBy):
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", item="")
- elif isinstance(element, pyparsing.PrecededBy):
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="")
- elif isinstance(element, pyparsing.Group):
- if show_groups:
- ret = EditablePartial.from_call(AnnotatedItem, label="", item="")
- else:
- ret = EditablePartial.from_call(railroad.Group, label="", item="")
- elif isinstance(element, pyparsing.TokenConverter):
- ret = EditablePartial.from_call(
- AnnotatedItem, label=type(element).__name__.lower(), item=""
- )
- elif isinstance(element, pyparsing.Opt):
- ret = EditablePartial.from_call(railroad.Optional, item="")
- elif isinstance(element, pyparsing.OneOrMore):
- ret = EditablePartial.from_call(railroad.OneOrMore, item="")
- elif isinstance(element, pyparsing.ZeroOrMore):
- ret = EditablePartial.from_call(railroad.ZeroOrMore, item="")
- elif isinstance(element, pyparsing.Group):
- ret = EditablePartial.from_call(
- railroad.Group, item=None, label=element_results_name
- )
- elif isinstance(element, pyparsing.Empty) and not element.customName:
- # Skip unnamed "Empty" elements
- ret = None
- elif len(exprs) > 1:
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
- elif len(exprs) > 0 and not element_results_name:
- ret = EditablePartial.from_call(railroad.Group, item="", label=name)
- else:
- terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName)
- ret = terminal
-
- if ret is None:
- return
-
- # Indicate this element's position in the tree so we can extract it if necessary
- lookup[el_id] = ElementState(
- element=element,
- converted=ret,
- parent=parent,
- parent_index=index,
- number=lookup.generate_index(),
- )
- if element.customName:
- lookup[el_id].mark_for_extraction(el_id, lookup, element.customName)
-
- i = 0
- for expr in exprs:
- # Add a placeholder index in case we have to extract the child before we even add it to the parent
- if "items" in ret.kwargs:
- ret.kwargs["items"].insert(i, None)
-
- item = _to_diagram_element(
- expr,
- parent=ret,
- lookup=lookup,
- vertical=vertical,
- index=i,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- # Some elements don't need to be shown in the diagram
- if item is not None:
- if "item" in ret.kwargs:
- ret.kwargs["item"] = item
- elif "items" in ret.kwargs:
- # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal
- ret.kwargs["items"][i] = item
- i += 1
- elif "items" in ret.kwargs:
- # If we're supposed to skip this element, remove it from the parent
- del ret.kwargs["items"][i]
-
- # If all this items children are none, skip this item
- if ret and (
- ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0)
- or ("item" in ret.kwargs and ret.kwargs["item"] is None)
- ):
- ret = EditablePartial.from_call(railroad.Terminal, name)
-
- # Mark this element as "complete", ie it has all of its children
- if el_id in lookup:
- lookup[el_id].complete = True
-
- if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete:
- lookup.extract_into_diagram(el_id)
- if ret is not None:
- ret = EditablePartial.from_call(
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
- )
-
- return ret
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py
deleted file mode 100644
index 4b01e9007c2578a7b5ae555c926cc06c8a3010f9..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from typing import Any
-import pydoc
-from fvcore.common.registry import Registry # for backward compatibility.
-
-"""
-``Registry`` and `locate` provide ways to map a string (typically found
-in config files) to callable objects.
-"""
-
-__all__ = ["Registry", "locate"]
-
-
-def _convert_target_to_string(t: Any) -> str:
- """
- Inverse of ``locate()``.
-
- Args:
- t: any object with ``__module__`` and ``__qualname__``
- """
- module, qualname = t.__module__, t.__qualname__
-
- # Compress the path to this object, e.g. ``module.submodule._impl.class``
- # may become ``module.submodule.class``, if the later also resolves to the same
- # object. This simplifies the string, and also is less affected by moving the
- # class implementation.
- module_parts = module.split(".")
- for k in range(1, len(module_parts)):
- prefix = ".".join(module_parts[:k])
- candidate = f"{prefix}.{qualname}"
- try:
- if locate(candidate) is t:
- return candidate
- except ImportError:
- pass
- return f"{module}.{qualname}"
-
-
-def locate(name: str) -> Any:
- """
- Locate and return an object ``x`` using an input string ``{x.__module__}.{x.__qualname__}``,
- such as "module.submodule.class_name".
-
- Raise Exception if it cannot be found.
- """
- obj = pydoc.locate(name)
-
- # Some cases (e.g. torch.optim.sgd.SGD) not handled correctly
- # by pydoc.locate. Try a private function from hydra.
- if obj is None:
- try:
- # from hydra.utils import get_method - will print many errors
- from hydra.utils import _locate
- except ImportError as e:
- raise ImportError(f"Cannot dynamically locate object {name}!") from e
- else:
- obj = _locate(name) # it raises if fails
-
- return obj
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_rpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_rpn.py
deleted file mode 100644
index f14faae56e580d3d4762d31273b9f65c5774346b..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_rpn.py
+++ /dev/null
@@ -1,262 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-import torch
-
-from detectron2.config import get_cfg
-from detectron2.export import scripting_with_instances
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.backbone import build_backbone
-from detectron2.modeling.proposal_generator import RPN, build_proposal_generator
-from detectron2.modeling.proposal_generator.proposal_utils import (
- add_ground_truth_to_proposals,
- find_top_rpn_proposals,
-)
-from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes
-from detectron2.utils.events import EventStorage
-
-logger = logging.getLogger(__name__)
-
-
-class RPNTest(unittest.TestCase):
- def get_gt_and_features(self):
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- image_shape = (15, 15)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- gt_boxes = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32)
- gt_instances = Instances(image_shape)
- gt_instances.gt_boxes = Boxes(gt_boxes)
- return (gt_instances, features, images, image_sizes)
-
- def test_rpn(self):
- torch.manual_seed(121)
- cfg = get_cfg()
- backbone = build_backbone(cfg)
- proposal_generator = RPN(cfg, backbone.output_shape())
- (gt_instances, features, images, image_sizes) = self.get_gt_and_features()
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(
- images, features, [gt_instances[0], gt_instances[1]]
- )
-
- expected_losses = {
- "loss_rpn_cls": torch.tensor(0.08011703193),
- "loss_rpn_loc": torch.tensor(0.101470276),
- }
- for name in expected_losses.keys():
- err_msg = "proposal_losses[{}] = {}, expected losses = {}".format(
- name, proposal_losses[name], expected_losses[name]
- )
- self.assertTrue(torch.allclose(proposal_losses[name], expected_losses[name]), err_msg)
-
- self.assertEqual(len(proposals), len(image_sizes))
- for proposal, im_size in zip(proposals, image_sizes):
- self.assertEqual(proposal.image_size, im_size)
-
- expected_proposal_box = torch.tensor([[0, 0, 10, 10], [7.2702, 0, 10, 10]])
- expected_objectness_logit = torch.tensor([0.1596, -0.0007])
- self.assertTrue(
- torch.allclose(proposals[0].proposal_boxes.tensor, expected_proposal_box, atol=1e-4)
- )
- self.assertTrue(
- torch.allclose(proposals[0].objectness_logits, expected_objectness_logit, atol=1e-4)
- )
-
- def verify_rpn(self, conv_dims, expected_conv_dims):
- torch.manual_seed(121)
- cfg = get_cfg()
- cfg.MODEL.RPN.CONV_DIMS = conv_dims
- backbone = build_backbone(cfg)
- proposal_generator = RPN(cfg, backbone.output_shape())
- for k, conv in enumerate(proposal_generator.rpn_head.conv):
- self.assertEqual(expected_conv_dims[k], conv.out_channels)
- return proposal_generator
-
- def test_rpn_larger_num_convs(self):
- conv_dims = [64, 64, 64, 64, 64]
- proposal_generator = self.verify_rpn(conv_dims, conv_dims)
- (gt_instances, features, images, image_sizes) = self.get_gt_and_features()
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(
- images, features, [gt_instances[0], gt_instances[1]]
- )
- expected_losses = {
- "loss_rpn_cls": torch.tensor(0.08122821152),
- "loss_rpn_loc": torch.tensor(0.10064548254),
- }
- for name in expected_losses.keys():
- err_msg = "proposal_losses[{}] = {}, expected losses = {}".format(
- name, proposal_losses[name], expected_losses[name]
- )
- self.assertTrue(torch.allclose(proposal_losses[name], expected_losses[name]), err_msg)
-
- def test_rpn_conv_dims_not_set(self):
- conv_dims = [-1, -1, -1]
- expected_conv_dims = [1024, 1024, 1024]
- self.verify_rpn(conv_dims, expected_conv_dims)
-
- def test_rpn_scriptability(self):
- cfg = get_cfg()
- proposal_generator = RPN(cfg, {"res4": ShapeSpec(channels=1024, stride=16)}).eval()
- num_images = 2
- images_tensor = torch.rand(num_images, 30, 40)
- image_sizes = [(32, 32), (30, 40)]
- images = ImageList(images_tensor, image_sizes)
- features = {"res4": torch.rand(num_images, 1024, 1, 2)}
-
- fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor}
- proposal_generator_ts = scripting_with_instances(proposal_generator, fields)
-
- proposals, _ = proposal_generator(images, features)
- proposals_ts, _ = proposal_generator_ts(images, features)
-
- for proposal, proposal_ts in zip(proposals, proposals_ts):
- self.assertEqual(proposal.image_size, proposal_ts.image_size)
- self.assertTrue(
- torch.equal(proposal.proposal_boxes.tensor, proposal_ts.proposal_boxes.tensor)
- )
- self.assertTrue(torch.equal(proposal.objectness_logits, proposal_ts.objectness_logits))
-
- def test_rrpn(self):
- torch.manual_seed(121)
- cfg = get_cfg()
- cfg.MODEL.PROPOSAL_GENERATOR.NAME = "RRPN"
- cfg.MODEL.ANCHOR_GENERATOR.NAME = "RotatedAnchorGenerator"
- cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
- cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1]]
- cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [[0, 60]]
- cfg.MODEL.RPN.BBOX_REG_WEIGHTS = (1, 1, 1, 1, 1)
- cfg.MODEL.RPN.HEAD_NAME = "StandardRPNHead"
- backbone = build_backbone(cfg)
- proposal_generator = build_proposal_generator(cfg, backbone.output_shape())
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- image_shape = (15, 15)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- gt_boxes = torch.tensor([[2, 2, 2, 2, 0], [4, 4, 4, 4, 0]], dtype=torch.float32)
- gt_instances = Instances(image_shape)
- gt_instances.gt_boxes = RotatedBoxes(gt_boxes)
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(
- images, features, [gt_instances[0], gt_instances[1]]
- )
-
- expected_losses = {
- "loss_rpn_cls": torch.tensor(0.04291602224),
- "loss_rpn_loc": torch.tensor(0.145077362),
- }
- for name in expected_losses.keys():
- err_msg = "proposal_losses[{}] = {}, expected losses = {}".format(
- name, proposal_losses[name], expected_losses[name]
- )
- self.assertTrue(torch.allclose(proposal_losses[name], expected_losses[name]), err_msg)
-
- expected_proposal_box = torch.tensor(
- [
- [-1.77999556, 0.78155339, 68.04367828, 14.78156471, 60.59333801],
- [13.82740974, -1.50282836, 34.67269897, 29.19676590, -3.81942749],
- [8.10392570, -0.99071521, 145.39100647, 32.13126373, 3.67242432],
- [5.00000000, 4.57370186, 10.00000000, 9.14740372, 0.89196777],
- ]
- )
-
- expected_objectness_logit = torch.tensor([0.10924313, 0.09881870, 0.07649877, 0.05858029])
-
- torch.set_printoptions(precision=8, sci_mode=False)
-
- self.assertEqual(len(proposals), len(image_sizes))
-
- proposal = proposals[0]
- # It seems that there's some randomness in the result across different machines:
- # This test can be run on a local machine for 100 times with exactly the same result,
- # However, a different machine might produce slightly different results,
- # thus the atol here.
- err_msg = "computed proposal boxes = {}, expected {}".format(
- proposal.proposal_boxes.tensor, expected_proposal_box
- )
- self.assertTrue(
- torch.allclose(proposal.proposal_boxes.tensor[:4], expected_proposal_box, atol=1e-5),
- err_msg,
- )
-
- err_msg = "computed objectness logits = {}, expected {}".format(
- proposal.objectness_logits, expected_objectness_logit
- )
- self.assertTrue(
- torch.allclose(proposal.objectness_logits[:4], expected_objectness_logit, atol=1e-5),
- err_msg,
- )
-
- def test_find_rpn_proposals_inf(self):
- N, Hi, Wi, A = 3, 3, 3, 3
- proposals = [torch.rand(N, Hi * Wi * A, 4)]
- pred_logits = [torch.rand(N, Hi * Wi * A)]
- pred_logits[0][1][3:5].fill_(float("inf"))
- find_top_rpn_proposals(proposals, pred_logits, [(10, 10)], 0.5, 1000, 1000, 0, False)
-
- def test_find_rpn_proposals_tracing(self):
- N, Hi, Wi, A = 3, 50, 50, 9
- proposal = torch.rand(N, Hi * Wi * A, 4)
- pred_logit = torch.rand(N, Hi * Wi * A)
-
- def func(proposal, logit, image_size):
- r = find_top_rpn_proposals(
- [proposal], [logit], [image_size], 0.7, 1000, 1000, 0, False
- )[0]
- size = r.image_size
- if not isinstance(size, torch.Tensor):
- size = torch.tensor(size)
- return (size, r.proposal_boxes.tensor, r.objectness_logits)
-
- other_inputs = []
- # test that it generalizes to other shapes
- for Hi, Wi, shp in [(30, 30, 60), (10, 10, 800)]:
- other_inputs.append(
- (
- torch.rand(N, Hi * Wi * A, 4),
- torch.rand(N, Hi * Wi * A),
- torch.tensor([shp, shp]),
- )
- )
- torch.jit.trace(
- func, (proposal, pred_logit, torch.tensor([100, 100])), check_inputs=other_inputs
- )
-
- def test_append_gt_to_proposal(self):
- proposals = Instances(
- (10, 10),
- **{
- "proposal_boxes": Boxes(torch.empty((0, 4))),
- "objectness_logits": torch.tensor([]),
- "custom_attribute": torch.tensor([]),
- }
- )
- gt_boxes = Boxes(torch.tensor([[0, 0, 1, 1]]))
-
- self.assertRaises(AssertionError, add_ground_truth_to_proposals, [gt_boxes], [proposals])
-
- gt_instances = Instances((10, 10))
- gt_instances.gt_boxes = gt_boxes
-
- self.assertRaises(
- AssertionError, add_ground_truth_to_proposals, [gt_instances], [proposals]
- )
-
- gt_instances.custom_attribute = torch.tensor([1])
- gt_instances.custom_attribute2 = torch.tensor([1])
- new_proposals = add_ground_truth_to_proposals([gt_instances], [proposals])[0]
-
- self.assertEqual(new_proposals.custom_attribute[0], 1)
- # new proposals should only include the attributes in proposals
- self.assertRaises(AttributeError, lambda: new_proposals.custom_attribute2)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert_gen.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert_gen.py
deleted file mode 100644
index 52220d81da097772277eb3bd49f0d3db65884523..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert_gen.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import torch
-from torch.utils.data import DataLoader
-from multiprocessing import Pool
-import commons
-import utils
-from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate
-from tqdm import tqdm
-import warnings
-
-from text import cleaned_text_to_sequence, get_bert
-
-config_path = 'configs/config.json'
-hps = utils.get_hparams_from_file(config_path)
-
-def process_line(line):
- _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|")
- phone = phones.split(" ")
- tone = [int(i) for i in tone.split(" ")]
- word2ph = [int(i) for i in word2ph.split(" ")]
- w2pho = [i for i in word2ph]
- word2ph = [i for i in word2ph]
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- wav_path = f'{_id}'
-
- bert_path = wav_path.replace(".wav", ".bert.pt")
-
- try:
- bert = torch.load(bert_path)
- assert bert.shape[-1] == len(phone)
- except:
- bert = get_bert(text, word2ph, language_str)
- assert bert.shape[-1] == len(phone)
- torch.save(bert, bert_path)
-
-
-if __name__ == '__main__':
- lines = []
- with open(hps.data.training_files, encoding='utf-8' ) as f:
- lines.extend(f.readlines())
-
- with open(hps.data.validation_files, encoding='utf-8' ) as f:
- lines.extend(f.readlines())
-
- with Pool(processes=6) as pool: #P40 24GB suitable config,if coom,please decrease the processess number.
- for _ in tqdm(pool.imap_unordered(process_line, lines)):
- pass
diff --git a/spaces/Bajr/softly/README.md b/spaces/Bajr/softly/README.md
deleted file mode 100644
index 05bc77ff2110f21e508f8010a3d40cee35a05ace..0000000000000000000000000000000000000000
--- a/spaces/Bajr/softly/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Yummy Research
-emoji: 🍦
-colorFrom: red
-colorTo: blue
-sdk: docker
-pinned: false
-duplicated_from: Bajr/soft
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Caramelo Crush Soda Saga Juego Gratis Para Pc.md b/spaces/Benson/text-generation/Examples/Caramelo Crush Soda Saga Juego Gratis Para Pc.md
deleted file mode 100644
index 2a9260035906288adf75df79e0c0bf1c02241d68..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Caramelo Crush Soda Saga Juego Gratis Para Pc.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Candy Crush Soda Saga: Cómo descargar y jugar este divertido juego de puzzle en su PC
-
Si te gusta combinar dulces y resolver puzzles, es posible que haya oído hablar de Candy Crush Soda Saga, uno de los juegos más populares del mundo. Este juego es una secuela de la legendaria saga Candy Crush, y ofrece más diversión y desafíos con nuevos dulces, modos y características. En este artículo, te mostraremos cómo descargar y jugar este juego en tu PC de forma gratuita, utilizando la Epic Games Store. También le diremos por qué jugar juegos de puzzle en su PC es bueno para su cerebro y su estado de ánimo, y le daremos algunos consejos y trucos para aprovechar al máximo su experiencia de juego.
-
¿Qué es Candy Crush Soda Saga?
-
Candy Crush Soda Saga es un juego de puzzle match-3 desarrollado por King, una empresa líder en juegos casuales. El juego fue lanzado en 2014 como un spin-off de Candy Crush Saga, que tiene más de mil millones de descargas en todo el mundo. El juego sigue las aventuras de Kimmy, que está buscando a su hermana Tiffi en un mundo lleno de dulces. En el camino, se encuentra con nuevos personajes y se enfrenta a nuevos desafíos.
El juego tiene más de 10.000 niveles, cada uno con un objetivo y diseño diferentes. Tienes que combinar tres o más caramelos del mismo color para eliminarlos del tablero y crear dulces especiales que tienen efectos adicionales. También tienes que lidiar con varios obstáculos, como hielo, panal, mermelada, chocolate y botellas de refresco. El juego tiene diferentes modos, como Soda, Frosting, Honeycomb, Jam, Bubblegum y más. Cada modo tiene sus propias reglas y estrategias.
-
El juego también tiene muchas características que lo hacen más divertido y atractivo. Puedes jugar con tus amigos en línea y competir por puntuaciones altas. También puedes unirte a equipos y cooperar con otros jugadores en eventos y desafíos. También puedes ganar recompensas y refuerzos que te ayudan en niveles difíciles. El juego también tiene actualizaciones mensuales que traen nuevo contenido y sorpresas.
-
¿Por qué jugar Candy Crush Soda Saga en su PC?
-
-
-
Puedes disfrutar de mejores gráficos y calidad de sonido en una pantalla más grande.
-
Puedes usar el ratón y el teclado para controlar el juego más fácilmente.
-
Puede ahorrar su vida útil de la batería y el uso de datos en su teléfono o tableta.
-
Puedes evitar distracciones de notificaciones y llamadas mientras juegas.
-
-
Una de las mejores maneras de jugar Candy Crush Soda Saga en tu PC es usar la Epic Games Store, una plataforma de distribución digital que te permite descargar juegos a tu PC a través del Lanzador de Epic Games. La Epic Games Store tiene muchas ventajas, como:
-
-
Puedes acceder a cientos de juegos de varios géneros y categorías.
-
Puedes obtener juegos gratis cada semana.
-
Puedes disfrutar de ofertas y descuentos exclusivos.
-
Puedes apoyar a los desarrolladores dándoles una mayor proporción de los ingresos.
-
Puedes conectarte con tus amigos y otros jugadores a través de chat y funciones sociales.
-
-
¿Cómo descargar y jugar Candy Crush Soda Saga en su PC?
-
Descargar y jugar Candy Crush Soda Saga en tu PC es muy fácil. Solo sigue estos pasos:
-
-
Instalar el lanzador de juegos épicos. Puede descargarlo desde [5](https:// epicgames.com/en-US/download) y ejecutar el instalador. Necesitarás crear una cuenta o iniciar sesión con la existente.
-
Buscar Candy Crush Soda Saga en la Epic Games Store. Puedes encontrarlo en la sección de Juegos Gratis o usar la barra de búsqueda.
-
Haga clic en el botón Obtener y confirme su pedido. El juego se agregará a su biblioteca.
-
Vaya a su biblioteca y haga clic en el botón Instalar junto al juego. Elija una ubicación para los archivos del juego y espere a que se complete la descarga y la instalación.
-
Inicie el juego desde su biblioteca o desde el acceso directo del escritorio. También puede ajustar la configuración y las preferencias del juego desde el lanzador.
-
-
-
-
-
Requisitos del sistema
-
Mínimo
-
Recomendado
-
-
-
Sistema operativo
-
Windows 7 o superior
-
Windows 10
-
-
-
Procesador
-
Intel Core i3 o equivalente
-
Intel Core i5 o equivalente
-
-
-
Memoria
-
4 GB de RAM
-
8 GB de RAM
-
-
-
Gráficos
-
Intel HD Graphics 4000 o superior
-
NVIDIA GeForce GTX 660 o superior
-
-
-
Almacenamiento
-
500 MB de espacio disponible
-
1 GB de espacio disponible
-
-
-
Conexión a Internet
-
Conexión a Internet de banda ancha
-
Conexión a Internet de banda ancha
-
-
Consejos y trucos para disfrutar de Candy Crush Soda Saga más
-
Candy Crush Soda Saga es un juego divertido y adictivo, pero también puede ser desafiante y frustrante a veces. Aquí hay algunos consejos y trucos para ayudarle a disfrutar del juego más:
-
-
-
Planifique sus movimientos. Trate de buscar partidos que pueden crear dulces especiales, como rayas, envueltos, pescado o caramelos para colorear. Estos pueden ayudarte a eliminar más caramelos y obstáculos en un solo movimiento.
-
Use boosters sabiamente. Los boosters son elementos que pueden darle una ventaja en el juego, como movimientos adicionales, martillos de piruleta, interruptores libres y más. Puedes ganarlos completando niveles, eventos o desafíos, o comprándolos con dinero real. Sin embargo, no confíes demasiado en ellos, ya que son limitados y pueden agotarse rápidamente.
-
Juega con amigos. Jugar con amigos puede hacer el juego más divertido y social. Puedes invitar a tus amigos a unirse a tu equipo, enviar y recibir vidas, chatear con ellos y competir por puntuaciones altas. También puedes pedirles ayuda cuando estás atascado en un nivel.
-
-
Diviértete. No dejes que el juego te estrese o te frustre. Recuerda que es solo un juego, y el propósito principal es divertirse. Disfrute de los gráficos coloridos, la música pegadiza, y los personajes lindos. No tengas miedo de experimentar con diferentes estrategias y ver lo que funciona para ti.
-
Conclusión
-
Candy Crush Soda Saga es un gran juego para jugar en tu PC, especialmente si te gustan los juegos de puzzle y dulces. Puedes descargarlo gratis desde la Epic Games Store y disfrutar de sus características y beneficios. También puedes seguir nuestros consejos y trucos para aprovechar al máximo tu experiencia de juego. ¿Qué estás esperando? ¡Descarga Candy Crush Soda Saga hoy y únete a Kimmy en su dulce aventura!
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas más frecuentes sobre Candy Crush Soda Saga:
-
P: ¿Cuántos niveles hay en Candy Crush Soda Saga?
-
A: Hay más de 10,000 niveles en Candy Crush Soda Saga a partir de junio de 2023, y se agregan más cada mes.
-
Q: ¿Cómo puedo sincronizar mi progreso a través de dispositivos?
-
A: Puedes sincronizar tu progreso entre dispositivos conectando tu juego a Facebook o King.com. Esto también le permitirá acceder a sus boosters y vidas salvadas.
-
Q: ¿Cómo consigo más vidas?
-
A: Tienes cinco vidas en Candy Crush Soda Saga, y pierdes una cada vez que fallas un nivel. Puedes obtener más vidas esperando a que se llenen (una vida cada 30 minutos), pidiendo a tus amigos que te envíen algunas, comprándolas con barras de oro o jugando las misiones diarias.
-
P: ¿Qué son las barras de oro y cómo las consigo?
-
A: Las barras de oro son la moneda premium en Candy Crush Soda Saga. Puedes usarlas para comprar potenciadores, vidas, movimientos y otros artículos. Puedes obtener barras de oro completando niveles, eventos o desafíos, o comprándolas con dinero real.
-
P: ¿Cuáles son los diferentes tipos de dulces especiales y cómo los hago?
-
-
-
Caramelo a rayas: Combina cuatro caramelos del mismo color en una fila o columna. Esto creará un caramelo a rayas que limpiará toda una fila o columna cuando coincida.
-
Caramelo envuelto: Combina cinco caramelos del mismo color en forma de L o T. Esto creará un caramelo envuelto que explotará dos veces cuando coincida, despejando un área de 3x3 cada vez.
-
Caramelo de pescado: Combina cuatro caramelos del mismo color en un cuadrado. Esto creará un caramelo de pescado que nadará a un caramelo o obstáculo al azar y lo despejará cuando coincida.
-
Caramelo para colorear: combina seis o más caramelos del mismo color. Esto creará un caramelo para colorear que cambiará el color de todos los dulces que coincidan con su color cuando coincida.
-
Bomba de color: Combina cinco caramelos del mismo color en una fila o columna. Esto creará una bomba de color que borrará todos los caramelos del mismo color con el que se intercambia.
-
Pescado sueco: Este es un caramelo especial que solo se puede obtener mediante el uso de refuerzos o jugando ciertos niveles. Actúa como un caramelo de pescado, pero puede apuntar a dulces específicos u obstáculos que se necesitan para completar el nivel.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Blockman Ir En El PC Gratis.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Blockman Ir En El PC Gratis.md
deleted file mode 100644
index e0d6ebe5e01647e8b24830f021cbd475e48101f5..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Blockman Ir En El PC Gratis.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
How to Download Blockman Go on PC for Free
-
Blockman Go is a popular game that combines elements of sandbox, adventure, action, and social games. You can play various block-style minigames, chat and make friends with other players, and customize your avatar and home with different decorations. But did you know you can also play Blockman Go on your PC for free? In this article, we will show you how to download and install Blockman Go on your PC with BlueStacks, a powerful Android emulator that lets you run Android apps and games on your computer or laptop.
Blockman Go is a free app developed by Blockman GO Studio. It is a sandbox game that lets you play, create, and share your fun experiences with your friends. You can choose from a wide catalog of minigames, which is continuously updated to keep things fresh and fun. Some of the popular minigames are Bed Wars, Egg War, Sky Block, Free City RP, Anime Fighting Simulator, and more. You can join any game with one click and earn rewards for playing.
-
Blockman Go is also a social platform where you can chat and make friends with other players. You can join or create parties, send messages, voice chat, and interact with others in various ways. You can also join the growing developer community and share your creations with the world.
-
Blockman Go is also a creative tool that lets you customize your avatar and home with different accessories, outfits, and decorations. You can express your unique style and personality with hundreds of available options. You can also use the Blockman Editor to create your own sandbox experiences and minigames.
-
Why play Blockman Go on PC?
-
While Blockman Go is designed for mobile devices, you can also play it on your PC for free with BlueStacks. There are many advantages to playing Blockman Go on PC, such as:
-
-
-
You can use the keyboard and mouse for more precise controls, which gives you an edge in competitive minigames.
-
You can access thousands of apps and productivity tools with BlueStacks, which helps you work more efficiently and conveniently on your PC.
-
-
How to download and install Blockman Go on PC with BlueStacks
-
To play Blockman Go on PC for free, you first need to download and install BlueStacks on your PC. BlueStacks is an Android emulator that lets you run Android apps and games on your computer or laptop. Here are the steps to download and install Blockman Go on PC with BlueStacks:
-
-
-
Download and install BlueStacks on your PC from this link.
-
Complete the Google sign-in to access the Play Store, or do it later.
-
Search for Blockman Go in the app center or the search bar in the top-right corner.
-
Click to install Blockman Go from the search results.
-
Click the Blockman Go icon on the home screen to start playing.
-
-
Conclusion
-
Blockman Go is a fun and versatile game that offers plenty of entertainment and creativity. You can play various minigames, chat and make friends, and customize your avatar and home. You can also play Blockman Go on your PC for free with BlueStacks, an Android emulator that lets you run Android apps and games on your computer or laptop. By playing Blockman Go on PC, you can enjoy a bigger screen, better graphics, keyboard and mouse controls, and access to thousands of apps and productivity tools. To download and install Blockman Go on PC with BlueStacks, you just need to follow a few simple steps. We hope this article has helped you learn how to download Blockman Go on PC for free.
-
Frequently asked questions
-
Here are some frequently asked questions about Blockman Go and BlueStacks:
-
-
-
Question
-
Answer
-
-
-
Is Blockman Go free to play?
-
-
-
-
Is Blockman Go safe to play?
-
Yes, Blockman Go is safe to play. It has a 4.3 rating on the Google Play Store and a 4.6 rating on the App Store. It also has parental controls and anti-cheat systems to ensure a fair and safe gaming environment.
-
-
-
Is BlueStacks free to use?
-
Yes, BlueStacks is free to use. You can download it from this link. You can also upgrade to BlueStacks Premium to get more features and benefits.
-
-
-
Is BlueStacks safe to use?
-
Yes, BlueStacks is safe to use. It is the most trusted and popular Android emulator in the world, with more than 500 million users. It also has advanced security features and antivirus protection to ensure your safety and privacy.
-
-
-
How can I contact Blockman Go or BlueStacks support?
-
If you have any problems or questions about Blockman Go or BlueStacks, you can contact their support teams through their official websites or social media channels. You can also check their FAQs and forums for more information and solutions.
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/docstringparser.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/docstringparser.py
deleted file mode 100644
index 16e74e7d20f0f100b0a0e615069f9b0b4e12449c..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/docstringparser.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from html.parser import HTMLParser
-from itertools import zip_longest
-
-PRIORITY_PARENT_TAGS = ('code', 'a')
-OMIT_NESTED_TAGS = ('span', 'i', 'code', 'a')
-OMIT_SELF_TAGS = ('i', 'b')
-HTML_BLOCK_DISPLAY_TAGS = ('p', 'note', 'ul', 'li')
-
-
-class DocStringParser(HTMLParser):
- """
- A simple HTML parser. Focused on converting the subset of HTML
- that appears in the documentation strings of the JSON models into
- simple ReST format.
- """
-
- def __init__(self, doc):
- self.tree = None
- self.doc = doc
- super().__init__()
-
- def reset(self):
- HTMLParser.reset(self)
- self.tree = HTMLTree(self.doc)
-
- def feed(self, data):
- super().feed(data)
- self.tree.write()
- self.tree = HTMLTree(self.doc)
-
- def close(self):
- super().close()
- # Write if there is anything remaining.
- self.tree.write()
- self.tree = HTMLTree(self.doc)
-
- def handle_starttag(self, tag, attrs):
- self.tree.add_tag(tag, attrs=attrs)
-
- def handle_endtag(self, tag):
- self.tree.add_tag(tag, is_start=False)
-
- def handle_data(self, data):
- self.tree.add_data(data)
-
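-# Illustrative usage (not part of the original module): a minimal sketch of how this
-# parser is typically driven. It assumes botocore's ``ReSTDocument`` is available as
-# the ``doc`` object supplying the ``style`` handlers, ``translate_words`` and
-# ``handle_data`` methods that the classes in this module rely on.
-#
-#     from botocore.docs.bcdoc.restdoc import ReSTDocument
-#
-#     doc = ReSTDocument()
-#     parser = DocStringParser(doc)
-#     parser.feed('<p>Converts a <code>subset</code> of HTML to ReST.</p>')
-#     parser.close()
-#     print(doc.getvalue())  # bytes containing the generated ReST text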
-
-class HTMLTree:
- """
- A tree which handles HTML nodes. Designed to work with a python HTML parser,
- meaning that the current_node will be the most recently opened tag. When
- a tag is closed, the current_node moves up to the parent node.
- """
-
- def __init__(self, doc):
- self.doc = doc
- self.head = StemNode()
- self.current_node = self.head
- self.unhandled_tags = []
-
- def add_tag(self, tag, attrs=None, is_start=True):
- if not self._doc_has_handler(tag, is_start):
- self.unhandled_tags.append(tag)
- return
-
- if is_start:
- node = TagNode(tag, attrs)
- self.current_node.add_child(node)
- self.current_node = node
- else:
- self.current_node = self.current_node.parent
-
- def _doc_has_handler(self, tag, is_start):
- if is_start:
- handler_name = 'start_%s' % tag
- else:
- handler_name = 'end_%s' % tag
-
- return hasattr(self.doc.style, handler_name)
-
- def add_data(self, data):
- self.current_node.add_child(DataNode(data))
-
- def write(self):
- self.head.write(self.doc)
-
-
-class Node:
- def __init__(self, parent=None):
- self.parent = parent
-
- def write(self, doc):
- raise NotImplementedError
-
-
-class StemNode(Node):
- def __init__(self, parent=None):
- super().__init__(parent)
- self.children = []
-
- def add_child(self, child):
- child.parent = self
- self.children.append(child)
-
- def write(self, doc):
- self.collapse_whitespace()
- self._write_children(doc)
-
- def _write_children(self, doc):
- for child, next_child in zip_longest(self.children, self.children[1:]):
- if isinstance(child, TagNode) and next_child is not None:
- child.write(doc, next_child)
- else:
- child.write(doc)
-
- def is_whitespace(self):
- return all(child.is_whitespace() for child in self.children)
-
- def startswith_whitespace(self):
- return self.children and self.children[0].startswith_whitespace()
-
- def endswith_whitespace(self):
- return self.children and self.children[-1].endswith_whitespace()
-
- def lstrip(self):
- while self.children and self.children[0].is_whitespace():
- self.children = self.children[1:]
- if self.children:
- self.children[0].lstrip()
-
- def rstrip(self):
- while self.children and self.children[-1].is_whitespace():
- self.children = self.children[:-1]
- if self.children:
- self.children[-1].rstrip()
-
- def collapse_whitespace(self):
- """Remove collapsible white-space from HTML.
-
- HTML in docstrings often contains extraneous white-space around tags,
- for readability. Browsers would collapse this white-space before
- rendering. If not removed before conversion to RST where white-space is
- part of the syntax, for example for indentation, it can result in
- incorrect output.
- """
- self.lstrip()
- self.rstrip()
- for child in self.children:
- child.collapse_whitespace()
-
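-# Illustrative note (not part of the original source): for a block-display tag such
-# as ``<p>``, collapsing removes whitespace-only children at the start and end and
-# strips the leading/trailing whitespace of the remaining first and last children,
-# so ``<p>  foo  </p>`` is emitted as ``foo`` rather than ``  foo  ``. Whitespace
-# sitting on both sides of a node boundary is kept on only one side, which prevents
-# doubled spaces in the rendered ReST.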
-
-class TagNode(StemNode):
- """
- A generic Tag node. It will verify that handlers exist before writing.
- """
-
- def __init__(self, tag, attrs=None, parent=None):
- super().__init__(parent)
- self.attrs = attrs
- self.tag = tag
-
- def _has_nested_tags(self):
- # Returns True if any children are TagNodes and False otherwise.
- return any(isinstance(child, TagNode) for child in self.children)
-
- def write(self, doc, next_child=None):
- prioritize_nested_tags = (
- self.tag in OMIT_SELF_TAGS and self._has_nested_tags()
- )
- prioritize_parent_tag = (
- isinstance(self.parent, TagNode)
- and self.parent.tag in PRIORITY_PARENT_TAGS
- and self.tag in OMIT_NESTED_TAGS
- )
- if prioritize_nested_tags or prioritize_parent_tag:
- self._write_children(doc)
- return
-
- self._write_start(doc)
- self._write_children(doc)
- self._write_end(doc, next_child)
-
- def collapse_whitespace(self):
- """Remove collapsible white-space.
-
- All tags collapse internal whitespace. Block-display HTML tags also
- strip all leading and trailing whitespace.
-
- Approximately follows the specification used in browsers:
- https://www.w3.org/TR/css-text-3/#white-space-rules
- https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Whitespace
- """
- if self.tag in HTML_BLOCK_DISPLAY_TAGS:
- self.lstrip()
- self.rstrip()
-        # Collapse whitespace at node boundaries: when a data node that ends with
-        # whitespace is followed by a child that starts with whitespace, keep the
-        # whitespace only on the data-node side.
- for prev, cur in zip(self.children[:-1], self.children[1:]):
- if (
- isinstance(prev, DataNode)
- and prev.endswith_whitespace()
- and cur.startswith_whitespace()
- ):
- cur.lstrip()
-        # Same logic, but for a child that ends with whitespace followed by a data
-        # node that starts with whitespace:
- for cur, nxt in zip(self.children[:-1], self.children[1:]):
- if (
- isinstance(nxt, DataNode)
- and cur.endswith_whitespace()
- and nxt.startswith_whitespace()
- ):
- cur.rstrip()
- # Recurse into children
- for child in self.children:
- child.collapse_whitespace()
-
- def _write_start(self, doc):
- handler_name = 'start_%s' % self.tag
- if hasattr(doc.style, handler_name):
- getattr(doc.style, handler_name)(self.attrs)
-
- def _write_end(self, doc, next_child):
- handler_name = 'end_%s' % self.tag
- if hasattr(doc.style, handler_name):
- if handler_name == 'end_a':
- # We use lookahead to determine if a space is needed after a link node
- getattr(doc.style, handler_name)(next_child)
- else:
- getattr(doc.style, handler_name)()
-
-
-class DataNode(Node):
- """
- A Node that contains only string data.
- """
-
- def __init__(self, data, parent=None):
- super().__init__(parent)
- if not isinstance(data, str):
- raise ValueError("Expecting string type, %s given." % type(data))
- self._leading_whitespace = ''
- self._trailing_whitespace = ''
- self._stripped_data = ''
- if data == '':
- return
- if data.isspace():
- self._trailing_whitespace = data
- return
- first_non_space = next(
- idx for idx, ch in enumerate(data) if not ch.isspace()
- )
- last_non_space = len(data) - next(
- idx for idx, ch in enumerate(reversed(data)) if not ch.isspace()
- )
- self._leading_whitespace = data[:first_non_space]
- self._trailing_whitespace = data[last_non_space:]
- self._stripped_data = data[first_non_space:last_non_space]
-
- @property
- def data(self):
- return (
- f'{self._leading_whitespace}{self._stripped_data}'
- f'{self._trailing_whitespace}'
- )
-
- def is_whitespace(self):
- return self._stripped_data == '' and (
- self._leading_whitespace != '' or self._trailing_whitespace != ''
- )
-
- def startswith_whitespace(self):
- return self._leading_whitespace != '' or (
- self._stripped_data == '' and self._trailing_whitespace != ''
- )
-
- def endswith_whitespace(self):
- return self._trailing_whitespace != '' or (
- self._stripped_data == '' and self._leading_whitespace != ''
- )
-
- def lstrip(self):
- if self._leading_whitespace != '':
- self._leading_whitespace = ''
- elif self._stripped_data == '':
- self.rstrip()
-
- def rstrip(self):
- if self._trailing_whitespace != '':
- self._trailing_whitespace = ''
- elif self._stripped_data == '':
- self.lstrip()
-
- def collapse_whitespace(self):
- """Noop, ``DataNode.write`` always collapses whitespace"""
- return
-
- def write(self, doc):
- words = doc.translate_words(self._stripped_data.split())
- str_data = (
- f'{self._leading_whitespace}{" ".join(words)}'
- f'{self._trailing_whitespace}'
- )
- if str_data != '':
- doc.handle_data(str_data)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/themes.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/themes.py
deleted file mode 100644
index bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/themes.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .default_styles import DEFAULT_STYLES
-from .theme import Theme
-
-
-DEFAULT = Theme(DEFAULT_STYLES)
diff --git a/spaces/Billyosoro/ESRGAN/tests/test_discriminator_arch.py b/spaces/Billyosoro/ESRGAN/tests/test_discriminator_arch.py
deleted file mode 100644
index c56a40c7743630aa63b3e99bca8dc1a85949c4c5..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/tests/test_discriminator_arch.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import torch
-
-from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN
-
-
-def test_unetdiscriminatorsn():
- """Test arch: UNetDiscriminatorSN."""
-
- # model init and forward (cpu)
- net = UNetDiscriminatorSN(num_in_ch=3, num_feat=4, skip_connection=True)
- img = torch.rand((1, 3, 32, 32), dtype=torch.float32)
- output = net(img)
- assert output.shape == (1, 1, 32, 32)
-
- # model init and forward (gpu)
- if torch.cuda.is_available():
- net.cuda()
- output = net(img.cuda())
- assert output.shape == (1, 1, 32, 32)
diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/pybind11.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/pybind11.h
deleted file mode 100644
index 3a7d7b88495afddabff7f9604c94e828eb780152..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/include/pybind11/pybind11.h
+++ /dev/null
@@ -1,2235 +0,0 @@
-/*
- pybind11/pybind11.h: Main header file of the C++11 python
- binding generator library
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#pragma once
-
-#if defined(__INTEL_COMPILER)
-# pragma warning push
-# pragma warning disable 68 // integer conversion resulted in a change of sign
-# pragma warning disable 186 // pointless comparison of unsigned integer with zero
-# pragma warning disable 878 // incompatible exception specifications
-# pragma warning disable 1334 // the "template" keyword used for syntactic disambiguation may only be used within a template
-# pragma warning disable 1682 // implicit conversion of a 64-bit integral type to a smaller integral type (potential portability problem)
-# pragma warning disable 1786 // function "strdup" was declared deprecated
-# pragma warning disable 1875 // offsetof applied to non-POD (Plain Old Data) types is nonstandard
-# pragma warning disable 2196 // warning #2196: routine is both "inline" and "noinline"
-#elif defined(_MSC_VER)
-# pragma warning(push)
-# pragma warning(disable: 4100) // warning C4100: Unreferenced formal parameter
-# pragma warning(disable: 4127) // warning C4127: Conditional expression is constant
-# pragma warning(disable: 4512) // warning C4512: Assignment operator was implicitly defined as deleted
-# pragma warning(disable: 4800) // warning C4800: 'int': forcing value to bool 'true' or 'false' (performance warning)
-# pragma warning(disable: 4996) // warning C4996: The POSIX name for this item is deprecated. Instead, use the ISO C and C++ conformant name
-# pragma warning(disable: 4702) // warning C4702: unreachable code
-# pragma warning(disable: 4522) // warning C4522: multiple assignment operators specified
-#elif defined(__GNUG__) && !defined(__clang__)
-# pragma GCC diagnostic push
-# pragma GCC diagnostic ignored "-Wunused-but-set-parameter"
-# pragma GCC diagnostic ignored "-Wunused-but-set-variable"
-# pragma GCC diagnostic ignored "-Wmissing-field-initializers"
-# pragma GCC diagnostic ignored "-Wstrict-aliasing"
-# pragma GCC diagnostic ignored "-Wattributes"
-# if __GNUC__ >= 7
-# pragma GCC diagnostic ignored "-Wnoexcept-type"
-# endif
-#endif
-
-#include "attr.h"
-#include "options.h"
-#include "detail/class.h"
-#include "detail/init.h"
-
-#if defined(__GNUG__) && !defined(__clang__)
-# include <cxxabi.h>
-#endif
-
-PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
-
-/// Wraps an arbitrary C++ function/method/lambda function/.. into a callable Python object
-class cpp_function : public function {
-public:
- cpp_function() { }
- cpp_function(std::nullptr_t) { }
-
- /// Construct a cpp_function from a vanilla function pointer
-    template <typename Return, typename... Args, typename... Extra>
- cpp_function(Return (*f)(Args...), const Extra&... extra) {
- initialize(f, f, extra...);
- }
-
- /// Construct a cpp_function from a lambda function (possibly with internal state)
-    template <typename Func, typename... Extra, typename = detail::enable_if_t<detail::is_lambda<Func>::value>>
-    cpp_function(Func &&f, const Extra&... extra) {
-        initialize(std::forward<Func>(f),
-                   (detail::function_signature_t<Func> *) nullptr, extra...);
- }
-
- /// Construct a cpp_function from a class method (non-const, no ref-qualifier)
- template
- cpp_function(Return (Class::*f)(Arg...), const Extra&... extra) {
- initialize([f](Class *c, Arg... args) -> Return { return (c->*f)(std::forward(args)...); },
- (Return (*) (Class *, Arg...)) nullptr, extra...);
- }
-
- /// Construct a cpp_function from a class method (non-const, lvalue ref-qualifier)
- /// A copy of the overload for non-const functions without explicit ref-qualifier
- /// but with an added `&`.
- template
- cpp_function(Return (Class::*f)(Arg...)&, const Extra&... extra) {
- initialize([f](Class *c, Arg... args) -> Return { return (c->*f)(args...); },
- (Return (*) (Class *, Arg...)) nullptr, extra...);
- }
-
- /// Construct a cpp_function from a class method (const, no ref-qualifier)
- template
- cpp_function(Return (Class::*f)(Arg...) const, const Extra&... extra) {
- initialize([f](const Class *c, Arg... args) -> Return { return (c->*f)(std::forward(args)...); },
- (Return (*)(const Class *, Arg ...)) nullptr, extra...);
- }
-
- /// Construct a cpp_function from a class method (const, lvalue ref-qualifier)
- /// A copy of the overload for const functions without explicit ref-qualifier
- /// but with an added `&`.
- template
- cpp_function(Return (Class::*f)(Arg...) const&, const Extra&... extra) {
- initialize([f](const Class *c, Arg... args) -> Return { return (c->*f)(args...); },
- (Return (*)(const Class *, Arg ...)) nullptr, extra...);
- }
-
- /// Return the function name
- object name() const { return attr("__name__"); }
-
-protected:
- /// Space optimization: don't inline this frequently instantiated fragment
- PYBIND11_NOINLINE detail::function_record *make_function_record() {
- return new detail::function_record();
- }
-
- /// Special internal constructor for functors, lambda functions, etc.
- template
- void initialize(Func &&f, Return (*)(Args...), const Extra&... extra) {
- using namespace detail;
- struct capture { remove_reference_t f; };
-
- /* Store the function including any extra state it might have (e.g. a lambda capture object) */
- auto rec = make_function_record();
-
- /* Store the capture object directly in the function record if there is enough space */
- if (sizeof(capture) <= sizeof(rec->data)) {
- /* Without these pragmas, GCC warns that there might not be
- enough space to use the placement new operator. However, the
- 'if' statement above ensures that this is the case. */
-#if defined(__GNUG__) && !defined(__clang__) && __GNUC__ >= 6
-# pragma GCC diagnostic push
-# pragma GCC diagnostic ignored "-Wplacement-new"
-#endif
- new ((capture *) &rec->data) capture { std::forward(f) };
-#if defined(__GNUG__) && !defined(__clang__) && __GNUC__ >= 6
-# pragma GCC diagnostic pop
-#endif
- if (!std::is_trivially_destructible::value)
- rec->free_data = [](function_record *r) { ((capture *) &r->data)->~capture(); };
- } else {
- rec->data[0] = new capture { std::forward(f) };
- rec->free_data = [](function_record *r) { delete ((capture *) r->data[0]); };
- }
-
- /* Type casters for the function arguments and return value */
-        using cast_in = argument_loader<Args...>;
-        using cast_out = make_caster<
-            conditional_t<std::is_void<Return>::value, void_type, Return>
- >;
-
- static_assert(expected_num_args(sizeof...(Args), cast_in::has_args, cast_in::has_kwargs),
- "The number of argument annotations does not match the number of function arguments");
-
- /* Dispatch code which converts function arguments and performs the actual function call */
- rec->impl = [](function_call &call) -> handle {
- cast_in args_converter;
-
- /* Try to cast the function arguments into the C++ domain */
- if (!args_converter.load_args(call))
- return PYBIND11_TRY_NEXT_OVERLOAD;
-
- /* Invoke call policy pre-call hook */
- process_attributes::precall(call);
-
- /* Get a pointer to the capture object */
- auto data = (sizeof(capture) <= sizeof(call.func.data)
- ? &call.func.data : call.func.data[0]);
- capture *cap = const_cast(reinterpret_cast(data));
-
- /* Override policy for rvalues -- usually to enforce rvp::move on an rvalue */
- return_value_policy policy = return_value_policy_override::policy(call.func.policy);
-
- /* Function scope guard -- defaults to the compile-to-nothing `void_type` */
- using Guard = extract_guard_t;
-
- /* Perform the function call */
- handle result = cast_out::cast(
- std::move(args_converter).template call(cap->f), policy, call.parent);
-
- /* Invoke call policy post-call hook */
- process_attributes::postcall(call, result);
-
- return result;
- };
-
- /* Process any user-provided function attributes */
- process_attributes::init(extra..., rec);
-
- {
- constexpr bool has_kwonly_args = any_of...>::value,
- has_args = any_of...>::value,
- has_arg_annotations = any_of...>::value;
- static_assert(has_arg_annotations || !has_kwonly_args, "py::kwonly requires the use of argument annotations");
- static_assert(!(has_args && has_kwonly_args), "py::kwonly cannot be combined with a py::args argument");
- }
-
- /* Generate a readable signature describing the function's arguments and return value types */
- static constexpr auto signature = _("(") + cast_in::arg_names + _(") -> ") + cast_out::name;
- PYBIND11_DESCR_CONSTEXPR auto types = decltype(signature)::types();
-
- /* Register the function with Python from generic (non-templated) code */
- initialize_generic(rec, signature.text, types.data(), sizeof...(Args));
-
- if (cast_in::has_args) rec->has_args = true;
- if (cast_in::has_kwargs) rec->has_kwargs = true;
-
- /* Stash some additional information used by an important optimization in 'functional.h' */
- using FunctionType = Return (*)(Args...);
- constexpr bool is_function_ptr =
- std::is_convertible::value &&
- sizeof(capture) == sizeof(void *);
- if (is_function_ptr) {
- rec->is_stateless = true;
- rec->data[1] = const_cast(reinterpret_cast(&typeid(FunctionType)));
- }
- }
-
- /// Register a function call with Python (generic non-templated code goes here)
- void initialize_generic(detail::function_record *rec, const char *text,
- const std::type_info *const *types, size_t args) {
-
- /* Create copies of all referenced C-style strings */
- rec->name = strdup(rec->name ? rec->name : "");
- if (rec->doc) rec->doc = strdup(rec->doc);
- for (auto &a: rec->args) {
- if (a.name)
- a.name = strdup(a.name);
- if (a.descr)
- a.descr = strdup(a.descr);
- else if (a.value)
- a.descr = strdup(repr(a.value).cast().c_str());
- }
-
- rec->is_constructor = !strcmp(rec->name, "__init__") || !strcmp(rec->name, "__setstate__");
-
-#if !defined(NDEBUG) && !defined(PYBIND11_DISABLE_NEW_STYLE_INIT_WARNING)
- if (rec->is_constructor && !rec->is_new_style_constructor) {
- const auto class_name = std::string(((PyTypeObject *) rec->scope.ptr())->tp_name);
- const auto func_name = std::string(rec->name);
- PyErr_WarnEx(
- PyExc_FutureWarning,
- ("pybind11-bound class '" + class_name + "' is using an old-style "
- "placement-new '" + func_name + "' which has been deprecated. See "
- "the upgrade guide in pybind11's docs. This message is only visible "
- "when compiled in debug mode.").c_str(), 0
- );
- }
-#endif
-
- /* Generate a proper function signature */
- std::string signature;
- size_t type_index = 0, arg_index = 0;
- for (auto *pc = text; *pc != '\0'; ++pc) {
- const auto c = *pc;
-
- if (c == '{') {
- // Write arg name for everything except *args and **kwargs.
- if (*(pc + 1) == '*')
- continue;
-
- if (arg_index < rec->args.size() && rec->args[arg_index].name) {
- signature += rec->args[arg_index].name;
- } else if (arg_index == 0 && rec->is_method) {
- signature += "self";
- } else {
- signature += "arg" + std::to_string(arg_index - (rec->is_method ? 1 : 0));
- }
- signature += ": ";
- } else if (c == '}') {
- // Write default value if available.
- if (arg_index < rec->args.size() && rec->args[arg_index].descr) {
- signature += " = ";
- signature += rec->args[arg_index].descr;
- }
- arg_index++;
- } else if (c == '%') {
- const std::type_info *t = types[type_index++];
- if (!t)
- pybind11_fail("Internal error while parsing type signature (1)");
- if (auto tinfo = detail::get_type_info(*t)) {
- handle th((PyObject *) tinfo->type);
- signature +=
- th.attr("__module__").cast() + "." +
- th.attr("__qualname__").cast(); // Python 3.3+, but we backport it to earlier versions
- } else if (rec->is_new_style_constructor && arg_index == 0) {
- // A new-style `__init__` takes `self` as `value_and_holder`.
- // Rewrite it to the proper class type.
- signature +=
- rec->scope.attr("__module__").cast() + "." +
- rec->scope.attr("__qualname__").cast();
- } else {
- std::string tname(t->name());
- detail::clean_type_id(tname);
- signature += tname;
- }
- } else {
- signature += c;
- }
- }
- if (arg_index != args || types[type_index] != nullptr)
- pybind11_fail("Internal error while parsing type signature (2)");
-
-#if PY_MAJOR_VERSION < 3
- if (strcmp(rec->name, "__next__") == 0) {
- std::free(rec->name);
- rec->name = strdup("next");
- } else if (strcmp(rec->name, "__bool__") == 0) {
- std::free(rec->name);
- rec->name = strdup("__nonzero__");
- }
-#endif
- rec->signature = strdup(signature.c_str());
- rec->args.shrink_to_fit();
- rec->nargs = (std::uint16_t) args;
-
- if (rec->sibling && PYBIND11_INSTANCE_METHOD_CHECK(rec->sibling.ptr()))
- rec->sibling = PYBIND11_INSTANCE_METHOD_GET_FUNCTION(rec->sibling.ptr());
-
- detail::function_record *chain = nullptr, *chain_start = rec;
- if (rec->sibling) {
- if (PyCFunction_Check(rec->sibling.ptr())) {
- auto rec_capsule = reinterpret_borrow(PyCFunction_GET_SELF(rec->sibling.ptr()));
- chain = (detail::function_record *) rec_capsule;
- /* Never append a method to an overload chain of a parent class;
- instead, hide the parent's overloads in this case */
- if (!chain->scope.is(rec->scope))
- chain = nullptr;
- }
- // Don't trigger for things like the default __init__, which are wrapper_descriptors that we are intentionally replacing
- else if (!rec->sibling.is_none() && rec->name[0] != '_')
- pybind11_fail("Cannot overload existing non-function object \"" + std::string(rec->name) +
- "\" with a function of the same name");
- }
-
- if (!chain) {
- /* No existing overload was found, create a new function object */
- rec->def = new PyMethodDef();
- std::memset(rec->def, 0, sizeof(PyMethodDef));
- rec->def->ml_name = rec->name;
- rec->def->ml_meth = reinterpret_cast(reinterpret_cast(*dispatcher));
- rec->def->ml_flags = METH_VARARGS | METH_KEYWORDS;
-
- capsule rec_capsule(rec, [](void *ptr) {
- destruct((detail::function_record *) ptr);
- });
-
- object scope_module;
- if (rec->scope) {
- if (hasattr(rec->scope, "__module__")) {
- scope_module = rec->scope.attr("__module__");
- } else if (hasattr(rec->scope, "__name__")) {
- scope_module = rec->scope.attr("__name__");
- }
- }
-
- m_ptr = PyCFunction_NewEx(rec->def, rec_capsule.ptr(), scope_module.ptr());
- if (!m_ptr)
- pybind11_fail("cpp_function::cpp_function(): Could not allocate function object");
- } else {
- /* Append at the end of the overload chain */
- m_ptr = rec->sibling.ptr();
- inc_ref();
- chain_start = chain;
- if (chain->is_method != rec->is_method)
- pybind11_fail("overloading a method with both static and instance methods is not supported; "
- #if defined(NDEBUG)
- "compile in debug mode for more details"
- #else
- "error while attempting to bind " + std::string(rec->is_method ? "instance" : "static") + " method " +
- std::string(pybind11::str(rec->scope.attr("__name__"))) + "." + std::string(rec->name) + signature
- #endif
- );
- while (chain->next)
- chain = chain->next;
- chain->next = rec;
- }
-
- std::string signatures;
- int index = 0;
- /* Create a nice pydoc rec including all signatures and
- docstrings of the functions in the overload chain */
- if (chain && options::show_function_signatures()) {
- // First a generic signature
- signatures += rec->name;
- signatures += "(*args, **kwargs)\n";
- signatures += "Overloaded function.\n\n";
- }
- // Then specific overload signatures
- bool first_user_def = true;
- for (auto it = chain_start; it != nullptr; it = it->next) {
- if (options::show_function_signatures()) {
- if (index > 0) signatures += "\n";
- if (chain)
- signatures += std::to_string(++index) + ". ";
- signatures += rec->name;
- signatures += it->signature;
- signatures += "\n";
- }
- if (it->doc && strlen(it->doc) > 0 && options::show_user_defined_docstrings()) {
- // If we're appending another docstring, and aren't printing function signatures, we
- // need to append a newline first:
- if (!options::show_function_signatures()) {
- if (first_user_def) first_user_def = false;
- else signatures += "\n";
- }
- if (options::show_function_signatures()) signatures += "\n";
- signatures += it->doc;
- if (options::show_function_signatures()) signatures += "\n";
- }
- }
-
- /* Install docstring */
- PyCFunctionObject *func = (PyCFunctionObject *) m_ptr;
- if (func->m_ml->ml_doc)
- std::free(const_cast(func->m_ml->ml_doc));
- func->m_ml->ml_doc = strdup(signatures.c_str());
-
- if (rec->is_method) {
- m_ptr = PYBIND11_INSTANCE_METHOD_NEW(m_ptr, rec->scope.ptr());
- if (!m_ptr)
- pybind11_fail("cpp_function::cpp_function(): Could not allocate instance method object");
- Py_DECREF(func);
- }
- }
-
- /// When a cpp_function is GCed, release any memory allocated by pybind11
- static void destruct(detail::function_record *rec) {
- while (rec) {
- detail::function_record *next = rec->next;
- if (rec->free_data)
- rec->free_data(rec);
- std::free((char *) rec->name);
- std::free((char *) rec->doc);
- std::free((char *) rec->signature);
- for (auto &arg: rec->args) {
- std::free(const_cast(arg.name));
- std::free(const_cast(arg.descr));
- arg.value.dec_ref();
- }
- if (rec->def) {
- std::free(const_cast(rec->def->ml_doc));
- delete rec->def;
- }
- delete rec;
- rec = next;
- }
- }
-
- /// Main dispatch logic for calls to functions bound using pybind11
- static PyObject *dispatcher(PyObject *self, PyObject *args_in, PyObject *kwargs_in) {
- using namespace detail;
-
- /* Iterator over the list of potentially admissible overloads */
- const function_record *overloads = (function_record *) PyCapsule_GetPointer(self, nullptr),
- *it = overloads;
-
- /* Need to know how many arguments + keyword arguments there are to pick the right overload */
- const size_t n_args_in = (size_t) PyTuple_GET_SIZE(args_in);
-
- handle parent = n_args_in > 0 ? PyTuple_GET_ITEM(args_in, 0) : nullptr,
- result = PYBIND11_TRY_NEXT_OVERLOAD;
-
- auto self_value_and_holder = value_and_holder();
- if (overloads->is_constructor) {
- const auto tinfo = get_type_info((PyTypeObject *) overloads->scope.ptr());
- const auto pi = reinterpret_cast(parent.ptr());
- self_value_and_holder = pi->get_value_and_holder(tinfo, false);
-
- if (!self_value_and_holder.type || !self_value_and_holder.inst) {
- PyErr_SetString(PyExc_TypeError, "__init__(self, ...) called with invalid `self` argument");
- return nullptr;
- }
-
- // If this value is already registered it must mean __init__ is invoked multiple times;
- // we really can't support that in C++, so just ignore the second __init__.
- if (self_value_and_holder.instance_registered())
- return none().release().ptr();
- }
-
- try {
- // We do this in two passes: in the first pass, we load arguments with `convert=false`;
- // in the second, we allow conversion (except for arguments with an explicit
- // py::arg().noconvert()). This lets us prefer calls without conversion, with
- // conversion as a fallback.
-            std::vector<function_call> second_pass;
-
- // However, if there are no overloads, we can just skip the no-convert pass entirely
- const bool overloaded = it != nullptr && it->next != nullptr;
-
- for (; it != nullptr; it = it->next) {
-
- /* For each overload:
- 1. Copy all positional arguments we were given, also checking to make sure that
- named positional arguments weren't *also* specified via kwarg.
- 2. If we weren't given enough, try to make up the omitted ones by checking
- whether they were provided by a kwarg matching the `py::arg("name")` name. If
- so, use it (and remove it from kwargs; if not, see if the function binding
- provided a default that we can use.
- 3. Ensure that either all keyword arguments were "consumed", or that the function
- takes a kwargs argument to accept unconsumed kwargs.
- 4. Any positional arguments still left get put into a tuple (for args), and any
- leftover kwargs get put into a dict.
- 5. Pack everything into a vector; if we have py::args or py::kwargs, they are an
- extra tuple or dict at the end of the positional arguments.
- 6. Call the function call dispatcher (function_record::impl)
-
- If one of these fail, move on to the next overload and keep trying until we get a
- result other than PYBIND11_TRY_NEXT_OVERLOAD.
- */
-
- const function_record &func = *it;
- size_t num_args = func.nargs; // Number of positional arguments that we need
- if (func.has_args) --num_args; // (but don't count py::args
- if (func.has_kwargs) --num_args; // or py::kwargs)
- size_t pos_args = num_args - func.nargs_kwonly;
-
- if (!func.has_args && n_args_in > pos_args)
- continue; // Too many positional arguments for this overload
-
- if (n_args_in < pos_args && func.args.size() < pos_args)
- continue; // Not enough positional arguments given, and not enough defaults to fill in the blanks
-
- function_call call(func, parent);
-
- size_t args_to_copy = (std::min)(pos_args, n_args_in); // Protect std::min with parentheses
- size_t args_copied = 0;
-
- // 0. Inject new-style `self` argument
- if (func.is_new_style_constructor) {
- // The `value` may have been preallocated by an old-style `__init__`
- // if it was a preceding candidate for overload resolution.
- if (self_value_and_holder)
- self_value_and_holder.type->dealloc(self_value_and_holder);
-
- call.init_self = PyTuple_GET_ITEM(args_in, 0);
- call.args.push_back(reinterpret_cast(&self_value_and_holder));
- call.args_convert.push_back(false);
- ++args_copied;
- }
-
- // 1. Copy any position arguments given.
- bool bad_arg = false;
- for (; args_copied < args_to_copy; ++args_copied) {
- const argument_record *arg_rec = args_copied < func.args.size() ? &func.args[args_copied] : nullptr;
- if (kwargs_in && arg_rec && arg_rec->name && PyDict_GetItemString(kwargs_in, arg_rec->name)) {
- bad_arg = true;
- break;
- }
-
- handle arg(PyTuple_GET_ITEM(args_in, args_copied));
- if (arg_rec && !arg_rec->none && arg.is_none()) {
- bad_arg = true;
- break;
- }
- call.args.push_back(arg);
- call.args_convert.push_back(arg_rec ? arg_rec->convert : true);
- }
- if (bad_arg)
- continue; // Maybe it was meant for another overload (issue #688)
-
- // We'll need to copy this if we steal some kwargs for defaults
- dict kwargs = reinterpret_borrow(kwargs_in);
-
- // 2. Check kwargs and, failing that, defaults that may help complete the list
- if (args_copied < num_args) {
- bool copied_kwargs = false;
-
- for (; args_copied < num_args; ++args_copied) {
- const auto &arg = func.args[args_copied];
-
- handle value;
- if (kwargs_in && arg.name)
- value = PyDict_GetItemString(kwargs.ptr(), arg.name);
-
- if (value) {
- // Consume a kwargs value
- if (!copied_kwargs) {
- kwargs = reinterpret_steal(PyDict_Copy(kwargs.ptr()));
- copied_kwargs = true;
- }
- PyDict_DelItemString(kwargs.ptr(), arg.name);
- } else if (arg.value) {
- value = arg.value;
- }
-
- if (value) {
- call.args.push_back(value);
- call.args_convert.push_back(arg.convert);
- }
- else
- break;
- }
-
- if (args_copied < num_args)
- continue; // Not enough arguments, defaults, or kwargs to fill the positional arguments
- }
-
- // 3. Check everything was consumed (unless we have a kwargs arg)
- if (kwargs && kwargs.size() > 0 && !func.has_kwargs)
- continue; // Unconsumed kwargs, but no py::kwargs argument to accept them
-
- // 4a. If we have a py::args argument, create a new tuple with leftovers
- if (func.has_args) {
- tuple extra_args;
- if (args_to_copy == 0) {
- // We didn't copy out any position arguments from the args_in tuple, so we
- // can reuse it directly without copying:
- extra_args = reinterpret_borrow(args_in);
- } else if (args_copied >= n_args_in) {
- extra_args = tuple(0);
- } else {
- size_t args_size = n_args_in - args_copied;
- extra_args = tuple(args_size);
- for (size_t i = 0; i < args_size; ++i) {
- extra_args[i] = PyTuple_GET_ITEM(args_in, args_copied + i);
- }
- }
- call.args.push_back(extra_args);
- call.args_convert.push_back(false);
- call.args_ref = std::move(extra_args);
- }
-
- // 4b. If we have a py::kwargs, pass on any remaining kwargs
- if (func.has_kwargs) {
- if (!kwargs.ptr())
- kwargs = dict(); // If we didn't get one, send an empty one
- call.args.push_back(kwargs);
- call.args_convert.push_back(false);
- call.kwargs_ref = std::move(kwargs);
- }
-
- // 5. Put everything in a vector. Not technically step 5, we've been building it
- // in `call.args` all along.
- #if !defined(NDEBUG)
- if (call.args.size() != func.nargs || call.args_convert.size() != func.nargs)
- pybind11_fail("Internal error: function call dispatcher inserted wrong number of arguments!");
- #endif
-
-                std::vector<bool> second_pass_convert;
- if (overloaded) {
- // We're in the first no-convert pass, so swap out the conversion flags for a
- // set of all-false flags. If the call fails, we'll swap the flags back in for
- // the conversion-allowed call below.
- second_pass_convert.resize(func.nargs, false);
- call.args_convert.swap(second_pass_convert);
- }
-
- // 6. Call the function.
- try {
- loader_life_support guard{};
- result = func.impl(call);
- } catch (reference_cast_error &) {
- result = PYBIND11_TRY_NEXT_OVERLOAD;
- }
-
- if (result.ptr() != PYBIND11_TRY_NEXT_OVERLOAD)
- break;
-
- if (overloaded) {
- // The (overloaded) call failed; if the call has at least one argument that
- // permits conversion (i.e. it hasn't been explicitly specified `.noconvert()`)
- // then add this call to the list of second pass overloads to try.
- for (size_t i = func.is_method ? 1 : 0; i < pos_args; i++) {
- if (second_pass_convert[i]) {
- // Found one: swap the converting flags back in and store the call for
- // the second pass.
- call.args_convert.swap(second_pass_convert);
- second_pass.push_back(std::move(call));
- break;
- }
- }
- }
- }
-
- if (overloaded && !second_pass.empty() && result.ptr() == PYBIND11_TRY_NEXT_OVERLOAD) {
- // The no-conversion pass finished without success, try again with conversion allowed
- for (auto &call : second_pass) {
- try {
- loader_life_support guard{};
- result = call.func.impl(call);
- } catch (reference_cast_error &) {
- result = PYBIND11_TRY_NEXT_OVERLOAD;
- }
-
- if (result.ptr() != PYBIND11_TRY_NEXT_OVERLOAD) {
- // The error reporting logic below expects 'it' to be valid, as it would be
- // if we'd encountered this failure in the first-pass loop.
- if (!result)
- it = &call.func;
- break;
- }
- }
- }
- } catch (error_already_set &e) {
- e.restore();
- return nullptr;
-#if defined(__GNUG__) && !defined(__clang__)
- } catch ( abi::__forced_unwind& ) {
- throw;
-#endif
- } catch (...) {
- /* When an exception is caught, give each registered exception
- translator a chance to translate it to a Python exception
- in reverse order of registration.
-
- A translator may choose to do one of the following:
-
- - catch the exception and call PyErr_SetString or PyErr_SetObject
- to set a standard (or custom) Python exception, or
- - do nothing and let the exception fall through to the next translator, or
- - delegate translation to the next translator by throwing a new type of exception. */
-
- auto last_exception = std::current_exception();
- auto ®istered_exception_translators = get_internals().registered_exception_translators;
- for (auto& translator : registered_exception_translators) {
- try {
- translator(last_exception);
- } catch (...) {
- last_exception = std::current_exception();
- continue;
- }
- return nullptr;
- }
- PyErr_SetString(PyExc_SystemError, "Exception escaped from default exception translator!");
- return nullptr;
- }
-
- auto append_note_if_missing_header_is_suspected = [](std::string &msg) {
- if (msg.find("std::") != std::string::npos) {
- msg += "\n\n"
-                   "Did you forget to `#include <pybind11/stl.h>`? Or <iostream>,\n"
-                   "<string>, <vector>, <map>, etc. Some automatic\n"
- "conversions are optional and require extra headers to be included\n"
- "when compiling your pybind11 module.";
- }
- };
-
- if (result.ptr() == PYBIND11_TRY_NEXT_OVERLOAD) {
- if (overloads->is_operator)
- return handle(Py_NotImplemented).inc_ref().ptr();
-
- std::string msg = std::string(overloads->name) + "(): incompatible " +
- std::string(overloads->is_constructor ? "constructor" : "function") +
- " arguments. The following argument types are supported:\n";
-
- int ctr = 0;
- for (const function_record *it2 = overloads; it2 != nullptr; it2 = it2->next) {
- msg += " "+ std::to_string(++ctr) + ". ";
-
- bool wrote_sig = false;
- if (overloads->is_constructor) {
- // For a constructor, rewrite `(self: Object, arg0, ...) -> NoneType` as `Object(arg0, ...)`
- std::string sig = it2->signature;
- size_t start = sig.find('(') + 7; // skip "(self: "
- if (start < sig.size()) {
- // End at the , for the next argument
- size_t end = sig.find(", "), next = end + 2;
- size_t ret = sig.rfind(" -> ");
- // Or the ), if there is no comma:
- if (end >= sig.size()) next = end = sig.find(')');
- if (start < end && next < sig.size()) {
- msg.append(sig, start, end - start);
- msg += '(';
- msg.append(sig, next, ret - next);
- wrote_sig = true;
- }
- }
- }
- if (!wrote_sig) msg += it2->signature;
-
- msg += "\n";
- }
- msg += "\nInvoked with: ";
- auto args_ = reinterpret_borrow(args_in);
- bool some_args = false;
- for (size_t ti = overloads->is_constructor ? 1 : 0; ti < args_.size(); ++ti) {
- if (!some_args) some_args = true;
- else msg += ", ";
- try {
- msg += pybind11::repr(args_[ti]);
- } catch (const error_already_set&) {
-                    msg += "<repr raised Error>";
- }
- }
- if (kwargs_in) {
- auto kwargs = reinterpret_borrow(kwargs_in);
- if (kwargs.size() > 0) {
- if (some_args) msg += "; ";
- msg += "kwargs: ";
- bool first = true;
- for (auto kwarg : kwargs) {
- if (first) first = false;
- else msg += ", ";
- msg += pybind11::str("{}=").format(kwarg.first);
- try {
- msg += pybind11::repr(kwarg.second);
- } catch (const error_already_set&) {
-                        msg += "<repr raised Error>";
- }
- }
- }
- }
-
- append_note_if_missing_header_is_suspected(msg);
- PyErr_SetString(PyExc_TypeError, msg.c_str());
- return nullptr;
- } else if (!result) {
- std::string msg = "Unable to convert function return value to a "
- "Python type! The signature was\n\t";
- msg += it->signature;
- append_note_if_missing_header_is_suspected(msg);
- PyErr_SetString(PyExc_TypeError, msg.c_str());
- return nullptr;
- } else {
- if (overloads->is_constructor && !self_value_and_holder.holder_constructed()) {
- auto *pi = reinterpret_cast(parent.ptr());
- self_value_and_holder.type->init_instance(pi, nullptr);
- }
- return result.ptr();
- }
- }
-};
-
-/// Wrapper for Python extension modules
-class module : public object {
-public:
- PYBIND11_OBJECT_DEFAULT(module, object, PyModule_Check)
-
- /// Create a new top-level Python module with the given name and docstring
- explicit module(const char *name, const char *doc = nullptr) {
- if (!options::show_user_defined_docstrings()) doc = nullptr;
-#if PY_MAJOR_VERSION >= 3
- PyModuleDef *def = new PyModuleDef();
- std::memset(def, 0, sizeof(PyModuleDef));
- def->m_name = name;
- def->m_doc = doc;
- def->m_size = -1;
- Py_INCREF(def);
- m_ptr = PyModule_Create(def);
-#else
- m_ptr = Py_InitModule3(name, nullptr, doc);
-#endif
- if (m_ptr == nullptr)
- pybind11_fail("Internal error in module::module()");
- inc_ref();
- }
-
- /** \rst
- Create Python binding for a new function within the module scope. ``Func``
- can be a plain C++ function, a function pointer, or a lambda function. For
- details on the ``Extra&& ... extra`` argument, see section :ref:`extras`.
- \endrst */
-    template <typename Func, typename... Extra>
- module &def(const char *name_, Func &&f, const Extra& ... extra) {
-        cpp_function func(std::forward<Func>(f), name(name_), scope(*this),
- sibling(getattr(*this, name_, none())), extra...);
- // NB: allow overwriting here because cpp_function sets up a chain with the intention of
- // overwriting (and has already checked internally that it isn't overwriting non-functions).
- add_object(name_, func, true /* overwrite */);
- return *this;
- }
-
- /** \rst
- Create and return a new Python submodule with the given name and docstring.
- This also works recursively, i.e.
-
- .. code-block:: cpp
-
- py::module m("example", "pybind11 example plugin");
- py::module m2 = m.def_submodule("sub", "A submodule of 'example'");
- py::module m3 = m2.def_submodule("subsub", "A submodule of 'example.sub'");
- \endrst */
- module def_submodule(const char *name, const char *doc = nullptr) {
- std::string full_name = std::string(PyModule_GetName(m_ptr))
- + std::string(".") + std::string(name);
-        auto result = reinterpret_borrow<module>(PyImport_AddModule(full_name.c_str()));
- if (doc && options::show_user_defined_docstrings())
- result.attr("__doc__") = pybind11::str(doc);
- attr(name) = result;
- return result;
- }
-
- /// Import and return a module or throws `error_already_set`.
- static module import(const char *name) {
- PyObject *obj = PyImport_ImportModule(name);
- if (!obj)
- throw error_already_set();
-        return reinterpret_steal<module>(obj);
- }
-
- /// Reload the module or throws `error_already_set`.
- void reload() {
- PyObject *obj = PyImport_ReloadModule(ptr());
- if (!obj)
- throw error_already_set();
-        *this = reinterpret_steal<module>(obj);
- }
-
- // Adds an object to the module using the given name. Throws if an object with the given name
- // already exists.
- //
- // overwrite should almost always be false: attempting to overwrite objects that pybind11 has
- // established will, in most cases, break things.
- PYBIND11_NOINLINE void add_object(const char *name, handle obj, bool overwrite = false) {
- if (!overwrite && hasattr(*this, name))
- pybind11_fail("Error during initialization: multiple incompatible definitions with name \"" +
- std::string(name) + "\"");
-
- PyModule_AddObject(ptr(), name, obj.inc_ref().ptr() /* steals a reference */);
- }
-};
-
-/// \ingroup python_builtins
-/// Return a dictionary representing the global variables in the current execution frame,
-/// or ``__main__.__dict__`` if there is no frame (usually when the interpreter is embedded).
-inline dict globals() {
- PyObject *p = PyEval_GetGlobals();
-    return reinterpret_borrow<dict>(p ? p : module::import("__main__").attr("__dict__").ptr());
-}
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-/// Generic support for creating new Python heap types
-class generic_type : public object {
- template friend class class_;
-public:
- PYBIND11_OBJECT_DEFAULT(generic_type, object, PyType_Check)
-protected:
- void initialize(const type_record &rec) {
- if (rec.scope && hasattr(rec.scope, rec.name))
- pybind11_fail("generic_type: cannot initialize type \"" + std::string(rec.name) +
- "\": an object with that name is already defined");
-
- if (rec.module_local ? get_local_type_info(*rec.type) : get_global_type_info(*rec.type))
- pybind11_fail("generic_type: type \"" + std::string(rec.name) +
- "\" is already registered!");
-
- m_ptr = make_new_python_type(rec);
-
- /* Register supplemental type information in C++ dict */
- auto *tinfo = new detail::type_info();
- tinfo->type = (PyTypeObject *) m_ptr;
- tinfo->cpptype = rec.type;
- tinfo->type_size = rec.type_size;
- tinfo->type_align = rec.type_align;
- tinfo->operator_new = rec.operator_new;
- tinfo->holder_size_in_ptrs = size_in_ptrs(rec.holder_size);
- tinfo->init_instance = rec.init_instance;
- tinfo->dealloc = rec.dealloc;
- tinfo->simple_type = true;
- tinfo->simple_ancestors = true;
- tinfo->default_holder = rec.default_holder;
- tinfo->module_local = rec.module_local;
-
- auto &internals = get_internals();
- auto tindex = std::type_index(*rec.type);
- tinfo->direct_conversions = &internals.direct_conversions[tindex];
- if (rec.module_local)
- registered_local_types_cpp()[tindex] = tinfo;
- else
- internals.registered_types_cpp[tindex] = tinfo;
- internals.registered_types_py[(PyTypeObject *) m_ptr] = { tinfo };
-
- if (rec.bases.size() > 1 || rec.multiple_inheritance) {
- mark_parents_nonsimple(tinfo->type);
- tinfo->simple_ancestors = false;
- }
- else if (rec.bases.size() == 1) {
- auto parent_tinfo = get_type_info((PyTypeObject *) rec.bases[0].ptr());
- tinfo->simple_ancestors = parent_tinfo->simple_ancestors;
- }
-
- if (rec.module_local) {
- // Stash the local typeinfo and loader so that external modules can access it.
- tinfo->module_local_load = &type_caster_generic::local_load;
- setattr(m_ptr, PYBIND11_MODULE_LOCAL_ID, capsule(tinfo));
- }
- }
-
- /// Helper function which tags all parents of a type using mult. inheritance
- void mark_parents_nonsimple(PyTypeObject *value) {
- auto t = reinterpret_borrow(value->tp_bases);
- for (handle h : t) {
- auto tinfo2 = get_type_info((PyTypeObject *) h.ptr());
- if (tinfo2)
- tinfo2->simple_type = false;
- mark_parents_nonsimple((PyTypeObject *) h.ptr());
- }
- }
-
- void install_buffer_funcs(
- buffer_info *(*get_buffer)(PyObject *, void *),
- void *get_buffer_data) {
- PyHeapTypeObject *type = (PyHeapTypeObject*) m_ptr;
- auto tinfo = detail::get_type_info(&type->ht_type);
-
- if (!type->ht_type.tp_as_buffer)
- pybind11_fail(
- "To be able to register buffer protocol support for the type '" +
- std::string(tinfo->type->tp_name) +
- "' the associated class<>(..) invocation must "
- "include the pybind11::buffer_protocol() annotation!");
-
- tinfo->get_buffer = get_buffer;
- tinfo->get_buffer_data = get_buffer_data;
- }
-
- // rec_func must be set for either fget or fset.
- void def_property_static_impl(const char *name,
- handle fget, handle fset,
- detail::function_record *rec_func) {
- const auto is_static = rec_func && !(rec_func->is_method && rec_func->scope);
- const auto has_doc = rec_func && rec_func->doc && pybind11::options::show_user_defined_docstrings();
- auto property = handle((PyObject *) (is_static ? get_internals().static_property_type
- : &PyProperty_Type));
- attr(name) = property(fget.ptr() ? fget : none(),
- fset.ptr() ? fset : none(),
- /*deleter*/none(),
- pybind11::str(has_doc ? rec_func->doc : ""));
- }
-};
-
-/// Set the pointer to operator new if it exists. The cast is needed because it can be overloaded.
-template (T::operator new))>>
-void set_operator_new(type_record *r) { r->operator_new = &T::operator new; }
-
-template void set_operator_new(...) { }
-
-template struct has_operator_delete : std::false_type { };
-template struct has_operator_delete(T::operator delete))>>
- : std::true_type { };
-template struct has_operator_delete_size : std::false_type { };
-template struct has_operator_delete_size(T::operator delete))>>
- : std::true_type { };
-/// Call class-specific delete if it exists or global otherwise. Can also be an overload set.
-template ::value, int> = 0>
-void call_operator_delete(T *p, size_t, size_t) { T::operator delete(p); }
-template ::value && has_operator_delete_size::value, int> = 0>
-void call_operator_delete(T *p, size_t s, size_t) { T::operator delete(p, s); }
-
-inline void call_operator_delete(void *p, size_t s, size_t a) {
- (void)s; (void)a;
- #if defined(__cpp_aligned_new) && (!defined(_MSC_VER) || _MSC_VER >= 1912)
- if (a > __STDCPP_DEFAULT_NEW_ALIGNMENT__) {
- #ifdef __cpp_sized_deallocation
- ::operator delete(p, s, std::align_val_t(a));
- #else
- ::operator delete(p, std::align_val_t(a));
- #endif
- return;
- }
- #endif
- #ifdef __cpp_sized_deallocation
- ::operator delete(p, s);
- #else
- ::operator delete(p);
- #endif
-}
-
-inline void add_class_method(object& cls, const char *name_, const cpp_function &cf) {
- cls.attr(cf.name()) = cf;
- if (strcmp(name_, "__eq__") == 0 && !cls.attr("__dict__").contains("__hash__")) {
- cls.attr("__hash__") = none();
- }
-}
-
-PYBIND11_NAMESPACE_END(detail)
-
-/// Given a pointer to a member function, cast it to its `Derived` version.
-/// Forward everything else unchanged.
-template
-auto method_adaptor(F &&f) -> decltype(std::forward(f)) { return std::forward(f); }
-
-template
-auto method_adaptor(Return (Class::*pmf)(Args...)) -> Return (Derived::*)(Args...) {
- static_assert(detail::is_accessible_base_of::value,
- "Cannot bind an inaccessible base class method; use a lambda definition instead");
- return pmf;
-}
-
-template
-auto method_adaptor(Return (Class::*pmf)(Args...) const) -> Return (Derived::*)(Args...) const {
- static_assert(detail::is_accessible_base_of::value,
- "Cannot bind an inaccessible base class method; use a lambda definition instead");
- return pmf;
-}
-
-template
-class class_ : public detail::generic_type {
- template using is_holder = detail::is_holder_type;
- template using is_subtype = detail::is_strict_base_of;
- template using is_base = detail::is_strict_base_of;
- // struct instead of using here to help MSVC:
- template struct is_valid_class_option :
- detail::any_of, is_subtype, is_base> {};
-
-public:
- using type = type_;
- using type_alias = detail::exactly_one_t;
- constexpr static bool has_alias = !std::is_void::value;
- using holder_type = detail::exactly_one_t, options...>;
-
- static_assert(detail::all_of...>::value,
- "Unknown/invalid class_ template parameters provided");
-
- static_assert(!has_alias || std::is_polymorphic::value,
- "Cannot use an alias class with a non-polymorphic type");
-
- PYBIND11_OBJECT(class_, generic_type, PyType_Check)
-
- template <typename... Extra>
- class_(handle scope, const char *name, const Extra &... extra) {
- using namespace detail;
-
- // MI can only be specified via class_ template options, not constructor parameters
- static_assert(
- none_of<is_pyobject<Extra>...>::value || // no base class arguments, or:
- ( constexpr_sum(is_pyobject<Extra>::value...) == 1 && // Exactly one base
- constexpr_sum(is_base<options>::value...) == 0 && // no template option bases
- none_of<std::is_same<multiple_inheritance, Extra>...>::value), // no multiple_inheritance attr
- "Error: multiple inheritance bases must be specified via class_ template options");
-
- type_record record;
- record.scope = scope;
- record.name = name;
- record.type = &typeid(type);
- record.type_size = sizeof(conditional_t<has_alias, type_alias, type>);
- record.type_align = alignof(conditional_t<has_alias, type_alias, type>&);
- record.holder_size = sizeof(holder_type);
- record.init_instance = init_instance;
- record.dealloc = dealloc;
- record.default_holder = detail::is_instantiation<std::unique_ptr, holder_type>::value;
-
- set_operator_new<type>(&record);
-
- /* Register base classes specified via template arguments to class_, if any */
- PYBIND11_EXPAND_SIDE_EFFECTS(add_base<options>(record));
-
- /* Process optional arguments, if any */
- process_attributes<Extra...>::init(extra..., &record);
-
- generic_type::initialize(record);
-
- if (has_alias) {
- auto &instances = record.module_local ? registered_local_types_cpp() : get_internals().registered_types_cpp;
- instances[std::type_index(typeid(type_alias))] = instances[std::type_index(typeid(type))];
- }
- }
-
- template <typename Base, detail::enable_if_t<is_base<Base>::value, int> = 0>
- static void add_base(detail::type_record &rec) {
- rec.add_base(typeid(Base), [](void *src) -> void * {
- return static_cast<Base *>(reinterpret_cast<type *>(src));
- });
- }
-
- template <typename Base, detail::enable_if_t<!is_base<Base>::value, int> = 0>
- static void add_base(detail::type_record &) { }
-
- template <typename Func, typename... Extra>
- class_ &def(const char *name_, Func&& f, const Extra&... extra) {
- cpp_function cf(method_adaptor<type>(std::forward<Func>(f)), name(name_), is_method(*this),
- sibling(getattr(*this, name_, none())), extra...);
- add_class_method(*this, name_, cf);
- return *this;
- }
-
- template <typename Func, typename... Extra> class_ &
- def_static(const char *name_, Func &&f, const Extra&... extra) {
- static_assert(!std::is_member_function_pointer<Func>::value,
- "def_static(...) called with a non-static member function pointer");
- cpp_function cf(std::forward<Func>(f), name(name_), scope(*this),
- sibling(getattr(*this, name_, none())), extra...);
- attr(cf.name()) = staticmethod(cf);
- return *this;
- }
-
- template <detail::op_id id, detail::op_type ot, typename L, typename R, typename... Extra>
- class_ &def(const detail::op_<id, ot, L, R> &op, const Extra&... extra) {
- op.execute(*this, extra...);
- return *this;
- }
-
- template <detail::op_id id, detail::op_type ot, typename L, typename R, typename... Extra>
- class_ & def_cast(const detail::op_<id, ot, L, R> &op, const Extra&... extra) {
- op.execute_cast(*this, extra...);
- return *this;
- }
-
- template <typename... Args, typename... Extra>
- class_ &def(const detail::initimpl::constructor<Args...> &init, const Extra&... extra) {
- init.execute(*this, extra...);
- return *this;
- }
-
- template <typename... Args, typename... Extra>
- class_ &def(const detail::initimpl::alias_constructor<Args...> &init, const Extra&... extra) {
- init.execute(*this, extra...);
- return *this;
- }
-
- template <typename... Args, typename... Extra>
- class_ &def(detail::initimpl::factory<Args...> &&init, const Extra&... extra) {
- std::move(init).execute(*this, extra...);
- return *this;
- }
-
- template <typename... Args, typename... Extra>
- class_ &def(detail::initimpl::pickle_factory<Args...> &&pf, const Extra &...extra) {
- std::move(pf).execute(*this, extra...);
- return *this;
- }
-
- template <typename Func> class_& def_buffer(Func &&func) {
- struct capture { Func func; };
- capture *ptr = new capture { std::forward<Func>(func) };
- install_buffer_funcs([](PyObject *obj, void *ptr) -> buffer_info* {
- detail::make_caster<type> caster;
- if (!caster.load(obj, false))
- return nullptr;
- return new buffer_info(((capture *) ptr)->func(caster));
- }, ptr);
- return *this;
- }
-
- template <typename Return, typename Class, typename... Args>
- class_ &def_buffer(Return (Class::*func)(Args...)) {
- return def_buffer([func] (type &obj) { return (obj.*func)(); });
- }
-
- template <typename Return, typename Class, typename... Args>
- class_ &def_buffer(Return (Class::*func)(Args...) const) {
- return def_buffer([func] (const type &obj) { return (obj.*func)(); });
- }
-
- template <typename C, typename D, typename... Extra>
- class_ &def_readwrite(const char *name, D C::*pm, const Extra&... extra) {
- static_assert(std::is_same<C, type>::value || std::is_base_of<C, type>::value, "def_readwrite() requires a class member (or base class member)");
- cpp_function fget([pm](const type &c) -> const D &{ return c.*pm; }, is_method(*this)),
- fset([pm](type &c, const D &value) { c.*pm = value; }, is_method(*this));
- def_property(name, fget, fset, return_value_policy::reference_internal, extra...);
- return *this;
- }
-
- template <typename C, typename D, typename... Extra>
- class_ &def_readonly(const char *name, const D C::*pm, const Extra& ...extra) {
- static_assert(std::is_same<C, type>::value || std::is_base_of<C, type>::value, "def_readonly() requires a class member (or base class member)");
- cpp_function fget([pm](const type &c) -> const D &{ return c.*pm; }, is_method(*this));
- def_property_readonly(name, fget, return_value_policy::reference_internal, extra...);
- return *this;
- }
-
- template <typename D, typename... Extra>
- class_ &def_readwrite_static(const char *name, D *pm, const Extra& ...extra) {
- cpp_function fget([pm](object) -> const D &{ return *pm; }, scope(*this)),
- fset([pm](object, const D &value) { *pm = value; }, scope(*this));
- def_property_static(name, fget, fset, return_value_policy::reference, extra...);
- return *this;
- }
-
- template <typename D, typename... Extra>
- class_ &def_readonly_static(const char *name, const D *pm, const Extra& ...extra) {
- cpp_function fget([pm](object) -> const D &{ return *pm; }, scope(*this));
- def_property_readonly_static(name, fget, return_value_policy::reference, extra...);
- return *this;
- }
-
- /// Uses return_value_policy::reference_internal by default
- template <typename Getter, typename... Extra>
- class_ &def_property_readonly(const char *name, const Getter &fget, const Extra& ...extra) {
- return def_property_readonly(name, cpp_function(method_adaptor<type>(fget)),
- return_value_policy::reference_internal, extra...);
- }
-
- /// Uses cpp_function's return_value_policy by default
- template <typename... Extra>
- class_ &def_property_readonly(const char *name, const cpp_function &fget, const Extra& ...extra) {
- return def_property(name, fget, nullptr, extra...);
- }
-
- /// Uses return_value_policy::reference by default
- template <typename Getter, typename... Extra>
- class_ &def_property_readonly_static(const char *name, const Getter &fget, const Extra& ...extra) {
- return def_property_readonly_static(name, cpp_function(fget), return_value_policy::reference, extra...);
- }
-
- /// Uses cpp_function's return_value_policy by default
- template <typename... Extra>
- class_ &def_property_readonly_static(const char *name, const cpp_function &fget, const Extra& ...extra) {
- return def_property_static(name, fget, nullptr, extra...);
- }
-
- /// Uses return_value_policy::reference_internal by default
- template <typename Getter, typename Setter, typename... Extra>
- class_ &def_property(const char *name, const Getter &fget, const Setter &fset, const Extra& ...extra) {
- return def_property(name, fget, cpp_function(method_adaptor<type>(fset)), extra...);
- }
- template <typename Getter, typename... Extra>
- class_ &def_property(const char *name, const Getter &fget, const cpp_function &fset, const Extra& ...extra) {
- return def_property(name, cpp_function(method_adaptor<type>(fget)), fset,
- return_value_policy::reference_internal, extra...);
- }
-
- /// Uses cpp_function's return_value_policy by default
- template <typename... Extra>
- class_ &def_property(const char *name, const cpp_function &fget, const cpp_function &fset, const Extra& ...extra) {
- return def_property_static(name, fget, fset, is_method(*this), extra...);
- }
-
- /// Uses return_value_policy::reference by default
- template <typename Getter, typename... Extra>
- class_ &def_property_static(const char *name, const Getter &fget, const cpp_function &fset, const Extra& ...extra) {
- return def_property_static(name, cpp_function(fget), fset, return_value_policy::reference, extra...);
- }
-
- /// Uses cpp_function's return_value_policy by default
- template <typename... Extra>
- class_ &def_property_static(const char *name, const cpp_function &fget, const cpp_function &fset, const Extra& ...extra) {
- static_assert( 0 == detail::constexpr_sum(std::is_base_of<arg, Extra>::value...),
- "Argument annotations are not allowed for properties");
- auto rec_fget = get_function_record(fget), rec_fset = get_function_record(fset);
- auto *rec_active = rec_fget;
- if (rec_fget) {
- char *doc_prev = rec_fget->doc; /* 'extra' field may include a property-specific documentation string */
- detail::process_attributes<Extra...>::init(extra..., rec_fget);
- if (rec_fget->doc && rec_fget->doc != doc_prev) {
- free(doc_prev);
- rec_fget->doc = strdup(rec_fget->doc);
- }
- }
- if (rec_fset) {
- char *doc_prev = rec_fset->doc;
- detail::process_attributes<Extra...>::init(extra..., rec_fset);
- if (rec_fset->doc && rec_fset->doc != doc_prev) {
- free(doc_prev);
- rec_fset->doc = strdup(rec_fset->doc);
- }
- if (! rec_active) rec_active = rec_fset;
- }
- def_property_static_impl(name, fget, fset, rec_active);
- return *this;
- }
-
-private:
- /// Initialize holder object, variant 1: object derives from enable_shared_from_this
- template <typename T>
- static void init_holder(detail::instance *inst, detail::value_and_holder &v_h,
- const holder_type * /* unused */, const std::enable_shared_from_this<T> * /* dummy */) {
- try {
- auto sh = std::dynamic_pointer_cast<typename holder_type::element_type>(
- v_h.value_ptr<type>()->shared_from_this());
- if (sh) {
- new (std::addressof(v_h.holder<holder_type>())) holder_type(std::move(sh));
- v_h.set_holder_constructed();
- }
- } catch (const std::bad_weak_ptr &) {}
-
- if (!v_h.holder_constructed() && inst->owned) {
- new (std::addressof(v_h.holder<holder_type>())) holder_type(v_h.value_ptr<type>());
- v_h.set_holder_constructed();
- }
- }
-
- static void init_holder_from_existing(const detail::value_and_holder &v_h,
- const holder_type *holder_ptr, std::true_type /*is_copy_constructible*/) {
- new (std::addressof(v_h.holder<holder_type>())) holder_type(*reinterpret_cast<const holder_type *>(holder_ptr));
- }
-
- static void init_holder_from_existing(const detail::value_and_holder &v_h,
- const holder_type *holder_ptr, std::false_type /*is_copy_constructible*/) {
- new (std::addressof(v_h.holder<holder_type>())) holder_type(std::move(*const_cast<holder_type *>(holder_ptr)));
- }
-
- /// Initialize holder object, variant 2: try to construct from existing holder object, if possible
- static void init_holder(detail::instance *inst, detail::value_and_holder &v_h,
- const holder_type *holder_ptr, const void * /* dummy -- not enable_shared_from_this<T>) */) {
- if (holder_ptr) {
- init_holder_from_existing(v_h, holder_ptr, std::is_copy_constructible<holder_type>());
- v_h.set_holder_constructed();
- } else if (inst->owned || detail::always_construct_holder<holder_type>::value) {
- new (std::addressof(v_h.holder<holder_type>())) holder_type(v_h.value_ptr<type>());
- v_h.set_holder_constructed();
- }
- }
-
- /// Performs instance initialization including constructing a holder and registering the known
- /// instance. Should be called as soon as the `type` value_ptr is set for an instance. Takes an
- /// optional pointer to an existing holder to use; if not specified and the instance is
- /// `.owned`, a new holder will be constructed to manage the value pointer.
- static void init_instance(detail::instance *inst, const void *holder_ptr) {
- auto v_h = inst->get_value_and_holder(detail::get_type_info(typeid(type)));
- if (!v_h.instance_registered()) {
- register_instance(inst, v_h.value_ptr(), v_h.type);
- v_h.set_instance_registered();
- }
- init_holder(inst, v_h, (const holder_type *) holder_ptr, v_h.value_ptr<type>());
- }
-
- /// Deallocates an instance; via holder, if constructed; otherwise via operator delete.
- static void dealloc(detail::value_and_holder &v_h) {
- // We could be deallocating because we are cleaning up after a Python exception.
- // If so, the Python error indicator will be set. We need to clear that before
- // running the destructor, in case the destructor code calls more Python.
- // If we don't, the Python API will exit with an exception, and pybind11 will
- // throw error_already_set from the C++ destructor which is forbidden and triggers
- // std::terminate().
- error_scope scope;
- if (v_h.holder_constructed()) {
- v_h.holder().~holder_type();
- v_h.set_holder_constructed(false);
- }
- else {
- detail::call_operator_delete(v_h.value_ptr(),
- v_h.type->type_size,
- v_h.type->type_align
- );
- }
- v_h.value_ptr() = nullptr;
- }
-
- static detail::function_record *get_function_record(handle h) {
- h = detail::get_function(h);
- return h ? (detail::function_record *) reinterpret_borrow<capsule>(PyCFunction_GET_SELF(h.ptr()))
- : nullptr;
- }
-};
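-// Illustrative usage sketch of the class_ API defined above, covering def(), def_readwrite(),
-// def_property_readonly() and def_static(). The Widget type and module name are hypothetical;
-// shown standalone, as it would appear in its own extension module source.
-#include <pybind11/pybind11.h>
-#include <string>
-#include <utility>
-namespace py = pybind11;
-
-struct Widget {
-    explicit Widget(std::string label) : label(std::move(label)) {}
-    int twice(int x) const { return 2 * x; }
-    static int count() { return 42; }
-    std::string label;
-};
-
-PYBIND11_MODULE(class_sketch, m) {
-    py::class_<Widget>(m, "Widget")
-        .def(py::init<std::string>())
-        .def("twice", &Widget::twice)              // wrapped via cpp_function + method_adaptor
-        .def_readwrite("label", &Widget::label)    // generates the fget/fset pair shown above
-        .def_property_readonly("label_length",
-            [](const Widget &w) { return w.label.size(); })
-        .def_static("count", &Widget::count);      // registered through staticmethod()
-}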
-
-/// Binds an existing constructor taking arguments Args...
-template <typename... Args> detail::initimpl::constructor<Args...> init() { return {}; }
-/// Like `init<Args...>()`, but the instance is always constructed through the alias class (even
-/// when not inheriting on the Python side).
-template <typename... Args> detail::initimpl::alias_constructor<Args...> init_alias() { return {}; }
-
-/// Binds a factory function as a constructor
-template <typename Func, typename Ret = detail::initimpl::factory<Func>>
-Ret init(Func &&f) { return {std::forward<Func>(f)}; }
-
-/// Dual-argument factory function: the first function is called when no alias is needed, the second
-/// when an alias is needed (i.e. due to python-side inheritance). Arguments must be identical.
-template <typename CFunc, typename AFunc, typename Ret = detail::initimpl::factory<CFunc, AFunc>>
-Ret init(CFunc &&c, AFunc &&a) {
- return {std::forward<CFunc>(c), std::forward<AFunc>(a)};
-}
-
-/// Binds pickling functions `__getstate__` and `__setstate__` and ensures that the type
-/// returned by `__getstate__` is the same as the argument accepted by `__setstate__`.
-template <typename GetState, typename SetState>
-detail::initimpl::pickle_factory<GetState, SetState> pickle(GetState &&g, SetState &&s) {
- return {std::forward<GetState>(g), std::forward<SetState>(s)};
-}
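-// Illustrative usage sketch of the init() factory overloads and pickle() defined above.
-// The Point type and module name are hypothetical; shown standalone.
-#include <pybind11/pybind11.h>
-namespace py = pybind11;
-
-struct Point {
-    Point(int x, int y) : x(x), y(y) {}
-    int x, y;
-};
-
-PYBIND11_MODULE(init_sketch, m) {
-    py::class_<Point>(m, "Point")
-        .def(py::init<int, int>())                        // bind an existing constructor
-        .def(py::init([](int v) { return Point{v, v}; })) // bind a factory function
-        .def(py::pickle(
-            [](const Point &p) { return py::make_tuple(p.x, p.y); },              // __getstate__
-            [](py::tuple t) { return Point{t[0].cast<int>(), t[1].cast<int>()}; } // __setstate__
-        ))
-        .def_readwrite("x", &Point::x)
-        .def_readwrite("y", &Point::y);
-}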
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-struct enum_base {
- enum_base(handle base, handle parent) : m_base(base), m_parent(parent) { }
-
- PYBIND11_NOINLINE void init(bool is_arithmetic, bool is_convertible) {
- m_base.attr("__entries") = dict();
- auto property = handle((PyObject *) &PyProperty_Type);
- auto static_property = handle((PyObject *) get_internals().static_property_type);
-
- m_base.attr("__repr__") = cpp_function(
- [](handle arg) -> str {
- handle type = arg.get_type();
- object type_name = type.attr("__name__");
- dict entries = type.attr("__entries");
- for (const auto &kv : entries) {
- object other = kv.second[int_(0)];
- if (other.equal(arg))
- return pybind11::str("{}.{}").format(type_name, kv.first);
- }
- return pybind11::str("{}.???").format(type_name);
- }, name("__repr__"), is_method(m_base)
- );
-
- m_base.attr("name") = property(cpp_function(
- [](handle arg) -> str {
- dict entries = arg.get_type().attr("__entries");
- for (const auto &kv : entries) {
- if (handle(kv.second[int_(0)]).equal(arg))
- return pybind11::str(kv.first);
- }
- return "???";
- }, name("name"), is_method(m_base)
- ));
-
- m_base.attr("__doc__") = static_property(cpp_function(
- [](handle arg) -> std::string {
- std::string docstring;
- dict entries = arg.attr("__entries");
- if (((PyTypeObject *) arg.ptr())->tp_doc)
- docstring += std::string(((PyTypeObject *) arg.ptr())->tp_doc) + "\n\n";
- docstring += "Members:";
- for (const auto &kv : entries) {
- auto key = std::string(pybind11::str(kv.first));
- auto comment = kv.second[int_(1)];
- docstring += "\n\n " + key;
- if (!comment.is_none())
- docstring += " : " + (std::string) pybind11::str(comment);
- }
- return docstring;
- }, name("__doc__")
- ), none(), none(), "");
-
- m_base.attr("__members__") = static_property(cpp_function(
- [](handle arg) -> dict {
- dict entries = arg.attr("__entries"), m;
- for (const auto &kv : entries)
- m[kv.first] = kv.second[int_(0)];
- return m;
- }, name("__members__")), none(), none(), ""
- );
-
- #define PYBIND11_ENUM_OP_STRICT(op, expr, strict_behavior) \
- m_base.attr(op) = cpp_function( \
- [](object a, object b) { \
- if (!a.get_type().is(b.get_type())) \
- strict_behavior; \
- return expr; \
- }, \
- name(op), is_method(m_base))
-
- #define PYBIND11_ENUM_OP_CONV(op, expr) \
- m_base.attr(op) = cpp_function( \
- [](object a_, object b_) { \
- int_ a(a_), b(b_); \
- return expr; \
- }, \
- name(op), is_method(m_base))
-
- #define PYBIND11_ENUM_OP_CONV_LHS(op, expr) \
- m_base.attr(op) = cpp_function( \
- [](object a_, object b) { \
- int_ a(a_); \
- return expr; \
- }, \
- name(op), is_method(m_base))
-
- if (is_convertible) {
- PYBIND11_ENUM_OP_CONV_LHS("__eq__", !b.is_none() && a.equal(b));
- PYBIND11_ENUM_OP_CONV_LHS("__ne__", b.is_none() || !a.equal(b));
-
- if (is_arithmetic) {
- PYBIND11_ENUM_OP_CONV("__lt__", a < b);
- PYBIND11_ENUM_OP_CONV("__gt__", a > b);
- PYBIND11_ENUM_OP_CONV("__le__", a <= b);
- PYBIND11_ENUM_OP_CONV("__ge__", a >= b);
- PYBIND11_ENUM_OP_CONV("__and__", a & b);
- PYBIND11_ENUM_OP_CONV("__rand__", a & b);
- PYBIND11_ENUM_OP_CONV("__or__", a | b);
- PYBIND11_ENUM_OP_CONV("__ror__", a | b);
- PYBIND11_ENUM_OP_CONV("__xor__", a ^ b);
- PYBIND11_ENUM_OP_CONV("__rxor__", a ^ b);
- m_base.attr("__invert__") = cpp_function(
- [](object arg) { return ~(int_(arg)); }, name("__invert__"), is_method(m_base));
- }
- } else {
- PYBIND11_ENUM_OP_STRICT("__eq__", int_(a).equal(int_(b)), return false);
- PYBIND11_ENUM_OP_STRICT("__ne__", !int_(a).equal(int_(b)), return true);
-
- if (is_arithmetic) {
- #define PYBIND11_THROW throw type_error("Expected an enumeration of matching type!");
- PYBIND11_ENUM_OP_STRICT("__lt__", int_(a) < int_(b), PYBIND11_THROW);
- PYBIND11_ENUM_OP_STRICT("__gt__", int_(a) > int_(b), PYBIND11_THROW);
- PYBIND11_ENUM_OP_STRICT("__le__", int_(a) <= int_(b), PYBIND11_THROW);
- PYBIND11_ENUM_OP_STRICT("__ge__", int_(a) >= int_(b), PYBIND11_THROW);
- #undef PYBIND11_THROW
- }
- }
-
- #undef PYBIND11_ENUM_OP_CONV_LHS
- #undef PYBIND11_ENUM_OP_CONV
- #undef PYBIND11_ENUM_OP_STRICT
-
- m_base.attr("__getstate__") = cpp_function(
- [](object arg) { return int_(arg); }, name("__getstate__"), is_method(m_base));
-
- m_base.attr("__hash__") = cpp_function(
- [](object arg) { return int_(arg); }, name("__hash__"), is_method(m_base));
- }
-
- PYBIND11_NOINLINE void value(char const* name_, object value, const char *doc = nullptr) {
- dict entries = m_base.attr("__entries");
- str name(name_);
- if (entries.contains(name)) {
- std::string type_name = (std::string) str(m_base.attr("__name__"));
- throw value_error(type_name + ": element \"" + std::string(name_) + "\" already exists!");
- }
-
- entries[name] = std::make_pair(value, doc);
- m_base.attr(name) = value;
- }
-
- PYBIND11_NOINLINE void export_values() {
- dict entries = m_base.attr("__entries");
- for (const auto &kv : entries)
- m_parent.attr(kv.first) = kv.second[int_(0)];
- }
-
- handle m_base;
- handle m_parent;
-};
-
-PYBIND11_NAMESPACE_END(detail)
-
-/// Binds C++ enumerations and enumeration classes to Python
-template <typename Type> class enum_ : public class_<Type> {
-public:
- using Base = class_<Type>;
- using Base::def;
- using Base::attr;
- using Base::def_property_readonly;
- using Base::def_property_readonly_static;
- using Scalar = typename std::underlying_type<Type>::type;
-
- template <typename... Extra>
- enum_(const handle &scope, const char *name, const Extra&... extra)
- : class_<Type>(scope, name, extra...), m_base(*this, scope) {
- constexpr bool is_arithmetic = detail::any_of<std::is_same<arithmetic, Extra>...>::value;
- constexpr bool is_convertible = std::is_convertible<Type, Scalar>::value;
- m_base.init(is_arithmetic, is_convertible);
-
- def(init([](Scalar i) { return static_cast<Type>(i); }));
- def("__int__", [](Type value) { return (Scalar) value; });
- #if PY_MAJOR_VERSION < 3
- def("__long__", [](Type value) { return (Scalar) value; });
- #endif
- #if PY_MAJOR_VERSION > 3 || (PY_MAJOR_VERSION == 3 && PY_MINOR_VERSION >= 8)
- def("__index__", [](Type value) { return (Scalar) value; });
- #endif
-
- attr("__setstate__") = cpp_function(
- [](detail::value_and_holder &v_h, Scalar arg) {
- detail::initimpl::setstate<Base>(v_h, static_cast<Type>(arg),
- Py_TYPE(v_h.inst) != v_h.type->type); },
- detail::is_new_style_constructor(),
- pybind11::name("__setstate__"), is_method(*this));
- }
-
- /// Export enumeration entries into the parent scope
- enum_& export_values() {
- m_base.export_values();
- return *this;
- }
-
- /// Add an enumeration entry
- enum_& value(char const* name, Type value, const char *doc = nullptr) {
- m_base.value(name, pybind11::cast(value, return_value_policy::copy), doc);
- return *this;
- }
-
-private:
- detail::enum_base m_base;
-};
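-// Illustrative usage sketch of the enum_ wrapper defined above. The Pet::Kind enumeration
-// and module name are hypothetical; shown standalone.
-#include <pybind11/pybind11.h>
-namespace py = pybind11;
-
-struct Pet {
-    enum class Kind { Dog, Cat };
-    Kind kind = Kind::Dog;
-};
-
-PYBIND11_MODULE(enum_sketch, m) {
-    py::class_<Pet> pet(m, "Pet");
-    pet.def(py::init<>())
-       .def_readwrite("kind", &Pet::kind);
-
-    py::enum_<Pet::Kind>(pet, "Kind")     // scoped inside the Pet binding
-        .value("Dog", Pet::Kind::Dog)     // forwards to detail::enum_base::value()
-        .value("Cat", Pet::Kind::Cat)
-        .export_values();                 // re-exports Dog/Cat into the Pet scope
-}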
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-
-
-inline void keep_alive_impl(handle nurse, handle patient) {
- if (!nurse || !patient)
- pybind11_fail("Could not activate keep_alive!");
-
- if (patient.is_none() || nurse.is_none())
- return; /* Nothing to keep alive or nothing to be kept alive by */
-
- auto tinfo = all_type_info(Py_TYPE(nurse.ptr()));
- if (!tinfo.empty()) {
- /* It's a pybind-registered type, so we can store the patient in the
- * internal list. */
- add_patient(nurse.ptr(), patient.ptr());
- }
- else {
- /* Fall back to clever approach based on weak references taken from
- * Boost.Python. This is not used for pybind-registered types because
- * the objects can be destroyed out-of-order in a GC pass. */
- cpp_function disable_lifesupport(
- [patient](handle weakref) { patient.dec_ref(); weakref.dec_ref(); });
-
- weakref wr(nurse, disable_lifesupport);
-
- patient.inc_ref(); /* reference patient and leak the weak reference */
- (void) wr.release();
- }
-}
-
-PYBIND11_NOINLINE inline void keep_alive_impl(size_t Nurse, size_t Patient, function_call &call, handle ret) {
- auto get_arg = [&](size_t n) {
- if (n == 0)
- return ret;
- else if (n == 1 && call.init_self)
- return call.init_self;
- else if (n <= call.args.size())
- return call.args[n - 1];
- return handle();
- };
-
- keep_alive_impl(get_arg(Nurse), get_arg(Patient));
-}
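-// Illustrative usage sketch: keep_alive_impl() above is what the public py::keep_alive<Nurse,
-// Patient> call policy ultimately invokes; index 0 refers to the return value and index 1 to
-// the first (implicit self) argument. The Child/Nursery types and module name are hypothetical.
-#include <pybind11/pybind11.h>
-#include <vector>
-namespace py = pybind11;
-
-struct Child { };
-struct Nursery {
-    void add(Child *c) { children.push_back(c); }
-    std::vector<Child *> children;
-};
-
-PYBIND11_MODULE(keep_alive_sketch, m) {
-    py::class_<Child>(m, "Child").def(py::init<>());
-    py::class_<Nursery>(m, "Nursery")
-        .def(py::init<>())
-        // Keep the Child (argument 2) alive at least as long as the Nursery (argument 1):
-        .def("add", &Nursery::add, py::keep_alive<1, 2>());
-}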
-
-inline std::pair<decltype(internals::registered_types_py)::iterator, bool> all_type_info_get_cache(PyTypeObject *type) {
- auto res = get_internals().registered_types_py
-#ifdef __cpp_lib_unordered_map_try_emplace
- .try_emplace(type);
-#else
- .emplace(type, std::vector<type_info *>());
-#endif
- if (res.second) {
- // New cache entry created; set up a weak reference to automatically remove it if the type
- // gets destroyed:
- weakref((PyObject *) type, cpp_function([type](handle wr) {
- get_internals().registered_types_py.erase(type);
- wr.dec_ref();
- })).release();
- }
-
- return res;
-}
-
-template <typename Iterator, typename Sentinel, bool KeyIterator, return_value_policy Policy>
-struct iterator_state {
- Iterator it;
- Sentinel end;
- bool first_or_done;
-};
-
-PYBIND11_NAMESPACE_END(detail)
-
-/// Makes a python iterator from a first and past-the-end C++ InputIterator.
-template <return_value_policy Policy = return_value_policy::reference_internal, typename Iterator, typename Sentinel, typename ValueType = decltype(*std::declval<Iterator>()),
- typename... Extra>
-iterator make_iterator(Iterator first, Sentinel last, Extra &&... extra) {
- typedef detail::iterator_state<Iterator, Sentinel, false, Policy> state;
-
- if (!detail::get_type_info(typeid(state), false)) {
- class_<state>(handle(), "iterator", pybind11::module_local())
- .def("__iter__", [](state &s) -> state& { return s; })
- .def("__next__", [](state &s) -> ValueType {
- if (!s.first_or_done)
- ++s.it;
- else
- s.first_or_done = false;
- if (s.it == s.end) {
- s.first_or_done = true;
- throw stop_iteration();
- }
- return *s.it;
- }, std::forward<Extra>(extra)..., Policy);
- }
-
- return cast(state{first, last, true});
-}
-
-/// Makes a python iterator over the keys (`.first`) of an iterator over pairs from a
-/// first and past-the-end InputIterator.
-template <return_value_policy Policy = return_value_policy::reference_internal, typename Iterator, typename Sentinel, typename KeyType = decltype((*std::declval<Iterator>()).first),
- typename... Extra>
-iterator make_key_iterator(Iterator first, Sentinel last, Extra &&... extra) {
- typedef detail::iterator_state<Iterator, Sentinel, true, Policy> state;
-
- if (!detail::get_type_info(typeid(state), false)) {
- class_<state>(handle(), "iterator", pybind11::module_local())
- .def("__iter__", [](state &s) -> state& { return s; })
- .def("__next__", [](state &s) -> KeyType {
- if (!s.first_or_done)
- ++s.it;
- else
- s.first_or_done = false;
- if (s.it == s.end) {
- s.first_or_done = true;
- throw stop_iteration();
- }
- return (*s.it).first;
- }, std::forward<Extra>(extra)..., Policy);
- }
-
- return cast(state{first, last, true});
-}
-
-/// Makes an iterator over values of an stl container or other container supporting
-/// `std::begin()`/`std::end()`
-template <return_value_policy Policy = return_value_policy::reference_internal, typename Type, typename... Extra> iterator make_iterator(Type &value, Extra&&... extra) {
- return make_iterator<Policy>(std::begin(value), std::end(value), extra...);
-}
-
-/// Makes an iterator over the keys (`.first`) of a stl map-like container supporting
-/// `std::begin()`/`std::end()`
-template <return_value_policy Policy = return_value_policy::reference_internal, typename Type, typename... Extra> iterator make_key_iterator(Type &value, Extra&&... extra) {
- return make_key_iterator<Policy>(std::begin(value), std::end(value), extra...);
-}
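-// Illustrative usage sketch: make_iterator() above is typically paired with keep_alive<0, 1>
-// so the container outlives the Python iterator that references it. The StringList type and
-// module name are hypothetical; shown standalone.
-#include <pybind11/pybind11.h>
-#include <string>
-#include <vector>
-namespace py = pybind11;
-
-struct StringList {
-    std::vector<std::string> items{"alpha", "beta", "gamma"};
-};
-
-PYBIND11_MODULE(iterator_sketch, m) {
-    py::class_<StringList>(m, "StringList")
-        .def(py::init<>())
-        .def("__iter__", [](const StringList &s) {
-            return py::make_iterator(s.items.begin(), s.items.end());
-        }, py::keep_alive<0, 1>()); // keep StringList alive while the iterator exists
-}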
-
-template <typename InputType, typename OutputType> void implicitly_convertible() {
- struct set_flag {
- bool &flag;
- set_flag(bool &flag) : flag(flag) { flag = true; }
- ~set_flag() { flag = false; }
- };
- auto implicit_caster = [](PyObject *obj, PyTypeObject *type) -> PyObject * {
- static bool currently_used = false;
- if (currently_used) // implicit conversions are non-reentrant
- return nullptr;
- set_flag flag_helper(currently_used);
- if (!detail::make_caster<InputType>().load(obj, false))
- return nullptr;
- tuple args(1);
- args[0] = obj;
- PyObject *result = PyObject_Call((PyObject *) type, args.ptr(), nullptr);
- if (result == nullptr)
- PyErr_Clear();
- return result;
- };
-
- if (auto tinfo = detail::get_type_info(typeid(OutputType)))
- tinfo->implicit_conversions.push_back(implicit_caster);
- else
- pybind11_fail("implicitly_convertible: Unable to find type " + type_id());
-}
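-// Illustrative usage sketch: implicitly_convertible<A, B>() above only works if the bound B
-// exposes a constructor (py::init) taking an A. The A/B types and module name are hypothetical.
-#include <pybind11/pybind11.h>
-namespace py = pybind11;
-
-struct A {
-    explicit A(int value) : value(value) {}
-    int value;
-};
-struct B {
-    explicit B(const A &a) : value(a.value) {}
-    int value;
-};
-
-PYBIND11_MODULE(convert_sketch, m) {
-    py::class_<A>(m, "A").def(py::init<int>()).def_readwrite("value", &A::value);
-    py::class_<B>(m, "B").def(py::init<const A &>()).def_readwrite("value", &B::value);
-
-    // After this call, any bound function expecting a B will also accept an A:
-    py::implicitly_convertible<A, B>();
-}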
-
-template <typename ExceptionTranslator>
-void register_exception_translator(ExceptionTranslator&& translator) {
- detail::get_internals().registered_exception_translators.push_front(
- std::forward<ExceptionTranslator>(translator));
-}
-
-/**
- * Wrapper to generate a new Python exception type.
- *
- * This should only be used with PyErr_SetString for now.
- * It is not (yet) possible to use as a py::base.
- * Template type argument is reserved for future use.
- */
-template <typename type>
-class exception : public object {
-public:
- exception() = default;
- exception(handle scope, const char *name, PyObject *base = PyExc_Exception) {
- std::string full_name = scope.attr("__name__").cast<std::string>() +
- std::string(".") + name;
- m_ptr = PyErr_NewException(const_cast<char *>(full_name.c_str()), base, NULL);
- if (hasattr(scope, name))
- pybind11_fail("Error during initialization: multiple incompatible "
- "definitions with name \"" + std::string(name) + "\"");
- scope.attr(name) = *this;
- }
-
- // Sets the current python exception to this exception object with the given message
- void operator()(const char *message) {
- PyErr_SetString(m_ptr, message);
- }
-};
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-// Returns a reference to a function-local static exception object used in the simple
-// register_exception approach below. (It would be simpler to have the static local variable
-// directly in register_exception, but that makes clang <3.5 segfault - issue #1349).
-template <typename CppException>
-exception<CppException> &get_exception_object() { static exception<CppException> ex; return ex; }
-PYBIND11_NAMESPACE_END(detail)
-
-/**
- * Registers a Python exception in `m` of the given `name` and installs an exception translator to
- * translate the C++ exception to the created Python exception using the exceptions what() method.
- * This is intended for simple exception translations; for more complex translation, register the
- * exception object and translator directly.
- */
-template <typename CppException>
-exception<CppException> &register_exception(handle scope,
- const char *name,
- PyObject *base = PyExc_Exception) {
- auto &ex = detail::get_exception_object<CppException>();
- if (!ex) ex = exception<CppException>(scope, name, base);
-
- register_exception_translator([](std::exception_ptr p) {
- if (!p) return;
- try {
- std::rethrow_exception(p);
- } catch (const CppException &e) {
- detail::get_exception_object<CppException>()(e.what());
- }
- });
- return ex;
-}
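-// Illustrative usage sketch: register_exception() above creates the Python exception type and
-// installs a translator that forwards what(). The MyError type and module name are hypothetical.
-#include <pybind11/pybind11.h>
-#include <stdexcept>
-namespace py = pybind11;
-
-struct MyError : std::runtime_error {
-    using std::runtime_error::runtime_error;
-};
-
-PYBIND11_MODULE(exception_sketch, m) {
-    // Exposes exception_sketch.MyError (deriving from Python's RuntimeError) and translates
-    // any C++ MyError thrown by bound code into it:
-    py::register_exception<MyError>(m, "MyError", PyExc_RuntimeError);
-
-    m.def("fail", []() { throw MyError("something went wrong"); });
-}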
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-PYBIND11_NOINLINE inline void print(tuple args, dict kwargs) {
- auto strings = tuple(args.size());
- for (size_t i = 0; i < args.size(); ++i) {
- strings[i] = str(args[i]);
- }
- auto sep = kwargs.contains("sep") ? kwargs["sep"] : cast(" ");
- auto line = sep.attr("join")(strings);
-
- object file;
- if (kwargs.contains("file")) {
- file = kwargs["file"].cast