diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md
deleted file mode 100644
index 88ab444405c3a32caec3925baec77e45ec78d0af..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md
+++ /dev/null
@@ -1,173 +0,0 @@
-
-
Danea easyfatt 2013 crack: What is it and how to use it?
-
If you are looking for a software that can help you manage your invoices, inventory, orders, quotes, and accounting, you might have heard of Danea easyfatt. This is a popular program that is designed for small and medium businesses in Italy. However, if you want to use this software without paying for a license, you might also be interested in Danea easyfatt 2013 crack. This is a modified version of the program that allows you to bypass the activation process and use it for free. But what exactly is a crack and how can you use it safely? In this article, we will explain everything you need to know about Danea easyfatt 2013 crack, including how to download, install, and use it.
Danea easyfatt is a software developed by Danea Soft (Italia), a company that specializes in creating solutions for small and medium enterprises. Danea easyfatt is one of their flagship products, which offers a comprehensive and user-friendly interface for managing various aspects of your business. With Danea easyfatt, you can:
-
-
Create and print invoices, receipts, delivery notes, quotes, orders, and more.
-
Manage your inventory, stock movements, suppliers, and purchases.
-
Keep track of your customers, contacts, payments, and reminders.
-
Generate reports, statistics, charts, and graphs.
-
Integrate with other software such as Excel, Outlook, Word, e-commerce platforms, etc.
-
Synchronize your data with cloud services such as Dropbox, Google Drive, OneDrive, etc.
-
-
Danea easyfatt is compatible with Windows operating systems and supports multiple languages. It also comes in different editions depending on your needs: Basic, Professional, Enterprise, etc. However, each edition has a different price tag and requires a license key to activate.
-
What is a crack?
-
A crack is a term used to describe a file or a program that modifies or alters the original software in order to remove or bypass its protection mechanisms. For example, some software require an activation code or a serial number to verify that you have purchased a legitimate copy. A crack can either generate a fake code or replace the original file that checks for the code with a modified one that allows you to use the software without any restrictions.
-
A crack can also be used to unlock or enable features that are otherwise unavailable or limited in the original software. For example, some software have trial versions that expire after a certain period of time or have reduced functionality. A crack can either extend the trial period indefinitely or enable all the features as if you have bought the full version.
-
A crack can be either an executable file (.exe) that you run before or after installing the original software or a patch file (.dll) that you copy and paste into the installation folder of the original software. Sometimes, a crack can also come with instructions or a keygen (a program that generates keys) that you need to follow carefully.
-
Why would you need a crack for Danea easyfatt 2013?
-
There are many reasons why someone would want to use a crack for Danea easyfatt 2013. Some of them are:
-
-
-
You want to try out the software before buying it.
-
You cannot afford to pay for the license fee.
-
You want to use the software for personal or educational purposes only.
-
You want to access features that are not available in your edition.
-
You want to use the software on multiple devices or share it with others.
-
-
However, using a crack also comes with some risks and disadvantages. Some of them are:
-
-
You may violate the terms and conditions of the software developer and face legal consequences.
-
You may expose your device or data to malware or viruses that are hidden in the crack file.
-
You may encounter errors or problems with the software functionality or compatibility.
-
You may not receive updates or support from the software developer.
-
You may damage your reputation or credibility as a professional or ethical user.
-
-
Therefore, before using a crack for Danea easyfatt 2013, you should weigh the pros and cons carefully and decide whether it is worth it or not.
-
How to download and install Danea easyfatt 2013 crack
-
Where to find the crack file
-
If you have decided to use a crack for Danea easyfatt 2013, you need to find a reliable source where you can download it. There are many websites that offer cracks for various software but not all of them are trustworthy or safe. Some of them may contain fake links or malicious files that can harm your device or data. Therefore, you should be careful when choosing where to download from.
-
One way to find a reputable website is to look for reviews or feedback from other users who have downloaded from there before. You can also check if the website has any security certificates or badges that indicate its legitimacy. Another way is to use an antivirus program or an online scanner tool that can scan the website or the file for any potential threats before downloading.
-
For example, one website that claims to offer Danea easyfatt 2013 crack is . According to this website,
-
"Salve a tutti, come da richiesta abbiamo messo a disposizione Danea Easyfatt Enterprise per i sistemi Windows. Consiglio di utilizzare il software jdownloader.org per poter scaricare le varie parti comodamente e WinRaR per estrarre l’archivio."
-
This means "Hello everyone, as requested we have made available Danea Easyfatt Enterprise for Windows systems. I recommend using jdownloader.org software to download various parts comfortably and WinRaR to extract the archive."
-
The website also provides three mirror links where you can download the archive file named Danea_EasyFatt_Enterprise_2020_v46c_Build_6011.rar. The password to open the archive is apritisesamo.
-
How to disable antivirus and extract the file
-
Before installing the program, you need to disable your antivirus and extract the file from the archive. This is because your antivirus may detect the crack as a threat and block or delete it. To disable your antivirus, you can follow these steps:
-
-
Open your antivirus program and go to its settings or options menu.
-
Look for an option that allows you to turn off or pause the protection temporarily. It may be called something like "Disable", "Deactivate", "Suspend", etc.
-
Select the option and choose how long you want to disable it. It may be in minutes, hours, or until restart. You can also choose which components of protection you want to disable, such as real-time scanning, firewall, etc.
-
Confirm your choice and close your antivirus program. You should see an icon on your taskbar indicating that your antivirus is off.
- To extract the file from the archive, you need to use a software that can handle RAR files. One of the most popular and free options is 7-Zip, which you can download from . After installing 7-Zip, you can follow these steps:
-
-
Right-click on the archive file and select "7-Zip" from the menu.
-
Select one of the "Extract" options, depending on where you want to extract the files. You can choose to extract them to a new folder with the same name as the archive, to the current folder, or to a custom location.
-
Enter the password apritisesamo when prompted and click "OK".
-
Wait for the extraction process to finish. You should see a new folder or files in the destination you chose.
-
-
How to install the program and replace the exe file
-
After extracting the file from the archive, you need to install the program and replace the original exe file with the cracked one. To do that, you can follow these steps:
-
-
Open the folder where you extracted the files and double-click on the Setup.exe file.
-
Follow the instructions on the screen to install Danea easyfatt 2013 on your device. You can choose your preferred language, destination folder, and shortcuts.
-
When the installation is complete, close the program completely. You can also exit it from the system tray if it is running in the background.
-
Open the folder named "Crack" and copy the DaneaEasyFatt.exe file.
-
Paste it into the installation folder of Danea easyfatt 2013, which is usually located at C:\Program Files (x86)\Danea Easyfatt 2013.
-
If prompted to replace or overwrite the existing file, click "Yes" or "Replace".
-
-
How to use Danea easyfatt 2013 crack
-
How to activate the program with the crack
-
Now that you have installed the program and replaced the exe file, you can activate the program with the crack. To do that, you can follow these steps:
-
-
Launch Danea easyfatt 2013 from your desktop or start menu shortcut.
-
You should see a window asking you to enter your license key or activate online. Click on "Activate online".
-
You should see another window asking you to enter your email address and password. Enter any email address and password you want and click "OK".
-
You should see a message saying that your activation was successful and that you have a valid license for Danea easyfatt Enterprise 2020.
-
Click "OK" and enjoy using Danea easyfatt 2013 crack.
-
-
How to access the features and functions of Danea easyfatt
-
Danea easyfatt 2013 crack allows you to access all the features and functions of Danea easyfatt Enterprise 2020, which is the most advanced edition of the software. You can explore the various menus, tabs, and buttons on the main interface to find what you need. Some of the main features and functions are:
-
-
Create and manage documents such as invoices, quotes, orders, receipts, etc. You can customize their layout, format, content, and print options. You can also export them to PDF, Excel, Word, or email them directly.
-
Manage your inventory, stock movements, suppliers, and purchases. You can track your products, categories, prices, quantities, barcodes, etc. You can also import or export data from Excel or other sources.
-
Manage your customers, contacts, payments, and reminders. You can store your customer information, history, preferences, etc. You can also send emails or SMS messages to them or create mailing lists.
-
Generate reports, statistics, charts, and graphs. You can analyze your data, performance, trends, etc. You can also customize your reports, filters, criteria, etc.
-
Integrate with other software such as Excel, Outlook, Word, e-commerce platforms, etc. You can import or export data, sync your contacts, calendar, tasks, etc.
-
Synchronize your data with cloud services such as Dropbox, Google Drive, OneDrive, etc. You can backup or restore your data, access it from anywhere, or share it with others.
-
-
How to avoid errors or problems with the crack
-
Danea easyfatt 2013 crack may not work perfectly for everyone. You may encounter some errors or problems with the software functionality or compatibility. To avoid or fix them, you can try some of these tips:
-
-
Make sure you have disabled your antivirus before installing or running the crack. Your antivirus may interfere with the crack operation or delete it.
-
Make sure you have replaced the original exe file with the cracked one in the installation folder. If you have not done so, the program may not activate properly or ask for a license key.
-
Make sure you have entered any email address and password when activating online. If you have left them blank or entered invalid ones, the program may not activate properly or show an error message.
-
Make sure you have installed Danea easyfatt 2013 on a compatible device and operating system. The software requires Windows XP SP3 or later versions (32-bit or 64-bit) and at least 1 GB of RAM and 500 MB of free disk space.
-
If you encounter any other errors or problems with Danea easyfatt 2013 crack, you can try to uninstall and reinstall it following the same steps above. You can also look for solutions online or contact Danea Soft (Italia) for support (but be careful not to reveal that you are using a crack).
-
-
Conclusion
-
Summary of the main points
-
In this article, we have explained what Danea easyfatt 2013 crack is and how to use it. We have covered:
-
-
What is Danea easyfatt and what are its features and functions?
-
What is a crack and why would you need one for Danea easyfatt 2013?
-
How to download and install Danea easyfatt 2013 crack?
-
How to activate and use Danea easyfatt 2013 crack?
-
How to avoid errors or problems with Danea easyfatt 2013 crack?
-
-
Benefits and risks of using a crack
-
We have also discussed some of the benefits and risks of using a crack for Danea easyfatt 2013. Some of them are:
-
-
You can use Danea easyfatt without paying for a license fee.
-
You can access all the features and functions of Danea easyfatt Enterprise 2020.
-
You can use Danea easyfatt on multiple devices or share it with others.
-
You may violate the terms and conditions of Danea Soft (Italia) and face legal consequences.
-
You may expose your device or data to malware or viruses that are hidden in the crack file.
-
You may encounter errors or problems with the software functionality or compatibility.
-
You may not receive updates or support from Danea Soft (Italia).
-
You may damage your reputation or credibility as a professional or ethical user.
-
-
Call to action and disclaimer
-
We hope this article has been helpful for you in understanding and using Danea easyfatt 2013 crack. However, we do not endorse or recommend using cracks for any software as they are illegal and unethical. We are not responsible for any damages or losses that may result from using cracks. We advise you to use cracks at your own risk and discretion. If you like Danea easyfatt and find it useful for your business needs, we encourage you to buy a legitimate license from Danea Soft (Italia) and support their work. Thank you for reading this article!
- **FAQs** Q: What is Danea easyfatt? A: Danea easyfatt is a software that helps you manage your invoices, inventory, orders, quotes, and accounting. Q: What is a crack? A: A crack is a file or a program that modifies or alters the original software in order to remove or bypass its protection mechanisms. Q: How do I download Danea easyfatt 2013 crack? A: You need 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md
deleted file mode 100644
index 10e80be88f86cbc9b8bf737c52b6f941509c9c40..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
microsoft office is a series of office applications offered by microsoft for home and business use. office has advanced features like edit pdfs, advanced multimedia functions, good touch navigation, helpful new assistants and also some disadvantages since the user has almost no choice but to take cloud use, and tablet work. both 32-bit and the 64-bit client application are supported by office 2013. you can even use the trial version for office 2013 for 30 days to get a chance to test it without having to buy it, youll get different office 2013 product key to keeping it operating for one month. you will be able to access word 2013, powerpoint 2013, excel 2013, outlook 2013 with this package.
-
yes. aws support has been successfully supporting our customers who run microsoft windows-based ec2 instances in the aws cloud since 2008 when we first launched windows server on ec2. our support engineers have deep experience with microsoft technologies on aws including amazon ec2, amazon ecs, amazon rds, amazon workspaces and others. now aws has further enhanced our support capabilities with a new additional direct engagement between aws support and microsoft support, to help ensure high quality support and issue resolution for our customers. to find more information on end of support (eos) for microsoft products go here.
-
download microsoft office professional plus 2013 rtm activation
per microsofts visual studio licensing guide, visual studio subscriptions purchased through certain channels provide perpetual use rights even after the subscription has expired. the use of perpetual licenses acquired before 10/1/2019 for products released prior to 10/1/2019 is permitted on aws dedicated infrastructure regardless of the renewal or expiration of the subscription under which the perpetual licenses were acquired.aws also offers fully-compliant, amazon-provided licenses formicrosoft visual studio enterprise 2022 and microsoft visual studio professional 2022 amazon machine images (amis) on amazon elastic compute cloud (amazon ec2). these amis are available on the amazon ec2 console and on aws marketplace, to launch instances on-demand without any long-term licensing commitments.to learn more, visit aws license manager user guide.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md b/spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md
deleted file mode 100644
index ce3dc01ee133084a63ba75716cda060d8e5304b2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
How to Download Clash Royale from Yapup.site
-
If you are looking for a fun and addictive game to play on your Android device, you might want to try Clash Royale. It is a real-time multiplayer battle game that features your favorite characters from Clash of Clans and more. In this article, we will show you how to download Clash Royale from Yapup.site, a website that offers free APK downloads for Android games and apps. We will also give you some tips and tricks to help you win at Clash Royale.
-
What is Clash Royale?
-
A real-time multiplayer battle game
-
Clash Royale is a game developed and published by Supercell, the same company behind the popular Clash of Clans. It was released in 2016 and has since become one of the most played mobile games in the world. In Clash Royale, you have to collect and upgrade cards that feature troops, spells, and defenses from the Clash universe. You then use these cards to create your own battle deck and fight against other players online in fast-paced matches. The goal is to destroy your opponent's three towers, including the king tower, while protecting your own. You can also join or form clans with other players and participate in clan wars, tournaments, and seasonal events.
Some of the features that make Clash Royale an exciting and challenging game are:
-
-
Over 100 cards to collect and upgrade, each with unique abilities and interactions.
-
Nine arenas to unlock and progress through, each with different themes and difficulties.
-
Various game modes to choose from, such as 1v1, 2v2, special challenges, clan wars, global tournaments, and more.
-
New seasonal items to unlock with the season pass, such as tower skins, emotes, and magic items.
-
A vibrant community of millions of players around the world.
-
-
What is Yapup.site?
-
A website that offers free APK downloads
-
Yapup.site is a website that provides free APK downloads for Android games and apps. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. By downloading APK files from Yapup.site, you can access games and apps that are not available on the Google Play Store or that are restricted in your region. You can also get the latest updates and versions of your favorite games and apps before they are officially released.
-
Benefits of using Yapup.site
-
Some of the benefits of using Yapup.site to download APK files are:
-
-
You can download games and apps for free without any registration or subscription.
-
You can download games and apps that are not available on the Google Play Store or that are restricted in your region.
-
You can download games and apps that are modded or hacked with unlimited resources or features.
-
You can download games and apps that are updated regularly with new content and bug fixes.
-
You can download games and apps that are safe and virus-free.
-
-
How to Download Clash Royale from Yapup.site
-
Step 1: Visit the website
-
The first step to download Clash Royale from Yapup.site is to visit the website using your web browser. You can use any browser you prefer, such as Chrome, Firefox, Safari, or Opera. The website has a simple and user-friendly interface that allows you to easily navigate and find the games and apps you want.
-
Step 2: Search for Clash Royale
-
The next step is to search for Clash Royale on the website. You can use the search bar at the top of the homepage to type in the name of the game. Alternatively, you can browse through the categories and genres of games and apps on the website. You can also check out the featured, popular, and new games and apps on the homepage. Once you find Clash Royale, click on it to open its page.
-
Step 3: Click on the download button
-
The third step is to click on the download button on the Clash Royale page. You will see a green button that says "Download APK" at the bottom of the page. You will also see some information about the game, such as its size, version, developer, rating, and description. You can read this information to learn more about the game and its features. You can also see some screenshots and videos of the game to get a glimpse of its gameplay. After you click on the download button, you will be redirected to another page where you have to wait for a few seconds before the download starts.
The final step is to install the APK file on your Android device. After the download is complete, you will see a notification on your device that says "Download complete". You can tap on this notification to open the APK file. Alternatively, you can go to your device's file manager and locate the APK file in your downloads folder. Before you install the APK file, you have to enable the installation of unknown sources on your device. To do this, go to your device's settings and then security. Find the option that says "Unknown sources" and toggle it on. This will allow you to install apps from sources other than the Google Play Store. After you enable this option, you can tap on the APK file and follow the instructions on your screen to install Clash Royale on your device.
-
Tips and Tricks for Playing Clash Royale
-
Join a clan and share cards
-
One of the best ways to improve your skills and progress in Clash Royale is to join a clan and share cards with other players. A clan is a group of players who can chat, donate, request, and trade cards with each other. By joining a clan, you can get more cards to upgrade your deck and also learn from other players' strategies and tips. You can also participate in clan wars and earn rewards for your clan.
-
Build a balanced deck and use your elixir wisely
-
Another important tip for playing Clash Royale is to build a balanced deck and use your elixir wisely. A balanced deck is one that has a good mix of cards that can counter different types of threats and also deal damage to your opponent's towers. You should have cards that can attack from a distance, such as archers or fireball; cards that can tank damage, such as giant or knight; cards that can swarm or distract, such as goblins or skeletons; and cards that can support or enhance, such as witch or rage. You should also have cards that cost different amounts of elixir, so that you can always have something to play depending on your elixir level. Elixir is the resource that you use to play cards in Clash Royale. It regenerates over time during a match, but it is limited by a maximum of 10 units. Therefore, you have to be careful not to waste elixir by playing cards that are not needed or effective. You should also try to gain an elixir advantage over your opponent by playing cards that cost less than their counters or by making positive trades. For example, if you use a fireball that costs 4 elixir to destroy a minion horde that costs 5 elixir, you gain an elixir advantage of 1 unit.
-
Defend your towers and attack the enemy's weak spots
-
The last tip for playing Clash Royale is to defend your towers and attack the enemy's weak spots. Your towers are your main defense against your opponent's attacks. They have high health and damage output, but they are vulnerable to certain types of cards or combinations. Therefore, you have to protect them by placing your troops strategically and using spells or buildings when necessary. On the other hand, you also have to find opportunities to attack your opponent's towers and deal damage to them. You should look for their weak spots, such as their low-health towers or their lack of counters for your cards. You should also try to exploit their mistakes, such as their overcommitment or their poor placement of cards. You should also try to create combos or synergies with your cards, such as using a hog rider with a freeze spell or using a balloon with a rage spell.
-
Conclusion
-
Clash Royale is a fun and addictive game that you can download and play on your Android device. You can download it from Yapup.site, a website that offers free APK downloads for Android games and apps. You can also follow the tips and tricks we shared in this article to improve your skills and win more matches. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments section below. Happy clashing!
-
FAQs
-
Here are some frequently asked questions about Clash Royale and Yapup.site:
-
-
-
Question
-
Answer
-
-
-
Is Clash Royale free to play?
-
Yes, Clash Royale is free to download and play. However, it also offers in-app purchases that can enhance your gaming experience.
-
-
-
Is Yapup.site safe to use?
-
Yes, Yapup.site is safe to use. It does not contain any malware or viruses that can harm your device. However, you should always be careful when downloading APK files from unknown sources and scan them with an antivirus before installing them.
-
-
-
How can I update Clash Royale from Yapup.site?
-
You can update Clash Royale from Yapup.site by visiting the website again and downloading the latest version of the game. You can also enable the auto-update feature on your device's settings to get the updates automatically.
-
-
-
How can I contact the support team of Clash Royale?
-
You can contact the support team of Clash Royale by tapping on the settings icon on the top right corner of the game screen and then tapping on the help and support button. You can also visit the official website or social media pages of Clash Royale for more information and assistance.
-
-
-
How can I contact the support team of Yapup.site?
-
You can contact the support team of Yapup.site by visiting the website and clicking on the contact us button at the bottom of the page. You can also email them at yapup.site@gmail.com or follow them on Facebook or Twitter for more updates and news.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md b/spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md
deleted file mode 100644
index c9c9c3893c877c75ac5847e4066c318398588c9f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
Dislyte Global Download: How to Play the Stylish Urban Mythological RPG on PC and Mobile
-
Introduction
-
If you are a fan of pop-fantasy RPGs with striking audio-visual experience, you might want to check out Dislyte, a new game that features heroes and monsters from mythologies. Dislyte is set in a futuristic urban playground where mysterious powers and mythology collide. You can build your own squad of Espers, who are ordinary people with divine powers from gods of worldwide mythologies, and fight against the greatest threat to humanity.
In this article, we will show you how to download and play Dislyte on PC and mobile devices, so that you can enjoy the game's high-quality soundtracks and graphics, as well as grind easier without draining your battery. We will also share some tips and tricks to improve your gaming experience.
-
What is Dislyte?
-
Dislyte is a pop-fantasy RPG developed by FARLIGHT and published by Lilith Games. It was released globally in May 2023, after a successful soft launch in selected regions. The game has received positive reviews from players and critics, who praised its unique art style, engaging gameplay, and diverse characters.
-
Dislyte is inspired by various mythologies, such as Chinese, Egyptian, Greek, and Northern European. You can collect and customize over 100 Espers, each with their own skills, personalities, and appearances. You can also form teams with other players and participate in various modes, such as story mode, arena mode, raid mode, and more.
-
Why play Dislyte on PC and mobile?
-
Dislyte is a game that can be enjoyed on both PC and mobile devices. Playing Dislyte on PC has some advantages, such as:
-
-
You can enjoy the game's stunning graphics and soundtracks on a bigger screen.
-
You can use keyboard and mouse controls for better accuracy and comfort.
-
You can grind levels and farm relics more easily with auto mode.
-
You don't have to worry about battery draining or overheating issues.
-
-
Playing Dislyte on mobile devices also has some benefits, such as:
-
-
You can play the game anytime and anywhere with an internet connection.
-
You can use touch screen controls for more intuitive gameplay.
-
You can receive notifications and updates from the game.
-
You can connect your account with your social media platforms.
-
-
No matter what device you choose to play Dislyte on, you will have a fun and immersive gaming experience.
-
How to download and play Dislyte on PC and Mac
-
If you want to play Dislyte on PC or Mac, you will need an emulator that can run Android apps on your computer. We recommend using LDPlayer, which is one of the best emulators for playing mobile games on PC. Here are the steps to download and play Dislyte on PC and Mac using LDPlayer:
-
Step 1: Download LDPlayer emulator
-
Go to this link and download LDPlayer emulator for your PC or Mac. Make sure you download the 64-bit version if asked. After downloading, install LDPlayer on your computer by following the instructions.
-
-
Step 2: Install Dislyte from Google
Q: What are the system requirements for playing Dislyte on PC and mobile?
-
A: For playing Dislyte on PC, you need a Windows 7 or higher operating system, an Intel or AMD CPU, 4 GB of RAM, and 4 GB of disk space. For playing Dislyte on mobile, you need an Android 5.0 or higher device with at least 2 GB of RAM and 3 GB of storage space.
-
Q: How can I get more Espers in Dislyte?
-
A: You can get more Espers in Dislyte by summoning them with crystals or tickets, which can be obtained from completing quests, events, achievements, or purchasing them with real money. You can also upgrade your Espers by enhancing their skills, relics, and star levels.
-
Q: How can I join a guild in Dislyte?
-
A: You can join a guild in Dislyte by tapping on the guild icon on the main screen and searching for a guild that suits your preferences. You can also create your own guild if you have enough crystals. Joining a guild will allow you to chat with other members, participate in guild wars, and receive guild rewards.
-
Q: How can I contact the customer service of Dislyte?
-
A: You can contact the customer service of Dislyte by tapping on the gear icon on the top right corner and then tapping on Customer Service. You can also send an email to dislyte@lilithgames.com or visit their official website or social media pages for more information.
-
Q: What are the best Espers to use in Dislyte?
-
A: There is no definitive answer to this question, as different Espers have different strengths and weaknesses, and the best Espers may vary depending on your play style, team composition, and game mode. However, some of the popular Espers that are considered to be powerful and versatile are Zeus, Athena, Odin, Thor, Ra, Anubis, and Sun Wukong.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py
deleted file mode 100644
index a775a741f2a5383b4ab8269dec842f59da5d69d4..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# flake8: noqa
-
-from .rl import ValueGuidedRLPipeline
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py
deleted file mode 100644
index 19f629ea8ffb6f3af770b737c947ff73ea78514c..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# flake8: noqa
-from .pipeline_ddpm import DDPMPipeline
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py
deleted file mode 100644
index 77a4057aa82f226f68474f4c2a19eba84510d663..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-import numpy as np
-import librosa.util as librosa_util
-from scipy.signal import get_window
-
-
-def window_sumsquare(
- window,
- n_frames,
- hop_length,
- win_length,
- n_fft,
- dtype=np.float32,
- norm=None,
-):
- """
- # from librosa 0.6
- Compute the sum-square envelope of a window function at a given hop length.
-
- This is used to estimate modulation effects induced by windowing
- observations in short-time fourier transforms.
-
- Parameters
- ----------
- window : string, tuple, number, callable, or list-like
- Window specification, as in `get_window`
-
- n_frames : int > 0
- The number of analysis frames
-
- hop_length : int > 0
- The number of samples to advance between frames
-
- win_length : [optional]
- The length of the window function. By default, this matches `n_fft`.
-
- n_fft : int > 0
- The length of each analysis frame.
-
- dtype : np.dtype
- The data type of the output
-
- Returns
- -------
- wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
- The sum-squared envelope of the window function
- """
- if win_length is None:
- win_length = n_fft
-
- n = n_fft + hop_length * (n_frames - 1)
- x = np.zeros(n, dtype=dtype)
-
- # Compute the squared window at the desired length
- win_sq = get_window(window, win_length, fftbins=True)
- win_sq = librosa_util.normalize(win_sq, norm=norm) ** 2
- win_sq = librosa_util.pad_center(win_sq, n_fft)
-
- # Fill the envelope
- for i in range(n_frames):
- sample = i * hop_length
- x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))]
- return x
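The overlap-add loop above can be sketched without numpy or scipy. This toy version (a hypothetical `window_sumsquare_demo` helper using a plain list and a caller-supplied window, not part of this module) shows how the squared window accumulates across frames:

```python
def window_sumsquare_demo(win, n_frames, hop_length, n_fft):
    # Overlap-add the squared window at each frame offset, as in
    # window_sumsquare above (pure-Python stand-in, no normalization).
    n = n_fft + hop_length * (n_frames - 1)
    env = [0.0] * n
    for i in range(n_frames):
        start = i * hop_length
        for j in range(min(n_fft, n - start)):
            env[start + j] += win[j] ** 2
    return env

# With a rectangular window of ones, every fully overlapped sample is
# covered by n_fft / hop_length = 2 frames.
env = window_sumsquare_demo([1.0] * 4, n_frames=4, hop_length=2, n_fft=4)
```

The resulting envelope ramps up at the edges and plateaus where frames fully overlap, which is exactly what the inverse-STFT normalization needs.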
-
-
-def griffin_lim(magnitudes, stft_fn, n_iters=30):
- """
- PARAMS
- ------
- magnitudes: spectrogram magnitudes
- stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods
- """
-
- angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size())))
- angles = angles.astype(np.float32)
-    angles = torch.from_numpy(angles)
- signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
-
- for i in range(n_iters):
- _, angles = stft_fn.transform(signal)
- signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
- return signal
-
-
-def dynamic_range_compression(x, normalize_fun=torch.log, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return normalize_fun(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
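Since decompression is the exact inverse of compression for inputs above `clip_val`, the pair round-trips. A minimal scalar sketch (using `math` in place of torch; the helper names are invented):

```python
import math

def compress(x, C=1, clip_val=1e-5):
    # log-compress after clamping, mirroring dynamic_range_compression above
    return math.log(max(x, clip_val) * C)

def decompress(y, C=1):
    # exact inverse for x > clip_val, mirroring dynamic_range_decompression above
    return math.exp(y) / C

roundtrip = decompress(compress(0.25))  # recovers 0.25 up to float error
```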
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py
deleted file mode 100644
index 071dd148c772f398e87ecbfc836dcfa4a3ae01af..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py
+++ /dev/null
@@ -1,106 +0,0 @@
-""" timm model adapter
-
-Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model.
-"""
-from collections import OrderedDict
-
-import torch.nn as nn
-
-try:
- import timm
- from timm.models.layers import Mlp, to_2tuple
- from timm.models.layers.attention_pool2d import RotAttentionPool2d
- from timm.models.layers.attention_pool2d import AttentionPool2d as AbsAttentionPool2d
-except ImportError as e:
- timm = None
-
-from .utils import freeze_batch_norm_2d
-
-
-class TimmModel(nn.Module):
- """ timm model adapter
- # FIXME this adapter is a work in progress, may change in ways that break weight compat
- """
-
- def __init__(
- self,
- model_name,
- embed_dim,
- image_size=224,
- pool='avg',
- proj='linear',
- drop=0.,
- pretrained=False):
- super().__init__()
- if timm is None:
- raise RuntimeError("Please `pip install timm` to use timm models.")
-
- self.image_size = to_2tuple(image_size)
- self.trunk = timm.create_model(model_name, pretrained=pretrained)
- feat_size = self.trunk.default_cfg.get('pool_size', None)
- feature_ndim = 1 if not feat_size else 2
- if pool in ('abs_attn', 'rot_attn'):
- assert feature_ndim == 2
- # if attn pooling used, remove both classifier and default pool
- self.trunk.reset_classifier(0, global_pool='')
- else:
- # reset global pool if pool config set, otherwise leave as network default
- reset_kwargs = dict(global_pool=pool) if pool else {}
- self.trunk.reset_classifier(0, **reset_kwargs)
- prev_chs = self.trunk.num_features
-
- head_layers = OrderedDict()
- if pool == 'abs_attn':
- head_layers['pool'] = AbsAttentionPool2d(prev_chs, feat_size=feat_size, out_features=embed_dim)
- prev_chs = embed_dim
- elif pool == 'rot_attn':
- head_layers['pool'] = RotAttentionPool2d(prev_chs, out_features=embed_dim)
- prev_chs = embed_dim
- else:
- assert proj, 'projection layer needed if non-attention pooling is used.'
-
- # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used
- if proj == 'linear':
- head_layers['drop'] = nn.Dropout(drop)
- head_layers['proj'] = nn.Linear(prev_chs, embed_dim)
- elif proj == 'mlp':
- head_layers['mlp'] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop)
-
- self.head = nn.Sequential(head_layers)
-
- def lock(self, unlocked_groups=0, freeze_bn_stats=False):
- """ lock modules
- Args:
- unlocked_groups (int): leave last n layer groups unlocked (default: 0)
- """
- if not unlocked_groups:
- # lock full model
- for param in self.trunk.parameters():
- param.requires_grad = False
- if freeze_bn_stats:
- freeze_batch_norm_2d(self.trunk)
- else:
- # NOTE: partial freeze requires latest timm (master) branch and is subject to change
- try:
- # FIXME import here until API stable and in an official release
- from timm.models.helpers import group_parameters, group_modules
- except ImportError:
- raise RuntimeError(
- 'Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`')
- matcher = self.trunk.group_matcher()
- gparams = group_parameters(self.trunk, matcher)
- max_layer_id = max(gparams.keys())
- max_layer_id = max_layer_id - unlocked_groups
- for group_idx in range(max_layer_id + 1):
- group = gparams[group_idx]
- for param in group:
- self.trunk.get_parameter(param).requires_grad = False
- if freeze_bn_stats:
- gmodules = group_modules(self.trunk, matcher, reverse=True)
- gmodules = {k for k, v in gmodules.items() if v <= max_layer_id}
- freeze_batch_norm_2d(self.trunk, gmodules)
-
- def forward(self, x):
- x = self.trunk(x)
- x = self.head(x)
- return x
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py
deleted file mode 100644
index b61a1b2349a34f504ae59aabb3430cc4eb703fbe..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import torch
-from torch import nn
-from tasks.tts.ps_adv import PortaSpeechAdvTask, FastSpeechTask
-from text_to_speech.utils.commons.hparams import hparams
-
-
-class PortaSpeechAdvMLMTask(PortaSpeechAdvTask):
-
- def build_optimizer(self, model):
- optimizer_gen = torch.optim.AdamW(
- self.model.parameters(),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
-
- optimizer_disc = torch.optim.AdamW(
- self.disc_params,
- lr=hparams['disc_lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- **hparams["discriminator_optimizer_params"]) if len(self.disc_params) > 0 else None
-
- optimizer_encoder = torch.optim.AdamW(
- self.model.encoder.parameters(),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return [optimizer_gen, optimizer_disc, optimizer_encoder]
-
- def build_scheduler(self, optimizer):
- return [
- FastSpeechTask.build_scheduler(self, optimizer[0]), # Generator Scheduler
- torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1], # Discriminator Scheduler
- **hparams["discriminator_scheduler_params"]),
-        FastSpeechTask.build_scheduler(self, optimizer[2]), # Encoder Scheduler
- ]
-
- def on_before_optimization(self, opt_idx):
- if opt_idx in [0,2]:
- nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm'])
- if self.use_graph_encoder:
- nn.utils.clip_grad_norm_(self.gen_params_except_gae_and_dp, hparams['clip_grad_norm'])
- nn.utils.clip_grad_norm_(self.gae_params, hparams['clip_grad_norm'])
- elif self.use_bert:
- nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm'])
- nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm'])
- else:
- nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm'])
- else:
- nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"])
-
- def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx):
- if self.scheduler is not None:
- self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches'])
- self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches'])
- self.scheduler[2].step(self.global_step // hparams['accumulate_grad_batches'])
-
- def _training_step(self, sample, batch_idx, optimizer_idx):
- loss_output = {}
- loss_weights = {}
- disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0
- if optimizer_idx == 0:
- #######################
- # Generator #
- #######################
- loss_output, model_out = self.run_model(sample, infer=False)
- self.model_out_gt = self.model_out = \
- {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)}
- if disc_start:
- mel_p = model_out['mel_out']
- if hasattr(self.model, 'out2mel'):
- mel_p = self.model.out2mel(mel_p)
- o_ = self.mel_disc(mel_p)
- p_, pc_ = o_['y'], o_['y_c']
- if p_ is not None:
- loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size()))
- loss_weights['a'] = hparams['lambda_mel_adv']
- if pc_ is not None:
- loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size()))
- loss_weights['ac'] = hparams['lambda_mel_adv']
- elif optimizer_idx == 1:
- #######################
- # Discriminator #
- #######################
- if disc_start and self.global_step % hparams['disc_interval'] == 0:
- model_out = self.model_out_gt
- mel_g = sample['mels']
- mel_p = model_out['mel_out']
- o = self.mel_disc(mel_g)
- p, pc = o['y'], o['y_c']
- o_ = self.mel_disc(mel_p)
- p_, pc_ = o_['y'], o_['y_c']
- if p_ is not None:
- loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size()))
- loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size()))
- if pc_ is not None:
- loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size()))
- loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size()))
- else:
- loss_output, model_out = self.run_contrastive_learning(sample)
-
- total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['txt_tokens'].size()[0]
- return total_loss, loss_output
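The adversarial branches above follow the least-squares GAN recipe: the generator drives the discriminator's scores on fakes toward 1, while the discriminator drives real scores toward 1 and fake scores toward 0. A toy scalar sketch of those targets (the scores below are invented numbers, not model outputs):

```python
def mse(pred, target):
    # mean squared error over a list of scalar scores
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

fake_scores = [0.2, 0.4]  # discriminator outputs on generated mels
real_scores = [0.9, 0.8]  # discriminator outputs on ground-truth mels

gen_loss = mse(fake_scores, [1.0, 1.0])            # like loss_output['a']
disc_loss = (mse(real_scores, [1.0, 1.0])          # like "r"
             + mse(fake_scores, [0.0, 0.0]))       # plus "f"
```

A well-trained discriminator makes `disc_loss` small while the generator keeps pushing `gen_loss` down, the standoff that `disc_start_steps` and `disc_interval` schedule above.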
-
- def run_contrastive_learning(self, sample):
- losses = {}
- outputs = {}
-
- bert = self.model.encoder.bert
- pooler = self.model.encoder.pooler
- sim = self.model.encoder.sim
- # electra_gen = self.model.encoder.electra_gen
- # electra_disc = self.model.encoder.electra_disc
- # electra_head = self.model.encoder.electra_head
-
- cl_feats = sample['cl_feats']
- bs, _, t = cl_feats['cl_input_ids'].shape
- cl_input_ids = cl_feats['cl_input_ids'].reshape([bs*2, t])
- cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs*2, t])
- cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs*2, t])
- cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask,token_type_ids=cl_token_type_ids,)
- pooler_output = pooler(cl_attention_mask, cl_output)
- pooler_output = pooler_output.reshape([bs, 2, -1])
- z1, z2 = pooler_output[:,0], pooler_output[:,1]
-
- cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
- labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
- ce_fn = nn.CrossEntropyLoss()
- cl_loss = ce_fn(cos_sim, labels)
- losses['cl_v'] = cl_loss.detach()
- losses['cl'] = cl_loss * hparams['lambda_mlm']
-
- # mlm_input_ids = cl_feats['mlm_input_ids']
- # mlm_input_ids = mlm_input_ids.view((-1, mlm_input_ids.size(-1)))
- # with torch.no_grad():
- # g_pred = electra_gen(mlm_input_ids, cl_attention_mask)[0].argmax(-1)
- # g_pred[:, 0] = 101 # CLS token
- # replaced = (g_pred != cl_input_ids) * cl_attention_mask
- # e_inputs = g_pred * cl_attention_mask
- # mlm_outputs = electra_disc(
- # e_inputs,
- # attention_mask=cl_attention_mask,
- # token_type_ids=cl_token_type_ids,
- # position_ids=None,
- # head_mask=None,
- # inputs_embeds=None,
- # output_attentions=None,
- # output_hidden_states=False, # True if cls.model_args.pooler_type in ['avg_top2', 'avg_first_last'] else False,
- # return_dict=True,
- # cls_input=pooler_output.view((-1, pooler_output.size(-1))),
- # )
- # e_labels = replaced.view(-1, replaced.size(-1))
- # prediction_scores = electra_head(mlm_outputs.last_hidden_state)
- # # rep = (e_labels == 1) * cl_attention_mask
- # # fix = (e_labels == 0) * cl_attention_mask
- # # prediction = prediction_scores.argmax(-1)
- # # self.electra_rep_acc = float((prediction*rep).sum()/rep.sum())
- # # self.electra_fix_acc = float(1.0 - (prediction*fix).sum()/fix.sum())
- # # self.electra_acc = float(((prediction == e_labels) * cl_attention_mask).sum()/cl_attention_mask.sum())
- # masked_lm_loss = ce_fn(prediction_scores.view(-1, 2), e_labels.view(-1))
- # losses['mlm_v'] = masked_lm_loss.detach()
- # losses['mlm'] = masked_lm_loss * hparams['lambda_mlm']
-
- return losses, outputs
-
\ No newline at end of file
diff --git a/spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py b/spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py
deleted file mode 100644
index 5005b3e4ef61effe011430f472570c4832a34320..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py
+++ /dev/null
@@ -1,84 +0,0 @@
-
-# SOP========================================================================================================
-# "environment_prompt"
-# current_state , self(sop)
Get_environment_prompt = "f\"Here is the description of the current scenario: {self.current_state.environment_prompt};\\n\""
-
-
-# sop.transit
-#================================================================
Transit_system_prompt = "f\"{environment_prompt};\\n{judge_system_prompt}\\n\""
-
-# transit chat message
-# "environment_prompt" comes from "Get_environment_prompt"; "chat_history_message" comes from Memory
Transit_message = "f\"{environment_summary};\\n Here is the chat history:\\n {chat_history_message};\\nHere is the last query you especially need to pay attention to:\\n{query};\\n Here is the relevant conversation: \\n{relevant_history} \\n\\n\""
-
-
-Transit_last_prompt = "f\"{judge_last_prompt}\""
-#sop.transit================================================================
-
-# sop.call
-#================================================================
-# helps the controller determine the next role to speak (the {} is the agent role): call_prompt + allocate_component
-Allocate_component = "f\"If it's currently supposed to be speaking for {role}, then output {role}.\\n\""
-
-# environment_prompt comes from "Get_environment_prompt"; "chat_history_message" comes from Memory
-Call_system_prompt = "f\"{environment_prompt};\\n{call_system_prompt};\\n{allocate_prompt}.\\n\""
-
-#
Call_last_prompt = "f\"Here is the last query you especially need to pay attention to:\\n{query};\\n Here is the relevant conversation:\\n{relevant_history};\\nNow please choose the person to speak according to the following rules: {allocate_prompt};\\nNote: The person whose turn it is now cannot be the same as the person who spoke last time, so {last_name} cannot be output\\n.\""
-
Call_message = "f\"Here is the chat history:\\n{chat_history_message};\\nHere is the name of the person who spoke last: {last_name}.\\n \""
-#sop.call================================================================
-# SOP========================================================================================================
-
-
-
-
-
-
-# Memory========================================================================================================
-Single_message = "f\"role: {role} \\n speak content : {content}; \""
-
-Chat_total_message = "f\"{{{chat_history}}}\""
-# Memory========================================================================================================
-
-
-
-
-
-
-# Environment========================================================================================================
-Default_environment_summary_system_prompt = "\"\\nYour task is to summarize the historical dialogue records according to the current scene, and summarize the most important information\""
-
-Default_environment_summary_last_prompt = "\"Please make a summary based on the historical chat records, the output format is history summary: \{your summary content\} \""
-
-Environment_summary_memory = "f\"Here is the information you need to know:\\n\\n\
- Here is the summary of the previous dialogue history:\\n{summary}.\\n\
- Here is the latest conversation record:\\n {chat_history},\\n\
- Here is the relevant chat history you may need:{relevant_history}.\\n\""
-
-Environment_summary_system_prompt = "f\"{environment_prompt};\\n{current_memory};\\n{summary_system_prompt};\\n\""
-
-
-# observe
-Agent_observe_relevant_memory = "f\"\\n{relevant_memory}. \\n\""
-
-
-Agent_observe_memory = "f\"Here's what you need to know(Remember, this is just information, Try not to repeat what's inside):\\nHere is the relevant chat history you may need:{relevant_memory};\\n\
-Here is the previous summary of chat history :\\n{agent.short_term_memory}.\\n\
-Here is the relevant memory :\\n{agent.relevant_memory}.\\n\
-Here is the new chat history:\\n {conversations};\\n\
- \""
-# Environment========================================================================================================
-
-
-
-
-# Agent========================================================================================================
Agent_summary_system_prompt = "f\"{summary_prompt};\\n Here is the past summary:{self.short_term_memory};\\nHere is the new chat_history:\\n{conversations};\\nPlease summarize based on the above information;\\n\""
-
-Agent_last_prompt = "f\"{last_prompt};Please continue the talk based on your known information;Remember that you just represent {name}, do not speak for others,just speak as normal.\""
-
-Agent_system_prompt = "f\"{system_prompt},\""
-# Agent========================================================================================================
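Every template in this file is a stringified f-string (an `f"..."` literal wrapped inside a plain string), presumably so it can be rendered later with `eval` once names like `environment_prompt` exist in scope. A minimal sketch of that deferred-rendering pattern, with invented values:

```python
environment_prompt = "A courtroom."
judge_system_prompt = "You are the judge."

# Same shape as Transit_system_prompt above: an f-string stored as text,
# with \n escapes still unexpanded inside the stored source.
template = "f\"{environment_prompt};\\n{judge_system_prompt}\\n\""

# eval() compiles the stored f-string against the current scope,
# substituting variables and expanding the escapes.
rendered = eval(template)
```

Storing templates this way defers substitution until call time, at the usual cost of `eval`: the templates must be trusted, since they execute as code.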
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py b/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py
deleted file mode 100644
index 773ad893296750992789a77a59e0f5ad657d0e35..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Perform test request
-"""
-
-import pprint
-
-import requests
-
-DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"
-IMAGE = "zidane.jpg"
-
-# Read image
-with open(IMAGE, "rb") as f:
- image_data = f.read()
-
-response = requests.post(DETECTION_URL, files={"image": image_data}).json()
-
-pprint.pprint(response)
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts
deleted file mode 100644
index 0c2572b6395e340f4577395e5870cca3f5ea11c5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-import Rings from './Rings';
-import Base from '../base/Base';
-
-export default function Factory(
- config?: Base.IConfig
-): Rings;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js
deleted file mode 100644
index f2d9958b7d078f20beb6e9022c99ae49b21da8ec..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import PerspectiveCard from './PerspectiveCard.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('perspectiveCard', function (config) {
- var gameObject = new PerspectiveCard(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.PerspectiveCard', PerspectiveCard);
-
-export default PerspectiveCard;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js
deleted file mode 100644
index 2f6db8ed15730f46d687df010daf08dc3a6a867d..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import { Rotate } from '../../../plugins/gestures.js';
-export default Rotate;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts
deleted file mode 100644
index 78081442c308c5d5bc640052efba504bd3f3b721..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts
+++ /dev/null
@@ -1,5 +0,0 @@
-import TabPages from './TabPages';
-
-export default function (
- config?: TabPages.IConfig
-): TabPages;
\ No newline at end of file
diff --git a/spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py b/spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py
deleted file mode 100644
index 677247e899cedc240b7d420722fc808f956d98dc..0000000000000000000000000000000000000000
--- a/spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stablediffusionapi/rev-animated").launch()
\ No newline at end of file
diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py
deleted file mode 100644
index 4e0eba499e6f2fa94b1a962421b3c4bfef7a2f26..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py
+++ /dev/null
@@ -1,566 +0,0 @@
-import traceback
-from toolbox import update_ui, get_conf
-
-def input_clipping(inputs, history, max_token_limit):
- import numpy as np
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-
- mode = 'input-and-history'
-    # when the input takes up less than half of the token budget, only clip the history
- input_token_num = get_token_num(inputs)
- if input_token_num < max_token_limit//2:
- mode = 'only-history'
- max_token_limit = max_token_limit - input_token_num
-
- everything = [inputs] if mode == 'input-and-history' else ['']
- everything.extend(history)
- n_token = get_token_num('\n'.join(everything))
- everything_token = [get_token_num(e) for e in everything]
-    delta = max(everything_token) // 16 # granularity of truncation
-
- while n_token > max_token_limit:
- where = np.argmax(everything_token)
- encoded = enc.encode(everything[where], disallowed_special=())
- clipped_encoded = encoded[:len(encoded)-delta]
-        everything[where] = enc.decode(clipped_encoded)[:-1] # -1 drops a possibly-broken trailing char
- everything_token[where] = get_token_num(everything[where])
- n_token = get_token_num('\n'.join(everything))
-
-    if mode == 'input-and-history':
-        inputs = everything[0]
-    history = everything[1:]
- return inputs, history
-
-def request_gpt_model_in_new_thread_with_ui_alive(
- inputs, inputs_show_user, llm_kwargs,
- chatbot, history, sys_prompt, refresh_interval=0.2,
- handle_token_exceed=True,
- retry_times_at_unknown_error=2,
- ):
-    """
-    Request a GPT model while keeping the user interface responsive.
-
-    Args:
-        inputs (string): the input query
-        inputs_show_user (string): the input as shown in the report; use it to hide verbose raw inputs and keep the summary readable
-        llm_kwargs: LLM parameters (top_p, temperature, model name, ...)
-        chatbot: handle of the chat window, used to stream output to the UI
-        history (list): list of previous chat messages
-        sys_prompt (string): system prompt, fed to GPT as a premise (e.g. "you are a translator ...")
-        refresh_interval (float, optional): UI refresh interval (default 0.2; keep it below 1, never above 3 -- purely cosmetic)
-        handle_token_exceed (bool): whether to handle token overflow automatically; if enabled, the text is truncated brutally on overflow (default: True)
-        retry_times_at_unknown_error: number of retries on failure
-
-    Returns:
-        future: the result returned by GPT
-    """
- import time
- from concurrent.futures import ThreadPoolExecutor
- from request_llm.bridge_all import predict_no_ui_long_connection
-    # user feedback
-    chatbot.append([inputs_show_user, ""])
-    yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
- executor = ThreadPoolExecutor(max_workers=16)
- mutable = ["", time.time(), ""]
- def _req_gpt(inputs, history, sys_prompt):
- retry_op = retry_times_at_unknown_error
- exceeded_cnt = 0
- while True:
-            # watchdog: raise if the front end stopped feeding the timer
-            if len(mutable) >= 2 and (time.time()-mutable[1]) > 5:
-                raise RuntimeError("Program termination detected.")
- try:
-                # Case 1: completed successfully
- result = predict_no_ui_long_connection(
- inputs=inputs, llm_kwargs=llm_kwargs,
- history=history, sys_prompt=sys_prompt, observe_window=mutable)
- return result
-        except ConnectionAbortedError as token_exceeded_error:
-            # Case 2: token overflow
-            if handle_token_exceed:
-                exceeded_cnt += 1
-                # chosen to handle it: clip proportionally, keeping as much text as possible
-                from toolbox import get_reduce_token_percent
-                p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-                MAX_TOKEN = 4096
-                EXCEED_ALLO = 512 + 512 * exceeded_cnt
-                inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
-                mutable[0] += f'[Local Message] Warning: the text is too long and will be truncated. Token overflow: {n_exceed}.\n\n'
-                continue # retry
-            else:
-                # chosen to give up
-                tb_str = '```\n' + traceback.format_exc() + '```'
-                mutable[0] += f"[Local Message] Warning: a problem occurred during execution. Traceback:\n\n{tb_str}\n\n"
-                return mutable[0] # give up
-        except:
-            # Case 3: any other error -- retry a few times
-            tb_str = '```\n' + traceback.format_exc() + '```'
-            print(tb_str)
-            mutable[0] += f"[Local Message] Warning: a problem occurred during execution. Traceback:\n\n{tb_str}\n\n"
-            if retry_op > 0:
-                retry_op -= 1
-                mutable[0] += f"[Local Message] Retrying, please wait {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}:\n\n"
-                if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
-                    time.sleep(30)
-                time.sleep(5)
-                continue # retry
-            else:
-                time.sleep(5)
-                return mutable[0] # give up
-
-    # submit the task
-    future = executor.submit(_req_gpt, inputs, history, sys_prompt)
-    while True:
-        # yield once to refresh the front-end page
-        time.sleep(refresh_interval)
-        # feed the watchdog
-        mutable[1] = time.time()
-        if future.done():
-            break
-        chatbot[-1] = [chatbot[-1][0], mutable[0]]
-        yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
-
-    final_result = future.result()
-    chatbot[-1] = [chatbot[-1][0], final_result]
-    yield from update_ui(chatbot=chatbot, history=[]) # on success, the error messages are replaced by the final result
- return final_result
-
-
-def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array, inputs_show_user_array, llm_kwargs,
- chatbot, history_array, sys_prompt_array,
- refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
- handle_token_exceed=True, show_user_at_complete=False,
- retry_times_at_unknown_error=2,
- ):
-    """
-    Multi-threaded version of the GPT request helper, with a live UI and high throughput.
-    Features:
-        streams remote data to the UI in real time
-        uses a thread pool whose size can be tuned to avoid OpenAI rate-limit errors
-        handles mid-run interruption
-        on network failure, writes the traceback and any data received so far into the output
-
-    Args (variables ending in _array are lists; the list length equals the number of sub-tasks; at run time each list is unpacked and dispatched to its own worker thread):
-        inputs_array (list): the input of each sub-task
-        inputs_show_user_array (list): the input of each sub-task as shown in the report; use it to hide verbose raw inputs and keep the summary readable
-        llm_kwargs: LLM parameters
-        chatbot: handle of the chat window, used to stream output to the UI
-        history_array (list): chat history per sub-task (a list of lists: outer = sub-tasks, inner = messages)
-        sys_prompt_array (list): system prompt per sub-task, fed to GPT as a premise (e.g. "you are a translator ...")
-        refresh_interval (float, optional): UI refresh interval (default 0.2; keep it below 1, never above 3 -- purely cosmetic)
-        max_workers (int, optional): maximum number of threads (default: see config.py); cap it when there are many sub-tasks, to avoid hitting OpenAI rate limits
-        scroller_max_len (int, optional): number of trailing characters shown in the scroller (default 30; purely cosmetic)
-        handle_token_exceed (bool, optional): whether to handle token overflow automatically; if enabled, the text is truncated brutally on overflow (default: True)
-        show_user_at_complete (bool, optional): show the complete input/output pairs in the chat window when finished
-        retry_times_at_unknown_error: number of retries when a sub-task fails
-
-    Returns:
-        list: collected outputs of all sub-tasks; if a sub-task failed, its response carries the traceback, which helps debugging and locating the problem
-    """
- import time, random
- from concurrent.futures import ThreadPoolExecutor
- from request_llm.bridge_all import predict_no_ui_long_connection
- assert len(inputs_array) == len(history_array)
- assert len(inputs_array) == len(sys_prompt_array)
-    if max_workers == -1: # read the config file
-        try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
-        except: max_workers = 8
-    if max_workers <= 0 or max_workers >= 20: max_workers = 8
-    # disable multi-threading for chatglm-style local models; it can cause severe stalls
-    if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
-        max_workers = 1
-
-    executor = ThreadPoolExecutor(max_workers=max_workers)
-    n_frag = len(inputs_array)
-    # user feedback
-    chatbot.append(["Starting multi-threaded operation.", ""])
-    yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
-    # state shared across threads
-    mutable = [["", time.time(), "waiting"] for _ in range(n_frag)]
-
-    # worker-thread task
-    def _req_gpt(index, inputs, history, sys_prompt):
-        gpt_say = ""
-        retry_op = retry_times_at_unknown_error
-        exceeded_cnt = 0
-        mutable[index][2] = "running"
-        while True:
-            # watchdog: raise if the front end stopped feeding the timer
-            if len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > 5:
-                raise RuntimeError("Program termination detected.")
- try:
-                # Case 1: completed successfully
-                # time.sleep(10); raise RuntimeError("test")
-                gpt_say = predict_no_ui_long_connection(
-                    inputs=inputs, llm_kwargs=llm_kwargs, history=history,
-                    sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
-                )
-                mutable[index][2] = "done"
-                return gpt_say
-            except ConnectionAbortedError as token_exceeded_error:
-                # Case 2: token overflow
-                if handle_token_exceed:
-                    exceeded_cnt += 1
-                    # chosen to handle it: clip proportionally, keeping as much text as possible
-                    from toolbox import get_reduce_token_percent
-                    p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-                    MAX_TOKEN = 4096
-                    EXCEED_ALLO = 512 + 512 * exceeded_cnt
-                    inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
-                    gpt_say += f'[Local Message] Warning: the text is too long and will be truncated. Token overflow: {n_exceed}.\n\n'
-                    mutable[index][2] = "truncating and retrying"
-                    continue # retry
-                else:
-                    # chosen to give up
-                    tb_str = '```\n' + traceback.format_exc() + '```'
-                    gpt_say += f"[Local Message] Warning: thread {index} hit a problem during execution. Traceback:\n\n{tb_str}\n\n"
-                    if len(mutable[index][0]) > 0: gpt_say += "Answer received before this thread failed:\n\n" + mutable[index][0]
-                    mutable[index][2] = "given up (input too long)"
-                    return gpt_say # give up
-            except:
-                # Case 3: any other error -- retry a few times
-                tb_str = '```\n' + traceback.format_exc() + '```'
-                print(tb_str)
-                gpt_say += f"[Local Message] Warning: thread {index} hit a problem during execution. Traceback:\n\n{tb_str}\n\n"
-                if len(mutable[index][0]) > 0: gpt_say += "Answer received before this thread failed:\n\n" + mutable[index][0]
-                if retry_op > 0:
-                    retry_op -= 1
-                    wait = random.randint(5, 20)
-                    if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
-                        wait = wait * 3
-                        fail_info = "Binding a credit card to the OpenAI account lifts the rate limit. "
-                    else:
-                        fail_info = ""
-                    # things may improve after waiting ten-odd seconds
-                    for i in range(wait):
-                        mutable[index][2] = f"{fail_info}waiting to retry {wait-i}"; time.sleep(1)
-                    # start the retry
-                    mutable[index][2] = f"retrying {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}"
-                    continue # retry
-                else:
-                    mutable[index][2] = "failed"
-                    time.sleep(5)
-                    return gpt_say # give up
-
-    # kick off the asynchronous tasks
-    futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip(
-        range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)]
-    cnt = 0
-    while True:
-        # yield once to refresh the front-end page
-        time.sleep(refresh_interval)
-        cnt += 1
-        worker_done = [h.done() for h in futures]
-        if all(worker_done):
-            executor.shutdown()
-            break
-        # nicer UI visual effect
-        observe_win = []
-        # every thread must feed the watchdog
-        for thread_index, _ in enumerate(worker_done):
-            mutable[thread_index][1] = time.time()
-        # build a compact scroller of each thread's live output
-        for thread_index, _ in enumerate(worker_done):
-            print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-                replace('\n', '').replace('```', '...').replace(
-                    ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
-            observe_win.append(print_something_really_funny)
-        # render a status line per thread
-        stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
-                            if not done else f'`{mutable[thread_index][2]}`\n\n'
-                            for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)])
-        # show progress in the front end
-        chatbot[-1] = [chatbot[-1][0], f'Multi-threaded operation started. Progress: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
-        yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
-
-    # all asynchronous tasks have finished: gather the results
-    gpt_response_collection = []
-    for inputs_show_user, f in zip(inputs_show_user_array, futures):
-        gpt_res = f.result()
-        gpt_response_collection.extend([inputs_show_user, gpt_res])
-
-    # optionally show the final results in the UI
-    if show_user_at_complete:
-        for inputs_show_user, f in zip(inputs_show_user_array, futures):
-            gpt_res = f.result()
-            chatbot.append([inputs_show_user, gpt_res])
-            yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
- time.sleep(0.3)
- return gpt_response_collection
-
-
-def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
-    def cut(txt_tocut, must_break_at_empty_line): # recursive helper
- if get_token_fn(txt_tocut) <= limit:
- return [txt_tocut]
- else:
- lines = txt_tocut.split('\n')
- estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
- estimated_line_cut = int(estimated_line_cut)
-            for cnt in reversed(range(estimated_line_cut)):
-                if must_break_at_empty_line:
-                    if lines[cnt] != "":
-                        continue
-                prev = "\n".join(lines[:cnt])
-                post = "\n".join(lines[cnt:])
-                if get_token_fn(prev) < limit:
-                    break
-            if cnt == 0:
-                raise RuntimeError("A single line is extremely long!")
-            # recursively split the remainder and chain the resulting lists
-            result = [prev]
-            result.extend(cut(post, must_break_at_empty_line))
-            return result
- try:
- return cut(txt, must_break_at_empty_line=True)
- except RuntimeError:
- return cut(txt, must_break_at_empty_line=False)
-
-
-def force_breakdown(txt, limit, get_token_fn):
-    """
-    When the text cannot be split at punctuation or blank lines, fall back to brute-force character-level cutting.
-    """
- for i in reversed(range(len(txt))):
- if get_token_fn(txt[:i]) < limit:
- return txt[:i], txt[i:]
-    return "Unknown tiktoken error", "Unknown tiktoken error"
-
-def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
-    # recursive helper
-    def cut(txt_tocut, must_break_at_empty_line, break_anyway=False):
- if get_token_fn(txt_tocut) <= limit:
- return [txt_tocut]
- else:
- lines = txt_tocut.split('\n')
- estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
- estimated_line_cut = int(estimated_line_cut)
- cnt = 0
- for cnt in reversed(range(estimated_line_cut)):
- if must_break_at_empty_line:
- if lines[cnt] != "":
- continue
- prev = "\n".join(lines[:cnt])
- post = "\n".join(lines[cnt:])
- if get_token_fn(prev) < limit:
- break
-            if cnt == 0:
-                if break_anyway:
-                    prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
-                else:
-                    raise RuntimeError(f"A single line is extremely long! {txt_tocut}")
-            # recursively split the remainder and chain the resulting lists
-            result = [prev]
-            result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway))
-            return result
- return result
-    try:
-        # attempt 1: split at double blank lines (\n\n)
-        return cut(txt, must_break_at_empty_line=True)
-    except RuntimeError:
-        try:
-            # attempt 2: split at single newlines (\n)
-            return cut(txt, must_break_at_empty_line=False)
-        except RuntimeError:
-            try:
-                # attempt 3: split at English periods (.)
-                res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) # the Chinese full stop is deliberate; it acts as a unique marker
-                return [r.replace('。\n', '.') for r in res]
-            except RuntimeError as e:
-                try:
-                    # attempt 4: split at Chinese full stops (。)
-                    res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False)
-                    return [r.replace('。。\n', '。') for r in res]
-                except RuntimeError as e:
-                    # attempt 5: out of options -- make an arbitrary cut
-                    return cut(txt, must_break_at_empty_line=False, break_anyway=True)
-
-
-
-def read_and_clean_pdf_text(fp):
-    """
-    Split and clean the text of a PDF. Uses many tricks; the logic is messy, but the results are surprisingly good.
-
-    **Input**
-    - `fp`: path of the PDF file whose text should be read and cleaned
-
-    **Output**
-    - `meta_txt`: the cleaned text content as a single string
-    - `page_one_meta`: the cleaned text content of the first page, as a list of blocks
-
-    **What it does**
-    Reads the PDF file and cleans its text content. The cleaning rules are:
-    - extract the text of every block element and merge it into one string
-    - drop short blocks (fewer than 100 characters), replacing them with a newline
-    - remove redundant blank lines
-    - merge paragraph blocks that start with a lowercase letter, joining them with a space
-    - collapse duplicated newlines
-    - replace each newline with two newlines, so that paragraphs are separated by a blank line
-    """
- import fitz, copy
- import re
- import numpy as np
- from colorful import print亮黄, print亮绿
-    fc = 0  # index 0: text
-    fs = 1  # index 1: font
-    fb = 2  # index 2: bounding box
-    REMOVE_FOOT_NOTE = True # discard content that is not body text (smaller font than the body, e.g. references, footnotes, figure captions)
-    REMOVE_FOOT_FFSIZE_PERCENT = 0.95 # a block whose font is smaller than this fraction of the body font is judged non-body (body font size is not always perfectly uniform; there are tiny, invisible variations)
-    def primary_ffsize(l):
-        """
-        Return the dominant font size of a text block.
-        """
-        fsize_statiscs = {}
-        for wtf in l['spans']:
-            if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
-            fsize_statiscs[wtf['size']] += len(wtf['text'])
-        return max(fsize_statiscs, key=fsize_statiscs.get)
-
-    def ffsize_same(a,b):
-        """
-        Check whether two font sizes are approximately equal.
-        """
-        return abs((a-b)/max(a,b)) < 0.02
-
- with fitz.open(fp) as doc:
- meta_txt = []
- meta_font = []
-
- meta_line = []
- meta_span = []
-        ############################## <Step 1: collect raw information> ##################################
-        for index, page in enumerate(doc):
-            # file_content += page.get_text()
-            text_areas = page.get_text("dict")  # extract the text on this page
- for t in text_areas['blocks']:
- if 'lines' in t:
- pf = 998
- for l in t['lines']:
- txt_line = "".join([wtf['text'] for wtf in l['spans']])
- if len(txt_line) == 0: continue
- pf = primary_ffsize(l)
- meta_line.append([txt_line, pf, l['bbox'], l])
- for wtf in l['spans']: # for l in t['lines']:
- meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])])
- # meta_line.append(["NEW_BLOCK", pf])
-            # block-level extraction: join the spans within each line, then the lines within each block, undoing words hyphenated across lines
- meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
- '- ', '') for t in text_areas['blocks'] if 'lines' in t])
- meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']])
- for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t])
- if index == 0:
- page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
- '- ', '') for t in text_areas['blocks'] if 'lines' in t]
-
-        ############################## <Step 2: determine the main body font> ##################################
- fsize_statiscs = {}
- for span in meta_span:
- if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0
- fsize_statiscs[span[1]] += span[2]
- main_fsize = max(fsize_statiscs, key=fsize_statiscs.get)
- if REMOVE_FOOT_NOTE:
- give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT
-
-        ############################## <Step 3: split and regroup> ##################################
- mega_sec = []
- sec = []
- for index, line in enumerate(meta_line):
- if index == 0:
- sec.append(line[fc])
- continue
- if REMOVE_FOOT_NOTE:
- if meta_line[index][fs] <= give_up_fize_threshold:
- continue
- if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]):
-                # try to detect a paragraph boundary
- if meta_line[index][fc].endswith('.') and\
- (meta_line[index-1][fc] != 'NEW_BLOCK') and \
- (meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7:
- sec[-1] += line[fc]
- sec[-1] += "\n\n"
- else:
- sec[-1] += " "
- sec[-1] += line[fc]
- else:
- if (index+1 < len(meta_line)) and \
- meta_line[index][fs] > main_fsize:
-                    # a single line in a large font -> treat it as a heading
- mega_sec.append(copy.deepcopy(sec))
- sec = []
- sec.append("# " + line[fc])
- else:
-                    # try to detect a section boundary
- if meta_line[index-1][fs] > meta_line[index][fs]:
- sec.append("\n" + line[fc])
- else:
- sec.append(line[fc])
- mega_sec.append(copy.deepcopy(sec))
-
- finals = []
- for ms in mega_sec:
- final = " ".join(ms)
- final = final.replace('- ', ' ')
- finals.append(final)
- meta_txt = finals
-
-        ############################## <Step 4: miscellaneous post-processing> ##################################
- def 把字符太少的块清除为回车(meta_txt):
- for index, block_txt in enumerate(meta_txt):
- if len(block_txt) < 100:
- meta_txt[index] = '\n'
- return meta_txt
- meta_txt = 把字符太少的块清除为回车(meta_txt)
-
- def 清理多余的空行(meta_txt):
- for index in reversed(range(1, len(meta_txt))):
- if meta_txt[index] == '\n' and meta_txt[index-1] == '\n':
- meta_txt.pop(index)
- return meta_txt
- meta_txt = 清理多余的空行(meta_txt)
-
- def 合并小写开头的段落块(meta_txt):
- def starts_with_lowercase_word(s):
- pattern = r"^[a-z]+"
- match = re.match(pattern, s)
- if match:
- return True
- else:
- return False
- for _ in range(100):
- for index, block_txt in enumerate(meta_txt):
- if starts_with_lowercase_word(block_txt):
- if meta_txt[index-1] != '\n':
- meta_txt[index-1] += ' '
- else:
- meta_txt[index-1] = ''
- meta_txt[index-1] += meta_txt[index]
- meta_txt[index] = '\n'
- return meta_txt
- meta_txt = 合并小写开头的段落块(meta_txt)
- meta_txt = 清理多余的空行(meta_txt)
-
- meta_txt = '\n'.join(meta_txt)
-        # collapse duplicated newlines
- for _ in range(5):
- meta_txt = meta_txt.replace('\n\n', '\n')
-
-        # newline -> double newline
- meta_txt = meta_txt.replace('\n', '\n\n')
-
-        ############################## <Step 5: optionally preview the split result> ##################################
- # for f in finals:
- # print亮黄(f)
- # print亮绿('***************************')
-
- return meta_txt, page_one_meta
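The two `breakdown_txt_to_satisfy_token_limit*` helpers above share one idea: estimate a cut point from the token budget, back off to the nearest line boundary that fits, and recurse on the remainder. A minimal, dependency-free sketch of that recursion, using a whitespace word count as a stand-in for the tiktoken-based `get_token_fn` (`breakdown_to_limit` and `word_count` are hypothetical names, not part of the deleted module):

```python
def breakdown_to_limit(text, get_token_fn, limit):
    """Recursively split `text` into chunks whose token count stays under `limit`.

    Mirrors the deleted helper: estimate the cut position assuming tokens are
    spread evenly over the lines, back off to a line boundary that fits, then
    recurse on the remainder.
    """
    if get_token_fn(text) <= limit:
        return [text]
    lines = text.split('\n')
    cut_at = max(1, int(limit / get_token_fn(text) * len(lines)))
    for cnt in reversed(range(1, cut_at + 1)):
        prev = '\n'.join(lines[:cnt])
        if get_token_fn(prev) < limit:
            break
    else:
        raise RuntimeError('a single line exceeds the limit')
    post = '\n'.join(lines[cnt:])
    # chunks re-join losslessly: text == prev + '\n' + post
    return [prev] + breakdown_to_limit(post, get_token_fn, limit)

# stand-in tokenizer: count whitespace-separated words
word_count = lambda s: len(s.split())
chunks = breakdown_to_limit('\n'.join(['word ' * 5] * 8), word_count, limit=12)
```

Joining the chunks with `'\n'` reconstructs the original text, so the split is lossless, which is what lets the callers above feed the pieces to the model independently.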
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py
deleted file mode 100644
index a99d727712eb44b875576443837c81a442c72a6f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import argparse
-import math
-import os
-
-import torch
-from neural_compressor.utils.pytorch import load
-from PIL import Image
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from diffusers import AutoencoderKL, StableDiffusionPipeline, UNet2DConditionModel
-
-
-def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-m",
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "-c",
- "--caption",
- type=str,
- default="robotic cat with wings",
- help="Text used to generate images.",
- )
- parser.add_argument(
- "-n",
- "--images_num",
- type=int,
- default=4,
-        help="How many images to generate.",
- )
- parser.add_argument(
- "-s",
- "--seed",
- type=int,
- default=42,
- help="Seed for random process.",
- )
- parser.add_argument(
- "-ci",
- "--cuda_id",
- type=int,
- default=0,
- help="cuda_id.",
- )
- args = parser.parse_args()
- return args
-
-
-def image_grid(imgs, rows, cols):
-    if not len(imgs) == rows * cols:
-        raise ValueError("The number of images does not match the specified rows and columns.")
-
- w, h = imgs[0].size
- grid = Image.new("RGB", size=(cols * w, rows * h))
- grid_w, grid_h = grid.size
-
- for i, img in enumerate(imgs):
- grid.paste(img, box=(i % cols * w, i // cols * h))
- return grid
-
-
-def generate_images(
- pipeline,
- prompt="robotic cat with wings",
- guidance_scale=7.5,
- num_inference_steps=50,
- num_images_per_prompt=1,
- seed=42,
-):
- generator = torch.Generator(pipeline.device).manual_seed(seed)
- images = pipeline(
- prompt,
- guidance_scale=guidance_scale,
- num_inference_steps=num_inference_steps,
- generator=generator,
- num_images_per_prompt=num_images_per_prompt,
- ).images
- _rows = int(math.sqrt(num_images_per_prompt))
- grid = image_grid(images, rows=_rows, cols=num_images_per_prompt // _rows)
- return grid, images
-
-
-args = parse_args()
-# Load models and create wrapper for stable diffusion
-tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
-vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
-unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
-
-pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path, text_encoder=text_encoder, vae=vae, unet=unet, tokenizer=tokenizer
-)
-pipeline.safety_checker = lambda images, clip_input: (images, False)
-if os.path.exists(os.path.join(args.pretrained_model_name_or_path, "best_model.pt")):
- unet = load(args.pretrained_model_name_or_path, model=unet)
- unet.eval()
- setattr(pipeline, "unet", unet)
-else:
- unet = unet.to(torch.device("cuda", args.cuda_id))
-pipeline = pipeline.to(unet.device)
-grid, images = generate_images(pipeline, prompt=args.caption, num_images_per_prompt=args.images_num, seed=args.seed)
-grid.save(os.path.join(args.pretrained_model_name_or_path, "{}.png".format("_".join(args.caption.split()))))
-dirname = os.path.join(args.pretrained_model_name_or_path, "_".join(args.caption.split()))
-os.makedirs(dirname, exist_ok=True)
-for idx, image in enumerate(images):
- image.save(os.path.join(dirname, "{}.png".format(idx + 1)))
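`image_grid` above pastes image `i` at box `(i % cols * w, i // cols * h)`: row-major placement, left to right, top to bottom. A small sketch of just that coordinate arithmetic, without PIL (`grid_boxes` is a hypothetical helper name):

```python
def grid_boxes(n, rows, cols, w, h):
    """Top-left paste coordinates for n equally sized tiles in a rows x cols
    grid, mirroring image_grid: tile i goes to (i % cols * w, i // cols * h)."""
    if n != rows * cols:
        raise ValueError("rows * cols must equal the number of images")
    return [(i % cols * w, i // cols * h) for i in range(n)]

boxes = grid_boxes(4, rows=2, cols=2, w=512, h=512)
# → [(0, 0), (512, 0), (0, 512), (512, 512)]
```

Note that `generate_images` above picks `rows = int(sqrt(num_images_per_prompt))`, so non-square counts rely on `num_images_per_prompt // rows` filling the remaining columns.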
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py
deleted file mode 100644
index eff8d9458c877c5db894957e0b1b4597e40da6ab..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- backbone=dict(
- type='CGNet',
- norm_cfg=norm_cfg,
- in_channels=3,
- num_channels=(32, 64, 128),
- num_blocks=(3, 21),
- dilations=(2, 4),
- reductions=(8, 16)),
- decode_head=dict(
- type='FCNHead',
- in_channels=256,
- in_index=2,
- channels=256,
- num_convs=0,
- concat_input=False,
- dropout_ratio=0,
- num_classes=19,
- norm_cfg=norm_cfg,
- loss_decode=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- loss_weight=1.0,
- class_weight=[
- 2.5959933, 6.7415504, 3.5354059, 9.8663225, 9.690899, 9.369352,
- 10.289121, 9.953208, 4.3097677, 9.490387, 7.674431, 9.396905,
- 10.347791, 6.3927646, 10.226669, 10.241062, 10.280587,
- 10.396974, 10.055647
- ])),
- # model training and testing settings
- train_cfg=dict(sampler=None),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py
deleted file mode 100644
index df6f36ef7c3b71ba7979aa7a1b226b3e3ebd9bb4..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3_r50-d8_512x512_160k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 9ca7fd23cedc0567a015bd5f8641a509ead6110a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
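These config files compose through `_base_` inheritance: the child dict is recursively merged over the base configs, so `decode_head=dict(num_classes=150)` overrides that single key while `type`, `channels`, and the rest are inherited. A minimal sketch of that merge rule (`merge_cfg` is a hypothetical helper, not mmcv's actual implementation):

```python
def merge_cfg(base, override):
    """Recursively merge `override` into `base`, the way _base_ config
    inheritance works: nested dicts are merged key by key, any other
    value is replaced outright. Returns a new dict; inputs are untouched."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)
        else:
            merged[key] = value
    return merged

base = dict(decode_head=dict(type='FCNHead', num_classes=19, channels=512))
child = dict(decode_head=dict(num_classes=150))
cfg = merge_cfg(base, child)
# decode_head keeps type and channels; only num_classes changes
```

This is why the two-line ADE20K configs above are so short: everything not restated is pulled in from the `_base_` files unchanged.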
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py
deleted file mode 100644
index 1084a57e978195df6d45a9a00415953ddbaeeb51..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = './fcn_hr18_512x512_40k_voc12aug.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w48',
- backbone=dict(
- extra=dict(
- stage2=dict(num_channels=(48, 96)),
- stage3=dict(num_channels=(48, 96, 192)),
- stage4=dict(num_channels=(48, 96, 192, 384)))),
- decode_head=dict(
- in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384])))
diff --git a/spaces/Artrajz/vits-simple-api/vits/commons.py b/spaces/Artrajz/vits-simple-api/vits/commons.py
deleted file mode 100644
index bda0a67534ac34bd02dc28b845619b2433a40df6..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/vits/commons.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
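`intersperse` above pads a sequence with a separator on both sides and between every pair of elements, by preallocating a list of length `2n + 1` and filling the odd slots. Reproduced here with a usage check (VITS uses this to insert blank tokens between phoneme ids):

```python
def intersperse(lst, item):
    # length 2n + 1: `item` sits at the even indices,
    # the original elements land on the odd indices
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst
    return result

# e.g. phoneme ids padded with a blank token 0
assert intersperse([5, 6, 7], 0) == [0, 5, 0, 6, 0, 7, 0]
```

The slice assignment `result[1::2] = lst` only succeeds because both sides have exactly `len(lst)` slots, which is what the `2n + 1` preallocation guarantees.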
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py
deleted file mode 100644
index 12ab23713a70dda46edd300bd975b02bfb2be031..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from typing import Any, cast, Set, TYPE_CHECKING
-from inspect import isclass
-
-if TYPE_CHECKING:
- from pip._vendor.rich.console import RenderableType
-
-_GIBBERISH = """aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf"""
-
-
-def is_renderable(check_object: Any) -> bool:
- """Check if an object may be rendered by Rich."""
- return (
- isinstance(check_object, str)
- or hasattr(check_object, "__rich__")
- or hasattr(check_object, "__rich_console__")
- )
-
-
-def rich_cast(renderable: object) -> "RenderableType":
- """Cast an object to a renderable by calling __rich__ if present.
-
- Args:
- renderable (object): A potentially renderable object
-
- Returns:
- object: The result of recursively calling __rich__.
- """
- from pip._vendor.rich.console import RenderableType
-
- rich_visited_set: Set[type] = set() # Prevent potential infinite loop
- while hasattr(renderable, "__rich__") and not isclass(renderable):
- # Detect object which claim to have all the attributes
- if hasattr(renderable, _GIBBERISH):
- return repr(renderable)
- cast_method = getattr(renderable, "__rich__")
- renderable = cast_method()
- renderable_type = type(renderable)
- if renderable_type in rich_visited_set:
- break
- rich_visited_set.add(renderable_type)
-
- return cast(RenderableType, renderable)
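`rich_cast` above keeps calling `__rich__` until the object stops providing one, guarding against infinite loops by remembering the types already produced. A dependency-free sketch of that loop (`cast_via_rich` and `Greeting` are hypothetical names; the real function also special-cases the `_GIBBERISH` probe to catch mock objects that claim every attribute):

```python
from typing import Any

def cast_via_rich(obj: Any) -> Any:
    """Repeatedly call __rich__ until the object has none, skipping classes
    themselves and breaking if a result type repeats (cycle guard)."""
    seen = set()
    while hasattr(obj, "__rich__") and not isinstance(obj, type):
        obj = obj.__rich__()
        if type(obj) in seen:
            break
        seen.add(type(obj))
    return obj

class Greeting:
    def __rich__(self) -> str:
        return "[bold]hello[/bold]"

assert cast_via_rich(Greeting()) == "[bold]hello[/bold]"
```

Objects without `__rich__` (plain strings, for instance) pass through unchanged, matching the `is_renderable` contract above.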
diff --git a/spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh b/spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh
deleted file mode 100644
index d3f8f40d9dfaca8e0f4ef97d1885515359528b62..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-conda run --live-stream -n WavJourney python -u services.py 2>&1 | tee services_logs/service.out &
-conda run --live-stream -n WavJourney python -u ui_client.py 2>&1 | tee services_logs/wavejourney.out
\ No newline at end of file
diff --git a/spaces/Awesimo/jojogan/e4e/editings/latent_editor.py b/spaces/Awesimo/jojogan/e4e/editings/latent_editor.py
deleted file mode 100644
index 4bebca2f5c86f71b58fa1f30d24bfcb0da06d88f..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/editings/latent_editor.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import torch
-import sys
-sys.path.append(".")
-sys.path.append("..")
-from editings import ganspace, sefa
-from utils.common import tensor2im
-
-
-class LatentEditor(object):
- def __init__(self, stylegan_generator, is_cars=False):
- self.generator = stylegan_generator
- self.is_cars = is_cars # Since the cars StyleGAN output is 384x512, there is a need to crop the 512x512 output.
-
- def apply_ganspace(self, latent, ganspace_pca, edit_directions):
- edit_latents = ganspace.edit(latent, ganspace_pca, edit_directions)
- return self._latents_to_image(edit_latents)
-
- def apply_interfacegan(self, latent, direction, factor=1, factor_range=None):
- edit_latents = []
-        if factor_range is not None:  # apply a range of editing factors, for example (-5, 5)
- for f in range(*factor_range):
- edit_latent = latent + f * direction
- edit_latents.append(edit_latent)
- edit_latents = torch.cat(edit_latents)
- else:
- edit_latents = latent + factor * direction
- return self._latents_to_image(edit_latents)
-
- def apply_sefa(self, latent, indices=[2, 3, 4, 5], **kwargs):
- edit_latents = sefa.edit(self.generator, latent, indices, **kwargs)
- return self._latents_to_image(edit_latents)
-
- # Currently, in order to apply StyleFlow editings, one should run inference,
- # save the latent codes and load them from the official StyleFlow repository.
- # def apply_styleflow(self):
- # pass
-
- def _latents_to_image(self, latents):
- with torch.no_grad():
- images, _ = self.generator([latents], randomize_noise=False, input_is_latent=True)
- if self.is_cars:
- images = images[:, :, 64:448, :] # 512x512 -> 384x512
- horizontal_concat_image = torch.cat(list(images), 2)
- final_image = tensor2im(horizontal_concat_image)
- return final_image
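The `factor_range` branch of `apply_interfacegan` above sweeps an integer editing factor over a half-open range and offsets the latent by `f * direction` for each factor. A stdlib-only sketch of that same arithmetic (plain Python lists stand in for latent tensors; no StyleGAN or torch required — the function name is ours, not part of the repo):

```python
def interfacegan_sweep(latent, direction, factor_range):
    """Return latent + f * direction for each integer f in factor_range,
    mirroring the factor_range branch of LatentEditor.apply_interfacegan."""
    edits = []
    for f in range(*factor_range):  # half-open, e.g. (-2, 2) -> -2, -1, 0, 1
        edits.append([x + f * d for x, d in zip(latent, direction)])
    return edits

# Sweep a 2-D "latent" along a direction with factors -2..1.
frames = interfacegan_sweep([1.0, 0.0], [0.5, 1.0], (-2, 2))
print(len(frames))   # 4 edited latents
print(frames[0])     # [0.0, -2.0]
```

The real method then concatenates the edited latents and renders them in one generator pass, which is why the edits are collected into a batch rather than rendered one at a time.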
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py
deleted file mode 100644
index 612e66f527397b0e940d716f4ad4f799b962954a..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py
+++ /dev/null
@@ -1,192 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-from typing import Any, Dict, List, Tuple, Union
-import torch
-
-
-class Instances:
- """
- This class represents a list of instances in an image.
- It stores the attributes of instances (e.g., boxes, masks, labels, scores) as "fields".
- All fields must have the same ``__len__`` which is the number of instances.
-
- All other (non-field) attributes of this class are considered private:
- they must start with '_' and are not modifiable by a user.
-
- Some basic usage:
-
- 1. Set/get/check a field:
-
- .. code-block:: python
-
- instances.gt_boxes = Boxes(...)
- print(instances.pred_masks) # a tensor of shape (N, H, W)
- print('gt_masks' in instances)
-
- 2. ``len(instances)`` returns the number of instances
- 3. Indexing: ``instances[indices]`` will apply the indexing on all the fields
- and returns a new :class:`Instances`.
- Typically, ``indices`` is an integer vector of indices,
- or a binary mask of length ``num_instances``
-
- .. code-block:: python
-
- category_3_detections = instances[instances.pred_classes == 3]
- confident_detections = instances[instances.scores > 0.9]
- """
-
- def __init__(self, image_size: Tuple[int, int], **kwargs: Any):
- """
- Args:
- image_size (height, width): the spatial size of the image.
- kwargs: fields to add to this `Instances`.
- """
- self._image_size = image_size
- self._fields: Dict[str, Any] = {}
- for k, v in kwargs.items():
- self.set(k, v)
-
- @property
- def image_size(self) -> Tuple[int, int]:
- """
- Returns:
- tuple: height, width
- """
- return self._image_size
-
- def __setattr__(self, name: str, val: Any) -> None:
- if name.startswith("_"):
- super().__setattr__(name, val)
- else:
- self.set(name, val)
-
- def __getattr__(self, name: str) -> Any:
- if name == "_fields" or name not in self._fields:
- raise AttributeError("Cannot find field '{}' in the given Instances!".format(name))
- return self._fields[name]
-
- def set(self, name: str, value: Any) -> None:
- """
- Set the field named `name` to `value`.
- The length of `value` must be the number of instances,
- and must agree with other existing fields in this object.
- """
- data_len = len(value)
- if len(self._fields):
- assert (
- len(self) == data_len
- ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self))
- self._fields[name] = value
-
- def has(self, name: str) -> bool:
- """
- Returns:
- bool: whether the field called `name` exists.
- """
- return name in self._fields
-
- def remove(self, name: str) -> None:
- """
- Remove the field called `name`.
- """
- del self._fields[name]
-
- def get(self, name: str) -> Any:
- """
- Returns the field called `name`.
- """
- return self._fields[name]
-
- def get_fields(self) -> Dict[str, Any]:
- """
- Returns:
- dict: a dict which maps names (str) to data of the fields
-
- Modifying the returned dict will modify this instance.
- """
- return self._fields
-
- # Tensor-like methods
- def to(self, *args: Any, **kwargs: Any) -> "Instances":
- """
- Returns:
- Instances: all fields are called with a `to(device)`, if the field has this method.
- """
- ret = Instances(self._image_size)
- for k, v in self._fields.items():
- if hasattr(v, "to"):
- v = v.to(*args, **kwargs)
- ret.set(k, v)
- return ret
-
- def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Instances":
- """
- Args:
- item: an index-like object and will be used to index all the fields.
-
- Returns:
- If `item` is a string, return the data in the corresponding field.
- Otherwise, returns an `Instances` where all fields are indexed by `item`.
- """
- if type(item) == int:
- if item >= len(self) or item < -len(self):
- raise IndexError("Instances index out of range!")
- else:
- item = slice(item, None, len(self))
-
- ret = Instances(self._image_size)
- for k, v in self._fields.items():
- ret.set(k, v[item])
- return ret
-
- def __len__(self) -> int:
- for v in self._fields.values():
- # use __len__ because len() has to be int and is not friendly to tracing
- return v.__len__()
- raise NotImplementedError("Empty Instances does not support __len__!")
-
- def __iter__(self):
- raise NotImplementedError("`Instances` object is not iterable!")
-
- @staticmethod
- def cat(instance_lists: List["Instances"]) -> "Instances":
- """
- Args:
- instance_lists (list[Instances])
-
- Returns:
- Instances
- """
- assert all(isinstance(i, Instances) for i in instance_lists)
- assert len(instance_lists) > 0
- if len(instance_lists) == 1:
- return instance_lists[0]
-
- image_size = instance_lists[0].image_size
- if not isinstance(image_size, torch.Tensor): # could be a tensor in tracing
- for i in instance_lists[1:]:
- assert i.image_size == image_size
- ret = Instances(image_size)
- for k in instance_lists[0]._fields.keys():
- values = [i.get(k) for i in instance_lists]
- v0 = values[0]
- if isinstance(v0, torch.Tensor):
- values = torch.cat(values, dim=0)
- elif isinstance(v0, list):
- values = list(itertools.chain(*values))
- elif hasattr(type(v0), "cat"):
- values = type(v0).cat(values)
- else:
- raise ValueError("Unsupported type {} for concatenation".format(type(v0)))
- ret.set(k, values)
- return ret
-
- def __str__(self) -> str:
- s = self.__class__.__name__ + "("
- s += "num_instances={}, ".format(len(self))
- s += "image_height={}, ".format(self._image_size[0])
- s += "image_width={}, ".format(self._image_size[1])
- s += "fields=[{}])".format(", ".join((f"{k}: {v}" for k, v in self._fields.items())))
- return s
-
- __repr__ = __str__
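The two invariants that make `Instances` work are visible in `set()` and `cat()` above: every field must have the same length, and concatenation merges fields element-wise across objects. A toy sketch of just those two behaviors, using plain lists instead of tensors (the class name and shape are ours — this is an illustration, not the real detectron2 API):

```python
import itertools

class MiniInstances:
    """Toy re-implementation of the length-checking and cat() semantics
    of detectron2's Instances, with plain lists in place of tensors."""
    def __init__(self):
        self._fields = {}

    def set(self, name, value):
        # Enforce the "all fields share one length" invariant.
        if self._fields:
            n = len(next(iter(self._fields.values())))
            assert len(value) == n, f"field length {len(value)} != {n}"
        self._fields[name] = value

    @staticmethod
    def cat(instance_lists):
        # Merge each field across all inputs, preserving order.
        ret = MiniInstances()
        for k in instance_lists[0]._fields:
            ret.set(k, list(itertools.chain(*[i._fields[k] for i in instance_lists])))
        return ret

a, b = MiniInstances(), MiniInstances()
a.set("scores", [0.9, 0.8]); a.set("labels", [1, 2])
b.set("scores", [0.7]);      b.set("labels", [3])
merged = MiniInstances.cat([a, b])
print(merged._fields["scores"])  # [0.9, 0.8, 0.7]
```

The real class additionally dispatches on value type in `cat()` (`torch.cat` for tensors, `type(v0).cat` for box-like structures), which is what lets arbitrary field types ride along as long as they support concatenation.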
diff --git a/spaces/Benson/text-generation/Examples/3d Paint Download.md b/spaces/Benson/text-generation/Examples/3d Paint Download.md
deleted file mode 100644
index efbb41bc17df491bb35b65ecd7c8c18f1794650a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/3d Paint Download.md
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
How to Download and Use 3D Paint Software
-
If you are looking for a way to unleash your creativity and make stunning artwork in three dimensions, you may want to try some of the best 3D paint software available. In this article, we will show you what 3D paint software is, how to download it, and how to use it.
3D paint software is a type of modeling application that lets you create, edit, and render 3D objects and scenes. Unlike traditional 2D paint software, which only works on flat surfaces, 3D paint software lets you manipulate shapes in a virtual space and apply realistic textures and colors to them.
-
The difference between 2D and 3D painting
-
The main difference between 2D and 3D painting is the dimensionality of the objects. In 2D painting, you can only draw lines, curves, and shapes on a plane. In 3D painting, you can create solid objects that have depth, width, and height. You can also rotate, scale, and move them within a 3D environment.
-
The benefits of 3D painting
-
Some of the benefits of using 3D paint software are:
-
-
You can create more realistic, immersive artwork that captures real-world detail and lighting.
-
You can experiment with different perspectives and angles to show off your work.
-
You can add depth and volume to your drawings and make them pop.
-
You can combine different elements and materials to create unique compositions.
-
You can export your work in various formats and share it online or print it.
-
-
How to download 3D paint software
-
There are many options for downloading 3D paint software, depending on your preferences and needs. Here are some of the most popular:
-
Paint 3D from the Microsoft Store
-
-
-
Type "Paint 3D" into the search box on the taskbar and open the Microsoft Store entry from the list of results.
-
Click "Get" in the Store app and wait for the installation to complete.
-
Launch Paint 3D from the Start menu or the taskbar.
-
-
Source:
-
[Open Microsoft Paint]( 1 )
-
-
Screenshot:
-
-
Advertisement:
-
If you want to learn more about Paint 3D and how to use it effectively, check out this online course that will teach you everything you need to know about this amazing software. You will learn how to create stunning 2D and 3D artwork, how to apply textures and effects, how to export and share your work, and much more. Click here to enroll now and get a special discount!
-
Table:
-
| Pros | Cons |
| --- | --- |
| Free and easy to use | Limited features and customization |
| Built into Windows 10 | Not compatible with older versions of Windows |
| Offers both 2D and 3D options | Not very advanced or professional |
-
-
-
Adobe Substance 3D Painter
-
If you are looking for more advanced, professional 3D paint software, you may want to try Adobe Substance 3D Painter. This is a powerful application that lets you create realistic, detailed textures and materials for your 3D models. You can use a variety of brushes, tools, and presets, as well as import your own images or models from other sources. You can also export your work in various formats and integrate it with other Adobe products or third-party software. To download Adobe Substance 3D Painter, you need an Adobe Creative Cloud subscription. You can get a free 30-day trial or choose a plan that suits your needs. To download Adobe Substance 3D Painter from the Adobe website, follow these steps:
-
-
-
Sign in with your Adobe ID, or create one if you don't have one.
-
Follow the on-screen instructions to download and install the software.
-
Launch Adobe Substance 3D Painter from the Creative Cloud app or the Start menu.
-
-
Source:
-
[Adobe Substance 3D Painter]
-
Screenshot:
-
-
Advertisement:
-
If you want to master Adobe Substance 3D Painter and create amazing textures and materials for your 3D models, you should check out this online course that will teach you everything you need to know about this software. You will learn how to use the interface, brushes, tools, presets, and layers, how to import and export your work, how to integrate it with other software, and much more. Click here to enroll now and get a special discount!
-
Table:
-
| Pros | Cons |
| --- | --- |
| Advanced and professional | Expensive and complex |
| Realistic and detailed | Requires high-end hardware and software |
| Integrates with Adobe products and other software | Requires an Adobe Creative Cloud subscription |
-
-
-
Microsoft Paint 3D from FileHippo
-
If you want to download Microsoft Paint 3D without going through the Microsoft Store, you can use FileHippo, a website that offers free downloads of various programs. Microsoft Paint 3D from FileHippo is the same as the Microsoft Store version, but it does not require any registration or installation. You can simply download the executable file and run it on your computer. To download Microsoft Paint 3D from FileHippo, follow these steps:
-
-
Go to [Microsoft Paint 3D] on FileHippo and click "Download Latest Version".
-
Select a folder where you want to save the file and wait for the download to complete.
-
-
-
Source:
-
[Microsoft Paint 3D]
-
Screenshot:
-
-
Advertisement:
-
If you want to learn more about Microsoft Paint 3D and how to use it effectively, check out this online course that will teach you everything you need to know about this amazing software. You will learn how to create stunning 2D and 3D artwork, how to apply textures and effects, how to export and share your work, and much more. Click here to enroll now and get a special discount!
-
Table:
-
| Pros | Cons |
| --- | --- |
| Free and easy to use | Limited features and customization |
| No installation or registration required | Not compatible with older versions of Windows |
| Offers both 2D and 3D options | Not very advanced or professional |
-
-
-
How to use 3D paint software
-
Now that you have downloaded your preferred 3D paint software, you may be wondering how to use it. While each program has its own interface and features, there are some common steps you can follow to create your own 3D artwork. Here are some of them:
-
Create a new project
-
The first step is to create a new project or file where you will work on your 3D painting. Depending on the software, you may need to choose a template, canvas size, resolution, or background color. You can also name your project and save it to a folder of your choice.
-
Choose a 3D object
-
The next step is to choose a 3D object to paint on. You can use one of the ready-made models that come with the software, import your own model from another source, or build a model from scratch. You can also use basic shapes such as cubes, spheres, cylinders, or cones to construct your own model.
-
Apply textures and colors
-
The third step is to apply textures and colors to your 3D object. You can use the brushes, tools, and presets the software provides, or import your own images or textures from other sources. You can also adjust the size, opacity, hardness, and angle of the brushes, as well as the blending modes, layers, and masks of the textures. You can also use the color picker, color wheel, or color palette to choose the colors you want to use.
-
Add stickers and effects
-
The fourth step is to add stickers and effects to your 3D object. Stickers are images you can place on top of your object, such as logos, patterns, symbols, or text. Effects are filters you can apply to your object, such as shadows, lights, reflections, or distortions. You can use the tools and presets the software provides, or import your own stickers and effects from other sources.
-
Export and share your work
-
The final step is to export and share your work. You can save your project as a file in various formats, such as PNG, JPG, BMP, GIF, TGA, or PSD. You can also export your project as a 3D model in formats such as OBJ, STL, FBX, or GLB. You can also share your work online or print it.
-
Conclusion
-
In conclusion, 3D paint software is a great way to create stunning artwork in three dimensions. You can download different kinds of 3D paint software depending on your preferences and needs, and you can follow some common steps to create your own 3D paintings. We hope this article has helped you learn more about 3D paint software and how to download and use it.
-
Frequently asked questions
-
What are some examples of 3D paint software?
-
Some examples of 3D paint software are Paint 3D from the Microsoft Store, Adobe Substance 3D Painter, Microsoft Paint 3D from FileHippo, Blender, ZBrush, SketchUp, Maya, and Cinema 4D.
-
What are some benefits of using 3D paint software?
-
-
What are some challenges of using 3D paint software?
-
Some challenges of using 3D paint software are that you may need some technical skills and knowledge to use it effectively; you may need high-end hardware and software to run it smoothly; you may need an internet connection or a subscription to download or access it; and you may run into compatibility issues with other software or devices.
-
How can I learn more about using 3D paint software?
-
You can learn more about using 3D paint software by reading online tutorials and guides; watching online videos and demos; enrolling in online courses and programs; or practicing with your own projects and experiments.
-
What are some tips and tricks for using 3D paint software?
-
Some tips and tricks for using 3D paint software are:
-
-
Use a graphics tablet or stylus to draw with more precision and comfort.
-
Use keyboard shortcuts and hotkeys to speed up your workflow and access different functions.
-
Use layers and masks to organize and edit your work more easily and efficiently.
-
Use reference images and models to inspire and guide your work.
-
Use the undo and redo buttons to correct your mistakes and try different options.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md b/spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md
deleted file mode 100644
index 16750e8bc1426d64b8ef7b11116718006d04b4c9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
Modern Ops Mod APK: A Guide to Unlocking Everything
-
If you are a fan of action-packed shooter games, you may have heard of Modern Ops. It is a popular online FPS game that lets you compete against other players across various modes and maps. You can choose from a wide range of weapons, customize your character, and join a clan to team up with your friends. But what if you want to unlock everything in the game without spending money or time? That is where Modern Ops Mod APK comes in handy. In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, installation process, gameplay tips, and more.
-
What is Modern Ops?
-
Modern Ops is a multiplayer first-person shooter developed by Edkon Games GmbH. It was released in 2019 for Android and iOS devices. The game has over 50 million downloads on the Google Play Store and has received positive reviews from users and critics alike. It is inspired by other popular FPS games such as Call of Duty and Counter-Strike. You can play as a terrorist or a counter-terrorist and take part in exciting battles against players from all over the world. You can also create your own team and chat with your teammates using voice or text messages.
Some of the features that make Modern Ops an exciting, addictive game are:
-
-
Over 30 modern weapons, including pistols, rifles, shotguns, snipers, machine guns, and grenades.
-
Different skins and attachments for your weapons to make them look cool and unique.
-
Various game modes, such as team deathmatch, free-for-all, defuse the bomb, capture the flag, and more.
-
Different maps with realistic graphics and sound effects.
-
A ranking system that rewards you with coins and gems for your performance.
-
A clan system that lets you join or create a clan and take part in clan wars.
-
-
-
Why use Modern Ops Mod APK?
-
Modern Ops is a free-to-play game, but it also has in-app purchases that can enhance your gaming experience. For example, you can buy premium weapons, skins, crates, boosters, and more with real money. However, not everyone can afford to spend money on these items, or they might find them too expensive or unfair. That is why some people prefer to use Modern Ops Mod APK instead. This is a modified version of the game that gives you access to unlimited resources and features. Some of the benefits of using Modern Ops Mod APK are:
-
-
You can unlock everything in the game without spending money or time.
-
You can get unlimited coins and gems to buy whatever you want in the game.
-
You can get unlimited ammo and grenades so you never run out of firepower.
-
You can get unlimited health and armor to survive longer in battles.
-
You can get unlimited energy to play as much as you want without waiting for it to recharge.
-
You can get access to all weapons, skins, attachments, crates, boosters, and more in the game.
-
You can access all game modes and maps in the game.
-
You can get access to all premium features that are normally available only to VIP users.
-
-
How to download and install Modern Ops Mod APK?
-
If you are interested in downloading and installing Modern Ops Mod APK on your Android device, you need to follow a few simple steps. Before that, you should make sure your device meets some requirements.
-
Requirements
-
-
Steps
-
Once you have met the requirements, you can follow these steps to download and install Modern Ops Mod APK on your device:
- Step 1: Download the mod APK file from a trusted source. You can use this link to download the latest version of Modern Ops Mod APK: [Download Modern Ops Mod APK].
- Step 2: After downloading the mod APK file, locate it on your device using a file manager app. Tap the file and select Install to start the installation process.
- Step 3: Wait for the installation to complete. You may see a warning message saying the app is not safe or could harm your device. Ignore this message and continue with the installation.
- Step 4: Once the installation is done, launch the game from the app drawer or home screen. You will see a pop-up message asking you to download some additional data files. Tap OK and wait for the download to finish.
- Step 5: Once the download is complete, you can enjoy playing Modern Ops Mod APK with unlimited resources and features.
-
-
How do you play Modern Ops Mod APK?
-
Playing Modern Ops Mod APK is similar to playing the original game, but with some extra advantages. You can choose from different game modes, maps, weapons, and more. Here are some tips on how to play Modern Ops Mod APK effectively:
-
Game modes
-
-
Tips and tricks
-
-the screen. - Use your clan: Clan is a feature that lets you join or create a clan and play with your friends or other players in Modern Ops Mod APK. You can chat with your clan members, invite them to your squad, take part in clan wars, and earn clan points and rewards. You can also access exclusive clan weapons, skins, and crates. A clan can help you improve your teamwork, coordination, and strategy in the game.
Pros and cons of Modern Ops Mod APK
-
Modern Ops Mod APK is a great way to enjoy the game with unlimited resources and features, but it also has some drawbacks you should be aware of. Here are some of the pros and cons of Modern Ops Mod APK:
-
Pros
-
-
It is free to download and use.
-
It gives you unlimited coins, gems, ammo, health, energy, and more.
-
It unlocks everything in the game, including weapons, skins, attachments, crates, boosters, and more.
-
It gives you access to all game modes and maps in the game.
-
It gives you access to all premium features that are normally available only to VIP users.
-
It enhances your gameplay and makes the game more fun and easy.
-
-
Cons
-
-
It is not an official version of the game and may have some bugs or errors.
-
It may not be compatible with some devices or versions of the game.
-
You may need to update it frequently to match the latest version of the game.
-
It can be detected by the game developers and result in a ban or suspension of your account.
-
It could compromise the security and privacy of your device and data.
-
It could ruin the balance and fairness of the game and make it less challenging and rewarding.
-
-
Conclusion
-
-
Frequently asked questions
-
Here are some of the most frequently asked questions about Modern Ops Mod APK:
-
-
Is Modern Ops Mod APK safe to use?
-
Modern Ops Mod APK is not an official version of the game and may contain malicious code or viruses that can harm your device or data. Therefore, we recommend downloading it from a reliable source and scanning it with an antivirus app before installing it. You should also back up your data and use a secondary account to play the game with this mod APK.
-
Is Modern Ops Mod APK legal to use?
-
Modern Ops Mod APK is not legal to use, as it violates the game developers' terms and conditions. It also infringes their intellectual property rights and revenue streams. Therefore, using this mod APK could result in legal action from the game developers or authorities. You should use this mod APK at your own risk and responsibility.
-
How do I update Modern Ops Mod APK?
-
To update Modern Ops Mod APK, you need to download the latest version of the mod APK file from a reliable source and install it on your device. You should also delete the previous version of the mod APK file from your device to avoid conflicts or errors. You should also check whether the mod APK is compatible with the latest version of the game before updating it.
-
How do I uninstall Modern Ops Mod APK?
-
To uninstall Modern Ops Mod APK, go to Settings > Apps > Modern Ops > Uninstall and tap OK to confirm. You should also delete the mod APK file from your device's storage to free up some space. You can also reinstall the original version of the game from the Google Play Store or App Store if you want to play it again.
-
Can I play Modern Ops Mod APK online with other players?
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md b/spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md
deleted file mode 100644
index 565a37cdb7893f9ccc1f45957bce903654c1ed27..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
How to Download Microsoft Word 2016
-
Microsoft Word is one of the most popular and widely used word-processing applications in the world. It lets you create, edit, format, and share documents easily and efficiently. Whether you need to write a report, a CV, a letter, or a blog post, Microsoft Word can help you get your tasks done.
-
Microsoft Word 2016 is a version of the application that was released in September 2015. It is part of the Microsoft Office suite, which also includes Excel, PowerPoint, Outlook, and more. Microsoft Word 2016 offers many improvements and enhancements over previous versions, such as:
Smart search and research features
-
Improved security and privacy options
-
Integration with OneDrive and SharePoint
-
-
If you are interested in downloading Microsoft Word 2016, you have several options to choose from. In this article, we will show you how to download Microsoft Word 2016 from different sources and what benefits you can get from using it.
-
Download Microsoft Word 2016 from the Microsoft website
-
The easiest and most reliable way to download Microsoft Word 2016 is to get it directly from the Microsoft website. You will need a Microsoft account and an Office subscription. Here are the steps to follow:
-
-
Go to www.office.com and sign in with your Microsoft account. If you don't have one, you can create one for free.
-
Select Install Office and choose the version you want. You can get Office Home & Student or Office Home & Business as a one-time purchase, or get Office Personal or Office Home & Business as a monthly or annual subscription.
-
-
-
Download Microsoft Word 2016 from an offline installer
-
If you have a slow or unreliable internet connection, you may want to download Microsoft Word 2016 from an offline installer. This is a file that contains all the files needed to install Microsoft Word 2016 without an internet connection. You will still need a Microsoft account and an Office subscription. Here are the steps to follow:
-
-
Download the offline installer file from www.office.com. You will need to sign in with your account and select Other options. Then check the box Download an offline installer and select the language you want.
-
Open the file and select the Microsoft Office folder. You will see a new virtual drive on your PC, such as (D:) or (E:).
-
Double-click the setup.exe file and follow the instructions to install Microsoft Word 2016 on your PC. You may need to enter your product key or sign in again with your account.
-
-
Download Microsoft Word 2016 from a third-party seller
-
Another option for downloading Microsoft Word 2016 is to buy it from a third-party seller. This is a company or an individual that sells Microsoft Word 2016 product keys at a lower price than Microsoft. However, you need to be careful and make sure the seller is reputable and trustworthy. You should also verify that the product key is valid and not in use by someone else. Here are the steps to follow:
-
-
Find a reputable third-party seller that offers Microsoft Word 2016 product keys. You can check online reviews, ratings, comments, and customer service to gauge the quality of the seller.
-
Buy the product key and verify its validity. You can use a tool such as www.productkey.net to check whether the product key is genuine and not blocked by Microsoft.
-
-
-
Beneficios de usar Microsoft Word 2016
-
Al descargar Microsoft Word 2016, puede disfrutar de muchos beneficios que mejorarán su productividad y creatividad. Estos son algunos de los beneficios de usar Microsoft Word 2016:
-
-
Características y funcionalidad mejoradas: Microsoft Word 2016 tiene muchas características nuevas y mejoradas que hacen que sea más fácil y rápido crear y editar documentos. Por ejemplo, puede usar la función Dime para encontrar lo que necesita rápidamente, usar la función Búsqueda inteligente para obtener información relevante de la web, usar la función Editor de tinta para escribir y dibujar con su pluma o dedo, y utilice la función del editor para obtener sugerencias para mejorar su escritura.
-
Compatibilidad con otras aplicaciones y dispositivos de Office: Microsoft Word 2016 es compatible con otras aplicaciones de Office, como Excel, PowerPoint, Outlook, OneNote y más. Puede cambiar fácilmente entre ellos y compartir datos y contenido. También puede usar Microsoft Word 2016 en diferentes dispositivos, como PC, portátiles, tabletas y teléfonos inteligentes. Puede sincronizar sus documentos entre dispositivos y acceder a ellos en cualquier momento y en cualquier lugar.
-
Access to online services and cloud storage: Microsoft Word 2016 gives you access to online services and cloud storage that enhance your experience and security. For example, you can use OneDrive to store your documents online and access them from any device, SharePoint to collaborate with others on documents in real time, and Skype for Business to communicate with colleagues and clients.
-
-
Conclusion
-
-
By using Microsoft Word 2016, you can enjoy many benefits that will boost your productivity and creativity. You can use new and improved features, work with other Office apps and devices, and access online services and cloud storage. Whether you need to write a report, a CV, a letter, or a blog post, Microsoft Word 2016 can help you get it done.
-
-
If you want to download Microsoft Word 2016 today, click here (link) and get started!
-
Frequently Asked Questions
-
Q: How much does Microsoft Word 2016 cost?
-
A: The cost of Microsoft Word 2016 depends on the version you choose and where you buy it. If you buy it from the Microsoft website, you can pay a one-time fee of $149.99 for Office Home & Student or $249.99 for Office Home & Business, or a monthly or annual subscription fee of $69.99 for Office Personal or $99.99 for Office Home & Business. If you buy it from a third-party seller, you may find lower prices, but you need to be careful about the quality and validity of the product key.
-
Q: How do I update Microsoft Word 2016?
-
A: To update Microsoft Word 2016, you need an internet connection and a valid Office license or subscription. You can update it manually or automatically. To update manually, go to File > Account > Update Options and select Update Now. To update automatically, go to File > Account > Update Options and select Enable Updates. You will receive the latest updates and security patches for Microsoft Word 2016 and other Office apps.
-
Q: How do I uninstall Microsoft Word 2016?
-
-
Q: How do I recover a deleted or unsaved document in Microsoft Word 2016?
-
A: To recover a deleted or unsaved document in Microsoft Word 2016, you can use the AutoRecover or Document Recovery features. AutoRecover saves a copy of your document every few minutes in case of a power outage or system crash. Document Recovery helps you restore documents that were open but unsaved when Microsoft Word 2016 closed unexpectedly. To use these features, go to File > Open > Recover Unsaved Documents or File > Info > Manage Document and select the document you want to recover.
-
Q: How do I add a table in Microsoft Word 2016?
-
A: To add a table in Microsoft Word 2016, use the Insert tab on the ribbon. Click the Table button and select the number of rows and columns you want. You can also use the Draw Table tool to draw your own table, choose from the predefined Quick Tables, convert text to a table, or insert a table from Excel. To format the table, use the Table Tools tabs on the ribbon and apply different styles, colors, borders, and effects.
-
Q: How do I share a document in Microsoft Word 2016?
-
A: To share a document in Microsoft Word 2016, use the Share button in the top-right corner of the screen. You will need to save your document to OneDrive or SharePoint first. You can then invite people to view or edit your document by entering their email addresses or choosing from your contacts. You can also copy a link to your document and paste it into an email or message, or share your document as an attachment or as a PDF file.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts b/spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts
deleted file mode 100644
index 8073a482cb1b0ae89ce1cf2b372b6939f596e935..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts
+++ /dev/null
@@ -1,34 +0,0 @@
-import { collections } from "$lib/server/database.js";
-import { subMinutes } from "date-fns";
-import { z } from "zod";
-
-// Validate the caller's settings payload and upsert it, keyed by session.
-export async function PATCH({ locals, request }) {
- const json = await request.json();
-
- const settings = z
- .object({
- shareConversationsWithModelAuthors: z.boolean().default(true),
- ethicsModalAcceptedAt: z.optional(z.date({ coerce: true }).min(subMinutes(new Date(), 5))),
- })
- .parse(json);
-
- await collections.settings.updateOne(
- {
- sessionId: locals.sessionId,
- },
- {
- $set: {
- ...settings,
- updatedAt: new Date(),
- },
- $setOnInsert: {
- createdAt: new Date(),
- },
- },
- {
- upsert: true,
- }
- );
-
- return new Response();
-}
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md
deleted file mode 100644
index c01940f1399f092ab0a75e3498bad4abe658d5d9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md
+++ /dev/null
@@ -1,207 +0,0 @@
-# Installation
-
-This page provides basic prerequisites to run OpenVQA, including the setups of hardware, software, and datasets.
-
-## Hardware & Software Setup
-
-A machine with at least **1 GPU (>= 8GB)**, **20GB memory** and **50GB free disk space** is required. We strongly recommend using an SSD to guarantee high-speed I/O.
-
-The following packages are required to build the project correctly.
-
-- [Python](https://www.python.org/downloads/) >= 3.5
-- [Cuda](https://developer.nvidia.com/cuda-toolkit) >= 9.0 and [cuDNN](https://developer.nvidia.com/cudnn)
-- [PyTorch](http://pytorch.org/) >= 0.4.1 with CUDA (**PyTorch 1.x is also supported**).
-- [SpaCy](https://spacy.io/), with the [GloVe](https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz) vectors initialized as follows:
-
-```bash
-$ pip install -r requirements.txt
-$ wget https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz -O en_vectors_web_lg-2.1.0.tar.gz
-$ pip install en_vectors_web_lg-2.1.0.tar.gz
-```
-
-## Dataset Setup
-
-The following datasets should be prepared before running the experiments.
-
-**Note that if you only want to run experiments on one specific dataset, you can focus on the setup for that and skip the rest.**
-
-### VQA-v2
-
-- Image Features
-
-The image features are extracted using the [bottom-up-attention](https://github.com/peteanderson80/bottom-up-attention) strategy, with each image represented as a dynamic number (from 10 to 100) of 2048-D features. We store the features for each image in a `.npz` file. You can prepare the visual features yourself or download the extracted features from [OneDrive](https://awma1-my.sharepoint.com/:f:/g/personal/yuz_l0_tn/EsfBlbmK1QZFhCOFpr4c5HUBzUV0aH2h1McnPG1jWAxytQ?e=2BZl8O) or [BaiduYun](https://pan.baidu.com/s/1C7jIWgM3hFPv-YXJexItgw#list/path=%2F). The download consists of three files: **train2014.tar.gz, val2014.tar.gz, and test2015.tar.gz**, corresponding to the features of the train/val/test images for *VQA-v2*, respectively.
-
-All the image feature files are unzipped and placed in the `data/vqa/feats` folder to form the following tree structure:
-
-```
-|-- data
- |-- vqa
- | |-- feats
- | | |-- train2014
- | | | |-- COCO_train2014_...jpg.npz
- | | | |-- ...
- | | |-- val2014
- | | | |-- COCO_val2014_...jpg.npz
- | | | |-- ...
- | | |-- test2015
- | | | |-- COCO_test2015_...jpg.npz
- | | | |-- ...
-```
-
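Each `.npz` file holds the variable-length feature matrix for one image. A minimal sketch of the round trip (the array key `x` and the file name here are illustrative assumptions; check the downloaded files for the actual layout):

```python
import numpy as np

# Write a toy per-image feature file in the layout described above:
# a single (num_boxes, 2048) float32 matrix, with 10 <= num_boxes <= 100.
feats = np.random.rand(36, 2048).astype(np.float32)
np.savez("toy_image.npz", x=feats)

# Reading it back at training time:
loaded = np.load("toy_image.npz")["x"]
print(loaded.shape)  # (36, 2048)
```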
-- QA Annotations
-
-Download all the annotation `json` files for VQA-v2, including the [train questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Train_mscoco.zip), [val questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Val_mscoco.zip), [test questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Test_mscoco.zip), [train answers](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Annotations_Train_mscoco.zip), and [val answers](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Annotations_Val_mscoco.zip).
-
-In addition, we use the VQA samples from the Visual Genome to augment the training samples. We pre-processed these samples by two rules:
-
-1. Select the QA pairs whose corresponding images appear in the MS-COCO *train* and *val* splits;
-2. Select the QA pairs whose answers appear in the processed answer list (i.e., occur more than 8 times among all *VQA-v2* answers).
-
-We provide our processed VG questions and annotation files; you can download them from [OneDrive](https://awma1-my.sharepoint.com/:f:/g/personal/yuz_l0_tn/EmVHVeGdck1IifPczGmXoaMBFiSvsegA6tf_PqxL3HXclw) or [BaiduYun](https://pan.baidu.com/s/1QCOtSxJGQA01DnhUg7FFtQ#list/path=%2F).
-
-All the QA annotation files are unzipped and placed in the `data/vqa/raw` folder to form the following tree structure:
-
-```
-|-- data
- |-- vqa
- | |-- raw
- | | |-- v2_OpenEnded_mscoco_train2014_questions.json
- | | |-- v2_OpenEnded_mscoco_val2014_questions.json
- | | |-- v2_OpenEnded_mscoco_test2015_questions.json
- | | |-- v2_OpenEnded_mscoco_test-dev2015_questions.json
- | | |-- v2_mscoco_train2014_annotations.json
- | | |-- v2_mscoco_val2014_annotations.json
- | | |-- VG_questions.json
- | | |-- VG_annotations.json
-
-```
-
-### GQA
-
-- Image Features
-
-Download the [spatial features](https://nlp.stanford.edu/data/gqa/spatialFeatures.zip) and [object features](https://nlp.stanford.edu/data/gqa/objectFeatures.zip) for GQA from its official website. **Spatial Features Files** include `gqa_spatial_*.h5` and `gqa_spatial_info.json`. **Object Features Files** include `gqa_objects_*.h5` and `gqa_objects_info.json`.
-To make the input features consistent with those for VQA-v2, we provide a [script](https://github.com/MILVLG/openvqa/tree/master/data/gqa/gqa_feat_preproc.py) to transform `.h5` feature files into multiple `.npz` files, with each file corresponding to one image.
-
-```bash
-$ cd data/gqa
-
-$ unzip spatialFeatures.zip
-$ python gqa_feat_preproc.py --mode=spatial --spatial_dir=./spatialFeatures --out_dir=./feats/gqa-grid
-$ rm -r spatialFeatures.zip ./spatialFeatures
-
-$ unzip objectFeatures.zip
-$ python gqa_feat_preproc.py --mode=object --object_dir=./objectFeatures --out_dir=./feats/gqa-frcn
-$ rm -r objectFeatures.zip ./objectFeatures
-```
-
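Conceptually, the conversion script splits the stacked `.h5` arrays into one `.npz` per image. A rough numpy-only sketch under assumed shapes and key names (the real script reads the `.h5` files with h5py and takes image ids from `gqa_objects_info.json`):

```python
import os
import tempfile

import numpy as np

# Stand-in for the stacked object features in gqa_objects_*.h5:
# shape (num_images, num_objects, 2048).
stacked = np.random.rand(3, 100, 2048).astype(np.float32)
image_ids = ["1", "2", "3"]  # hypothetical ids for illustration

out_dir = os.path.join(tempfile.mkdtemp(), "gqa-frcn")
os.makedirs(out_dir)
for img_id, feat in zip(image_ids, stacked):
    # One .npz per image, matching the tree structure shown below.
    np.savez(os.path.join(out_dir, f"{img_id}.npz"), x=feat)

print(sorted(os.listdir(out_dir)))  # ['1.npz', '2.npz', '3.npz']
```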
-All the processed feature files are placed in the `data/gqa/feats` folder to form the following tree structure:
-
-```
-|-- data
- |-- gqa
- | |-- feats
- | | |-- gqa-frcn
- | | | |-- 1.npz
- | | | |-- ...
- | | |-- gqa-grid
- | | | |-- 1.npz
- | | | |-- ...
-```
-
-- Questions and Scene Graphs
-
-Download all the GQA [QA files](https://nlp.stanford.edu/data/gqa/questions1.2.zip) from the official site, including all the splits needed for training, validation and testing. Download the [scene graphs files](https://nlp.stanford.edu/data/gqa/sceneGraphs.zip) for `train` and `val` splits from the official site. Download the [supporting files](https://nlp.stanford.edu/data/gqa/eval.zip) from the official site, including the `train` and `val` choices supporting files for the evaluation.
-
-All the question files and scene graph files are unzipped and placed in the `data/gqa/raw` folder to form the following tree structure:
-
-```
-|-- data
- |-- gqa
- | |-- raw
- | | |-- questions1.2
- | | | |-- train_all_questions
- | | | | |-- train_all_questions_0.json
- | | | | |-- ...
- | | | | |-- train_all_questions_9.json
- | | | |-- train_balanced_questions.json
- | | | |-- val_all_questions.json
- | | | |-- val_balanced_questions.json
- | | | |-- testdev_all_questions.json
- | | | |-- testdev_balanced_questions.json
- | | | |-- test_all_questions.json
- | | | |-- test_balanced_questions.json
- | | | |-- challenge_all_questions.json
- | | | |-- challenge_balanced_questions.json
- | | | |-- submission_all_questions.json
- | | |-- eval
- | | | |-- train_choices
- | | | | |-- train_all_questions_0.json
- | | | | |-- ...
- | | | | |-- train_all_questions_9.json
- | | | |-- val_choices.json
- | | |-- sceneGraphs
- | | | |-- train_sceneGraphs.json
- | | | |-- val_sceneGraphs.json
-```
-
-### CLEVR
-
-- Images, Questions and Scene Graphs
-
-Download all the [CLEVR v1.0](https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip) from the official site, including all the splits needed for training, validation and testing.
-
-All the image files, question files and scene graph files are unzipped and placed in the `data/clevr/raw` folder to form the following tree structure:
-
-```
-|-- data
- |-- clevr
- | |-- raw
- | | |-- images
- | | | |-- train
- | | | | |-- CLEVR_train_000000.png
- | | | | |-- ...
- | | | | |-- CLEVR_train_069999.png
- | | | |-- val
- | | | | |-- CLEVR_val_000000.png
- | | | | |-- ...
- | | | | |-- CLEVR_val_014999.png
- | | | |-- test
- | | | | |-- CLEVR_test_000000.png
- | | | | |-- ...
- | | | | |-- CLEVR_test_014999.png
- | | |-- questions
- | | | |-- CLEVR_train_questions.json
- | | | |-- CLEVR_val_questions.json
- | | | |-- CLEVR_test_questions.json
- | | |-- scenes
- | | | |-- CLEVR_train_scenes.json
- | | | |-- CLEVR_val_scenes.json
-```
-
-- Image Features
-
-To make the input features consistent with those for VQA-v2, we provide a [script](https://github.com/MILVLG/openvqa/tree/master/data/clevr/clevr_extract_feat.py) to extract image features using a pre-trained ResNet-101 model, as most previous works did, and generate `.npz` files, with each file corresponding to one image.
-
-```bash
-$ cd data/clevr
-
-$ python clevr_extract_feat.py --mode=all --gpu=0
-```
-
-All the processed feature files are placed in the `data/clevr/feats` folder to form the following tree structure:
-
-```
-|-- data
- |-- clevr
- | |-- feats
- | | |-- train
- | | | |-- 1.npz
- | | | |-- ...
- | | |-- val
- | | | |-- 1.npz
- | | | |-- ...
- | | |-- test
- | | | |-- 1.npz
- | | | |-- ...
-```
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/thrust/thrust/set_operations.h b/spaces/CVPR/LIVE/thrust/thrust/set_operations.h
deleted file mode 100644
index a51eaed4351e52aaf3569c986cc5153640dd15d6..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/set_operations.h
+++ /dev/null
@@ -1,2963 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file set_operations.h
- * \brief Set theoretic operations for sorted ranges
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/pair.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup set_operations Set Operations
- * \ingroup algorithms
- * \{
- */
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in ascending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result is now {0, 4, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator>
-__host__ __device__
-  OutputIterator set_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(A1, A1 + 7, A2, A2 + 5, result);
- * // result is now {0, 4, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator>
- OutputIterator set_difference(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in descending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result is now {6, 4, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator,
-         typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_difference constructs a sorted range that is the set difference of the sorted
- * ranges [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_difference performs the "difference" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1) and not contained in [first2, last2). The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [first1, last1) range shall be copied to the output range.
- *
- * This version of \p set_difference compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_difference to compute the
- * set difference of two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_difference(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result is now {6, 4, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_difference.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator,
-         typename StrictWeakCompare>
- OutputIterator set_difference(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning that both elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares objects using
- * \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute the
- * set intersection of two sets of integers sorted in ascending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[6] = {1, 3, 5, 7, 9, 11};
- * int A2[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int result[7];
- *
- * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result);
- * // result is now {1, 3, 5}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_intersection(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                  InputIterator1 first1,
-                                  InputIterator1 last1,
-                                  InputIterator2 first2,
-                                  InputIterator2 last2,
-                                  OutputIterator result);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning both that elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares objects using
- * \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute the
- * set intersection of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[6] = {1, 3, 5, 7, 9, 11};
- * int A2[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int result[7];
- *
- * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result);
- * // result is now {1, 3, 5}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
-  OutputIterator set_intersection(InputIterator1 first1,
-                                  InputIterator1 last1,
-                                  InputIterator2 first2,
-                                  InputIterator2 last2,
-                                  OutputIterator result);
-
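The min(m, n) multiplicity rule described above is the same one followed by the standard library's `std::set_intersection`, so it can be illustrated on the host without Thrust. A minimal sketch (the helper name `intersect_sorted` is ours, not part of Thrust or the standard library):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Illustrates the min(m, n) multiplicity rule of set intersection:
// a value appearing m times in `a` and n times in `b` appears
// min(m, n) times in the output. Both inputs must be sorted by operator<.
std::vector<int> intersect_sorted(const std::vector<int>& a,
                                  const std::vector<int>& b)
{
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(),
                          b.begin(), b.end(),
                          std::back_inserter(out));
    return out;
}
```

With the arrays from the snippet above, `intersect_sorted({1, 3, 5, 7, 9, 11}, {1, 1, 2, 3, 5, 8, 13})` yields `{1, 3, 5}`: the value 1 appears once because min(1, 2) == 1.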
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning both that elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute
- * the set intersection of sets of integers sorted in descending order using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[6] = {11, 9, 7, 5, 3, 1};
- * int A2[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result, thrust::greater<int>());
- * // result is now {5, 3, 1}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_intersection(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                  InputIterator1 first1,
-                                  InputIterator1 last1,
-                                  InputIterator2 first2,
-                                  InputIterator2 last2,
-                                  OutputIterator result,
-                                  StrictWeakCompare comp);
-
-
-/*! \p set_intersection constructs a sorted range that is the
- * intersection of sorted ranges [first1, last1) and
- * [first2, last2). The return value is the end of the
- * output range.
- *
- * In the simplest case, \p set_intersection performs the
- * "intersection" operation from set theory: the output range
- * contains a copy of every element that is contained in both
- * [first1, last1) and [first2, last2). The
- * general case is more complicated, because the input ranges may
- * contain duplicate elements. The generalization is that if a value
- * appears \c m times in [first1, last1) and \c n times in
- * [first2, last2) (where \c m may be zero), then it
- * appears min(m,n) times in the output range.
- * \p set_intersection is stable, meaning both that elements are
- * copied from the first range rather than the second, and that the
- * relative order of elements in the output range is the same as in
- * the first input range.
- *
- * This version of \p set_intersection compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * The following code snippet demonstrates how to use \p set_intersection to compute
- * the set intersection of sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[6] = {11, 9, 7, 5, 3, 1};
- * int A2[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int result[3];
- *
- * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result, thrust::greater<int>());
- * // result is now {5, 3, 1}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_intersection.html
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-  OutputIterator set_intersection(InputIterator1 first1,
-                                  InputIterator1 last1,
-                                  InputIterator2 first2,
-                                  InputIterator2 last2,
-                                  OutputIterator result,
-                                  StrictWeakCompare comp);
-
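The comparator overload works the same way with `std::set_intersection`: when both inputs are sorted descending, `std::greater<int>` serves as the strict weak ordering, mirroring `thrust::greater<int>()` in the example above. A minimal host-only sketch (the helper name `intersect_desc` is ours):

```cpp
#include <algorithm>
#include <functional>
#include <iterator>
#include <vector>

// Intersection of two ranges sorted in descending order.
// std::greater<int>{} is the strict weak ordering both inputs obey,
// and the output comes back sorted by the same ordering (descending).
std::vector<int> intersect_desc(const std::vector<int>& a,
                                const std::vector<int>& b)
{
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(),
                          b.begin(), b.end(),
                          std::back_inserter(out),
                          std::greater<int>{});
    return out;
}
```

Passing ranges that are not sorted by the supplied comparator is undefined behavior for both the std and Thrust variants, which is why the precondition above is stated in terms of `comp` rather than `operator<`.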
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in ascending order using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 1, 2, 2, 4, 6, 7};
- * int A2[5] = {1, 1, 2, 5, 8};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 4, 5, 6, 7, 8}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                          InputIterator1 first1,
-                                          InputIterator1 last1,
-                                          InputIterator2 first2,
-                                          InputIterator2 last2,
-                                          OutputIterator result);
-
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 1, 2, 2, 4, 6, 7};
- * int A2[5] = {1, 1, 2, 5, 8};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 4, 5, 6, 7, 8}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
-  OutputIterator set_symmetric_difference(InputIterator1 first1,
-                                          InputIterator1 last1,
-                                          InputIterator2 first2,
-                                          InputIterator2 last2,
-                                          OutputIterator result);
-
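The |m - n| multiplicity rule described above also matches the standard library's `std::set_symmetric_difference`, so a host-only sketch can demonstrate it without Thrust (the helper name `sym_diff_sorted` is ours). For instance, with inputs {0, 1, 2, 2, 4, 6, 7} and {1, 1, 2, 5, 8}, one surplus 1 survives from the second range (|1 - 2| = 1) and one surplus 2 survives from the first (|2 - 1| = 1):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Illustrates the |m - n| multiplicity rule of set symmetric difference:
// a value appearing m times in `a` and n times in `b` appears
// |m - n| times in the output. Both inputs must be sorted by operator<.
std::vector<int> sym_diff_sorted(const std::vector<int>& a,
                                 const std::vector<int>& b)
{
    std::vector<int> out;
    std::set_symmetric_difference(a.begin(), a.end(),
                                  b.begin(), b.end(),
                                  std::back_inserter(out));
    return out;
}
```

Note that equal counts cancel completely: a value appearing twice in each input contributes |2 - 2| = 0 copies to the output.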
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in descending order using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A2[5] = {8, 5, 2, 1, 1};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {8, 7, 6, 5, 4, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                          InputIterator1 first1,
-                                          InputIterator1 last1,
-                                          InputIterator2 first2,
-                                          InputIterator2 last2,
-                                          OutputIterator result,
-                                          StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A2[5] = {8, 5, 2, 1, 1};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {8, 7, 6, 5, 4, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-  OutputIterator set_symmetric_difference(InputIterator1 first1,
-                                          InputIterator1 last1,
-                                          InputIterator2 first2,
-                                          InputIterator2 last2,
-                                          OutputIterator result,
-                                          StrictWeakCompare comp);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_union
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_union(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           InputIterator1 first1,
-                           InputIterator1 last1,
-                           InputIterator2 first2,
-                           InputIterator2 last2,
-                           OutputIterator result);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
- OutputIterator set_union(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_union(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
- OutputIterator set_union(InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakCompare comp);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {0, 1, 3, 4, 5, 6};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[6] = {0, 1, 3, 4, 5, 6};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(A_keys, A_keys + 6, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-  thrust::pair<OutputIterator1,OutputIterator2>
- set_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in descending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {6, 4, 0}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) range shall be copied to the output range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using a function object \p comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {6, 4, 0}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-  thrust::pair<OutputIterator1,OutputIterator2>
- set_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from [values_first1, values_last1) to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {1, 3, 5, 7, 9, 11};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result);
- *
- * // keys_result is now {1, 3, 5}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_intersection_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
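The min(m, n) multiset semantics and the copy-from-the-first-range rule documented for these overloads can be sketched in plain C++. This is an illustrative serial model of the documented behavior, not Thrust's implementation, and the helper name `intersection_by_key` is made up for the example:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Serial model of the documented set_intersection_by_key semantics:
// keys sorted ascending; a key appearing m times in A and n times in B
// appears min(m, n) times in the output, and each emitted key carries
// the value associated with it in the *first* range.
std::pair<std::vector<int>, std::vector<int>>
intersection_by_key(const std::vector<int>& a_keys,
                    const std::vector<int>& a_vals,
                    const std::vector<int>& b_keys)
{
    std::vector<int> keys_out, vals_out;
    std::size_t i = 0, j = 0;
    while (i < a_keys.size() && j < b_keys.size()) {
        if (a_keys[i] < b_keys[j]) {
            ++i;                          // key only in A: not emitted
        } else if (b_keys[j] < a_keys[i]) {
            ++j;                          // key only in B: not emitted
        } else {                          // match: emit key and A's value
            keys_out.push_back(a_keys[i]);
            vals_out.push_back(a_vals[i]);
            ++i;
            ++j;
        }
    }
    return {keys_out, vals_out};
}
```

Running this on the key data from the snippet above (A_keys = {1, 3, 5, 7, 9, 11}, B_keys = {1, 1, 2, 3, 5, 8, 13}) yields keys {1, 3, 5}, matching the documented result.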
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from [values_first1, values_last1) to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[6] = {1, 3, 5, 7, 9, 11};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result);
- *
- * // keys_result is now {1, 3, 5}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2>
-  thrust::pair<OutputIterator1, OutputIterator2>
- set_intersection_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from [values_first1, values_last1) to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {11, 9, 7, 5, 3, 1};
- * int A_vals[6] = { 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater<int>());
- *
- * // keys_result is now {5, 3, 1}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_intersection_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from [values_first1, values_last1) to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using a function object \p comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[6] = {11, 9, 7, 5, 3, 1};
- * int A_vals[6] = { 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater<int>());
- *
- * // keys_result is now {5, 3, 1}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
- set_intersection_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
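The comparator overloads above apply the same merge logic under an arbitrary strict weak ordering. A plain-C++ sketch (illustrative only, not Thrust's implementation; `intersection_by_key_comp` is a hypothetical helper name) shows how the loop changes: equality tests become "neither compares before the other", and both key ranges must be sorted by the same comparator:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Serial model of the comparator form of set_intersection_by_key:
// both key ranges are sorted by `comp`; keys equivalent under `comp`
// (neither orders before the other) are emitted with A's value.
template <typename Compare>
std::pair<std::vector<int>, std::vector<int>>
intersection_by_key_comp(const std::vector<int>& a_keys,
                         const std::vector<int>& a_vals,
                         const std::vector<int>& b_keys,
                         Compare comp)
{
    std::vector<int> keys_out, vals_out;
    std::size_t i = 0, j = 0;
    while (i < a_keys.size() && j < b_keys.size()) {
        if (comp(a_keys[i], b_keys[j])) {
            ++i;                          // A's key orders first: skip it
        } else if (comp(b_keys[j], a_keys[i])) {
            ++j;                          // B's key orders first: skip it
        } else {                          // equivalent under comp
            keys_out.push_back(a_keys[i]);
            vals_out.push_back(a_vals[i]);
            ++i;
            ++j;
        }
    }
    return {keys_out, vals_out};
}
```

With descending keys and `std::greater<int>()` (the stdlib analogue of `thrust::greater<int>()`), the behavior mirrors the descending-order snippet above.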
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in ascending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 1, 1, 2, 4, 6, 7};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 1, 2, 5, 8};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 5, 6, 7, 8}
- * // vals_result is now {0, 0, 1, 0, 0, 1}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
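The |m - n| cancellation rule documented for these overloads can also be modeled serially in plain C++ (illustrative only, not Thrust's implementation; `symmetric_difference_by_key` is a hypothetical helper name): equivalent keys in the two ranges cancel pairwise, and each surviving key is emitted together with the value from whichever range it came from.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Serial model of the documented set_symmetric_difference_by_key
// semantics: keys sorted ascending; equivalent keys in A and B cancel
// pairwise, so a key appearing m times in A and n times in B survives
// |m - n| times (the trailing elements of the longer run), carrying
// the value from the range it came from.
std::pair<std::vector<int>, std::vector<int>>
symmetric_difference_by_key(const std::vector<int>& a_keys,
                            const std::vector<int>& a_vals,
                            const std::vector<int>& b_keys,
                            const std::vector<int>& b_vals)
{
    std::vector<int> keys_out, vals_out;
    std::size_t i = 0, j = 0;
    while (i < a_keys.size() && j < b_keys.size()) {
        if (a_keys[i] < b_keys[j]) {          // only in A: emit from A
            keys_out.push_back(a_keys[i]);
            vals_out.push_back(a_vals[i]);
            ++i;
        } else if (b_keys[j] < a_keys[i]) {   // only in B: emit from B
            keys_out.push_back(b_keys[j]);
            vals_out.push_back(b_vals[j]);
            ++j;
        } else {                              // equal keys cancel pairwise
            ++i;
            ++j;
        }
    }
    for (; i < a_keys.size(); ++i) {          // unmatched tail of A survives
        keys_out.push_back(a_keys[i]);
        vals_out.push_back(a_vals[i]);
    }
    for (; j < b_keys.size(); ++j) {          // unmatched tail of B survives
        keys_out.push_back(b_keys[j]);
        vals_out.push_back(b_vals[j]);
    }
    return {keys_out, vals_out};
}
```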
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 1, 1, 2, 4, 6, 7};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 1, 2, 5, 8};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 5, 6, 7, 8}
- * // vals_result is now {0, 0, 1, 0, 0, 1}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-  thrust::pair<OutputIterator1, OutputIterator2>
- set_symmetric_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {7, 6, 4, 2, 1, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {8, 5, 2, 1, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[6];
- * int vals_result[6];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {8, 7, 6, 5, 4, 0}
- * // vals_result is now {1, 0, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {8, 5, 2, 1, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[8];
- * int vals_result[8];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {8, 7, 6, 5, 4, 2, 1, 0}
- * // vals_result is now {1, 0, 0, 1, 0, 0, 1, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in ascending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using a function object \c comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A_vals[7] = { 0,  0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * // vals_result is now { 0,  0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using a function object \c comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A_vals[7] = { 0,  0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * // vals_result is now { 0,  0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(InputIterator1 keys_first1,
- InputIterator1 keys_last1,
- InputIterator2 keys_first2,
- InputIterator2 keys_last2,
- InputIterator3 values_first1,
- InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- StrictWeakCompare comp);
-
-
-/*! \} // end set_operations
- */
-
-
-} // end thrust
-
-#include <thrust/detail/set_operations.inl>
-
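The multiset counting rules documented above for `set_symmetric_difference_by_key` (keep `|m - n|` copies of each key) and `set_union_by_key` (keep all `m` copies from the first range, then `max(n - m, 0)` from the second, i.e. `max(m, n)` total) can be sketched in Python. This is an illustrative aside under the documented semantics, not part of the header; the function names are invented for illustration:

```python
from collections import Counter

def multiset_symmetric_difference_counts(a_keys, b_keys):
    # Per the Thrust docs: a key appearing m times in A and n times in B
    # appears |m - n| times in the symmetric difference.
    m, n = Counter(a_keys), Counter(b_keys)
    return {k: abs(m[k] - n[k]) for k in set(m) | set(n) if m[k] != n[k]}

def multiset_union_counts(a_keys, b_keys):
    # Per the Thrust docs: the union keeps all m copies from A, then
    # max(n - m, 0) copies from B, i.e. max(m, n) copies per key.
    m, n = Counter(a_keys), Counter(b_keys)
    return {k: max(m[k], n[k]) for k in set(m) | set(n)}
```

Running these on the key data from the examples above reproduces the documented element counts (e.g. the duplicated 2s in A and duplicated 1s in B each collapse to a single surviving copy in the symmetric difference).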
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py b/spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py
deleted file mode 100644
index bec75db871cd8d66118748aa90fe10d014bdaf89..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import bisect
-import copy
-import logging
-import os
-import torch
-import torch.utils.data
-import torch.distributed
-from torch.utils.data.dataset import ConcatDataset
-
-from .catalog import DatasetCatalog
-from .clip_datasets.clip_img_txt_pair_tsv import CLIPImgTxtPairTSVDataset
-
-from .transforms.build import build_clip_transforms
-
-def config_tsv_dataset_args(cfg, dataset_file, factory_name=None, is_train=True):
-    ############### code removed as tsv_dataset_name = factory_name = "CLIPImgTxtPairTSVDataset" ##############
-    if factory_name is not None:
-        tsv_dataset_name = factory_name
-
-    if tsv_dataset_name in ["CLIPImgTxtPairTSVDataset"]:
-        # no need for extra arguments
-        args = {}
-        args['args'] = cfg
-        args['seq_len'] = cfg.DATASETS.MAX_SEQ_LENGTH  # cfg.max_seq_length
-
-    return args, tsv_dataset_name
-
-
-def build_dataset(cfg, transforms, dataset_catalog, is_train=True, is_aux=False):
-    """
-    Arguments:
-        cfg: config file.
-        transforms (callable): transforms to apply to each (image, target) sample
-        dataset_catalog (DatasetCatalog): contains the information on how to construct a dataset.
-        is_train (bool): whether to setup the dataset for training or testing
-    """
-
-    dataset_list = (cfg.DATASETS.TRAIN if not is_aux else cfg.DATASETS.AUX) if is_train else cfg.DATASETS.TEST
-    factory_list = (cfg.DATASETS.FACTORY_TRAIN if not is_aux else cfg.DATASETS.FACTORY_AUX) if is_train else cfg.DATASETS.FACTORY_TEST
-    path_list = (cfg.DATASETS.PATH_TRAIN if not is_aux else cfg.DATASETS.PATH_AUX) if is_train else cfg.DATASETS.PATH_TEST
-
-    if not isinstance(dataset_list, (list, tuple)):
-        raise RuntimeError(
-            "dataset_list should be a list of strings, got {}".format(dataset_list))
-    if not isinstance(factory_list, (list, tuple)):
-        raise RuntimeError(
-            "factory_list should be a list of strings, got {}".format(factory_list))
-    datasets = []
-    target_offset = 0
-    for i, dataset_name in enumerate(dataset_list):
-        factory_name = factory_list[i] if i < len(factory_list) else None
-
-        if factory_name == "CLIPImgTxtPairTSVDataset":
-            dataset_names_merged = dataset_name.split('+')
-            path_lists_merged = path_list[i].split('+')
-
-            assert len(dataset_names_merged) == len(path_lists_merged), "number of datasets must match that of dataset paths"
-
-            image_tsv_list = []
-            text_tsv_list = []
-            dataset_name_list = []
-            map_files = []
-            max_num_tsv = 20  # maximum tsv files to load within a given folder
-
-            for dname, dpath in zip(dataset_names_merged, path_lists_merged):
-                args, tsv_dataset_name = config_tsv_dataset_args(
-                    cfg, dataset_name, factory_name, is_train
-                )
-                factory = CLIPImgTxtPairTSVDataset if tsv_dataset_name in ["CLIPImgTxtPairTSVDataset"] else None
-                prev_len = len(image_tsv_list)
-
-                isFile = os.path.isfile(dpath)
-                if isFile:
-                    dpath_listed_files = [os.path.basename(dpath)]
-                    dpath = os.path.dirname(dpath)
-                else:
-                    dpath_listed_files = sorted(os.listdir(dpath))
-
-                for filename in dpath_listed_files:
-                    if ("images" in filename or "image" in filename or "img" in filename) and filename.endswith(".tsv"):
-                        image_tsv_list.append(os.path.join(dpath, filename))
-                        if "images" in filename:  # "images" - "text"
-                            text_tsv_list.append(os.path.join(dpath, filename.replace("images", "text")))
-                        elif "image" in filename:  # "image"-"text"
-                            text_tsv_list.append(os.path.join(dpath, filename.replace("image", "text")))
-                        elif "img" in filename:  # "img"-"caption"
-                            text_tsv_list.append(os.path.join(dpath, filename.replace("img", "caption")))
-                        if len(image_tsv_list) - prev_len == max_num_tsv:
-                            break
-                dataset_name_list += [dname] * (len(image_tsv_list) - prev_len)
-
-                if dname == "imagenet22k":
-                    map_files += [os.path.join(dpath, 'darknet_data_imagenet.labels.list')] * (len(image_tsv_list) - prev_len)
-                else:
-                    map_files += [None] * (len(image_tsv_list) - prev_len)
-
-            assert len(image_tsv_list) == len(text_tsv_list), \
-                "the number of image tsv files must be equal to that of text tsv files, otherwise check your data!"
-
-            args["image_tsv_file"] = image_tsv_list
-            args["text_tsv_file"] = text_tsv_list
-            args["dataset_name"] = dataset_name_list
-            args["map_file"] = map_files
-            args["filtered_datasets"] = cfg.DATASETS.FILTERED_CLASSIFICATION_DATASETS
-            assert len(image_tsv_list) == len(text_tsv_list) == len(dataset_name_list) == len(map_files)
-
-            print("number of image tsv files: ", len(image_tsv_list))
-            print("number of text tsv files: ", len(text_tsv_list))
-
-            args["is_train"] = is_train
-            args["transforms"] = transforms
-            args["target_offset"] = target_offset
-            if "bpe" in cfg.INPUT.TEXT_TOKENIZER:
-                from detectron2.data.datasets.clip_prompt_utils import SimpleTokenizer as _Tokenizer
-                tokenizer = _Tokenizer()
-                args["tokenizer_type"] = "bpe"
-                args["tokenizer"] = tokenizer
-            # make dataset from factory
-            dataset = factory(**args)
-            datasets.append(dataset)
-
-    precomputed_tokens = {}
-    dataset_classes = {}
-    for dataset in datasets:
-        if hasattr(dataset, "input_ids_all_classes"):
-            precomputed_tokens["imagenet"] = \
-                [dataset.input_ids_all_classes, dataset.input_mask_all_classes, dataset.segment_ids_all_classes]
-        if hasattr(dataset, "classnames"):
-            if isinstance(dataset.classnames, dict):
-                dataset_classes.update(dataset.classnames)
-            else:
-                dataset_classes[dataset.dataset_name] = dataset.classnames
-
-    # for testing, return a list of datasets
-    if not is_train:
-        return datasets, precomputed_tokens, dataset_classes
-
-    if len(datasets) == 0:
-        return None, None, None
-
-    # for training, concatenate all datasets into a single one
-    dataset = datasets[0]
-    if len(datasets) > 1:
-        dataset = ConcatDataset(datasets)
-    return [dataset], precomputed_tokens, dataset_classes
-
-
-def make_clip_dataset(cfg, is_train=True, is_aux=False, transforms=None):
-    if transforms is None:
-        transforms = build_clip_transforms(cfg, is_train)
-    print("data transforms: ")
-    print(transforms)
-    datasets, precomputed_tokens, dataset_classes = build_dataset(cfg, transforms, DatasetCatalog, is_train, is_aux)
-
-    if not datasets:
-        return None, None, None
-    return datasets, precomputed_tokens, dataset_classes
\ No newline at end of file
diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
deleted file mode 100644
index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000
--- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 4 # 100ep -> 400ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 4 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py
deleted file mode 100644
index 8675f01518412a8c0dd98887ed15586000308f03..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.utils import make_jpg_or_gif
-
-img_dir = Path(__file__).parent / "images"
-
-
-def dont_go_near(images: List[BuildImage], texts, args):
- frame = BuildImage.open(img_dir / "0.png")
-
- def make(img: BuildImage) -> BuildImage:
- img = img.convert("RGBA").resize((170, 170), keep_ratio=True)
- return frame.copy().paste(img, (23, 231), alpha=True)
-
- return make_jpg_or_gif(images[0], make)
-
-
-add_meme("dont_go_near", dont_go_near, min_images=1, max_images=1, keywords=["不要靠近"])
diff --git a/spaces/CoWork/dreambooth-training-public/app.py b/spaces/CoWork/dreambooth-training-public/app.py
deleted file mode 100644
index f7d90f7250ccac1b7d250062b6d3348124acdf4e..0000000000000000000000000000000000000000
--- a/spaces/CoWork/dreambooth-training-public/app.py
+++ /dev/null
@@ -1,687 +0,0 @@
-from subprocess import getoutput
-import os
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- which_gpu = "A10G"
- os.system(f"pip install --no-deps xformers==0.0.16rc425")
-elif("T4" in gpu_info):
- which_gpu = "T4"
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-else:
- which_gpu = "CPU"
-
-import gradio as gr
-from pathlib import Path
-import argparse
-import shutil
-from train_dreambooth import run_training
-from convertosd import convert
-from PIL import Image
-from slugify import slugify
-import requests
-import torch
-import zipfile
-import tarfile
-import urllib.parse
-import gc
-from diffusers import StableDiffusionPipeline
-from huggingface_hub import snapshot_download, update_repo_visibility, HfApi
-
-is_spaces = True if "SPACE_ID" in os.environ else False
-if(is_spaces):
- is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False
-else:
- is_shared_ui = False
-is_gpu_associated = torch.cuda.is_available()
-
-os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
-
-if(is_gpu_associated):
- model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable")
- model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"])
- model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"])
- safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
- model_to_load = model_v1
-
-def swap_base_model(selected_model):
- if(is_gpu_associated):
- global model_to_load
- if(selected_model == "v1-5"):
- model_to_load = model_v1
- elif(selected_model == "v2-1-768"):
- model_to_load = model_v2
- else:
- model_to_load = model_v2_512
-
-
-
-css = '''
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
- #component-4, #component-3, #component-10{min-height: 0}
- .duplicate-button img{margin: 0}
-'''
-maximum_concepts = 3
-
-def swap_text(option, base):
- resize_width = 768 if base == "v2-1-768" else 512
- mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
- if(option == "object"):
- instance_prompt_example = "cttoy"
- freeze_for = 30
- return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
- elif(option == "person"):
- instance_prompt_example = "julcto"
- freeze_for = 70
- #show_prior_preservation = True if base != "v2-1-768" else False
- show_prior_preservation=False
- if(show_prior_preservation):
- prior_preservation_box_update = gr.update(visible=show_prior_preservation)
- else:
- prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
- elif(option == "style"):
- instance_prompt_example = "trsldamrl"
- freeze_for = 10
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]
-
-def count_files(*inputs):
- file_counter = 0
- concept_counter = 0
- for i, input in enumerate(inputs):
- if(i < maximum_concepts):
- files = inputs[i]
- if(files):
- concept_counter+=1
- file_counter+=len(files)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- selected_model = inputs[-5]
- experimental_faces = inputs[-6]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- else:
- Training_Steps = file_counter*150
- if(type_of_thing == "person" and Training_Steps > 2400):
- Training_Steps = 2400 #Avoid overfitting on person faces
- if(is_spaces):
- if(selected_model == "v1-5"):
- its = 1.1 if which_gpu == "T4" else 1.8
- if(experimental_faces):
- its = 1
- elif(selected_model == "v2-1-512"):
- its = 0.8 if which_gpu == "T4" else 1.5
- if(experimental_faces):
- its = 0.7
- elif(selected_model == "v2-1-768"):
- its = 0.48 if which_gpu == "T4" else 0.85
-
- gpu_price = 0.60 if which_gpu == "T4" else 1.10
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
- The setup, compression and uploading the model can take up to 20 minutes. As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.
-    If you check the box below, the GPU attribution will be automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.
-    '''
- else:
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.
-        '''
-
- return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
-
-def update_steps(*files_list):
- file_counter = 0
- for i, files in enumerate(files_list):
- if(files):
- file_counter+=len(files)
- return(gr.update(value=file_counter*200))
-
-def visualise_progress_bar():
- return gr.update(visible=True)
-
-def pad_image(image):
- w, h = image.size
- if w == h:
- return image
- elif w > h:
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
- new_image.paste(image, (0, (w - h) // 2))
- return new_image
- else:
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
- new_image.paste(image, ((h - w) // 2, 0))
- return new_image
-
-def validate_model_upload(hf_token, model_name):
- if(hf_token != ''):
- api = HfApi()
- try:
- _ = api.whoami(hf_token)
- except:
- raise gr.Error("You have inserted an invalid Hugging Face token")
- try:
- if(is_spaces):
- update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space")
- except:
- raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions")
- else:
- raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)")
- if(model_name == ""):
- raise gr.Error("Please fill in your model's name")
-
-def swap_hardware(hf_token, hardware="cpu-basic"):
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'flavor': hardware}
- requests.post(hardware_url, json = body, headers=headers)
-
-def swap_sleep_time(hf_token,sleep_time):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'seconds':sleep_time}
- requests.post(sleep_time_url,json=body,headers=headers)
-
-def get_sleep_time(hf_token):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}"
- headers = { "authorization" : f"Bearer {hf_token}"}
- response = requests.get(sleep_time_url,headers=headers)
- try:
- gcTimeout = response.json()['runtime']['gcTimeout']
- except:
- gcTimeout = None
- return gcTimeout
-
-def write_to_community(title, description,hf_token):
- from huggingface_hub import HfApi
- api = HfApi()
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token)
-
-def train(progress=gr.Progress(track_tqdm=True), *inputs):
- which_model = inputs[-10]
- if(which_model == ""):
- raise gr.Error("You forgot to select a base model to use")
-
- if is_shared_ui:
- raise gr.Error("This Space only works in duplicated instances")
- if not is_gpu_associated:
- raise gr.Error("Please associate a T4 or A10G GPU for this Space")
- hf_token = inputs[-5]
- model_name = inputs[-7]
- if(is_spaces):
- sleep_time = get_sleep_time(hf_token)
- if sleep_time:
- swap_sleep_time(hf_token, -1)
- remove_attribution_after = inputs[-6]
- else:
- remove_attribution_after = False
-
- if(remove_attribution_after):
- validate_model_upload(hf_token, model_name)
-
- torch.cuda.empty_cache()
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
-
- if os.path.exists("output_model"): shutil.rmtree('output_model')
- if os.path.exists("instance_images"): shutil.rmtree('instance_images')
- if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
- if os.path.exists("hastrained.success"): os.remove("hastrained.success")
- file_counter = 0
- resolution = 512 if which_model != "v2-1-768" else 768
- for i, input in enumerate(inputs):
- if(i < maximum_concepts-1):
- if(input):
- os.makedirs('instance_images',exist_ok=True)
- files = inputs[i+(maximum_concepts*2)]
- prompt = inputs[i+maximum_concepts]
- if(prompt == "" or prompt == None):
- raise gr.Error("You forgot to define your concept prompt")
- for j, file_temp in enumerate(files):
- file = Image.open(file_temp.name)
- image = pad_image(file)
- image = image.resize((resolution, resolution))
- extension = file_temp.name.split(".")[1]
- image = image.convert('RGB')
- image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
- file_counter += 1
-
- os.makedirs('output_model',exist_ok=True)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- experimental_face_improvement = inputs[-9]
-
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- Train_text_encoder_for = int(inputs[-2])
- else:
- if(type_of_thing == "object"):
- Train_text_encoder_for=30
-
- elif(type_of_thing == "style"):
- Train_text_encoder_for=15
-
- elif(type_of_thing == "person"):
- Train_text_encoder_for=70
-
- Training_Steps = file_counter*150
- if(type_of_thing == "person" and Training_Steps > 2600):
- Training_Steps = 2600 #Avoid overfitting on people's faces
- stptxt = int((Training_Steps*Train_text_encoder_for)/100)
- gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False
- cache_latents = True if which_model != "v1-5" else False
- if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)):
- args_general = argparse.Namespace(
- image_captions_filename = True,
- train_text_encoder = True if stptxt > 0 else False,
- stop_text_encoder_training = stptxt,
- save_n_steps = 0,
- pretrained_model_name_or_path = model_to_load,
- instance_data_dir="instance_images",
- class_data_dir=None,
- output_dir="output_model",
- instance_prompt="",
- seed=42,
- resolution=resolution,
- mixed_precision="fp16",
- train_batch_size=1,
- gradient_accumulation_steps=1,
- use_8bit_adam=True,
- learning_rate=2e-6,
- lr_scheduler="polynomial",
- lr_warmup_steps = 0,
- max_train_steps=Training_Steps,
- gradient_checkpointing=gradient_checkpointing,
- cache_latents=cache_latents,
- )
- print("Starting single training...")
- lock_file = open("intraining.lock", "w")
- lock_file.close()
- try:
- run_training(args_general)
- except Exception as e:
- if(is_spaces):
- title="There was an error on during your training"
- description=f'''
-                Unfortunately there was an error during the training of your {model_name} model.
- Please check it out below. Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training):
- ```
- {str(e)}
- ```
- '''
- swap_hardware(hf_token, "cpu-basic")
- write_to_community(title,description,hf_token)
-
-
- gc.collect()
- torch.cuda.empty_cache()
- if(which_model == "v1-5"):
- print("Adding Safety Checker to the model...")
- shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True)
- shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True)
- shutil.copy(f"model_index.json", "output_model/model_index.json")
-
- if(not remove_attribution_after):
- swap_sleep_time(hf_token, sleep_time)
- print("Archiving model file...")
- with tarfile.open("diffusers_model.tar", "w") as tar:
- tar.add("output_model", arcname=os.path.basename("output_model"))
- if os.path.exists("intraining.lock"): os.remove("intraining.lock")
- trained_file = open("hastrained.success", "w")
- trained_file.close()
- print("Training completed!")
- return [
- gr.update(visible=False), #progress_bar
- gr.update(visible=True, value=["diffusers_model.tar"]), #result
- gr.update(visible=True), #try_your_model
- gr.update(visible=True), #push_to_hub
- gr.update(visible=True), #convert_button
- gr.update(visible=False), #training_ongoing
- gr.update(visible=True) #completed_training
- ]
- else:
- where_to_upload = inputs[-8]
- push(model_name, where_to_upload, hf_token, which_model, True)
- swap_hardware(hf_token, "cpu-basic")
-
-pipe_is_set = False
-def generate(prompt, steps):
- torch.cuda.empty_cache()
- from diffusers import StableDiffusionPipeline
- global pipe_is_set
- if(not pipe_is_set):
- global pipe
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- pipe_is_set = True
-
- image = pipe(prompt, num_inference_steps=steps).images[0]
- return(image)
-
-def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
- validate_model_upload(hf_token, model_name)
- if(not os.path.exists("model.ckpt")):
- convert("output_model", "model.ckpt")
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
- from huggingface_hub import create_repo
- model_name_slug = slugify(model_name)
- api = HfApi()
- your_username = api.whoami(token=hf_token)["name"]
- if(where_to_upload == "My personal profile"):
- model_id = f"{your_username}/{model_name_slug}"
- else:
- model_id = f"sd-dreambooth-library/{model_name_slug}"
- headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"}
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
-
- print(f"Starting to upload the model {model_id}...")
- images_upload = os.listdir("instance_images")
- image_string = ""
- instance_prompt_list = []
- previous_instance_prompt = ''
- for i, image in enumerate(images_upload):
- instance_prompt = image.split("_")[0]
- if(instance_prompt != previous_instance_prompt):
- title_instance_prompt_string = instance_prompt
- instance_prompt_list.append(instance_prompt)
- else:
- title_instance_prompt_string = ''
- previous_instance_prompt = instance_prompt
-        image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
-{image_string}'''
- readme_text = f'''---
-license: creativeml-openrail-m
-tags:
-- text-to-image
-widget:
-- text: {instance_prompt_list[0]}
----
-### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model
-
-You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
-
-Sample pictures of:
-{image_string}
-'''
- #Save the readme to a file
- readme_file = open("model.README.md", "w")
- readme_file.write(readme_text)
- readme_file.close()
- #Save the token identifier to a file
- text_file = open("token_identifier.txt", "w")
- text_file.write(', '.join(instance_prompt_list))
- text_file.close()
- try:
- create_repo(model_id,private=True, token=hf_token)
- except:
- import time
- epoch_time = str(int(time.time()))
- create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
- operations = [
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
- ]
- api.create_commit(
- repo_id=model_id,
- operations=operations,
- commit_message=f"Upload the model {model_name}",
- token=hf_token
- )
- api.upload_folder(
- folder_path="output_model",
- repo_id=model_id,
- token=hf_token
- )
- api.upload_folder(
- folder_path="instance_images",
- path_in_repo="concept_images",
- repo_id=model_id,
- token=hf_token
- )
- if is_spaces:
- if(not comes_from_automated):
- extra_message = "Don't forget to remove the GPU attribution after you play with it."
- else:
- extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
- title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!"
- description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}"
- write_to_community(title, description, hf_token)
- #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
- print("Model uploaded successfully!")
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
-
-def convert_to_ckpt():
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
- convert("output_model", "model.ckpt")
- return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
-
-def check_status(top_description):
-    if os.path.exists("hastrained.success"):
-        if is_spaces:
-            update_top_tag = gr.update(value='''
-            ## Your model has finished training ✅
-            Yay, congratulations on training your model. Scroll down to play with it, and save it (either by downloading it or on the Hugging Face Hub). Once you are done and don't want to train a new one, go to the settings page and downgrade your Space to a CPU Basic.
-            ''')
-        else:
-            update_top_tag = gr.update(value='''
-            ## Your model has finished training ✅
-            ''')
-    elif os.path.exists("intraining.lock"):
-        update_top_tag = gr.update(value='''
-        You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above here to check the training status. Once training is done, reload this tab to interact with your model.
-        ''')
-    elif is_shared_ui:
-        update_top_tag = gr.update(value='''
-        ## Attention - This Space doesn't work in this shared UI
-        For it to work, you can either run locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using default settings with it!
-        ''')
-    else:
-        update_top_tag = gr.update(value='''
-        ## You have successfully cloned the Dreambooth Training Space locally 🎉
-        Do a `pip install requirements-local.txt`.
-        ''')
- gr.Markdown("# Dreambooth Training UI 💭")
- gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)")
-
- with gr.Row() as what_are_you_training:
- type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
- with gr.Column():
- base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True)
-
- #Very hacky approach to emulate dynamically created Gradio components
- with gr.Row() as upload_your_concept:
- with gr.Column():
- thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example")
- thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False)
- thing_image_example = gr.HTML('''''')
- things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")
-
- with gr.Column():
- file_collection = []
- concept_collection = []
- buttons_collection = []
- delete_collection = []
- is_visible = []
-
- row = [None] * maximum_concepts
- for x in range(maximum_concepts):
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
- if(x == 0):
- visible = True
- is_visible.append(gr.State(value=True))
- else:
- visible = False
- is_visible.append(gr.State(value=False))
-
- file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
- with gr.Column(visible=visible) as row[x]:
- concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions'''))
- with gr.Row():
- if(x < maximum_concepts-1):
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
- if(x > 0):
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
-
- counter_add = 1
- for button in buttons_collection:
- if(counter_add < len(buttons_collection)):
- button.click(lambda:
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
- None,
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
- else:
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
- counter_add += 1
-
- counter_delete = 1
- for delete_button in delete_collection:
- if(counter_delete < len(delete_collection)+1):
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
- counter_delete += 1
-
- with gr.Accordion("Custom Settings", open=False):
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
- gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
- steps = gr.Number(label="How many steps", value=2400)
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
-
- with gr.Box(visible=False) as training_summary:
- training_summary_text = gr.HTML("", visible=True, label="Training Summary")
- is_advanced_visible = True if is_spaces else False
- training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
- training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
- training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
- training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
- training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)
-
- train_btn = gr.Button("Start Training")
- progress_bar = gr.Textbox(visible=False)
- if(is_shared_ui):
-        training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
- elif(not is_gpu_associated):
- training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False)
- else:
- training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)
-
-
- #Post-training UI
- completed_training = gr.Markdown('''# ✅ Training completed.
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
-
- with gr.Row():
- with gr.Box(visible=False) as try_your_model:
- gr.Markdown("## Try your model")
- prompt = gr.Textbox(label="Type your prompt")
- result_image = gr.Image()
- inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
- generate_button = gr.Button("Generate Image")
-
- with gr.Box(visible=False) as push_to_hub:
- gr.Markdown("## Push to Hugging Face Hub")
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
- hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
-
- push_button = gr.Button("Push to the Hub")
-
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
- success_message_upload = gr.Markdown(visible=False)
- convert_button = gr.Button("Convert to CKPT", visible=False)
-
- #Swap the examples and the % of text encoder trained depending if it is an object, person or style
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
-
- #Swap the base model
-
- base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
- #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar)
- base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
- #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
- for file in file_collection:
- #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
- file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- #Give more options if the user wants to finish everything after training
- if(is_spaces):
- training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
- #Add a message for while it is in training
-
- #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
-
- #The main train function
- train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar)
- train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
-
- #Button to generate an image from your trained model after training
- generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
- #Button to push the model to the Hugging Face Hub
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
- #Button to convert the model to ckpt format
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
-
- #Checks if the training is running
- demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
-
-demo.queue(default_enabled=False).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/CofAI/picscore/picscore.py b/spaces/CofAI/picscore/picscore.py
deleted file mode 100644
index dcf7bdfa03fa9a21f5f644b13477acabe2e2cfd1..0000000000000000000000000000000000000000
--- a/spaces/CofAI/picscore/picscore.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import gradio as gr
-
-description = """
- PICSCORE BETA-1
-
- """
-gr.Interface.load("CompVis/stable-diffusion-v1-4", description=description).launch()
\ No newline at end of file
diff --git a/spaces/CofAI/picscore1/README.md b/spaces/CofAI/picscore1/README.md
deleted file mode 100644
index 5db32d4bb0c64fa27e161a05697df5348d7c923c..0000000000000000000000000000000000000000
--- a/spaces/CofAI/picscore1/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: PicScore — Stable Diffusion
-emoji: 🖼
-colorFrom: indigo
-colorTo: purple
-sdk: static
-pinned: true
-license: other
----
-
-#tags: StableDiffusion, SD, PicScore, prompt, picgen
-
----
-
-This is PicScore with Stable Diffusion 2.1 for FREE!
\ No newline at end of file
diff --git a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py b/spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py
deleted file mode 100644
index fc5385a4b573559446f5c6ba42d1bd7c477116cb..0000000000000000000000000000000000000000
--- a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import streamlit as st
-import pandas as pd
-import plotly.express as px
-import matplotlib.pyplot as plt
-import numpy as np
-import plotly.graph_objects as go
-
-def plot_top_n(df, target_column, n=10):
- top_n = df.nlargest(n, target_column)
-
- # Initialize the bar plot
- fig, ax1 = plt.subplots(figsize=(10, 5))
-
- # Set width for each bar and their positions
- width = 0.28
- ind = np.arange(len(top_n))
-
- # Plot target_column and MMLU_average on the primary y-axis with adjusted positions
- ax1.bar(ind - width, top_n[target_column], width=width, color='blue', label=target_column)
- ax1.bar(ind, top_n['MMLU_average'], width=width, color='orange', label='MMLU_average')
-
- # Set the primary y-axis labels and title
- ax1.set_title(f'Top {n} performing models on {target_column}')
- ax1.set_xlabel('Model')
- ax1.set_ylabel('Score')
-
- # Create a secondary y-axis for Parameters
- ax2 = ax1.twinx()
-
- # Plot Parameters as bars on the secondary y-axis with adjusted position
- ax2.bar(ind + width, top_n['Parameters'], width=width, color='red', label='Parameters')
-
- # Set the secondary y-axis labels
- ax2.set_ylabel('Parameters', color='red')
- ax2.tick_params(axis='y', labelcolor='red')
-
- # Set the x-ticks and their labels
- ax1.set_xticks(ind)
- ax1.set_xticklabels(top_n.index, rotation=45, ha="right")
-
- # Adjust the legend
- fig.tight_layout()
- fig.legend(loc='center left', bbox_to_anchor=(1, 0.5))
-
- # Show the plot
- st.pyplot(fig)
-
-# Function to create an unfilled radar chart
-def create_radar_chart_unfilled(df, model_names, metrics):
- fig = go.Figure()
- min_value = df.loc[model_names, metrics].min().min()
- max_value = df.loc[model_names, metrics].max().max()
- for model_name in model_names:
- values_model = df.loc[model_name, metrics]
- fig.add_trace(go.Scatterpolar(
- r=values_model,
- theta=metrics,
- name=model_name
- ))
-
- fig.update_layout(
- polar=dict(
- radialaxis=dict(
- visible=True,
- range=[min_value, max_value]
- )),
- showlegend=True,
- width=800, # Change the width as needed
- height=600 # Change the height as needed
- )
- return fig
-
-
-
-# Function to create a line chart
-def create_line_chart(df, model_names, metrics):
- line_data = []
- for model_name in model_names:
- values_model = df.loc[model_name, metrics]
- for metric, value in zip(metrics, values_model):
- line_data.append({'Model': model_name, 'Metric': metric, 'Value': value})
-
- line_df = pd.DataFrame(line_data)
-
- fig = px.line(line_df, x='Metric', y='Value', color='Model', title='Comparison of Models', line_dash_sequence=['solid'])
- fig.update_layout(showlegend=True)
- return fig
-
-def create_plot(df, x_values, y_values, models=None, title=None):
- if models is not None:
- df = df[df.index.isin(models)]
-
- # remove rows with NaN values
- df = df.dropna(subset=[x_values, y_values])
-
- plot_data = pd.DataFrame({
- 'Model': df.index,
- x_values: df[x_values],
- y_values: df[y_values],
- })
-
- plot_data['color'] = 'purple'
- fig = px.scatter(plot_data, x=x_values, y=y_values, color='color', hover_data=['Model'], trendline="ols")
-
- # If title is not provided, use x_values vs. y_values as the default title
- if title is None:
- title = x_values + " vs. " + y_values
-
- layout_args = dict(
- showlegend=False,
- xaxis_title=x_values,
- yaxis_title=y_values,
- xaxis=dict(),
- yaxis=dict(),
- title=title,
- height=500,
- width=1000,
- )
- fig.update_layout(**layout_args)
-
- # Add a dashed line at 0.25 for the y_values
- x_min = df[x_values].min()
- x_max = df[x_values].max()
-
- y_min = df[y_values].min()
- y_max = df[y_values].max()
-
- if x_values.startswith('MMLU'):
- fig.add_shape(
- type='line',
- x0=0.25, x1=0.25,
- y0=y_min, y1=y_max,
- line=dict(
- color='red',
- width=2,
- dash='dash'
- )
- )
-
- if y_values.startswith('MMLU'):
- fig.add_shape(
- type='line',
- x0=x_min, x1=x_max,
- y0=0.25, y1=0.25,
- line=dict(
- color='red',
- width=2,
- dash='dash'
- )
- )
-
- return fig
\ No newline at end of file
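The `plot_top_n` helper above first selects the `n` rows with the largest value in the target column (via `DataFrame.nlargest`) and only then draws the grouped bars. That selection step can be sketched with plain numpy; the model names and scores below are made-up illustrations, not data from the leaderboard:

```python
import numpy as np

def top_n_indices(scores, n=3):
    # Indices of the n largest scores, highest first (mirrors DataFrame.nlargest).
    order = np.argsort(scores)[::-1]
    return order[:n]

scores = np.array([0.31, 0.58, 0.47, 0.72, 0.25])
models = ["m-a", "m-b", "m-c", "m-d", "m-e"]
idx = top_n_indices(scores, n=3)
top_models = [models[i] for i in idx]
print(top_models)  # highest-scoring models first
```

Sorting once and slicing keeps the bars in descending order, which is what makes the x-axis of the resulting chart readable.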
diff --git a/spaces/Cyril666/ContourNet-ABI/setup.py b/spaces/Cyril666/ContourNet-ABI/setup.py
deleted file mode 100644
index 837c2cd15f4624f630540ef6993dcb9123adb39b..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/setup.py
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-import glob
-import os
-
-import torch
-from setuptools import find_packages
-from setuptools import setup
-from torch.utils.cpp_extension import CUDA_HOME
-from torch.utils.cpp_extension import CppExtension
-from torch.utils.cpp_extension import CUDAExtension
-
-requirements = ["torch", "torchvision"]
-
-
-def get_extensions():
- this_dir = os.path.dirname(os.path.abspath(__file__))
- extensions_dir = os.path.join(this_dir, "maskrcnn_benchmark", "csrc")
-
- main_file = glob.glob(os.path.join(extensions_dir, "*.cpp"))
- source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp"))
- source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu"))
-
- sources = main_file + source_cpu
- extension = CppExtension
-
- extra_compile_args = {"cxx": []}
- define_macros = []
-
- if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1":
- extension = CUDAExtension
- sources += source_cuda
- define_macros += [("WITH_CUDA", None)]
- extra_compile_args["nvcc"] = [
- "-DCUDA_HAS_FP16=1",
- "-D__CUDA_NO_HALF_OPERATORS__",
- "-D__CUDA_NO_HALF_CONVERSIONS__",
- "-D__CUDA_NO_HALF2_OPERATORS__",
- ]
-
- sources = [os.path.join(extensions_dir, s) for s in sources]
-
- include_dirs = [extensions_dir]
-
- ext_modules = [
- extension(
- "maskrcnn_benchmark._C",
- sources,
- include_dirs=include_dirs,
- define_macros=define_macros,
- extra_compile_args=extra_compile_args,
- )
- ]
-
- return ext_modules
-
-
-setup(
- name="maskrcnn_benchmark",
- version="0.1",
- author="fmassa",
- url="https://github.com/facebookresearch/maskrcnn-benchmark",
- description="object detection in pytorch",
- packages=find_packages(exclude=("configs", "tests",)),
- # install_requires=requirements,
- ext_modules=get_extensions(),
- cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
-)
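`get_extensions` above falls back to a CPU-only `CppExtension` unless CUDA is actually usable (`torch.cuda.is_available()` and `CUDA_HOME` set) or the build is forced via `FORCE_CUDA=1`. The selection condition can be sketched without torch installed; the function and return strings below are stand-ins for illustration only:

```python
import os

def pick_extension(cuda_available, cuda_home, env=os.environ):
    # Mirror the setup.py condition: build the CUDA extension when CUDA is
    # usable, or when the user explicitly forces it with FORCE_CUDA=1.
    if (cuda_available and cuda_home is not None) or env.get("FORCE_CUDA", "0") == "1":
        return "CUDAExtension"
    return "CppExtension"

print(pick_extension(False, None, env={}))                   # CppExtension
print(pick_extension(True, "/usr/local/cuda", env={}))       # CUDAExtension
print(pick_extension(False, None, env={"FORCE_CUDA": "1"}))  # CUDAExtension
```

The `FORCE_CUDA` escape hatch matters for docker builds, where the GPU is not visible at build time even though the image will run on one.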
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py
deleted file mode 100644
index d53a5254d4b319eaf2cbfbd081b0ca8e38c5c7a0..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import numpy as np
-from matplotlib import pyplot as plt
-from scipy.ndimage import filters
-from skimage import transform as skimage_transform
-
-
-def getAttMap(img, attMap, blur=True, overlap=True):
- attMap -= attMap.min()
- if attMap.max() > 0:
- attMap /= attMap.max()
- attMap = skimage_transform.resize(attMap, (img.shape[:2]), order=3, mode="constant")
- if blur:
- attMap = filters.gaussian_filter(attMap, 0.02 * max(img.shape[:2]))
- attMap -= attMap.min()
- attMap /= attMap.max()
- cmap = plt.get_cmap("jet")
- attMapV = cmap(attMap)
- attMapV = np.delete(attMapV, 3, 2)
- if overlap:
- attMap = (
- 1 * (1 - attMap**0.7).reshape(attMap.shape + (1,)) * img
- + (attMap**0.7).reshape(attMap.shape + (1,)) * attMapV
- )
- return attMap
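`getAttMap` above min-max normalizes the attention map into [0, 1] before resizing, blurring, and colorizing it. That normalization step on its own can be sketched as follows; the input values are illustrative:

```python
import numpy as np

def normalize_att_map(att_map):
    # Shift so the minimum is 0, then scale so the maximum is 1.
    # The guard keeps a flat (all-equal) map from dividing by zero.
    att_map = att_map - att_map.min()
    if att_map.max() > 0:
        att_map = att_map / att_map.max()
    return att_map

raw = np.array([[2.0, 4.0], [6.0, 8.0]])
norm = normalize_att_map(raw)
print(norm.min(), norm.max())  # 0.0 1.0
```

Normalizing before applying the colormap is what makes the overlay comparable across images: the "jet" colors always span the full map range.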
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py
deleted file mode 100644
index 2268dd229091d10dd0535bd21515b40409b8ce1b..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py
+++ /dev/null
@@ -1,611 +0,0 @@
-from enum import Enum
-from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union
-
-from fastapi._compat import (
- PYDANTIC_V2,
- CoreSchema,
- GetJsonSchemaHandler,
- JsonSchemaValue,
- _model_rebuild,
- general_plain_validator_function,
-)
-from fastapi.logger import logger
-from pydantic import AnyUrl, BaseModel, Field
-from typing_extensions import Annotated, Literal
-from typing_extensions import deprecated as typing_deprecated
-
-try:
- import email_validator
-
- assert email_validator # make autoflake ignore the unused import
- from pydantic import EmailStr
-except ImportError: # pragma: no cover
-
- class EmailStr(str): # type: ignore
- @classmethod
- def __get_validators__(cls) -> Iterable[Callable[..., Any]]:
- yield cls.validate
-
- @classmethod
- def validate(cls, v: Any) -> str:
- logger.warning(
- "email-validator not installed, email fields will be treated as str.\n"
- "To install, run: pip install email-validator"
- )
- return str(v)
-
- @classmethod
- def _validate(cls, __input_value: Any, _: Any) -> str:
- logger.warning(
- "email-validator not installed, email fields will be treated as str.\n"
- "To install, run: pip install email-validator"
- )
- return str(__input_value)
-
- @classmethod
- def __get_pydantic_json_schema__(
- cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
- ) -> JsonSchemaValue:
- return {"type": "string", "format": "email"}
-
- @classmethod
- def __get_pydantic_core_schema__(
- cls, source: Type[Any], handler: Callable[[Any], CoreSchema]
- ) -> CoreSchema:
- return general_plain_validator_function(cls._validate)
-
-
-class Contact(BaseModel):
- name: Optional[str] = None
- url: Optional[AnyUrl] = None
- email: Optional[EmailStr] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class License(BaseModel):
- name: str
- identifier: Optional[str] = None
- url: Optional[AnyUrl] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Info(BaseModel):
- title: str
- summary: Optional[str] = None
- description: Optional[str] = None
- termsOfService: Optional[str] = None
- contact: Optional[Contact] = None
- license: Optional[License] = None
- version: str
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class ServerVariable(BaseModel):
- enum: Annotated[Optional[List[str]], Field(min_length=1)] = None
- default: str
- description: Optional[str] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Server(BaseModel):
- url: Union[AnyUrl, str]
- description: Optional[str] = None
- variables: Optional[Dict[str, ServerVariable]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Reference(BaseModel):
- ref: str = Field(alias="$ref")
-
-
-class Discriminator(BaseModel):
- propertyName: str
- mapping: Optional[Dict[str, str]] = None
-
-
-class XML(BaseModel):
- name: Optional[str] = None
- namespace: Optional[str] = None
- prefix: Optional[str] = None
- attribute: Optional[bool] = None
- wrapped: Optional[bool] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class ExternalDocumentation(BaseModel):
- description: Optional[str] = None
- url: AnyUrl
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Schema(BaseModel):
- # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-the-json-schema-core-vocabu
- # Core Vocabulary
- schema_: Optional[str] = Field(default=None, alias="$schema")
- vocabulary: Optional[str] = Field(default=None, alias="$vocabulary")
- id: Optional[str] = Field(default=None, alias="$id")
- anchor: Optional[str] = Field(default=None, alias="$anchor")
- dynamicAnchor: Optional[str] = Field(default=None, alias="$dynamicAnchor")
- ref: Optional[str] = Field(default=None, alias="$ref")
- dynamicRef: Optional[str] = Field(default=None, alias="$dynamicRef")
- defs: Optional[Dict[str, "SchemaOrBool"]] = Field(default=None, alias="$defs")
- comment: Optional[str] = Field(default=None, alias="$comment")
- # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-a-vocabulary-for-applying-s
- # A Vocabulary for Applying Subschemas
- allOf: Optional[List["SchemaOrBool"]] = None
- anyOf: Optional[List["SchemaOrBool"]] = None
- oneOf: Optional[List["SchemaOrBool"]] = None
- not_: Optional["SchemaOrBool"] = Field(default=None, alias="not")
- if_: Optional["SchemaOrBool"] = Field(default=None, alias="if")
- then: Optional["SchemaOrBool"] = None
- else_: Optional["SchemaOrBool"] = Field(default=None, alias="else")
- dependentSchemas: Optional[Dict[str, "SchemaOrBool"]] = None
- prefixItems: Optional[List["SchemaOrBool"]] = None
- # TODO: uncomment and remove below when deprecating Pydantic v1
-# It generates a list of schemas for tuples, before prefixItems was available
- # items: Optional["SchemaOrBool"] = None
- items: Optional[Union["SchemaOrBool", List["SchemaOrBool"]]] = None
- contains: Optional["SchemaOrBool"] = None
- properties: Optional[Dict[str, "SchemaOrBool"]] = None
- patternProperties: Optional[Dict[str, "SchemaOrBool"]] = None
- additionalProperties: Optional["SchemaOrBool"] = None
- propertyNames: Optional["SchemaOrBool"] = None
- unevaluatedItems: Optional["SchemaOrBool"] = None
- unevaluatedProperties: Optional["SchemaOrBool"] = None
- # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-structural
- # A Vocabulary for Structural Validation
- type: Optional[str] = None
- enum: Optional[List[Any]] = None
- const: Optional[Any] = None
- multipleOf: Optional[float] = Field(default=None, gt=0)
- maximum: Optional[float] = None
- exclusiveMaximum: Optional[float] = None
- minimum: Optional[float] = None
- exclusiveMinimum: Optional[float] = None
- maxLength: Optional[int] = Field(default=None, ge=0)
- minLength: Optional[int] = Field(default=None, ge=0)
- pattern: Optional[str] = None
- maxItems: Optional[int] = Field(default=None, ge=0)
- minItems: Optional[int] = Field(default=None, ge=0)
- uniqueItems: Optional[bool] = None
- maxContains: Optional[int] = Field(default=None, ge=0)
- minContains: Optional[int] = Field(default=None, ge=0)
- maxProperties: Optional[int] = Field(default=None, ge=0)
- minProperties: Optional[int] = Field(default=None, ge=0)
- required: Optional[List[str]] = None
- dependentRequired: Optional[Dict[str, Set[str]]] = None
- # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-vocabularies-for-semantic-c
- # Vocabularies for Semantic Content With "format"
- format: Optional[str] = None
- # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-the-conten
- # A Vocabulary for the Contents of String-Encoded Data
- contentEncoding: Optional[str] = None
- contentMediaType: Optional[str] = None
- contentSchema: Optional["SchemaOrBool"] = None
- # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-basic-meta
- # A Vocabulary for Basic Meta-Data Annotations
- title: Optional[str] = None
- description: Optional[str] = None
- default: Optional[Any] = None
- deprecated: Optional[bool] = None
- readOnly: Optional[bool] = None
- writeOnly: Optional[bool] = None
- examples: Optional[List[Any]] = None
- # Ref: OpenAPI 3.1.0: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#schema-object
- # Schema Object
- discriminator: Optional[Discriminator] = None
- xml: Optional[XML] = None
- externalDocs: Optional[ExternalDocumentation] = None
- example: Annotated[
- Optional[Any],
- typing_deprecated(
- "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, "
- "although still supported. Use examples instead."
- ),
- ] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-# Ref: https://json-schema.org/draft/2020-12/json-schema-core.html#name-json-schema-documents
-# A JSON Schema MUST be an object or a boolean.
-SchemaOrBool = Union[Schema, bool]
-
-
-class Example(BaseModel):
- summary: Optional[str] = None
- description: Optional[str] = None
- value: Optional[Any] = None
- externalValue: Optional[AnyUrl] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class ParameterInType(Enum):
- query = "query"
- header = "header"
- path = "path"
- cookie = "cookie"
-
-
-class Encoding(BaseModel):
- contentType: Optional[str] = None
- headers: Optional[Dict[str, Union["Header", Reference]]] = None
- style: Optional[str] = None
- explode: Optional[bool] = None
- allowReserved: Optional[bool] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class MediaType(BaseModel):
- schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema")
- example: Optional[Any] = None
- examples: Optional[Dict[str, Union[Example, Reference]]] = None
- encoding: Optional[Dict[str, Encoding]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class ParameterBase(BaseModel):
- description: Optional[str] = None
- required: Optional[bool] = None
- deprecated: Optional[bool] = None
- # Serialization rules for simple scenarios
- style: Optional[str] = None
- explode: Optional[bool] = None
- allowReserved: Optional[bool] = None
- schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema")
- example: Optional[Any] = None
- examples: Optional[Dict[str, Union[Example, Reference]]] = None
- # Serialization rules for more complex scenarios
- content: Optional[Dict[str, MediaType]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Parameter(ParameterBase):
- name: str
- in_: ParameterInType = Field(alias="in")
-
-
-class Header(ParameterBase):
- pass
-
-
-class RequestBody(BaseModel):
- description: Optional[str] = None
- content: Dict[str, MediaType]
- required: Optional[bool] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Link(BaseModel):
- operationRef: Optional[str] = None
- operationId: Optional[str] = None
- parameters: Optional[Dict[str, Union[Any, str]]] = None
- requestBody: Optional[Union[Any, str]] = None
- description: Optional[str] = None
- server: Optional[Server] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Response(BaseModel):
- description: str
- headers: Optional[Dict[str, Union[Header, Reference]]] = None
- content: Optional[Dict[str, MediaType]] = None
- links: Optional[Dict[str, Union[Link, Reference]]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Operation(BaseModel):
- tags: Optional[List[str]] = None
- summary: Optional[str] = None
- description: Optional[str] = None
- externalDocs: Optional[ExternalDocumentation] = None
- operationId: Optional[str] = None
- parameters: Optional[List[Union[Parameter, Reference]]] = None
- requestBody: Optional[Union[RequestBody, Reference]] = None
- # Using Any for Specification Extensions
- responses: Optional[Dict[str, Union[Response, Any]]] = None
- callbacks: Optional[Dict[str, Union[Dict[str, "PathItem"], Reference]]] = None
- deprecated: Optional[bool] = None
- security: Optional[List[Dict[str, List[str]]]] = None
- servers: Optional[List[Server]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class PathItem(BaseModel):
- ref: Optional[str] = Field(default=None, alias="$ref")
- summary: Optional[str] = None
- description: Optional[str] = None
- get: Optional[Operation] = None
- put: Optional[Operation] = None
- post: Optional[Operation] = None
- delete: Optional[Operation] = None
- options: Optional[Operation] = None
- head: Optional[Operation] = None
- patch: Optional[Operation] = None
- trace: Optional[Operation] = None
- servers: Optional[List[Server]] = None
- parameters: Optional[List[Union[Parameter, Reference]]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class SecuritySchemeType(Enum):
- apiKey = "apiKey"
- http = "http"
- oauth2 = "oauth2"
- openIdConnect = "openIdConnect"
-
-
-class SecurityBase(BaseModel):
- type_: SecuritySchemeType = Field(alias="type")
- description: Optional[str] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class APIKeyIn(Enum):
- query = "query"
- header = "header"
- cookie = "cookie"
-
-
-class APIKey(SecurityBase):
- type_: SecuritySchemeType = Field(default=SecuritySchemeType.apiKey, alias="type")
- in_: APIKeyIn = Field(alias="in")
- name: str
-
-
-class HTTPBase(SecurityBase):
- type_: SecuritySchemeType = Field(default=SecuritySchemeType.http, alias="type")
- scheme: str
-
-
-class HTTPBearer(HTTPBase):
- scheme: Literal["bearer"] = "bearer"
- bearerFormat: Optional[str] = None
-
-
-class OAuthFlow(BaseModel):
- refreshUrl: Optional[str] = None
- scopes: Dict[str, str] = {}
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class OAuthFlowImplicit(OAuthFlow):
- authorizationUrl: str
-
-
-class OAuthFlowPassword(OAuthFlow):
- tokenUrl: str
-
-
-class OAuthFlowClientCredentials(OAuthFlow):
- tokenUrl: str
-
-
-class OAuthFlowAuthorizationCode(OAuthFlow):
- authorizationUrl: str
- tokenUrl: str
-
-
-class OAuthFlows(BaseModel):
- implicit: Optional[OAuthFlowImplicit] = None
- password: Optional[OAuthFlowPassword] = None
- clientCredentials: Optional[OAuthFlowClientCredentials] = None
- authorizationCode: Optional[OAuthFlowAuthorizationCode] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class OAuth2(SecurityBase):
- type_: SecuritySchemeType = Field(default=SecuritySchemeType.oauth2, alias="type")
- flows: OAuthFlows
-
-
-class OpenIdConnect(SecurityBase):
- type_: SecuritySchemeType = Field(
- default=SecuritySchemeType.openIdConnect, alias="type"
- )
- openIdConnectUrl: str
-
-
-SecurityScheme = Union[APIKey, HTTPBase, OAuth2, OpenIdConnect, HTTPBearer]
-
-
-class Components(BaseModel):
- schemas: Optional[Dict[str, Union[Schema, Reference]]] = None
- responses: Optional[Dict[str, Union[Response, Reference]]] = None
- parameters: Optional[Dict[str, Union[Parameter, Reference]]] = None
- examples: Optional[Dict[str, Union[Example, Reference]]] = None
- requestBodies: Optional[Dict[str, Union[RequestBody, Reference]]] = None
- headers: Optional[Dict[str, Union[Header, Reference]]] = None
- securitySchemes: Optional[Dict[str, Union[SecurityScheme, Reference]]] = None
- links: Optional[Dict[str, Union[Link, Reference]]] = None
- # Using Any for Specification Extensions
- callbacks: Optional[Dict[str, Union[Dict[str, PathItem], Reference, Any]]] = None
- pathItems: Optional[Dict[str, Union[PathItem, Reference]]] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class Tag(BaseModel):
- name: str
- description: Optional[str] = None
- externalDocs: Optional[ExternalDocumentation] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-class OpenAPI(BaseModel):
- openapi: str
- info: Info
- jsonSchemaDialect: Optional[str] = None
- servers: Optional[List[Server]] = None
- # Using Any for Specification Extensions
- paths: Optional[Dict[str, Union[PathItem, Any]]] = None
- webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None
- components: Optional[Components] = None
- security: Optional[List[Dict[str, List[str]]]] = None
- tags: Optional[List[Tag]] = None
- externalDocs: Optional[ExternalDocumentation] = None
-
- if PYDANTIC_V2:
- model_config = {"extra": "allow"}
-
- else:
-
- class Config:
- extra = "allow"
-
-
-_model_rebuild(Schema)
-_model_rebuild(Operation)
-_model_rebuild(Encoding)
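Every model above repeats the same version switch: under Pydantic v2 the `extra = "allow"` option lives in a class-level `model_config` dict, while under v1 it lives in a nested `Config` class. The shape of that switch can be sketched without pydantic installed; `PYDANTIC_V2` below is a stand-in flag for `fastapi._compat.PYDANTIC_V2`:

```python
PYDANTIC_V2 = False  # stand-in flag; fastapi detects this at import time

class Contact:
    # Class bodies execute at definition time, so an `if` here picks which
    # attribute the class ends up with: a config dict (v2) or a Config class (v1).
    if PYDANTIC_V2:
        model_config = {"extra": "allow"}
    else:
        class Config:
            extra = "allow"

print(hasattr(Contact, "Config"))   # True on the v1 branch
print(Contact.Config.extra)         # allow
```

Because the branch runs once at class creation, each model carries only the configuration style its pydantic version understands, with no runtime dispatch.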
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css
deleted file mode 100644
index 1febd1de643feeadb668f5d0fc297f661ce47482..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css
+++ /dev/null
@@ -1 +0,0 @@
-.block.svelte-90oupt{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);border-color:var(--block-border-color);border-radius:var(--block-radius);background:var(--block-background-fill);width:100%;line-height:var(--line-sm)}.block.border_focus.svelte-90oupt{border-color:var(--color-accent)}.padded.svelte-90oupt{padding:var(--block-padding)}.hidden.svelte-90oupt{display:none}.hide-container.svelte-90oupt{margin:0;box-shadow:none;--block-border-width:0;background:transparent;padding:0;overflow:visible}div.svelte-e8n7p6{margin-bottom:var(--spacing-lg);color:var(--block-info-text-color);font-weight:var(--block-info-text-weight);font-size:var(--block-info-text-size);line-height:var(--line-sm)}span.has-info.svelte-1gfkn6j{margin-bottom:var(--spacing-xs)}span.svelte-1gfkn6j:not(.has-info){margin-bottom:var(--spacing-lg)}span.svelte-1gfkn6j{display:inline-block;position:relative;z-index:var(--layer-4);border:solid var(--block-title-border-width) var(--block-title-border-color);border-radius:var(--block-title-radius);background:var(--block-title-background-fill);padding:var(--block-title-padding);color:var(--block-title-text-color);font-weight:var(--block-title-text-weight);font-size:var(--block-title-text-size);line-height:var(--line-sm)}.hide.svelte-1gfkn6j{margin:0;height:0}div.svelte-1mwvhlq{display:inline-flex;align-items:center;z-index:var(--layer-2);box-shadow:var(--block-label-shadow);border:var(--block-label-border-width) solid var(--border-color-primary);border-top:none;border-left:none;border-radius:var(--block-label-radius);background:var(--block-label-background-fill);padding:var(--block-label-padding);pointer-events:none;color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}.gr-group 
div.svelte-1mwvhlq{border-top-left-radius:0}div.float.svelte-1mwvhlq{position:absolute;top:var(--block-label-margin);left:var(--block-label-margin)}div.svelte-1mwvhlq:not(.float){position:static;margin-top:var(--block-label-margin);margin-left:var(--block-label-margin)}.hide.svelte-1mwvhlq{height:0}span.svelte-1mwvhlq{opacity:.8;margin-right:var(--size-2);width:calc(var(--block-label-text-size) - 1px);height:calc(var(--block-label-text-size) - 1px)}.hide-label.svelte-1mwvhlq{box-shadow:none;border-width:0;background:transparent;overflow:visible}button.svelte-1030q2h{display:flex;justify-content:center;align-items:center;gap:1px;z-index:var(--layer-1);box-shadow:var(--shadow-drop);border:1px solid var(--button-secondary-border-color);border-radius:var(--radius-sm);background:var(--background-fill-primary);padding:2px;color:var(--block-label-text-color)}button.svelte-1030q2h:hover{cursor:pointer;border:2px solid var(--button-secondary-border-color-hover);padding:1px;color:var(--block-label-text-color)}span.svelte-1030q2h{padding:0 1px;font-size:10px}div.svelte-1030q2h{padding:2px;width:14px;height:14px}.pending.svelte-1030q2h{animation:svelte-1030q2h-flash .5s infinite}@keyframes svelte-1030q2h-flash{0%{opacity:.5}50%{opacity:1}to{opacity:.5}}.empty.svelte-lk9eg8{display:flex;justify-content:center;align-items:center;margin-top:calc(0px - var(--size-6));height:var(--size-full)}.icon.svelte-lk9eg8{opacity:.5;height:var(--size-5);color:var(--body-text-color)}.small.svelte-lk9eg8{min-height:calc(var(--size-32) - 20px)}.large.svelte-lk9eg8{min-height:calc(var(--size-64) - 20px)}.unpadded_box.svelte-lk9eg8{margin-top:0}.small_parent.svelte-lk9eg8{min-height:100%!important}.dropdown-arrow.svelte-p5edak{fill:var(--body-text-color);margin-right:var(--size-2);width:var(--size-5)}button.svelte-1e89no8{display:inline-flex;justify-content:center;align-items:center;transition:var(--button-transition);box-shadow:var(--button-shadow);padding:var(--size-0-5) 
var(--size-2);text-align:center}button.svelte-1e89no8:hover,button[disabled].svelte-1e89no8{box-shadow:var(--button-shadow-hover)}button.svelte-1e89no8:active{box-shadow:var(--button-shadow-active)}button[disabled].svelte-1e89no8{opacity:.5;filter:grayscale(30%);cursor:not-allowed}.hidden.svelte-1e89no8{display:none}.primary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-primary-border-color);background:var(--button-primary-background-fill);color:var(--button-primary-text-color)}.primary.svelte-1e89no8:hover,.primary[disabled].svelte-1e89no8{border-color:var(--button-primary-border-color-hover);background:var(--button-primary-background-fill-hover);color:var(--button-primary-text-color-hover)}.secondary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-secondary-border-color);background:var(--button-secondary-background-fill);color:var(--button-secondary-text-color)}.secondary.svelte-1e89no8:hover,.secondary[disabled].svelte-1e89no8{border-color:var(--button-secondary-border-color-hover);background:var(--button-secondary-background-fill-hover);color:var(--button-secondary-text-color-hover)}.stop.svelte-1e89no8{border:var(--button-border-width) solid var(--button-cancel-border-color);background:var(--button-cancel-background-fill);color:var(--button-cancel-text-color)}.stop.svelte-1e89no8:hover,.stop[disabled].svelte-1e89no8{border-color:var(--button-cancel-border-color-hover);background:var(--button-cancel-background-fill-hover);color:var(--button-cancel-text-color-hover)}.sm.svelte-1e89no8{border-radius:var(--button-small-radius);padding:var(--button-small-padding);font-weight:var(--button-small-text-weight);font-size:var(--button-small-text-size)}.lg.svelte-1e89no8{border-radius:var(--button-large-radius);padding:var(--button-large-padding);font-weight:var(--button-large-text-weight);font-size:var(--button-large-text-size)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html
deleted file mode 100644
index 78e36810f98d2d6ec71a95092a1d7828a4ffc972..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html
+++ /dev/null
@@ -1,84 +0,0 @@
diff --git a/spaces/DRAGSclub/README/README.md b/spaces/DRAGSclub/README/README.md
deleted file mode 100644
index 2ac40a86a64be6bd47e89f8e15493adbf433833e..0000000000000000000000000000000000000000
--- a/spaces/DRAGSclub/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 🔥
-colorFrom: purple
-colorTo: indigo
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
diff --git a/spaces/Darkk88/medium-GPT4/app.py b/spaces/Darkk88/medium-GPT4/app.py
deleted file mode 100644
index 9caa518a2040f2462c7ba70d684f3ed92bf2185d..0000000000000000000000000000000000000000
--- a/spaces/Darkk88/medium-GPT4/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ingen51/DialoGPT-medium-GPT4").launch()
\ No newline at end of file
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py b/spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py
deleted file mode 100644
index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from . import audio, audio_dataset
diff --git a/spaces/Deepak107/Bottle_images/README.md b/spaces/Deepak107/Bottle_images/README.md
deleted file mode 100644
index de2e2c1f760a9362ce8999ad2f0b7a14ea1ea83d..0000000000000000000000000000000000000000
--- a/spaces/Deepak107/Bottle_images/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bottle Images
-emoji: 🐢
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Duskfallcrew/textual-inversion-training/app.py b/spaces/Duskfallcrew/textual-inversion-training/app.py
deleted file mode 100644
index f6ed5cd899a841034993df3f7e6861811b7a0442..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/textual-inversion-training/app.py
+++ /dev/null
@@ -1,559 +0,0 @@
-import gradio as gr
-import os
-from pathlib import Path
-import argparse
-import shutil
-# from train_dreambooth import run_training
-from textual_inversion import run_training
-from convertosd import convert
-from PIL import Image
-from slugify import slugify
-import requests
-import torch
-import zipfile
-import tarfile
-import urllib.parse
-import gc
-from diffusers import StableDiffusionPipeline
-from huggingface_hub import snapshot_download
-
-
-is_spaces = True if "SPACE_ID" in os.environ else False
-#is_shared_ui = True if "IS_SHARED_UI" in os.environ else False
-if(is_spaces):
- is_shared_ui = True if ("lvkaokao/textual-inversion-training" in os.environ['SPACE_ID'] or "Intel/textual-inversion-training" in os.environ['SPACE_ID']) else False
-else:
- is_shared_ui = False
-
-css = '''
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
- #component-4, #component-3, #component-10{min-height: 0}
- .duplicate-button img{margin: 0}
-'''
-maximum_concepts = 1
-
-#Pre download the files
-'''
-model_v1_4 = snapshot_download(repo_id="CompVis/stable-diffusion-v1-4")
-#model_v1_5 = snapshot_download(repo_id="runwayml/stable-diffusion-v1-5")
-model_v1_5 = snapshot_download(repo_id="stabilityai/stable-diffusion-2")
-model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-base", revision="fp16")
-safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
-'''
-model_v1_4 = "CompVis/stable-diffusion-v1-4"
-model_v1_5 = "stabilityai/stable-diffusion-2"
-model_v2_512 = "stabilityai/stable-diffusion-2-base"
-
-model_to_load = model_v1_4
-
-
-with zipfile.ZipFile("mix.zip", 'r') as zip_ref:
- zip_ref.extractall(".")
-
-def swap_text(option):
- mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
- if(option == "object"):
- instance_prompt_example = "cttoy"
- freeze_for = 30
- return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=False)]
- elif(option == "person"):
- instance_prompt_example = "julcto"
- freeze_for = 70
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=True)]
- elif(option == "style"):
- instance_prompt_example = "trsldamrl"
- freeze_for = 10
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. Name the files with the words you would like {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=False)]
-
-def swap_base_model(selected_model):
- global model_to_load
- if(selected_model == "v1-4"):
- model_to_load = model_v1_4
- elif(selected_model == "v1-5"):
- model_to_load = model_v1_5
- else:
- model_to_load = model_v2_512
-
-def count_files(*inputs):
- file_counter = 0
- concept_counter = 0
- for i, input in enumerate(inputs):
- if(i < maximum_concepts-1):
- files = inputs[i]
- if(files):
- concept_counter+=1
- file_counter+=len(files)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- else:
- Training_Steps = file_counter*200
- if(Training_Steps > 2400):
- Training_Steps=2400
- elif(Training_Steps < 1400):
- Training_Steps=1400
- if(is_spaces):
- summary_sentence = f'''The training should take around 24 hours for 1000 steps using the default free CPU.'''
- else:
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.'''
-
- return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
-
-def update_steps(*files_list):
- file_counter = 0
- for i, files in enumerate(files_list):
- if(files):
- file_counter+=len(files)
- return(gr.update(value=file_counter*200))
-
-def pad_image(image):
- w, h = image.size
- if w == h:
- return image
- elif w > h:
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
- new_image.paste(image, (0, (w - h) // 2))
- return new_image
- else:
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
- new_image.paste(image, ((h - w) // 2, 0))
- return new_image
-
-def train(*inputs):
- if is_shared_ui:
- raise gr.Error("This Space only works in duplicated instances")
-
- torch.cuda.empty_cache()
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
-
- if os.path.exists("output_model"): shutil.rmtree('output_model')
- if os.path.exists("concept_images"): shutil.rmtree('concept_images')
- if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
- if os.path.exists("hastrained.success"): os.remove("hastrained.success")
- file_counter = 0
- print(inputs)
-
- os.makedirs('concept_images', exist_ok=True)
- files = inputs[maximum_concepts*3]
- init_word = inputs[maximum_concepts*2]
- prompt = inputs[maximum_concepts]
- if(prompt == "" or prompt == None):
- raise gr.Error("You forgot to define your concept prompt")
-
- for j, file_temp in enumerate(files):
- file = Image.open(file_temp.name)
- image = pad_image(file)
- image = image.resize((512, 512))
- extension = file_temp.name.split(".")[-1]
- image = image.convert('RGB')
- image.save(f'concept_images/{j+1}.jpg', format="JPEG", quality = 100)
- file_counter += 1
-
-
- os.makedirs('output_model',exist_ok=True)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- remove_attribution_after = inputs[-6]
- experimental_face_improvement = inputs[-9]
- which_model = inputs[-10]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- else:
- Training_Steps = 1000
-
- print(os.listdir("concept_images"))
-
- args_general = argparse.Namespace(
- pretrained_model_name_or_path = model_to_load,
- train_data_dir="concept_images",
- learnable_property=type_of_thing,
- placeholder_token=prompt,
- initializer_token=init_word,
- resolution=512,
- train_batch_size=1,
- gradient_accumulation_steps=2,
- use_bf16=True,
- max_train_steps=Training_Steps,
- learning_rate=5.0e-4,
- scale_lr=True,
- lr_scheduler="constant",
- lr_warmup_steps=0,
- output_dir="output_model",
- )
- print("Starting single training...")
- lock_file = open("intraining.lock", "w")
- lock_file.close()
- run_training(args_general)
-
- gc.collect()
- torch.cuda.empty_cache()
- if(which_model in ["v1-5"]):
- print("Adding Safety Checker to the model...")
- shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor")
- shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker")
- shutil.copy(f"model_index.json", "output_model/model_index.json")
-
- if(not remove_attribution_after):
- print("Archiving model file...")
- with tarfile.open("diffusers_model.tar", "w") as tar:
- tar.add("output_model", arcname=os.path.basename("output_model"))
- if os.path.exists("intraining.lock"): os.remove("intraining.lock")
- trained_file = open("hastrained.success", "w")
- trained_file.close()
- print(os.listdir("output_model"))
- print("Training completed!")
- return [
- gr.update(visible=True, value=["diffusers_model.tar"]), #result
- gr.update(visible=True), #try_your_model
- gr.update(visible=True), #push_to_hub
- gr.update(visible=True), #convert_button
- gr.update(visible=False), #training_ongoing
- gr.update(visible=True) #completed_training
- ]
- else:
- hf_token = inputs[-5]
- model_name = inputs[-7]
- where_to_upload = inputs[-8]
- push(model_name, where_to_upload, hf_token, which_model, True)
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'flavor': 'cpu-basic'}
- requests.post(hardware_url, json = body, headers=headers)
-
-import time
-pipe_is_set = False
-def generate(prompt, steps):
-
- print("prompt: ", prompt)
- print("steps: ", steps)
-
- torch.cuda.empty_cache()
- from diffusers import StableDiffusionPipeline
- global pipe_is_set
- if(not pipe_is_set):
- global pipe
- if torch.cuda.is_available():
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- else:
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float)
- pipe_is_set = True
-
- start_time = time.time()
- image = pipe(prompt, num_inference_steps=steps, guidance_scale=7.5).images[0]
- print("cost: ", time.time() - start_time)
- return(image)
-
-def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
-
- if(not os.path.exists("model.ckpt")):
- convert("output_model", "model.ckpt")
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
- from huggingface_hub import create_repo
- model_name_slug = slugify(model_name)
- api = HfApi()
- your_username = api.whoami(token=hf_token)["name"]
- if(where_to_upload == "My personal profile"):
- model_id = f"{your_username}/{model_name_slug}"
- else:
- model_id = f"sd-dreambooth-library/{model_name_slug}"
- headers = {"Authorization" : f"Bearer {hf_token}", "Content-Type": "application/json"}
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
-
- images_upload = os.listdir("concept_images")
- image_string = ""
- instance_prompt_list = []
- previous_instance_prompt = ''
- for i, image in enumerate(images_upload):
- instance_prompt = image.split("_")[0]
- if(instance_prompt != previous_instance_prompt):
- title_instance_prompt_string = instance_prompt
- instance_prompt_list.append(instance_prompt)
- else:
- title_instance_prompt_string = ''
- previous_instance_prompt = instance_prompt
- image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
-{image_string}'''
- readme_text = f'''---
-license: creativeml-openrail-m
-tags:
-- text-to-image
----
-### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model
-
-You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
-
-Sample pictures of:
-{image_string}
-'''
- #Save the readme to a file
- readme_file = open("model.README.md", "w")
- readme_file.write(readme_text)
- readme_file.close()
- #Save the token identifier to a file
- text_file = open("token_identifier.txt", "w")
- text_file.write(', '.join(instance_prompt_list))
- text_file.close()
- try:
- create_repo(model_id,private=True, token=hf_token)
- except:
- import time
- epoch_time = str(int(time.time()))
- create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
- operations = [
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
- ]
- api.create_commit(
- repo_id=model_id,
- operations=operations,
- commit_message=f"Upload the model {model_name}",
- token=hf_token
- )
- api.upload_folder(
- folder_path="output_model",
- repo_id=model_id,
- token=hf_token
- )
- api.upload_folder(
- folder_path="concept_images",
- path_in_repo="concept_images",
- repo_id=model_id,
- token=hf_token
- )
- if is_spaces:
- if(not comes_from_automated):
- extra_message = "Don't forget to remove the GPU attribution after you play with it."
- else:
- extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished training from the Dreambooth Training Space!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
-
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
-
-def convert_to_ckpt():
- convert("output_model", "model.ckpt")
- return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
-
-def check_status(top_description):
- print('=='*20)
- print(os.listdir("./"))
-
- if os.path.exists("hastrained.success"):
- if is_spaces:
- update_top_tag = gr.update(value=f'''
- Your model has finished training ✅
- Yay, congratulations on training your model. Scroll down to play with it, save it (either by downloading it or on the Hugging Face Hub). Once you are done and your model is safe, if you don't want to train a new one, go to the settings page and downgrade your Space to a CPU Basic
- You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model
- Attention - This Space doesn't work in this shared UI
- For it to work, you can either run locally or duplicate the Space and run it on your own profile using the free CPU or a (paid) private T4 GPU for training. CPU training takes a long time, while each T4 costs US$0.60/h, which should cost < US$1 to train most models using default settings!
- You have successfully duplicated the Textual Inversion Training Space 🎉
- If you want to use CPU, it will take a long time to run the training below. If you want to use GPU, please get this ready: attribute a T4 GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until it is turned off.
- ''')
- else:
- top_description = gr.HTML(f'''
- You have successfully cloned the Dreambooth Training Space locally 🎉
- Do a pip install requirements-local.txt
- ''')
- with gr.Blocks(css=css) as demo:
- gr.Markdown("# Textual Inversion Training UI 💭")
- gr.Markdown("Customize Stable Diffusion by training it on a new concept. This Space is based on [Intel® Neural Compressor](https://github.com/intel/neural-compressor/tree/master/examples/pytorch/diffusion_model/diffusers/textual_inversion) with [🧨 diffusers](https://github.com/huggingface/diffusers)")
-
- with gr.Row() as what_are_you_training:
- type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
- base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-4", "v1-5", "v2-512"], value="v1-4", interactive=True)
-
- #Very hacky approach to emulate dynamically created Gradio components
- with gr.Row() as upload_your_concept:
- with gr.Column():
- thing_description = gr.Markdown("You are going to train an `object`, please upload 1-5 images of the object to teach new concepts to Stable Diffusion, example")
- thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False)
- thing_image_example = gr.HTML('''''')
- things_naming = gr.Markdown("You should name your concept with a unique made up word that never appears in the model vocab (e.g.: `dicoo*` here). **The meaning of the initial word** is to initialize the concept word embedding which will make training easy (e.g.: `toy` here). Images will be automatically cropped to 512x512.")
-
- with gr.Column():
- file_collection = []
- concept_collection = []
- init_collection = []
- buttons_collection = []
- delete_collection = []
- is_visible = []
-
- row = [None] * maximum_concepts
- for x in range(maximum_concepts):
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])  # compact English ordinal suffix: 1 -> "1st", 2 -> "2nd", 11 -> "11th"
- if(x == 0):
- visible = True
- is_visible.append(gr.State(value=True))
- else:
- visible = False
- is_visible.append(gr.State(value=False))
-
- file_collection.append(gr.File(label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
- with gr.Column(visible=visible) as row[x]:
- concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept word - use a unique, made up word to avoid collisions'''))
- init_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} initial word - to init the concept embedding'''))
- with gr.Row():
- if(x < maximum_concepts-1):
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
- if(x > 0):
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
-
- counter_add = 1
- for button in buttons_collection:
- if(counter_add < len(buttons_collection)):
- button.click(lambda:
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
- None,
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
- else:
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
- counter_add += 1
-
- counter_delete = 1
- for delete_button in delete_collection:
- if(counter_delete < len(delete_collection)+1):
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
- counter_delete += 1
-
- with gr.Accordion("Custom Settings", open=False):
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
- gr.Markdown("The default number of steps is 1000. If your results aren't really what you wanted, the model may be underfitting and need more steps.")
- steps = gr.Number(label="How many steps", value=1000)
- # need to remove
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30, visible=False)
- # perc_txt_encoder = 30
-
- with gr.Box(visible=False) as training_summary:
- training_summary_text = gr.HTML("", visible=False, label="Training Summary")
- is_advanced_visible = True if is_spaces else False
- training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=False, visible=is_advanced_visible)
- training_summary_model_name = gr.Textbox(label="Name of your model", visible=False)
- training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to", visible=False)
- training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=False)
- training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=False)
-
- train_btn = gr.Button("Start Training")
-
- training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)
-
- #Post-training UI
- completed_training = gr.Markdown('''# ✅ Training completed.
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
-
- with gr.Row():
- with gr.Box(visible=True) as try_your_model:
- gr.Markdown("## Try your model")
- prompt = gr.Textbox(label="Type your prompt")
- result_image = gr.Image()
- inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
- generate_button = gr.Button("Generate Image")
-
- with gr.Box(visible=False) as push_to_hub:
- gr.Markdown("## Push to Hugging Face Hub")
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
- hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
-
- push_button = gr.Button("Push to the Hub")
-
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
- success_message_upload = gr.Markdown(visible=False)
- convert_button = gr.Button("Convert to CKPT", visible=False)
-
- #Swap the examples and the % of text encoder trained depending if it is an object, person or style
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
-
- #Swap the base model
- base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
-
- #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
- for file in file_collection:
- #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
- file.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- steps.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- #Give more options if the user wants to finish everything after training
- if(is_spaces):
- training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
- #Add a message for while it is in training
- train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
-
- #The main train function
- train_btn.click(fn=train, inputs=is_visible+concept_collection+init_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
-
- #Button to generate an image from your trained model after training
- print('=='*20)
- print(prompt)
- print(inference_steps)
- generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
-
- #Button to push the model to the Hugging Face Hub
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
- #Button to convert the model to ckpt format
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
-
- #Checks if the training is running
- demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
-
-demo.queue(default_enabled=False).launch(debug=True)
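Two of the small helpers in the deleted textual-inversion `app.py` above are easy to mis-read in diff form: the step-count heuristic in `count_files` (200 steps per uploaded image, clamped to 1400-2400 when no custom value is set) and the square-padding math in `pad_image`. A minimal stand-alone sketch of both follows; the function names and the exact clamp placement are my reading of the diff, not verbatim from the Space:

```python
def estimate_training_steps(file_count, custom_steps=None):
    """Step heuristic from count_files(): 200 steps per uploaded image,
    clamped to [1400, 2400]; a user-supplied custom value wins, unclamped."""
    if custom_steps is not None:
        return int(custom_steps)
    return max(1400, min(file_count * 200, 2400))


def square_pad_offset(w, h):
    """Paste offset used by pad_image(): the image is centered on a
    black square canvas whose side is max(w, h)."""
    side = max(w, h)
    return side, ((side - w) // 2, (side - h) // 2)


if __name__ == "__main__":
    print(estimate_training_steps(5))    # 1000 raw -> floored to 1400
    print(estimate_training_steps(30))   # 6000 raw -> capped at 2400
    print(square_pad_offset(512, 256))   # (512, (0, 128)): pad top and bottom
```

After this preprocessing, every concept image is a square that the app then resizes to 512x512 before saving as JPEG.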
diff --git a/spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/include/BYTETracker.h b/spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/include/BYTETracker.h
deleted file mode 100644
index e3dda973fa27ccdb85a27841ec2a1cf8dcc1e9b0..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/include/BYTETracker.h
+++ /dev/null
@@ -1,49 +0,0 @@
-#pragma once
-
-#include "STrack.h"
-
-struct Object
-{
- cv::Rect_<float> rect;
- int label;
- float prob;
-};
-
-class BYTETracker
-{
-public:
- BYTETracker(int frame_rate = 30, int track_buffer = 30);
- ~BYTETracker();
-
- vector<STrack> update(const vector