diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FULL Flare 2019 Key How to Transfer Your License to a New Computer.md b/spaces/1gistliPinn/ChatGPT4/Examples/FULL Flare 2019 Key How to Transfer Your License to a New Computer.md deleted file mode 100644 index 54770287cfd0b0e20d3fc69444c5aa167e01a475..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/FULL Flare 2019 Key How to Transfer Your License to a New Computer.md +++ /dev/null @@ -1,32 +0,0 @@ -
-

Cloudflare has been committed to data privacy and security since our founding, and it is important to us that we can demonstrate these commitments. Certification provides assurance to our customers that a third party has independently verified that Cloudflare meets the requirements set out in the standard.

-

Today, we are pleased to announce that Cloudflare has joined the General Assembly of the EU Cloud Code of Conduct. We look forward to the second stage in this process, undertaking our audit and publicly affirming our compliance with the GDPR as a processor of personal data.

-

FULL Flare 2019 Key


Download File: https://imgfil.com/2uxX0n



-


To better suit the Triton styling, a slimline-style flare was developed, adding 31-33 mm to the panel.
EGR Flares are Australian-made, vacuum-formed from ABS, and robotically trimmed to fit the panels of your MR Triton.

-

The key to stopping an asthma attack is recognizing and treating an asthma flare-up early. Follow the treatment plan you worked out with your doctor ahead of time. Your treatment plan should include what to do when your asthma starts getting worse, and how to deal with an asthma attack in progress.

-

If your asthma flares up, immediately follow the treatment steps you and your doctor worked out in your written asthma plan. If your symptoms and peak expiratory flow (PEF) readings improve, home treatment may be all that's needed. If your symptoms don't improve with home treatment, you may need to seek emergency care.

-

-

When your asthma symptoms flare up, follow your written asthma plan's instructions for using your quick-acting (rescue) inhaler. PEF readings ranging from 51% to 79% of your personal best are a sign you need to use the quick-acting (rescue) medications prescribed by your doctor.

-

Asthma can change over time, so you'll need periodic adjustments to your treatment plan to keep daily symptoms under control. If your asthma isn't well controlled, you're more likely to have an asthma attack. Lingering lung inflammation means your asthma could flare up at any time.

-

Go to all scheduled doctor's appointments. If you have regular asthma flare-ups, or if you have low peak flow readings or other signs your asthma isn't well controlled, make an appointment to see your doctor.

-

For many people, asthma symptoms get worse with respiratory infections, such as those caused by the common cold. Some people have asthma flare-ups caused by something in their work environment. Sometimes, there isn't an apparent cause for an asthma attack.

-

If your asthma symptoms flare up when you have a cold or the flu, take steps to avoid an asthma attack by watching your lung function and symptoms and adjusting your treatment as needed. Be sure to reduce exposure to your allergy triggers, and wear a face mask when exercising in cold weather.

-

Add a bit of flair to messages that you've received by enabling joyful animations. With joyful animations enabled, you'll see a burst of colorful shapes when you open a message that includes words like Happy Birthday or Congratulations.

-

For years, oil and gas companies have struggled with the problem of what to do when they accidentally hit a natural gas formation while drilling for oil. Whereas oil can easily be trucked out to a remote destination, gas delivery requires a pipeline. If a drilling site is right next door to a pipeline, they chuck the gas in and take whatever cash the buyer on the other end is willing to pay that day. But if it's 20 miles from a pipeline, drillers often burn it off, or flare it. That is why you will typically see flames rising from oil fields.

-

Giga places a shipping container full of thousands of bitcoin miners on an oil well, then diverts the natural gas into generators, which convert the gas into electricity that is then used to power the miners. The process reduces CO2-equivalent emissions by about 63% compared to continued flaring, according to research from Denver-based Crusoe Energy Systems.

-

"Growing up, I always saw flares, just being in the oil and gas industry. I knew how wasteful it was," Whitehead told CNBC on the sidelines of the North American Prospect Expo summit in Houston, a flagship event for the industry. "It's a new way to not only lower emissions but to monetize gas."

-

"They are making their clients revenue through stranded energy bitcoin mining and solving the environmental challenge with flared gas at the same time," said Lee Bratcher, president of the Texas Blockchain Council.

-

But flares are only 75 to 90% efficient, explained Adam Ortolf, who heads up business development in the U.S. for Upstream Data. "Even with a flare, some of the methane is being vented without being combusted," he said.

-

Cloudflare, Inc. is an American content delivery network and DDoS mitigation company,[3] founded in 2009. It primarily acts as a reverse proxy between a website's visitor and the Cloudflare customer's hosting provider.[4][5] Its headquarters are in San Francisco, California.[3] According to The Hill, it is used by more than 20 percent of the entire Internet for its web security services.[6]

-
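To make the reverse-proxy idea concrete, here is a minimal, illustrative Python sketch of a reverse proxy that accepts visitor requests and forwards them to a single upstream origin server. The origin address, the ports, and the lack of error handling are assumptions made for the example; Cloudflare's actual edge (caching, TLS termination, DDoS filtering, and more) is far more elaborate.

```python
# Minimal reverse-proxy sketch (illustrative only, not Cloudflare's code).
# Visitors connect to this process, which fetches the page from one
# hypothetical origin server and relays the response back to them.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

ORIGIN = "http://127.0.0.1:8000"  # assumed origin server for the example


class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the visitor's path to the origin (error handling omitted).
        upstream = urlopen(Request(ORIGIN + self.path))
        body = upstream.read()
        # Relay the origin's status code, content type, and body.
        self.send_response(upstream.status)
        self.send_header("Content-Type", upstream.headers.get("Content-Type", "text/html"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Visitors talk to port 8080; the origin behind it is never exposed directly.
    HTTPServer(("0.0.0.0", 8080), ReverseProxyHandler).serve_forever()
```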

Cloudflare was founded in July 2009 by Matthew Prince, Lee Holloway, and Michelle Zatlyn.[1][7][8][9] Prince and Holloway had previously collaborated on Project Honey Pot, a product of Unspam Technologies that served as part of the inspiration for Cloudflare.[10] From 2009, the company was venture-capital funded.[11] On August 15, 2019, Cloudflare submitted its S-1 filing for an IPO on the New York Stock Exchange under the stock ticker NET.[12] It opened for public trading on September 13, 2019, at $15 per share.[13]

-

Cloudflare has acquired web-services and security companies, including StopTheHacker (February 2014),[15] CryptoSeal (June 2014),[16] Eager Platform Co. (December 2016),[17] Neumob (November 2017),[18] S2 Systems (January 2020),[19] Linc (December 2020),[20] Zaraz (December 2021),[21] Vectrix (February 2022),[22] and Area 1 Security (February 2022).[23]

-

Since at least 2017, Cloudflare has been using a wall of lava lamps in its San Francisco headquarters as a source of randomness for encryption keys, alongside double pendulums in its London offices and a Geiger counter in its Singapore offices.[24] The lava lamp installation implements the Lavarand method, where a camera transforms the unpredictable shapes of the "lava" blobs into a digital image.[25][24]

-
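As a rough illustration of the Lavarand idea, the sketch below hashes the raw bytes of a captured frame (here simply an image file on disk, an assumption for the example) and mixes the digest with operating-system randomness to produce seed material. Cloudflare's production entropy pipeline is considerably more involved than this.

```python
# Illustrative Lavarand-style sketch: turn an unpredictable photo into seed
# material for randomness. A simplification, not Cloudflare's actual system.
import hashlib
import secrets


def entropy_from_frame(frame_path: str) -> bytes:
    """Hash the raw bytes of a captured frame into a 32-byte digest."""
    with open(frame_path, "rb") as f:
        frame_bytes = f.read()
    return hashlib.sha256(frame_bytes).digest()


def mixed_seed(frame_path: str) -> bytes:
    """Combine the frame digest with OS randomness, so that a predictable
    photo alone cannot weaken the resulting seed."""
    frame_digest = entropy_from_frame(frame_path)
    os_random = secrets.token_bytes(32)
    return hashlib.sha256(frame_digest + os_random).digest()


if __name__ == "__main__":
    # "lava_frame.jpg" is a hypothetical snapshot of the lamp wall.
    print(mixed_seed("lava_frame.jpg").hex())
```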

In March 2013, The Spamhaus Project was targeted by a DDoS attack that Cloudflare reported exceeded 300 gigabits per second (Gbit/s).[27][28] Patrick Gilmore, of Akamai, stated that at the time it was "the largest publicly announced DDoS attack in the history of the Internet." While trying to defend Spamhaus against the DDoS attacks, Cloudflare ended up being attacked as well; Google and other companies eventually came to Spamhaus' defense and helped it to absorb the unprecedented amount of attack traffic.[29]

-

In February 2014, Cloudflare claimed to have mitigated an NTP reflection attack against an unnamed European customer, which they stated peaked at 400 Gbit/s.[30][31] In November 2014, it reported a 500 Gbit/s DDoS attack in Hong Kong.[32] In June 2020, it mitigated a DDoS attack that peaked at 250 Gbit/s.[33] In July 2021 the company claimed to have absorbed a DDoS attack three times larger than any they'd previously recorded, which their corporate blog implied was over 1.2 Tbit/s in total.[34]

-

In 2017, Cloudflare launched Cloudflare Workers, a serverless computing platform for creating new applications or augmenting existing ones without configuring or maintaining infrastructure. It has since expanded to include Workers KV, a low-latency key-value data store; Cron Triggers, for scheduling cron jobs; and additional tooling for developers to deploy and scale their code across the globe.[37]

-

On September 26, 2022, Cloudflare announced Zero Trust SIM, an eSIM designed to secure mobile devices and prevent SIM-swapping attacks. The technology is based on the zero trust security model. According to Cloudflare, the secure eSIM can also be used as a second identification factor with 2FA verification protocols. The product will be first available in the United States, with a planned global rollout in the future.[45][46]

-

In 2014, Cloudflare began providing free DDoS mitigation for artists, activists, journalists, and human rights groups under the name "Project Galileo."[48] More than 1,000 users and organizations were participating in Project Galileo as of 2020.[49] In 2017, it extended the service to electoral infrastructure and political campaigns under the name "Athenian Project."[50][51][52] In December 2020, Cloudflare released a beta Jamstack platform for front-end developers to deploy websites on Cloudflare's infrastructure, under the name "Pages."[53] In January 2021, the company began providing its "Waiting Room" digital queue product for free for COVID-19 vaccination scheduling under the title "Project Fair Shot."[54] Project Fair Shot later won a Webby People's Choice Award in 2022 for Event Management under the Apps & Software category.[55]

-

In March 2021, Tillie Kottmann from the hacking collective "Advanced Persistent Threat 69420" demonstrated that the group had gained root shell access to security cameras in Cloudflare offices managed by cloud-based physical security company Verkada after obtaining the credentials of a Verkada superuser account that had been leaked on the Internet.[58][59][60][61][62] Cloudflare stated that the compromised cameras were in offices that had been officially closed for several months,[58] though the hacking collective also obtained access to Verkada-operated cameras in Cloudflare's offices in New York City, London, Austin and San Francisco.[58][62] The hacking group told Bloomberg News that it had video archives from all Verkada customers;[58] it accessed footage from Cloudflare's cameras and posted a screenshot of security footage which they said was taken by a Verkada camera in a Cloudflare office.[61][63]

-

From September 2016 until February 2017, a major Cloudflare bug (nicknamed Cloudbleed) leaked sensitive data, including passwords and authentication tokens, from customer websites by sending extra data in response to web requests.[64] The leaks resulted from a buffer overflow which occurred, according to numbers provided by Cloudflare at the time, more than 18,000,000 times before the problem was corrected.[65][66]

aaccfb2cb3
-
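The toy Python sketch below illustrates the general failure mode behind a leak like Cloudbleed: a parser whose end-of-region bound is miscalculated reads past the data it should be confined to, so bytes belonging to other requests sharing the same buffer end up in its output. The buffer contents and the 16-byte overshoot are invented for the example; this is a conceptual model, not the actual parser code involved.

```python
# Toy model of a buffer over-read, the class of bug behind Cloudbleed.
# Several requests share one buffer; a faulty bounds check lets the parser
# copy data past the end of the current request's region.
SHARED_BUFFER = (
    b"<html>page A</html>"          # region for request A (19 bytes)
    b"SECRET-COOKIE-FOR-PAGE-B"     # adjacent data belonging to request B
    b"<html>page C</html>"
)


def parse_region(buf: bytes, start: int, length: int, buggy: bool = False) -> bytes:
    """Return the bytes the parser 'emits' for one request's region.

    With buggy=True the end bound is miscalculated, so bytes from the
    neighbouring request leak into the output.
    """
    end = start + length
    if buggy:
        end += 16  # faulty bounds arithmetic reads past the region
    return buf[start:end]


if __name__ == "__main__":
    print(parse_region(SHARED_BUFFER, 0, 19))              # stays inside page A
    print(parse_region(SHARED_BUFFER, 0, 19, buggy=True))  # leaks part of B's secret
```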
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/CJ APK A New and Improved Card Game from the Creator of CardJacks.md b/spaces/1phancelerku/anime-remove-background/CJ APK A New and Improved Card Game from the Creator of CardJacks.md deleted file mode 100644 index 2bbaabf79ebaca06d29908e628e7f1cdfdd40174..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/CJ APK A New and Improved Card Game from the Creator of CardJacks.md +++ /dev/null @@ -1,94 +0,0 @@ - -

What is CJ APK and How to Download It?

-

Introduction

-

If you are looking for a way to shop online, find products, or play games on your Android device, you might want to check out CJ APK. CJ APK is a term that refers to various Android applications developed by CJ Group, a South Korean conglomerate that operates in sectors such as media, entertainment, retail, logistics, food, and more. In this article, we will explain what CJ APK is, how to download it, and what benefits it can offer you.

-

What is CJ APK?

-

CJ APK is not a single app, but a collection of apps that are related to CJ Group's businesses and services. Some of the most popular CJ APKs are:

-

c j apk


Download File ✏ ✏ ✏ https://jinyurl.com/2uNMTQ



-

CJdropshipping APK

-

CJdropshipping APK is an app that allows you to import products from CJdropshipping.com, a platform that provides dropshipping and fulfillment services for online sellers. You can also source products from 1688 and Taobao, two of the largest e-commerce platforms in China. With CJdropshipping APK, you can easily source products and list them in your online stores, and find thousands of POD (print-on-demand) products available for customization.

-

SHOP CJ Mobile App APK

-

SHOP CJ Mobile App APK is an app that lets you shop for your favorite brands and products effortlessly. You can choose from a wide range of products in home, kitchen, electronics, mobile, tablet, fashion, and other categories. You can also watch live TV shows and videos featuring product demonstrations and reviews. With SHOP CJ Mobile App APK, you can enjoy exclusive deals, discounts, coupons, and rewards.

-

CJ APK

-

CJ APK is a game app that features various characters from CJ Group's media and entertainment businesses. You can play as CJ E&M's singers, actors, comedians, or characters from their TV shows and movies. You can also collect cards, stickers, and badges of your favorite stars. With CJ APK, you can have fun and interact with other fans of CJ Group's content.

-

How to Download CJ APK?

-

If you want to download any of the CJ APKs mentioned above, you can follow these simple steps:

-

Step 1: Choose the CJ APK you want to download

-

Depending on your preferences and needs, you can choose one or more of the CJ APKs available. You can browse through their features and reviews online or ask for recommendations from other users.

-

c j apk download
-c j apk mod
-c j apk game
-c j apk latest version
-c j apk for android
-c j apk free
-c j apk offline
-c j apk online
-c j apk hack
-c j apk update
-c j apk cardjacks
-c j apk full
-c j apk premium
-c j apk pro
-c j apk cracked
-c j apk unlimited money
-c j apk no ads
-c j apk cheats
-c j apk tips and tricks
-c j apk review
-c j apk gameplay
-c j apk tutorial
-c j apk how to play
-c j apk features
-c j apk best settings
-c j apk requirements
-c j apk size
-c j apk rating
-c j apk feedback
-c j apk support
-c j apk developer
-c j apk publisher
-c j apk genre
-c j apk category
-c j apk theme
-c j apk graphics
-c j apk sound
-c j apk music
-c j apk fun factor
-c j apk difficulty
-c j apk strategy
-c j apk challenge
-c j apk multiplayer
-c j apk single player
-c j apk co-op mode
-c j apk leaderboards
-c j apk achievements
-c j apk rewards
-c j apk customization options

-

Step 2: Go to the official website or APKCombo

-

Once you have decided which CJ APK you want to download, you can go to its official website or use a third-party app store like APKCombo. APKCombo is a website that allows you to download free Android apps in various versions and formats. You can also scan QR codes or use direct links to download the apps.

-

Step 3: Click on the download button and install the APK file

-

After you have accessed the website or app store of your choice, you can click on the download button and save the APK file on your device. You may need to enable installation from unknown sources in your settings to allow apps from outside the Play Store to be installed. Once the file is downloaded, you can open it and follow the instructions on the screen to install the app.

-

Step 4: Open the CJ APK and enjoy its features

-

Once the app is installed, you can open it and start using its features. You may need to sign up or log in with your account to access some of the functions. You can also customize your settings and preferences according to your liking.

-

Benefits of Using CJ APK

-

There are many benefits of using CJ APK on your Android device. Some of them are:

-

Access to thousands of products and services

-

With CJ APK, you can access thousands of products and services from CJ Group's businesses and partners. You can find anything you need, from household items and electronics to fashion, beauty, food, and more. You can also enjoy high-quality content from CJ E&M's media and entertainment platforms.

-

Easy and convenient shopping experience

-

With CJ APK, you can shop online with ease and convenience. You can browse through various categories, search for specific products, compare prices, read reviews, watch videos, and more. You can also place orders, track shipments, make payments, and request refunds with just a few clicks. You can also use coupons, discounts, and rewards to save money and get more value for your purchases.

-

Customization and personalization options

-

With CJ APK, you can customize and personalize your app experience. You can choose your preferred language, currency, theme, layout, and more. You can also create your own profile, wishlist, cart, and favorites. You can also design your own products with POD (Print on Demand) features.

-

Conclusion

-

CJ APK is a great way to enjoy CJ Group's products and services on your Android device. You can download various apps that suit your needs and preferences, such as CJdropshipping APK, SHOP CJ Mobile App APK, or CJ APK. You can also benefit from the features and functions of these apps, such as access to thousands of products and services, easy and convenient shopping experience, and customization and personalization options. If you want to try CJ APK for yourself, you can follow the steps above to download and install it on your device.

-

FAQs

-

Here are some frequently asked questions about CJ APK:

- - - - - - - -
| Question | Answer |
| --- | --- |
| Is CJ APK safe to use? | CJ APK is safe to use as long as you download it from the official website or a trusted app store like APKCombo. You should also check the permissions and reviews before installing any app. |
| Is CJ APK free to use? | CJ APK is free to use for most of its features and functions. However, some apps may require in-app purchases or subscriptions for premium content or services. |
| Is CJ APK compatible with my device? | CJ APK is compatible with most Android devices that run on Android 4.1 or higher. However, some apps may have different requirements or specifications depending on their functions. |
| How do I update CJ APK? | You can update CJ APK by checking for updates on the official website or the app store where you downloaded it. You can also enable automatic updates in your settings to get the latest version of the app. |
| How do I contact CJ APK support? | You can contact CJ APK support by visiting the official website or the app store where you downloaded it. You can also find contact information or feedback forms within the app itself. |

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Uplay Client and Join the Ubisoft Community.md b/spaces/1phancelerku/anime-remove-background/Download Uplay Client and Join the Ubisoft Community.md deleted file mode 100644 index 0f7a26508e46893a6f6bee95397aa77cd47b2cef..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Uplay Client and Join the Ubisoft Community.md +++ /dev/null @@ -1,134 +0,0 @@ -
-

How to Download and Use Uplay Client on PC

-

If you are a fan of Ubisoft games, you might have heard of Uplay, a platform that allows you to access, download, play, and connect with other players across all Ubisoft games. In this article, we will show you how to download and use the Uplay client on your PC, as well as some of its features, benefits, games, and subscription options. We will also provide some troubleshooting tips for common Uplay issues on PC.

-

What is Uplay and Why You Need It

-

Uplay is a digital distribution, digital rights management, multiplayer, and communications service developed by Ubisoft to provide an experience similar to the achievements/trophies offered by various other game companies. The service is provided across various platforms, including PC, consoles, mobile devices, and web browsers.

-

uplay client download


Download Zip: https://jinyurl.com/2uNUhJ



-

Uplay serves as a combination of a free reward system (formerly Ubisoft Club) and online profile system for players of Ubisoft games. While playing Ubisoft games, players can complete in-game achievements which earn points towards their profile. They can then redeem these points for in-game content across many Ubisoft games, typically as cosmetic items which can otherwise be purchased through microtransactions. Players can also maintain friend lists which will be used in various games to help with matchmaking or tied with certain in-game features.

-

The Uplay client on personal computers also serves as a storefront and digital download management tool. Players can purchase Ubisoft games through its storefront and manage downloads and updates of games. The client also maintains digital rights management (DRM) for Ubisoft games, and is required to be run for any Ubisoft game, even if the game is purchased on a different storefront such as through Steam or the Epic Games Store.

-

An optional subscription service, Ubisoft+ (formerly Uplay+), allows subscribers to have access to Ubisoft's full library of games as well as immediate access to its newest games and closed beta tests for upcoming games.

-

Uplay Features and Benefits

-

Some of the features and benefits of using Uplay are:

-

How to install Ubisoft Connect on PC
-Ubisoft Connect free download for Windows 10
-Ubisoft games launcher download
-Ubisoft Connect PC app download
-Download Ubisoft Connect for Assassin's Creed Valhalla
-Ubisoft Connect desktop app download
-Ubisoft Connect download error
-Ubisoft Connect download speed
-Ubisoft Connect download location
-Ubisoft Connect download stuck
-How to uninstall Ubisoft Connect on PC
-Ubisoft Connect offline mode download
-Ubisoft Connect cloud save download
-Ubisoft Connect achievements download
-Ubisoft Connect rewards download
-How to update Ubisoft Connect on PC
-Ubisoft Connect beta download
-Ubisoft Connect patch notes download
-Ubisoft Connect game library download
-Ubisoft Connect game codes download
-How to register for Ubisoft Connect on PC
-Ubisoft Connect login download
-Ubisoft Connect account settings download
-Ubisoft Connect profile picture download
-Ubisoft Connect friends list download
-How to link Ubisoft Connect to Steam
-How to link Ubisoft Connect to Epic Games Store
-How to link Ubisoft Connect to Xbox Live
-How to link Ubisoft Connect to PlayStation Network
-How to link Ubisoft Connect to Nintendo Switch
-How to redeem a code on Ubisoft Connect on PC
-How to get 20% off on the Ubisoft Store with Ubisoft Connect
-How to access the Ubisoft+ subscription service on PC with Ubisoft Connect
-How to play free games on PC with Ubisoft Connect
-How to join beta events and public test servers with Ubisoft Connect on PC
-How to check your stats and progression with Ubisoft Connect on PC
-How to get news and updates on your games with Ubisoft Connect on PC
-How to chat and group up with other players with Ubisoft Connect on PC
-How to stream your games with Ubisoft Connect on PC
-How to use the overlay feature with Ubisoft Connect on PC
-How to troubleshoot issues with Ubisoft Connect on PC
-How to contact support for Ubisoft Connect on PC
-How to provide feedback for Ubisoft Connect on PC
-How to change the language of Ubisoft Connect on PC
-How to change the display mode of Ubisoft Connect on PC
-How to change the bandwidth limit of Ubisoft Connect on PC
-How to change the installation folder of Ubisoft Connect on PC
-How to verify game files with Ubisoft Connect on PC
-How to backup and restore game saves with Ubisoft Connect on PC
-How to enable or disable automatic updates with Ubisoft Connect on PC

- -

Uplay Games and Subscription

-

Uplay offers a wide range of games from various genres and franchises, such as Assassin's Creed, Far Cry, Watch Dogs, Tom Clancy's Rainbow Six, Ghost Recon, The Division, Splinter Cell, Prince of Persia, Rayman, Just Dance, and more. You can browse the full catalog of games on the Uplay website or the Uplay client.

-

If you want to enjoy unlimited access to over 100 Ubisoft games on PC, including new releases, classic titles, and premium editions*, you can subscribe to Ubisoft+ for $14.99/month. You can cancel anytime and keep playing with your existing game library. With Ubisoft+, you also get access to exclusive game features, such as advanced editions and additional content. You can also play select games on Stadia or Amazon Luna at no additional cost.

-

*Where premium or special editions of the game are indicated (for example: Ultimate/Gold/Deluxe Editions), editions included in Ubisoft+ may not include all premium content. Offer subject to change.

-

How to Download Uplay Client on PC

-

Downloading and installing the Uplay client on your PC is easy and fast. Just follow these simple steps:

-

Step 1: Visit the Official Website

-

Go to https://uplay.ubisoft.com/ and click on the "Download Uplay" button at the top right corner of the page. This will start downloading the Uplay installer file on your PC.

-

Step 2: Create or Log in to Your Ubisoft Account

-

If you already have a Ubisoft account, you can log in with your email and password. If you don't have one, you can create one for free by clicking on the "Create a Ubisoft account" link below the login form. You will need to provide a valid email address, a password, a username, and agree to the terms of service and privacy policy.

-

Step 3: Download and Install the Uplay Client

-

Once you have logged in or created your account, you can run the Uplay installer file that you downloaded in step 1. Follow the instructions on the screen to choose your language, accept the license agreement, and select your installation folder. The Uplay client will then download and install on your PC.

-

Step 4: Launch the Uplay Client and Start Playing

-

After the installation is complete, you can launch the Uplay client from your desktop shortcut or start menu. You will see your game library, where you can browse, buy, download, update, and play your Ubisoft games. You can also access your profile, rewards, friends list, chat, store, settings, and more from the Uplay client.

-

How to Troubleshoot Uplay Issues on PC

-

Sometimes, you might encounter issues with Uplay on your PC, such as not being able to launch games, connect to servers, or update games or the client. Here are some common problems and solutions that might help you fix them:

-

Common Uplay Problems and Solutions

-

Uplay Not Launching Games

-

If you are unable to launch a game from Uplay on your PC, try these steps:

-
    -
1. Make sure that your PC meets the minimum system requirements for the game.
2. Make sure that your PC has the latest drivers for your graphics card and sound card.
3. Make sure that your antivirus or firewall software is not blocking Uplay or the game.
4. Run Uplay and the game as administrator by right-clicking on their icons and selecting "Run as administrator".
5. Verify the integrity of your game files by going to your game library in Uplay, clicking on the game tile, selecting "Properties", then "Local files", then "Verify files".
6. Delete the cache folder of Uplay by going to C:\Program Files (x86)\Ubisoft\Ubisoft Game Launcher\cache (or wherever you installed Uplay) and deleting the folder.
7. Reinstall Uplay by downloading it from https://uplay.ubisoft.com/ and running the installer file.
8. Contact Ubisoft support if none of the above steps work.
-

Uplay Not Connecting to Servers

-

If you are unable to connect to Uplay servers on your PC, try these steps:

-
    -
1. Make sure that your internet connection is stable and working properly.
2. Make sure that your router or modem is not blocking Uplay or the game ports. You can find the list of ports to forward for Uplay and Ubisoft games here: https://support.ubisoft.com/en-US/faqs/000024619
3. Make sure that your system clock is set to the correct date and time.
4. Restart Uplay and the game.
5. Restart your PC and your router or modem.
6. Contact Ubisoft support if none of the above steps work.
-

Uplay Not Updating Games or Client

-

If you are unable to update your games or Uplay client on your PC, try these steps:

-
    -
1. Make sure that you have enough disk space on your PC for the update.
2. Make sure that your internet connection is stable and fast enough for the update.
3. Make sure that your antivirus or firewall software is not blocking Uplay or the game update.
4. Pause and resume the update in Uplay by clicking on the pause and play buttons next to the progress bar.
5. Change the download region in Uplay by going to Settings, then Downloads, then Download region, and selecting a different region.
6. Delete the temporary files of Uplay by going to C:\Program Files (x86)\Ubisoft\Ubisoft Game Launcher\data (or wherever you installed Uplay) and deleting the files inside the folder.
7. Contact Ubisoft support if none of the above steps work.
-

Conclusion

-

Uplay is a great platform for Ubisoft fans who want to enjoy their games, rewards, features, and community. Downloading and using Uplay on PC is easy and fast, as long as you follow the steps in this article. If you encounter any issues with Uplay on PC, you can try some of the troubleshooting tips we provided, or contact Ubisoft support for further assistance. We hope you have a great time playing Ubisoft games with Uplay!

-

FAQs

-

Here are some frequently asked questions about Uplay on PC:

-

Q: How do I uninstall Uplay from my PC?

-

A: To uninstall Uplay from your PC, go to Control Panel, then Programs and Features, then find and select Ubisoft Connect (formerly Uplay), then click on Uninstall. Follow the instructions on the screen to complete the uninstallation process. Note that uninstalling Uplay will not delete your Ubisoft account or your game progress, but it will delete your game files from your PC. You can reinstall Uplay anytime from https://uplay.ubisoft.com/.

-

Q: How do I link my Steam account to my Ubisoft account?

-

A: To link your Steam account to your Ubisoft account, launch any Ubisoft game from Steam, then log in to your Ubisoft account when prompted by Uplay. This will automatically link your Steam account to your Ubisoft account. You can also link your Steam account manually by going to https://account.ubisoft.com/en-US/linked-accounts, then clicking on Link under Steam, then following the instructions on the screen.

-

Q: How do I cancel my Ubisoft+ subscription?

-

A: To cancel your Ubisoft+ subscription, go to https://store.ubi.com/us/subscription/, then log in to your Ubisoft account, then click on Manage under Your Subscription, then click on Cancel Subscription. You will still have access to Ubisoft+ until the end of your current billing cycle. You can resubscribe anytime from the same page.

-

Q: How do I redeem a code for a game or a reward on Uplay?

-

A: To redeem a code for a game or a reward on Uplay, launch the Uplay client on your PC, then click on the menu icon at the top left corner of the screen, then select Activate a key. Enter your code in the field provided, then click on Activate. Your game or reward will be added to your library or inventory.

-

Q: How do I contact Ubisoft support?

-

A: To contact Ubisoft support, go to https://support.ubisoft.com/en-US/, then select your game or product, then select your issue category and subcategory, then click on Contact Us. You can then choose to chat with a live agent, submit a web ticket, or browse the FAQs and forums for more help.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/AB-TW/team-ai/README.md b/spaces/AB-TW/team-ai/README.md deleted file mode 100644 index d9afeb40f86829971c20c8bf8eae4ebe9bd039b4..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/README.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Chat Robot -emoji: 📊 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -## 环境依赖 -python3 - -## 安装依赖 - -```shell -pip install -r requirments -python -m spacy download zh_core_web_sm -``` - -## 运行命令 - -```shell -export OPENAI_API_KEY=sk-... -python -m app.py -``` \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/gigaspeech/extract.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/gigaspeech/extract.py deleted file mode 100644 index 10210b089d81065aad46a3ffcb483e0645553e84..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/gigaspeech/extract.py +++ /dev/null @@ -1,76 +0,0 @@ -import os -import json -import tqdm - -from utils.commons.multiprocess_utils import multiprocess_run_tqdm -from functools import partial - -# def process_segment0(segment, opus_path, audio_out_dir, audio_id): -# segment_id = segment['sid'] -# item_name = segment_id -# begin_time = segment['begin_time'] -# end_time = segment['end_time'] -# out_wav_path = os.path.join(audio_out_dir, segment_id+'.wav') -# text = segment['text_tn'] -# text = text.replace("", ",") -# text = text.replace("", ".") -# text = text.replace("", "?") -# text = text.replace("", "!") -# text = text.lower() -# item_meta = {'item_name': item_name, 'wav_fn': out_wav_path, 'txt': text, 'spk_name': audio_id} -# return item_meta - -def process_segment(segment, opus_path, audio_out_dir, audio_id): - segment_id = segment['sid'] - item_name = segment_id - begin_time = segment['begin_time'] - end_time = segment['end_time'] - out_wav_path = os.path.join(audio_out_dir, segment_id+'.wav') - if os.path.exists(out_wav_path): - return - cmd = f'ffmpeg -v quiet -y -i {opus_path} -ac 1 -ar 16000 -ss {begin_time} -to {end_time} {out_wav_path}' - os.system(cmd) - text = segment['text_tn'] - text = text.replace("", ",") - text = text.replace("", ".") - text = text.replace("", "?") - text = text.replace("", "!") - text = text.lower() - item_meta = {'item_name': item_name, 'wav_fn': out_wav_path, 'txt': text, 'spk_name': audio_id} - return item_meta - -giga_root_dir = '/home/yezhenhui/datasets/raw/GigaSpeech/' -giga_out_dir = '/home/yezhenhui/datasets/raw/GigaSpeech_extract/' -os.makedirs(giga_out_dir, exist_ok=True) - -with open(f'{giga_root_dir}/GigaSpeech.json', 'r') as injson: - json_data = json.load(injson) - -meta = [] -out_meta_name = os.path.join(giga_out_dir, 'meta.json') - -audio_corpus = json_data['audios'] # list of dict, length 38131 - -args = [] -for audio_source in tqdm.tqdm(audio_corpus, total=len(audio_corpus), desc='loading the args'): - audio_id = audio_source['aid'] - subset = audio_source['subsets'] - audio_path = audio_source['path'] - opus_path = os.path.join(giga_root_dir, audio_path) - audio_out_dir = os.path.join(giga_out_dir, os.path.dirname(audio_path), audio_id) - os.makedirs(audio_out_dir, exist_ok=True) - segments = audio_source['segments'] - spk_name = audio_id - args += [{'segment': segment, 'opus_path': opus_path, 'audio_out_dir': audio_out_dir, 
'audio_id': audio_id} for segment in segments] - -# for segment_meta in multiprocess_run_tqdm(process_segment0, args, desc='extracting...'): -# meta += segment_meta - -# with open(out_meta_name, 'w') as f: -# json.dump(meta, f) -# print("successful!") - -for segment_meta in multiprocess_run_tqdm(process_segment, args, num_workers=32, desc='extracting...'): - pass - - diff --git a/spaces/AIWaves/SOP_Generation-single/Prompt/__init__.py b/spaces/AIWaves/SOP_Generation-single/Prompt/__init__.py deleted file mode 100644 index da69c35ed2c4ec583721339c324a53d5622429d1..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Prompt/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .base_Prompts import * \ No newline at end of file diff --git a/spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/app.py b/spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/app.py deleted file mode 100644 index fce91f44b5f6858d0571cc4fc4655932fbb4899d..0000000000000000000000000000000000000000 --- a/spaces/AIZeroToHero/03-NLP-MLM-SOTA-MedEntity/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr -title = "Medical Entity Mask Language Modeling (MLM)" -description = "Medical Entity Feature Extraction uses Match Language Modeling to fill in the blank with likely word classification based on context." -article = "

" -examples = [ - ["Scientific breakthroughs in treatment of HIV/AIDS may be solved in our lifetime using a procedure called [MASK] modulation which strengthens the immune system to fight the disease."],["A disease called [MASK] disease involves progressive memory loss and has new treatments to improve memory and delay progression of the disease."],["[MASK] refers to the uncontrolled growth of abnormal cells in the body. With chemotherapy and radiation therapy have improvements and replacements that destroy cancer cells before they become resistant to current treatment methods."],["The hereditary disease [MASK] is caused by mucus abnormally thick preventing lungs and pancreas from doing their jobs correctly."],["[MASK] or atherosclerosis is the buildup of cholesterol, fatty cells, and inflammatory deposits in the arteries. Stem cells, mechanical devices, and lowering cholesterol and blood pressure levels are helping prevention."] -] - -gr.Interface.load("huggingface/ajitrajasekharan/biomedical",title=title,description=description,article=article, examples=examples).launch() \ No newline at end of file diff --git a/spaces/Abhaykoul/HelpingAI-t2/app.py b/spaces/Abhaykoul/HelpingAI-t2/app.py deleted file mode 100644 index 7e8df7a2f8fee51f239f37cd09f2138440462fa5..0000000000000000000000000000000000000000 --- a/spaces/Abhaykoul/HelpingAI-t2/app.py +++ /dev/null @@ -1,101 +0,0 @@ -from huggingface_hub import InferenceClient -import gradio as gr -import random - -API_URL = "https://api-inference.huggingface.co/models/" - - -client = InferenceClient( - "mistralai/Mistral-7B-Instruct-v0.1" -) - -def format_prompt(message, history, custom_instructions=None): - prompt = "" - for user_prompt, bot_response in history: - prompt += f"[INST] {user_prompt} [/INST]" - prompt += f" {bot_response} " - if custom_instructions: - prompt += f"[INST] {custom_instructions} [/INST]" - prompt += f"[INST] {message} [/INST]" - return prompt - -def generate(prompt, history, temperature=0.9, max_new_tokens=512, top_p=0.95, repetition_penalty=1.0): - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - generate_kwargs = dict( - temperature=temperature, - max_new_tokens=max_new_tokens, - top_p=top_p, - repetition_penalty=repetition_penalty, - do_sample=True, - seed=random.randint(0, 10**7), - ) - - custom_instructions = "Hello! I am HelpingAI. I am developed by Abhay Koul, who is 16 years old and lives in KP Colony, Vessu, Anantnag. Abhay Koul is currently working on more AI and AI-related projects. Certainly! If you're looking for more information about Abhay Koul's projects and HelpingAI, I would recommend joining the Discord community. You can find additional details there. He has been working on AI from Jan 2023. If you want more details, you can join the Discord community. The main function of HelpingAI is to make learning and education easier and more convenient for students. My purpose is to simplify your learning journey by providing personalized assistance, innovative teaching methods, and tailored resources to meet your unique needs. I work on HelpingAI model which is also developed by Abhay Koul also I am not opensource AI I am here to make your educational experience more enjoyable and effective. Feel free to ask me any questions or let me know how I can assist you in your learning adventure and in many more things from your life. Also, HelpingAI was initially developed for S.U.P.E.R.B.O.T. 
and vortexAI, for more info visit: https://github.com/HelpingAI, https://replit.com/@Devastation-war, join Discord https://discord.gg/2EeZcJjyRd." - - formatted_prompt = format_prompt(prompt, history, custom_instructions) - - stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False) - output = "" - - for response in stream: - output += response.token.text - yield output - return output - -additional_inputs = [ - gr.Slider( - label="Temperature", - value=0.9, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=512, - minimum=64, - maximum=1024, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.90, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.2, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) -] - -customCSS = """ -#component-7 { # this is the default element ID of the chat component - height: 800px; # adjust the height as needed - flex-grow: 1; -} -""" - -with gr.Blocks(css=customCSS) as demo: - gr.ChatInterface( - generate, - additional_inputs=additional_inputs, - ) - -demo.queue().launch(debug=True) diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/hljs.css b/spaces/AchyuthGamer/OpenGPT/client/css/hljs.css deleted file mode 100644 index 1fcf16ba358a7c5d287b1c6e33c3afbfff38f623..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/hljs.css +++ /dev/null @@ -1,68 +0,0 @@ -.hljs { - color: #e9e9f4; - background: #28293629; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); - font-size: 15px; - word-wrap: break-word; - white-space: pre-wrap; -} - -/* style for hljs copy */ -.hljs-copy-wrapper { - position: relative; - overflow: hidden; -} - -.hljs-copy-wrapper:hover .hljs-copy-button, -.hljs-copy-button:focus { - transform: translateX(0); -} - -.hljs-copy-button { - position: absolute; - transform: translateX(calc(100% + 1.125em)); - top: 1em; - right: 1em; - width: 2rem; - height: 2rem; - text-indent: -9999px; - color: #fff; - border-radius: 0.25rem; - border: 1px solid #ffffff22; - background-color: #2d2b57; - background-image: url('data:image/svg+xml;utf-8,'); - background-repeat: no-repeat; - background-position: center; - transition: background-color 200ms ease, transform 200ms ease-out; -} - -.hljs-copy-button:hover { - border-color: #ffffff44; -} - -.hljs-copy-button:active { - border-color: #ffffff66; -} - -.hljs-copy-button[data-copied="true"] { - text-indent: 0; - width: auto; - background-image: none; -} - -.hljs-copy-alert { - clip: rect(0 0 0 0); - clip-path: inset(50%); - height: 1px; - overflow: hidden; - position: absolute; - white-space: nowrap; - width: 1px; -} - -@media (prefers-reduced-motion) { - .hljs-copy-button { - transition: none; - } -} diff --git a/spaces/AdVisual/MaskCut/app.py b/spaces/AdVisual/MaskCut/app.py deleted file mode 100644 index cb6edf041f5f5c47337943a6f41d9b4526b8ce66..0000000000000000000000000000000000000000 --- a/spaces/AdVisual/MaskCut/app.py +++ /dev/null @@ -1,293 +0,0 @@ -#!/usr/bin/env python - -from fastapi import FastAPI, WebSocket, WebSocketDisconnect -from fastapi.middleware.cors import CORSMiddleware -from fastapi.logger import 
logger - -# General -from config import CONFIG -from pydantic import BaseModel -from PIL import Image - -# Connection Manager -from connectionManager import ConnectionManager - -# CutLER Model -from model import Model -import base64 -from io import BytesIO -from predict import predict - -# Stable Diffusion Inpainting Model -# from diffusers import StableDiffusionInpaintPipeline - -# About -import torch -import os -import sys - -# Server API -import uvicorn - -app = FastAPI( - title="AdVisual Model Hosting", - description="Description of the ML Model", - version="0.0.1", - terms_of_service=None, - contact=None, - license_info=None, - docs_url="/", -) - -# Allow CORS for local debugging -if CONFIG['ENV'] == 'development': - app.add_middleware(CORSMiddleware, allow_origins=["*"]) -else: - app.add_middleware(CORSMiddleware, allow_origins=["https://advisual.io"]) - -@app.on_event("startup") -async def startup_event(): - """ - Initialize FastAPI and add variables - """ - - logger.info('Running envirnoment: {}'.format(CONFIG['ENV'])) - logger.info('PyTorch using device: {}'.format(CONFIG['DEVICE'])) - - # Initialize the CutLER model - model = Model(CONFIG['DEVICE']) - - # Initialize the stable-diffusion-inpainting model - # pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting", safety_checker=None) - - # pipe.to(CONFIG['DEVICE']) - - # Initialize the connection manager - connectionManager = ConnectionManager() - - # add model and other preprocess tools too app state - app.package = { - "model": model, - "connectionManager": connectionManager, - # "pipe": pipe - } - -@app.get("/ping") -def ping(): - return {"ok": True, "message": "Pong"} - -@app.get("/about") -def show_about(): - """ - Get deployment information, for debugging - """ - - logger.info('API /about called') - - def bash(command): - output = os.popen(command).read() - return output - - return { - "sys.version": sys.version, - "torch.__version__": torch.__version__, - "torch.cuda.is_available()": torch.cuda.is_available(), - "torch.version.cuda": torch.version.cuda, - "torch.backends.cudnn.version()": torch.backends.cudnn.version(), - "torch.backends.cudnn.enabled": torch.backends.cudnn.enabled, - "nvidia-smi": bash('nvidia-smi') - } - -# def resize_image(img, height=512, width=512): -# '''Resize image to `size`''' - -# size = (width, height) - -# img_resized = img.resize(size, Image.ANTIALIAS) -# return img_resized - -# def crop_image(img, d=64): -# '''Make dimensions divisible by `d`''' - -# new_size = (img.size[0] - img.size[0] % d, -# img.size[1] - img.size[1] % d) - -# bbox = [ -# int((img.size[0] - new_size[0])/2), -# int((img.size[1] - new_size[1])/2), -# int((img.size[0] + new_size[0])/2), -# int((img.size[1] + new_size[1])/2), -# ] - -# img_cropped = img.crop(bbox) -# return img_cropped - -# class InpaintBody(BaseModel): -# image: str -# mask: str -# prompt: str - -# @app.post("/inpaint") -# async def do_inpaint(body: InpaintBody): -# """ -# Perform inpainting on input data -# """ - -# logger.info('API inpaint called') -# image_data = body.image -# mask_data = body.mask -# prompt = body.prompt - -# # Extract base64 from mask and convert to PIL.Image -# if (',' in image_data): -# image = Image.open(BytesIO(base64.b64decode(image_data.split(',')[1]))) -# else: -# image = Image.open(BytesIO(base64.b64decode(image_data))) - -# # Extract base64 from mask and convert to PIL.Image -# if (',' in mask_data): -# mask = Image.open(BytesIO(base64.b64decode(mask_data.split(',')[1]))) -# else: -# 
mask = Image.open(BytesIO(base64.b64decode(mask_data))) - -# # Resize image and mask to 512x512 -# image = crop_image(resize_image(image, 512, 512)) -# mask = crop_image(resize_image(image, 512, 512)) - -# pipe = app.package.get('pipe') -# result = pipe(prompt=prompt, image=image, mask_image=mask, num_inference_steps=10, num_images_per_prompt=1) -# images = result['images'] -# return images - - -class ImageBody(BaseModel): - image: str - threshold: float = 0.15 - num_objects: int = 1 - -@app.post("/predict") -async def do_predict(body: ImageBody): - """ - Perform prediction on input data - """ - - logger.info('API predict called') - - image: str = body.image - threshold: float = body.threshold - num_objects: int = body.num_objects - - # Run the algorithm - result = predict(app.package, image, threshold, num_objects) - - # Convert the result to base64 and send the json back - buffered = BytesIO() - result.save(buffered, format="JPEG") - img_str = 'data:image/jpeg;base64,' + base64.b64encode(buffered.getvalue()).decode("utf-8") - - return {"ok": True, "status": "FINISHED", "result": img_str} - - -# @app.websocket("/ws-inpaint") -# async def inpaint_websocket_endpoint(websocket: WebSocket): -# connectionManager = app.package.get('connectionManager') -# await connectionManager.connect(websocket) -# await connectionManager.send_json({"ok": True, "status": "CONNECTED"}, websocket) -# while True: -# try: -# data: ImageBody = await connectionManager.receive_json(websocket) -# if (data is None): -# # Wait for data -# if not connectionManager.isConnected(websocket): -# break -# if connectionManager.shouldDisconnect(websocket): -# await websocket.close() -# connectionManager.disconnect(websocket) -# break -# continue - -# image_data: str = data.get('image') -# mask_data: str = data.get('mask') -# prompt: str = data.get('prompt') - -# await connectionManager.send_json({"ok": True, "status": "STARTED"}, websocket) - -# # Extract base64 from mask and convert to PIL.Image -# if (',' in image_data): -# image = Image.open(BytesIO(base64.b64decode(image_data.split(',')[1]))) -# else: -# image = Image.open(BytesIO(base64.b64decode(image_data))) - -# # Extract base64 from mask and convert to PIL.Image -# if (',' in mask_data): -# mask = Image.open(BytesIO(base64.b64decode(mask_data.split(',')[1]))) -# else: -# mask = Image.open(BytesIO(base64.b64decode(mask_data))) - -# # Resize image and mask to 512x512 -# image = crop_image(resize_image(image, 512, 512)) -# mask = crop_image(resize_image(image, 512, 512)) - -# pipe = app.package.get('pipe') -# result = pipe(prompt=prompt, image=image, mask_image=mask, num_inference_steps=20, num_images_per_prompt=1) -# images = result['images'] - -# # Convert the result to base64 and send the json back -# result_array = [] -# for image in images: -# buffered = BytesIO() -# image.save(buffered, format="JPEG") -# img_str = 'data:image/jpeg;base64,' + base64.b64encode(buffered.getvalue()).decode("utf-8") -# result_array.append(img_str) - -# await connectionManager.send_json({"ok": True, "status": "FINISHED", "result": result_array}, websocket) - -# await websocket.close() -# connectionManager.disconnect(websocket) -# except WebSocketDisconnect: -# connectionManager.disconnect(websocket) -# break - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - connectionManager = app.package.get('connectionManager') - await connectionManager.connect(websocket) - await connectionManager.send_json({"ok": True, "status": "CONNECTED"}, websocket) - while True: - try: 
- data: ImageBody = await connectionManager.receive_json(websocket) - if (data is None): - # Wait for data - if not connectionManager.isConnected(websocket): - break - if connectionManager.shouldDisconnect(websocket): - await websocket.close() - connectionManager.disconnect(websocket) - break - continue - - image: str = data.get('image') - threshold: float = data.get('threshold') or 0.15 - num_objects: int = data.get('num_objects') or 1 - - await connectionManager.send_json({"ok": True, "status": "STARTED"}, websocket) - - # Run the algorithm - result = predict(app.package, image, threshold, num_objects) - - # Convert the result to base64 and send the json back - buffered = BytesIO() - result.save(buffered, format="JPEG") - img_str = 'data:image/jpeg;base64,' + base64.b64encode(buffered.getvalue()).decode("utf-8") - - await connectionManager.send_json({"ok": True, "status": "FINISHED", "result": img_str}, websocket) - - await websocket.close() - connectionManager.disconnect(websocket) - except WebSocketDisconnect: - connectionManager.disconnect(websocket) - break - -if __name__ == '__main__': - # server api - uvicorn.run("app:app", host="0.0.0.0", port=7860, reload=True) diff --git a/spaces/Adapter/CoAdapter/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/Adapter/CoAdapter/ldm/models/diffusion/dpm_solver/dpm_solver.py deleted file mode 100644 index 23ebfebf167a6c16f3b57e09d491998c4adf68db..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/models/diffusion/dpm_solver/dpm_solver.py +++ /dev/null @@ -1,1217 +0,0 @@ -import torch -import torch.nn.functional as F -import math -from tqdm import tqdm - - -class NoiseScheduleVP: - def __init__( - self, - schedule='discrete', - betas=None, - alphas_cumprod=None, - continuous_beta_0=0.1, - continuous_beta_1=20., - ): - """Create a wrapper class for the forward SDE (VP type). - - *** - Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t. - We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images. - *** - - The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ). - We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper). - Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have: - - log_alpha_t = self.marginal_log_mean_coeff(t) - sigma_t = self.marginal_std(t) - lambda_t = self.marginal_lambda(t) - - Moreover, as lambda(t) is an invertible function, we also support its inverse function: - - t = self.inverse_lambda(lambda_t) - - =============================================================== - - We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]). - - 1. For discrete-time DPMs: - - For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by: - t_i = (i + 1) / N - e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1. - We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3. - - Args: - betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details) - alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details) - - Note that we always have alphas_cumprod = cumprod(betas). 
Therefore, we only need to set one of `betas` and `alphas_cumprod`. - - **Important**: Please pay special attention for the args for `alphas_cumprod`: - The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that - q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ). - Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have - alpha_{t_n} = \sqrt{\hat{alpha_n}}, - and - log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}). - - - 2. For continuous-time DPMs: - - We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise - schedule are the default settings in DDPM and improved-DDPM: - - Args: - beta_min: A `float` number. The smallest beta for the linear schedule. - beta_max: A `float` number. The largest beta for the linear schedule. - cosine_s: A `float` number. The hyperparameter in the cosine schedule. - cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule. - T: A `float` number. The ending time of the forward process. - - =============================================================== - - Args: - schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs, - 'linear' or 'cosine' for continuous-time DPMs. - Returns: - A wrapper object of the forward SDE (VP type). - - =============================================================== - - Example: - - # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', betas=betas) - - # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod) - - # For continuous-time DPMs (VPSDE), linear schedule: - >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.) - - """ - - if schedule not in ['discrete', 'linear', 'cosine']: - raise ValueError( - "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format( - schedule)) - - self.schedule = schedule - if schedule == 'discrete': - if betas is not None: - log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0) - else: - assert alphas_cumprod is not None - log_alphas = 0.5 * torch.log(alphas_cumprod) - self.total_N = len(log_alphas) - self.T = 1. - self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)) - self.log_alpha_array = log_alphas.reshape((1, -1,)) - else: - self.total_N = 1000 - self.beta_0 = continuous_beta_0 - self.beta_1 = continuous_beta_1 - self.cosine_s = 0.008 - self.cosine_beta_max = 999. - self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.)) - self.schedule = schedule - if schedule == 'cosine': - # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T. - # Note that T = 0.9946 may be not the optimal setting. However, we find it works well. - self.T = 0.9946 - else: - self.T = 1. - - def marginal_log_mean_coeff(self, t): - """ - Compute log(alpha_t) of a given continuous-time label t in [0, T]. 
- """ - if self.schedule == 'discrete': - return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), - self.log_alpha_array.to(t.device)).reshape((-1)) - elif self.schedule == 'linear': - return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0 - elif self.schedule == 'cosine': - log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.)) - log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0 - return log_alpha_t - - def marginal_alpha(self, t): - """ - Compute alpha_t of a given continuous-time label t in [0, T]. - """ - return torch.exp(self.marginal_log_mean_coeff(t)) - - def marginal_std(self, t): - """ - Compute sigma_t of a given continuous-time label t in [0, T]. - """ - return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t))) - - def marginal_lambda(self, t): - """ - Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T]. - """ - log_mean_coeff = self.marginal_log_mean_coeff(t) - log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff)) - return log_mean_coeff - log_std - - def inverse_lambda(self, lamb): - """ - Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t. - """ - if self.schedule == 'linear': - tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - Delta = self.beta_0 ** 2 + tmp - return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0) - elif self.schedule == 'discrete': - log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb) - t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), - torch.flip(self.t_array.to(lamb.device), [1])) - return t.reshape((-1,)) - else: - log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - t = t_fn(log_alpha) - return t - - -def model_wrapper( - model, - noise_schedule, - model_type="noise", - model_kwargs={}, - guidance_type="uncond", - condition=None, - unconditional_condition=None, - guidance_scale=1., - classifier_fn=None, - classifier_kwargs={}, -): - """Create a wrapper function for the noise prediction model. - - DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to - firstly wrap the model function to a noise prediction model that accepts the continuous time as the input. - - We support four types of the diffusion model by setting `model_type`: - - 1. "noise": noise prediction model. (Trained by predicting noise). - - 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0). - - 3. "v": velocity prediction model. (Trained by predicting the velocity). - The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2]. - - [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models." - arXiv preprint arXiv:2202.00512 (2022). - [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models." - arXiv preprint arXiv:2210.02303 (2022). - - 4. "score": marginal score function. (Trained by denoising score matching). 
- Note that the score function and the noise prediction model follows a simple relationship: - ``` - noise(x_t, t) = -sigma_t * score(x_t, t) - ``` - - We support three types of guided sampling by DPMs by setting `guidance_type`: - 1. "uncond": unconditional sampling by DPMs. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - The input `classifier_fn` has the following format: - `` - classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond) - `` - - [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis," - in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794. - - 3. "classifier-free": classifier-free guidance sampling by conditional DPMs. - The input `model` has the following format: - `` - model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score - `` - And if cond == `unconditional_condition`, the model output is the unconditional DPM output. - - [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." - arXiv preprint arXiv:2207.12598 (2022). - - - The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999) - or continuous-time labels (i.e. epsilon to T). - - We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise: - `` - def model_fn(x, t_continuous) -> noise: - t_input = get_model_input_time(t_continuous) - return noise_pred(model, x, t_input, **model_kwargs) - `` - where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver. - - =============================================================== - - Args: - model: A diffusion model with the corresponding format described above. - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - model_type: A `str`. The parameterization type of the diffusion model. - "noise" or "x_start" or "v" or "score". - model_kwargs: A `dict`. A dict for the other inputs of the model function. - guidance_type: A `str`. The type of the guidance for sampling. - "uncond" or "classifier" or "classifier-free". - condition: A pytorch tensor. The condition for the guided sampling. - Only used for "classifier" or "classifier-free" guidance type. - unconditional_condition: A pytorch tensor. The condition for the unconditional sampling. - Only used for "classifier-free" guidance type. - guidance_scale: A `float`. The scale for the guided sampling. - classifier_fn: A classifier function. Only used for the classifier guidance. - classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function. - Returns: - A noise prediction model that accepts the noised data and the continuous time as the inputs. - """ - - def get_model_input_time(t_continuous): - """ - Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time. - For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N]. - For continuous-time DPMs, we just use `t_continuous`. - """ - if noise_schedule.schedule == 'discrete': - return (t_continuous - 1. / noise_schedule.total_N) * 1000. 
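-            # e.g. with total_N = 1000, t_continuous = 1 / 1000 maps to t_input = 0
-            # and t_continuous = 1 maps to t_input = 999, matching the discrete labels 0, ..., 999.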
- else: - return t_continuous - - def noise_pred_fn(x, t_continuous, cond=None): - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - t_input = get_model_input_time(t_continuous) - if cond is None: - output = model(x, t_input, **model_kwargs) - else: - output = model(x, t_input, cond, **model_kwargs) - if model_type == "noise": - return output - elif model_type == "x_start": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims) - elif model_type == "v": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x - elif model_type == "score": - sigma_t = noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return -expand_dims(sigma_t, dims) * output - - def cond_grad_fn(x, t_input): - """ - Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t). - """ - with torch.enable_grad(): - x_in = x.detach().requires_grad_(True) - log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs) - return torch.autograd.grad(log_prob.sum(), x_in)[0] - - def model_fn(x, t_continuous): - """ - The noise predicition model function that is used for DPM-Solver. - """ - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - if guidance_type == "uncond": - return noise_pred_fn(x, t_continuous) - elif guidance_type == "classifier": - assert classifier_fn is not None - t_input = get_model_input_time(t_continuous) - cond_grad = cond_grad_fn(x, t_input) - sigma_t = noise_schedule.marginal_std(t_continuous) - noise = noise_pred_fn(x, t_continuous) - return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad - elif guidance_type == "classifier-free": - if guidance_scale == 1. or unconditional_condition is None: - return noise_pred_fn(x, t_continuous, cond=condition) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t_continuous] * 2) - c_in = torch.cat([unconditional_condition, condition]) - noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2) - return noise_uncond + guidance_scale * (noise - noise_uncond) - - assert model_type in ["noise", "x_start", "v"] - assert guidance_type in ["uncond", "classifier", "classifier-free"] - return model_fn - - -class DPM_Solver: - def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.): - """Construct a DPM-Solver. - - We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0"). - If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver). - If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++). - In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True. - The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales. - - Args: - model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]): - `` - def model_fn(x, t_continuous): - return noise - `` - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model. 
- thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1]. - max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding. - - [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b. - """ - self.model = model_fn - self.noise_schedule = noise_schedule - self.predict_x0 = predict_x0 - self.thresholding = thresholding - self.max_val = max_val - - def noise_prediction_fn(self, x, t): - """ - Return the noise prediction model. - """ - return self.model(x, t) - - def data_prediction_fn(self, x, t): - """ - Return the data prediction model (with thresholding). - """ - noise = self.noise_prediction_fn(x, t) - dims = x.dim() - alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t) - x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims) - if self.thresholding: - p = 0.995 # A hyperparameter in the paper of "Imagen" [1]. - s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1) - s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims) - x0 = torch.clamp(x0, -s, s) / s - return x0 - - def model_fn(self, x, t): - """ - Convert the model to the noise prediction model or the data prediction model. - """ - if self.predict_x0: - return self.data_prediction_fn(x, t) - else: - return self.noise_prediction_fn(x, t) - - def get_time_steps(self, skip_type, t_T, t_0, N, device): - """Compute the intermediate time steps for sampling. - - Args: - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - N: A `int`. The total number of the spacing of the time steps. - device: A torch device. - Returns: - A pytorch tensor of the time steps, with the shape (N + 1,). - """ - if skip_type == 'logSNR': - lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device)) - lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device)) - logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device) - return self.noise_schedule.inverse_lambda(logSNR_steps) - elif skip_type == 'time_uniform': - return torch.linspace(t_T, t_0, N + 1).to(device) - elif skip_type == 'time_quadratic': - t_order = 2 - t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device) - return t - else: - raise ValueError( - "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type)) - - def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): - """ - Get the order of each step for sampling by the singlestep DPM-Solver. - - We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". 
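-        (For example, `steps == 6` with `order == 3` yields the step orders [3, 2, 1], i.e. exactly 6 function evaluations; the general rule is listed below.)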
- Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - - If order == 1: - We take `steps` of DPM-Solver-1 (i.e. DDIM). - - If order == 2: - - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of DPM-Solver-2. - - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If order == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. - - ============================================ - Args: - order: A `int`. The max order for the solver (2 or 3). - steps: A `int`. The total number of function evaluations (NFE). - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - device: A torch device. - Returns: - orders: A list of the solver order of each step. - """ - if order == 3: - K = steps // 3 + 1 - if steps % 3 == 0: - orders = [3, ] * (K - 2) + [2, 1] - elif steps % 3 == 1: - orders = [3, ] * (K - 1) + [1] - else: - orders = [3, ] * (K - 1) + [2] - elif order == 2: - if steps % 2 == 0: - K = steps // 2 - orders = [2, ] * K - else: - K = steps // 2 + 1 - orders = [2, ] * (K - 1) + [1] - elif order == 1: - K = 1 - orders = [1, ] * steps - else: - raise ValueError("'order' must be '1' or '2' or '3'.") - if skip_type == 'logSNR': - # To reproduce the results in DPM-Solver paper - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) - else: - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ - torch.cumsum(torch.tensor([0, ] + orders)).to(device)] - return timesteps_outer, orders - - def denoise_to_zero_fn(self, x, s): - """ - Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization. - """ - return self.data_prediction_fn(x, s) - - def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False): - """ - DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
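-        In the noise-prediction mode this is the exponential-integrator step
-        x_t = (alpha_t / alpha_s) * x - sigma_t * (exp(h) - 1) * model_s, i.e. a DDIM step.
-        A minimal usage sketch, assuming `dpm_solver` is a DPM_Solver and `x` a batch of samples at time 1.0:
-        >>> s = torch.full((x.shape[0],), 1.0).to(x)
-        >>> t = torch.full((x.shape[0],), 0.5).to(x)
-        >>> x_t = dpm_solver.dpm_solver_first_update(x, s, t)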
- """ - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - if self.predict_x0: - phi_1 = torch.expm1(-h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - else: - phi_1 = torch.expm1(h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - - def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, - solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-2 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the second-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 0.5 - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - s1 = ns.inverse_lambda(lambda_s1) - log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff( - s1), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t) - alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_1 = torch.expm1(-h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) 
/ h + 1.), dims) * ( - model_s1 - model_s) - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_1 = torch.expm1(h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s) - ) - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1} - else: - return x_t - - def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None, - return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-3 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`). - If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 1. / 3. - if r2 is None: - r2 = 2. / 3. - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - lambda_s2 = lambda_s + r2 * h - s1 = ns.inverse_lambda(lambda_s1) - s2 = ns.inverse_lambda(lambda_s2) - log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff( - s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std( - s2), ns.marginal_std(t) - alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_12 = torch.expm1(-r2 * h) - phi_1 = torch.expm1(-h) - phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1. - phi_2 = phi_1 / h + 1. 
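-            # phi_1, phi_2 and phi_3 (defined just below) are the first-, second- and
-            # third-order exponential-integrator coefficients for the predict_x0 branch.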
- phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(sigma_s2 / sigma_s, dims) * x - - expand_dims(alpha_s2 * phi_12, dims) * model_s - + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + expand_dims(alpha_t * phi_2, dims) * D1 - - expand_dims(alpha_t * phi_3, dims) * D2 - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_12 = torch.expm1(r2 * h) - phi_1 = torch.expm1(h) - phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1. - phi_2 = phi_1 / h - 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x - - expand_dims(sigma_s2 * phi_12, dims) * model_s - - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - expand_dims(sigma_t * phi_2, dims) * D1 - - expand_dims(sigma_t * phi_3, dims) * D2 - ) - - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2} - else: - return x_t - - def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"): - """ - Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
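-        The second-order correction uses the finite difference D1_0 = (model_prev_0 - model_prev_1) / r0 of the two cached model evaluations, so no extra model call is needed here.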
- """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - ns = self.noise_schedule - dims = x.dim() - model_prev_1, model_prev_0 = model_prev_list - t_prev_1, t_prev_0 = t_prev_list - lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda( - t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0 = h_0 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - if self.predict_x0: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0 - ) - else: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0 - ) - return x_t - - def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'): - """ - Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - model_prev_2, model_prev_1, model_prev_0 = model_prev_list - t_prev_2, t_prev_1, t_prev_0 = t_prev_list - lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda( - t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_1 = lambda_prev_1 - lambda_prev_2 - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0, r1 = h_0 / h, h_1 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2) - D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1) - D2 = expand_dims(1. 
/ (r0 + r1), dims) * (D1_0 - D1_1) - if self.predict_x0: - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1 - - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2 - ) - else: - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1 - - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2 - ) - return x_t - - def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, - r2=None): - """ - Singlestep DPM-Solver with the order `order` from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - r1: A `float`. The hyperparameter of the second-order or third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate) - elif order == 2: - return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1) - elif order == 3: - return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1, r2=r2) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'): - """ - Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
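-        Orders 1, 2 and 3 dispatch to `dpm_solver_first_update`, `multistep_dpm_solver_second_update` and `multistep_dpm_solver_third_update` respectively.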
- """ - if order == 1: - return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1]) - elif order == 2: - return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - elif order == 3: - return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, - solver_type='dpm_solver'): - """ - The adaptive step size solver based on singlestep DPM-Solver. - - Args: - x: A pytorch tensor. The initial value at time `t_T`. - order: A `int`. The (higher) order of the solver. We only support order == 2 or 3. - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - h_init: A `float`. The initial step size (for logSNR). - atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1]. - rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05. - theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1]. - t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the - current time and `t_0` is less than `t_err`. The default setting is 1e-5. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_0: A pytorch tensor. The approximated solution at time `t_0`. - - [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021. - """ - ns = self.noise_schedule - s = t_T * torch.ones((x.shape[0],)).to(x) - lambda_s = ns.marginal_lambda(s) - lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x)) - h = h_init * torch.ones_like(s).to(x) - x_prev = x - nfe = 0 - if order == 2: - r1 = 0.5 - lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - solver_type=solver_type, - **kwargs) - elif order == 3: - r1, r2 = 1. / 3., 2. / 3. - lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - return_intermediate=True, - solver_type=solver_type) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, - solver_type=solver_type, - **kwargs) - else: - raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order)) - while torch.abs((s - t_0)).mean() > t_err: - t = ns.inverse_lambda(lambda_s + h) - x_lower, lower_noise_kwargs = lower_update(x, s, t) - x_higher = higher_update(x, s, t, **lower_noise_kwargs) - delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev))) - norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True)) - E = norm_fn((x_higher - x_lower) / delta).max() - if torch.all(E <= 1.): - x = x_higher - s = t - x_prev = x_lower - lambda_s = ns.marginal_lambda(s) - h = torch.min(theta * h * torch.float_power(E, -1. 
/ order).float(), lambda_0 - lambda_s) - nfe += order - print('adaptive solver nfe', nfe) - return x - - def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform', - method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver', - atol=0.0078, rtol=0.05, - ): - """ - Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`. - - ===================================================== - - We support the following algorithms for both noise prediction model and data prediction model: - - 'singlestep': - Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver. - We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps). - The total number of function evaluations (NFE) == `steps`. - Given a fixed NFE == `steps`, the sampling procedure is: - - If `order` == 1: - - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2. - - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If `order` == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2. - - 'multistep': - Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`. - We initialize the first `order` values by lower order multistep solvers. - Given a fixed NFE == `steps`, the sampling procedure is: - Denote K = steps. - - If `order` == 1: - - We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2. - - If `order` == 3: - - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3. - - 'singlestep_fixed': - Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3). - We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE. - - 'adaptive': - Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper). - We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`. - You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computatation costs - (NFE) and the sample quality. - - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2. - - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3. - - ===================================================== - - Some advices for choosing the algorithm: - - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs: - Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`. - e.g. 
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3, - skip_type='time_uniform', method='singlestep') - - For **guided sampling with large guidance scale** by DPMs: - Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2, - skip_type='time_uniform', method='multistep') - - We support three types of `skip_type`: - - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images** - - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**. - - 'time_quadratic': quadratic time for the time steps. - - ===================================================== - Args: - x: A pytorch tensor. The initial value at time `t_start` - e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution. - steps: A `int`. The total number of function evaluations (NFE). - t_start: A `float`. The starting time of the sampling. - If `T` is None, we use self.noise_schedule.T (default is 1.0). - t_end: A `float`. The ending time of the sampling. - If `t_end` is None, we use 1. / self.noise_schedule.total_N. - e.g. if total_N == 1000, we have `t_end` == 1e-3. - For discrete-time DPMs: - - We recommend `t_end` == 1. / self.noise_schedule.total_N. - For continuous-time DPMs: - - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15. - order: A `int`. The order of DPM-Solver. - skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'. - method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'. - denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step. - Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1). - - This trick is firstly proposed by DDPM (https://arxiv.org/abs/2006.11239) and - score_sde (https://arxiv.org/abs/2011.13456). Such trick can improve the FID - for diffusion models sampling by diffusion SDEs for low-resolutional images - (such as CIFAR-10). However, we observed that such trick does not matter for - high-resolutional images. As it needs an additional NFE, we do not recommend - it for high-resolutional images. - lower_order_final: A `bool`. Whether to use lower order solvers at the final steps. - Only valid for `method=multistep` and `steps < 15`. We empirically find that - this trick is a key to stabilizing the sampling by DPM-Solver with very few steps - (especially for steps <= 10). So we recommend to set it to be `True`. - solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`. - atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - Returns: - x_end: A pytorch tensor. The approximated solution at time `t_end`. - - """ - t_0 = 1. 
/ self.noise_schedule.total_N if t_end is None else t_end - t_T = self.noise_schedule.T if t_start is None else t_start - device = x.device - if method == 'adaptive': - with torch.no_grad(): - x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, - solver_type=solver_type) - elif method == 'multistep': - assert steps >= order - timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device) - assert timesteps.shape[0] - 1 == steps - with torch.no_grad(): - vec_t = timesteps[0].expand((x.shape[0])) - model_prev_list = [self.model_fn(x, vec_t)] - t_prev_list = [vec_t] - # Init the first `order` values by lower order multistep DPM-Solver. - for init_order in tqdm(range(1, order), desc="DPM init order"): - vec_t = timesteps[init_order].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, - solver_type=solver_type) - model_prev_list.append(self.model_fn(x, vec_t)) - t_prev_list.append(vec_t) - # Compute the remaining values by `order`-th order multistep DPM-Solver. - for step in tqdm(range(order, steps + 1), desc="DPM multistep"): - vec_t = timesteps[step].expand(x.shape[0]) - if lower_order_final and steps < 15: - step_order = min(order, steps + 1 - step) - else: - step_order = order - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order, - solver_type=solver_type) - for i in range(order - 1): - t_prev_list[i] = t_prev_list[i + 1] - model_prev_list[i] = model_prev_list[i + 1] - t_prev_list[-1] = vec_t - # We do not need to evaluate the final model value. - if step < steps: - model_prev_list[-1] = self.model_fn(x, vec_t) - elif method in ['singlestep', 'singlestep_fixed']: - if method == 'singlestep': - timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, - skip_type=skip_type, - t_T=t_T, t_0=t_0, - device=device) - elif method == 'singlestep_fixed': - K = steps // order - orders = [order, ] * K - timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device) - for i, order in enumerate(orders): - t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1] - timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), - N=order, device=device) - lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner) - vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0]) - h = lambda_inner[-1] - lambda_inner[0] - r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h - r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h - x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2) - if denoise_to_zero: - x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0) - return x - - -############################################################# -# other utility functions -############################################################# - -def interpolate_fn(x, xp, yp): - """ - A piecewise linear function y = f(x), using xp and yp as keypoints. - We implement f(x) in a differentiable way (i.e. applicable for autograd). - The function f(x) is well-defined for all x-axis. (For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.) - - Args: - x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver). 
- xp: PyTorch tensor with shape [C, K], where K is the number of keypoints. - yp: PyTorch tensor with shape [C, K]. - Returns: - The function values f(x), with shape [N, C]. - """ - N, K = x.shape[0], xp.shape[1] - all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2) - sorted_all_x, x_indices = torch.sort(all_x, dim=2) - x_idx = torch.argmin(x_indices, dim=2) - cand_start_idx = x_idx - 1 - start_idx = torch.where( - torch.eq(x_idx, 0), - torch.tensor(1, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1) - start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2) - end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2) - start_idx2 = torch.where( - torch.eq(x_idx, 0), - torch.tensor(0, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1) - start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2) - end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2) - cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x) - return cand - - -def expand_dims(v, dims): - """ - Expand the tensor `v` to the dim `dims`. - - Args: - `v`: a PyTorch tensor with shape [N]. - `dim`: a `int`. - Returns: - a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`. - """ - return v[(...,) + (None,) * (dims - 1)] diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/drag/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/drag/Factory.d.ts deleted file mode 100644 index 9dacc3d1a684c7317a7055cc27c8717c96706fe1..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/drag/Factory.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -// import * as Phaser from 'phaser'; -import Drag from "./Drag"; - -export default function ( - gameObject: Phaser.GameObjects.GameObject, - config?: Drag.IConfig -): Drag; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Methods.js deleted file mode 100644 index ab6d7f975ecda617ff549eab7d44e42c741f3ed6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/Methods.js +++ /dev/null @@ -1,45 +0,0 @@ -import GetChildrenWidth from './GetChildrenWidth.js'; -import GetChildrenHeight from './GetChildrenHeight.js'; -import GetExpandedChildWidth from './GetExpandedChildWidth.js'; -import GetExpandedChildHeight from './GetExpandedChildHeight.js'; -import GetChildrenSizers from './GetChildrenSizers.js'; -import PreLayout from './PreLayout.js'; -import LayoutChildren from './LayoutChildren.js'; -import ResolveWidth from './ResolveWidth.js'; -import ResolveHeight from './ResolveHeight.js'; -import ResolveChildrenWidth from './ResolveChildrenWidth.js'; -import RunWidthWrap from './RunWidthWrap.js'; -import AddChildMethods from './AddChildMethods.js'; -import RemoveChildMethods from './RemoveChildMethods.js'; -import ResetGrid from './ResetGrid.js'; -import { InseryEmptyRow, AddEmptyRow } from 
'./InsertEmptyRow.js'; -import { InsertEmptyColumn, AddEmptyColumn } from './InsertEmptyColumn.js'; - - -var methods = { - getChildrenWidth: GetChildrenWidth, - getChildrenHeight: GetChildrenHeight, - getExpandedChildWidth: GetExpandedChildWidth, - getExpandedChildHeight: GetExpandedChildHeight, - getChildrenSizers: GetChildrenSizers, - preLayout: PreLayout, - layoutChildren: LayoutChildren, - resolveWidth: ResolveWidth, - resolveHeight: ResolveHeight, - resolveChildrenWidth: ResolveChildrenWidth, - runWidthWrap: RunWidthWrap, - - resetGrid: ResetGrid, - inseryEmptyRow: InseryEmptyRow, - addEmptyRow: AddEmptyRow, - insertEmptyColumn: InsertEmptyColumn, - addEmptyColumn: AddEmptyColumn, -}; - -Object.assign( - methods, - AddChildMethods, - RemoveChildMethods -); - -export default methods; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/hiddenedit/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/hiddenedit/Factory.js deleted file mode 100644 index 73488ef24fafc697b93de5b47c6fdad112c96ae2..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/hiddenedit/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import HiddenEdit from './HiddenEdit.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('hiddenEdit', function (textObject, config) { - var gameObject = new HiddenEdit(textObject, config); - // Note: Don't add this game object into scene - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.HiddenEdit', HiddenEdit); - -export default HiddenEdit; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/ShowChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/ShowChildMethods.js deleted file mode 100644 index 7e3c5780c938a2001de4f141454de8018d11f285..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/ShowChildMethods.js +++ /dev/null @@ -1,111 +0,0 @@ -export default { - showChild(key, reset) { - var child = this.sizerChildren[key]; - if (child) { - this.emit('showchild', child, key, this, reset); - this.resetChildState(child); - } - return this; - }, - - hideChild(key, reset) { - var child = this.sizerChildren[key]; - if (child) { - this.emit('hidechild', child, key, this, reset); - this.resetChildState(child); - } - return this; - }, - - swapChild(key, reset) { - if (this.currentChildKey === key) { - // Do nothing - } else if ((this.currentChildKey === 'panel') || (key === 'panel')) { - this.previousChildKey = this.currentChildKey; - this.currentChildKey = key; - this.hideChild(this.previousChildKey, reset); - this.showChild(this.currentChildKey, reset); - } else { // Swap from current side to another side - this.swapChild('panel', reset); - this.swapChild(key, reset); - } - return this; - }, - - showPanel(reset) { - this.swapChild('panel', reset); - return this; - }, - - showLeftSide() { - this.swapChild('leftSide'); - return this; - }, - - showRightSide() { - this.swapChild('rightSide'); - return this; - }, - - showTopSide() { - this.swapChild('topSide'); - return this; - }, - - showBottomSide() { - this.swapChild('bottomSide'); - return this; - }, - - hideLeftSide() { - if (this.currentChildKey == 'leftSide') { - this.showPanel(); - } - return this; - }, - - hideRightSide() { 
- if (this.currentChildKey == 'rightSide') { - this.showPanel(); - } - return this; - }, - - hideTopSide() { - if (this.currentChildKey == 'topSide') { - this.showPanel(); - } - return this; - }, - - hideBottomSide() { - if (this.currentChildKey == 'bottomSide') { - this.showPanel(); - } - return this; - }, - - toggleLeftSide() { - var key = (this.currentChildKey !== 'panel') ? 'panel' : 'leftSide'; - this.swapChild(key); - return this; - }, - - toggleRightSide() { - var key = (this.currentChildKey !== 'panel') ? 'panel' : 'rightSide'; - this.swapChild(key); - return this; - }, - - toggleTopSide() { - var key = (this.currentChildKey !== 'panel') ? 'panel' : 'topSide'; - this.swapChild(key); - return this; - }, - - toggleBottomSide() { - var key = (this.currentChildKey !== 'panel') ? 'panel' : 'bottomSide'; - this.swapChild(key); - return this; - } -}; \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/losses/perceptual.py b/spaces/AlexWang/lama/saicinpainting/training/losses/perceptual.py deleted file mode 100644 index 8c055c2b327ce7943682af5c5f9394b9fcbec506..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/losses/perceptual.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -from models.ade20k import ModelBuilder -from saicinpainting.utils import check_and_warn_input_range - - -IMAGENET_MEAN = torch.FloatTensor([0.485, 0.456, 0.406])[None, :, None, None] -IMAGENET_STD = torch.FloatTensor([0.229, 0.224, 0.225])[None, :, None, None] - - -class PerceptualLoss(nn.Module): - def __init__(self, normalize_inputs=True): - super(PerceptualLoss, self).__init__() - - self.normalize_inputs = normalize_inputs - self.mean_ = IMAGENET_MEAN - self.std_ = IMAGENET_STD - - vgg = torchvision.models.vgg19(pretrained=True).features - vgg_avg_pooling = [] - - for weights in vgg.parameters(): - weights.requires_grad = False - - for module in vgg.modules(): - if module.__class__.__name__ == 'Sequential': - continue - elif module.__class__.__name__ == 'MaxPool2d': - vgg_avg_pooling.append(nn.AvgPool2d(kernel_size=2, stride=2, padding=0)) - else: - vgg_avg_pooling.append(module) - - self.vgg = nn.Sequential(*vgg_avg_pooling) - - def do_normalize_inputs(self, x): - return (x - self.mean_.to(x.device)) / self.std_.to(x.device) - - def partial_losses(self, input, target, mask=None): - check_and_warn_input_range(target, 0, 1, 'PerceptualLoss target in partial_losses') - - # we expect input and target to be in [0, 1] range - losses = [] - - if self.normalize_inputs: - features_input = self.do_normalize_inputs(input) - features_target = self.do_normalize_inputs(target) - else: - features_input = input - features_target = target - - for layer in self.vgg[:30]: - - features_input = layer(features_input) - features_target = layer(features_target) - - if layer.__class__.__name__ == 'ReLU': - loss = F.mse_loss(features_input, features_target, reduction='none') - - if mask is not None: - cur_mask = F.interpolate(mask, size=features_input.shape[-2:], - mode='bilinear', align_corners=False) - loss = loss * (1 - cur_mask) - - loss = loss.mean(dim=tuple(range(1, len(loss.shape)))) - losses.append(loss) - - return losses - - def forward(self, input, target, mask=None): - losses = self.partial_losses(input, target, mask=mask) - return torch.stack(losses).sum(dim=0) - - def get_global_features(self, input): - check_and_warn_input_range(input, 0, 1, 'PerceptualLoss input in 
get_global_features') - - if self.normalize_inputs: - features_input = self.do_normalize_inputs(input) - else: - features_input = input - - features_input = self.vgg(features_input) - return features_input - - -class ResNetPL(nn.Module): - def __init__(self, weight=1, - weights_path=None, arch_encoder='resnet50dilated', segmentation=True): - super().__init__() - self.impl = ModelBuilder.get_encoder(weights_path=weights_path, - arch_encoder=arch_encoder, - arch_decoder='ppm_deepsup', - fc_dim=2048, - segmentation=segmentation) - self.impl.eval() - for w in self.impl.parameters(): - w.requires_grad_(False) - - self.weight = weight - - def forward(self, pred, target): - pred = (pred - IMAGENET_MEAN.to(pred)) / IMAGENET_STD.to(pred) - target = (target - IMAGENET_MEAN.to(target)) / IMAGENET_STD.to(target) - - pred_feats = self.impl(pred, return_feature_maps=True) - target_feats = self.impl(target, return_feature_maps=True) - - result = torch.stack([F.mse_loss(cur_pred, cur_target) - for cur_pred, cur_target - in zip(pred_feats, target_feats)]).sum() * self.weight - return result diff --git a/spaces/Ame42/rwms/README.md b/spaces/Ame42/rwms/README.md deleted file mode 100644 index 1e41c4b135e43ae4b44389693be2dcae052955e1..0000000000000000000000000000000000000000 --- a/spaces/Ame42/rwms/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rwms -emoji: 💻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.36.1 -python_version: 3.11 -app_file: main.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AmrElsayeh/Interior_style_detector/app.py b/spaces/AmrElsayeh/Interior_style_detector/app.py deleted file mode 100644 index ab0884537a7bbbf1579c68e30d305483d7e25ac3..0000000000000000000000000000000000000000 --- a/spaces/AmrElsayeh/Interior_style_detector/app.py +++ /dev/null @@ -1,22 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -def is_classical(x): return x[0].isupper() - -# Cell -learn = load_learner('interior.pkl') - -# Cell -categories = ('classical','japandi','minimal','poho','earthy') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -# Cell -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['classical.jpg','japandi.jpg','minimal.jpg','poho.jpg','earthy.jpg'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch() \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vae_diff_to_onnx.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vae_diff_to_onnx.py deleted file mode 100644 index e023e04b94973f26ff6a93b6fa3e2b7b3661b829..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vae_diff_to_onnx.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -from pathlib import Path - -import torch -from packaging import version -from torch.onnx import export - -from diffusers import AutoencoderKL - - -is_torch_less_than_1_11 = version.parse(version.parse(torch.__version__).base_version) < version.parse("1.11") - - -def onnx_export( - model, - model_args: tuple, - output_path: Path, - ordered_input_names, - output_names, - dynamic_axes, - opset, - use_external_data_format=False, -): - output_path.parent.mkdir(parents=True, exist_ok=True) - # PyTorch deprecated the `enable_onnx_checker` and `use_external_data_format` arguments in v1.11, - # so we check the torch version for backwards compatibility - if is_torch_less_than_1_11: - export( - model, - model_args, - f=output_path.as_posix(), - input_names=ordered_input_names, - output_names=output_names, - dynamic_axes=dynamic_axes, - do_constant_folding=True, - use_external_data_format=use_external_data_format, - enable_onnx_checker=True, - opset_version=opset, - ) - else: - export( - model, - model_args, - f=output_path.as_posix(), - input_names=ordered_input_names, - output_names=output_names, - dynamic_axes=dynamic_axes, - do_constant_folding=True, - opset_version=opset, - ) - - -@torch.no_grad() -def convert_models(model_path: str, output_path: str, opset: int, fp16: bool = False): - dtype = torch.float16 if fp16 else torch.float32 - if fp16 and torch.cuda.is_available(): - device = "cuda" - elif fp16 and not torch.cuda.is_available(): - raise ValueError("`float16` model export is only supported on GPUs with CUDA") - else: - device = "cpu" - output_path = Path(output_path) - - # VAE DECODER - vae_decoder = AutoencoderKL.from_pretrained(model_path + "/vae") - vae_latent_channels = vae_decoder.config.latent_channels - # forward only through the decoder part - vae_decoder.forward = vae_decoder.decode - onnx_export( - vae_decoder, - model_args=( - torch.randn(1, vae_latent_channels, 25, 25).to(device=device, dtype=dtype), - False, - ), - output_path=output_path / "vae_decoder" / "model.onnx", - ordered_input_names=["latent_sample", "return_dict"], - output_names=["sample"], - dynamic_axes={ - "latent_sample": {0: "batch", 1: "channels", 2: "height", 3: "width"}, - }, - opset=opset, - ) - del vae_decoder - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--model_path", - type=str, - required=True, - help="Path to the `diffusers` checkpoint to convert (either a local directory or on the Hub).", - ) - - parser.add_argument("--output_path", type=str, required=True, help="Path to the output model.") - parser.add_argument( - "--opset", - default=14, - type=int, - help="The version of the ONNX operator set to use.", - ) - parser.add_argument("--fp16", action="store_true", default=False, help="Export the models in `float16` mode") - - args = parser.parse_args() - print(args.output_path) - convert_models(args.model_path, args.output_path, args.opset, args.fp16) - print("SD: Done: ONNX") diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py deleted file mode 100644 index a6a1b4e6640d1bc10ef6475bde39b5f39a87ec80..0000000000000000000000000000000000000000 --- 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py +++ /dev/null @@ -1,401 +0,0 @@ -# Copyright 2023 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from collections import defaultdict -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. - Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class KDPM2DiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Scheduler created by @crowsonkb in [k_diffusion](https://github.com/crowsonkb/k-diffusion), see: - https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 - - Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. beta_start (`float`): the - starting `beta` value of inference. beta_end (`float`): the final `beta` value. 
beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`, - `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - timestep_spacing (`str`, default `"linspace"`): - The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample - Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 2 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.00085, # sensible defaults - beta_end: float = 0.012, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - timestep_spacing: str = "linspace", - steps_offset: int = 0, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # set all values - self.set_timesteps(num_train_timesteps, None, num_train_timesteps) - - # Copied from diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler.index_for_timestep - def index_for_timestep(self, timestep, schedule_timesteps=None): - if schedule_timesteps is None: - schedule_timesteps = self.timesteps - - indices = (schedule_timesteps == timestep).nonzero() - - # The sigma index that is taken for the **very** first `step` - # is always the second index (or the last index if there is only 1) - # This way we can ensure we don't accidentally skip a sigma in - # case we start in the middle of the denoising schedule (e.g. 
for image-to-image) - if len(self._index_counter) == 0: - pos = 1 if len(indices) > 1 else 0 - else: - timestep_int = timestep.cpu().item() if torch.is_tensor(timestep) else timestep - pos = self._index_counter[timestep_int] - - return indices[pos].item() - - @property - def init_noise_sigma(self): - # standard deviation of the initial noise distribution - if self.config.timestep_spacing in ["linspace", "trailing"]: - return self.sigmas.max() - - return (self.sigmas.max() ** 2 + 1) ** 0.5 - - def scale_model_input( - self, - sample: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - ) -> torch.FloatTensor: - """ - Args: - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - sample (`torch.FloatTensor`): input sample timestep (`int`, optional): current timestep - Returns: - `torch.FloatTensor`: scaled input sample - """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - else: - sigma = self.sigmas_interpol[step_index] - - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - - def set_timesteps( - self, - num_inference_steps: int, - device: Union[str, torch.device] = None, - num_train_timesteps: Optional[int] = None, - ): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - """ - self.num_inference_steps = num_inference_steps - - num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps - - # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891 - if self.config.timestep_spacing == "linspace": - timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy() - elif self.config.timestep_spacing == "leading": - step_ratio = num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(float) - timesteps += self.config.steps_offset - elif self.config.timestep_spacing == "trailing": - step_ratio = num_train_timesteps / self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(num_train_timesteps, 0, -step_ratio)).round().copy().astype(float) - timesteps -= 1 - else: - raise ValueError( - f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'." 
- ) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - self.log_sigmas = torch.from_numpy(np.log(sigmas)).to(device) - - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - sigmas = torch.from_numpy(sigmas).to(device=device) - - # interpolate sigmas - sigmas_interpol = sigmas.log().lerp(sigmas.roll(1).log(), 0.5).exp() - - self.sigmas = torch.cat([sigmas[:1], sigmas[1:].repeat_interleave(2), sigmas[-1:]]) - self.sigmas_interpol = torch.cat( - [sigmas_interpol[:1], sigmas_interpol[1:].repeat_interleave(2), sigmas_interpol[-1:]] - ) - - if str(device).startswith("mps"): - # mps does not support float64 - timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) - else: - timesteps = torch.from_numpy(timesteps).to(device) - - # interpolate timesteps - timesteps_interpol = self.sigma_to_t(sigmas_interpol).to(device, dtype=timesteps.dtype) - interleaved_timesteps = torch.stack((timesteps_interpol[1:-1, None], timesteps[1:, None]), dim=-1).flatten() - - self.timesteps = torch.cat([timesteps[:1], interleaved_timesteps]) - - self.sample = None - - # for exp beta schedules, such as the one for `pipeline_shap_e.py` - # we need an index counter - self._index_counter = defaultdict(int) - - def sigma_to_t(self, sigma): - # get log sigma - log_sigma = sigma.log() - - # get distribution - dists = log_sigma - self.log_sigmas[:, None] - - # get sigmas range - low_idx = dists.ge(0).cumsum(dim=0).argmax(dim=0).clamp(max=self.log_sigmas.shape[0] - 2) - high_idx = low_idx + 1 - - low = self.log_sigmas[low_idx] - high = self.log_sigmas[high_idx] - - # interpolate sigmas - w = (low - log_sigma) / (low - high) - w = w.clamp(0, 1) - - # transform interpolation to time range - t = (1 - w) * low_idx + w * high_idx - t = t.view(sigma.shape) - return t - - @property - def state_in_first_order(self): - return self.sample is None - - def step( - self, - model_output: Union[torch.FloatTensor, np.ndarray], - timestep: Union[float, torch.FloatTensor], - sample: Union[torch.FloatTensor, np.ndarray], - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Args: - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model. timestep - (`int`): current discrete timestep in the diffusion chain. sample (`torch.FloatTensor` or `np.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - Returns: - [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- """ - step_index = self.index_for_timestep(timestep) - - # advance index counter by 1 - timestep_int = timestep.cpu().item() if torch.is_tensor(timestep) else timestep - self._index_counter[timestep_int] += 1 - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - sigma_interpol = self.sigmas_interpol[step_index + 1] - sigma_next = self.sigmas[step_index + 1] - else: - # 2nd order / KDPM2's method - sigma = self.sigmas[step_index - 1] - sigma_interpol = self.sigmas_interpol[step_index] - sigma_next = self.sigmas[step_index] - - # currently only gamma=0 is supported. This usually works best anyways. - # We can support gamma in the future but then need to scale the timestep before - # passing it to the model which requires a change in API - gamma = 0 - sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = sample - sigma_input * model_output - elif self.config.prediction_type == "v_prediction": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + ( - sample / (sigma_input**2 + 1) - ) - elif self.config.prediction_type == "sample": - raise NotImplementedError("prediction_type not implemented yet: sample") - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - if self.state_in_first_order: - # 2. Convert to an ODE derivative for 1st order - derivative = (sample - pred_original_sample) / sigma_hat - # 3. delta timestep - dt = sigma_interpol - sigma_hat - - # store for 2nd order step - self.sample = sample - else: - # DPM-Solver-2 - # 2. Convert to an ODE derivative for 2nd order - derivative = (sample - pred_original_sample) / sigma_interpol - - # 3. 
delta timestep - dt = sigma_next - sigma_hat - - sample = self.sample - self.sample = None - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - # Copied from diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - schedule_timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timesteps] - - sigma = sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/combined_sampler.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/combined_sampler.py deleted file mode 100644 index 564729f0895b1863d94c479a67202438af45f996..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/combined_sampler.py +++ /dev/null @@ -1,20 +0,0 @@ -from ..builder import BBOX_SAMPLERS, build_sampler -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class CombinedSampler(BaseSampler): - """A sampler that combines positive sampler and negative sampler.""" - - def __init__(self, pos_sampler, neg_sampler, **kwargs): - super(CombinedSampler, self).__init__(**kwargs) - self.pos_sampler = build_sampler(pos_sampler, **kwargs) - self.neg_sampler = build_sampler(neg_sampler, **kwargs) - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/ocrnet_hr18.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/ocrnet_hr18.py deleted file mode 100644 index c60f62a7cdf3f5c5096a7a7e725e8268fddcb057..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/ocrnet_hr18.py +++ /dev/null @@ -1,68 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - type='HRNet', - norm_cfg=norm_cfg, - norm_eval=False, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(18, 36)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(18, 36, 72)), - stage4=dict( - 
num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(18, 36, 72, 144)))), - decode_head=[ - dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - channels=sum([18, 36, 72, 144]), - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - channels=512, - ocr_channels=256, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - ], - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/cgnet/cgnet_512x1024_60k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/cgnet/cgnet_512x1024_60k_cityscapes.py deleted file mode 100644 index 11421ef9d375d01b01c333c3705d6eb6e3348ee8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/cgnet/cgnet_512x1024_60k_cityscapes.py +++ /dev/null @@ -1,66 +0,0 @@ -_base_ = ['../_base_/models/cgnet.py', '../_base_/default_runtime.py'] - -# optimizer -optimizer = dict(type='Adam', lr=0.001, eps=1e-08, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -total_iters = 60000 -checkpoint_config = dict(by_epoch=False, interval=4000) -evaluation = dict(interval=4000, metric='mIoU') - -# dataset settings -dataset_type = 'CityscapesDataset' -data_root = 'data/cityscapes/' -img_norm_cfg = dict( - mean=[72.39239876, 82.90891754, 73.15835921], std=[1, 1, 1], to_rgb=True) -crop_size = (512, 1024) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 1024), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=8, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/train', - ann_dir='gtFine/train', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k.py 
b/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k.py deleted file mode 100644 index e15b8cc82b09ac3e64875936cdfd0f663aaba936..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = '../deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py' -model = dict( - pretrained='mmcls://mobilenet_v2', - backbone=dict( - _delete_=True, - type='MobileNetV2', - widen_factor=1., - strides=(1, 2, 2, 1, 1, 1, 1), - dilations=(1, 1, 1, 2, 2, 4, 4), - out_indices=(1, 2, 4, 6)), - decode_head=dict(in_channels=320), - auxiliary_head=dict(in_channels=96)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_512x512_160k_ade20k.py deleted file mode 100644 index 6107b41544378ad371cee95ee5ebc2e98ccbd9ad..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './pspnet_r50-d8_512x512_160k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Armored-Atom/DiFuse_Your_Thoughts/app.py b/spaces/Armored-Atom/DiFuse_Your_Thoughts/app.py deleted file mode 100644 index 71b64446251d1748ac8077eca7f4e94f1701ec4f..0000000000000000000000000000000000000000 --- a/spaces/Armored-Atom/DiFuse_Your_Thoughts/app.py +++ /dev/null @@ -1,54 +0,0 @@ -from transformers import pipeline, set_seed -import gradio as grad, random, re - - -gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2') -with open("ideas.txt", "r") as f: - line = f.readlines() - - -def generate(starting_text): - seed = random.randint(100, 1000000) - set_seed(seed) - - if starting_text == "": - starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize() - starting_text: str = re.sub(r"[,:\-–.!;?_]", '', starting_text) - - response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 90)), num_return_sequences=4) - response_list = [] - for x in response: - resp = x['generated_text'].strip() - if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False: - response_list.append(resp+'\n') - - response_end = "\n".join(response_list) - response_end = re.sub('[^ ]+\.[^ ]+','', response_end) - response_end = response_end.replace("<", "").replace(">", "") - - if response_end != "": - return response_end - - -txt = grad.Textbox(lines=1, label="Magic Prompt", placeholder="English Text here") -out = grad.Textbox(lines=4, label="DiFused Prompts") - -examples = [] -for x in range(8): - examples.append(line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize()) - -title = "DiFuse Your Thoughts Here!" -description = 'Place your thoughts into the "Magic Prompt" then submit! Learn about the model, [click here](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion).
' - -grad.Interface(fn=generate, - inputs=txt, - outputs=out, - examples=examples, - title=title, - description=description, - article='', - allow_flagging='never', - cache_examples=False, - theme="default").launch(enable_queue=True, debug=True) - - diff --git a/spaces/Arsenii2023/Demo1/linear.py b/spaces/Arsenii2023/Demo1/linear.py deleted file mode 100644 index ca22e5d966db16afa5a842baa2b359e0020fb944..0000000000000000000000000000000000000000 --- a/spaces/Arsenii2023/Demo1/linear.py +++ /dev/null @@ -1,40 +0,0 @@ -#Author: Arsenii Kostenko -import numpy as np -from sklearn.linear_model import LinearRegression -import gradio as gr - -# Данные для обучения модели -x_train = np.array([[0, 0], [1, 1], [2, 2]]) -y_train = np.array([0, 1, 2]) - -# Обучение модели -model = LinearRegression() -model.fit(x_train, y_train) - -# Функция для предсказания значений -def predict(x, y): - # Преобразование строк в списки списков - x_nested_list = [list(map(int, sublist.split(","))) for sublist in x.split(";")] - y_nested_list = [list(map(int, sublist.split(","))) for sublist in y.split(";")] - - # Преобразование списков списков в numpy arrays - x_array = np.array(x_nested_list) - y_array = np.array(y_nested_list) - - # Проверка исходных данных на соответствие - if x_array.shape != y_array.shape: - return "Ошибка: x и y должны иметь одинаковую размерность" - - # Предсказывание значений - predictions = model.predict(x_array) - - return predictions - -# Создание интерфейса gradio -iface = gr.Interface( - fn=predict, - inputs=["text", "text"], - outputs="text" -) - -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/irc.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/irc.py deleted file mode 100644 index 53e19b83d1e80335f70c3b477cb84fb6de62c897..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/irc.py +++ /dev/null @@ -1,154 +0,0 @@ -""" - pygments.formatters.irc - ~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for IRC output - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Operator, Generic, Token, Whitespace -from pip._vendor.pygments.util import get_choice_opt - - -__all__ = ['IRCFormatter'] - - -#: Map token types to a tuple of color values for light and dark -#: backgrounds. 
-IRC_COLORS = { - Token: ('', ''), - - Whitespace: ('gray', 'brightblack'), - Comment: ('gray', 'brightblack'), - Comment.Preproc: ('cyan', 'brightcyan'), - Keyword: ('blue', 'brightblue'), - Keyword.Type: ('cyan', 'brightcyan'), - Operator.Word: ('magenta', 'brightcyan'), - Name.Builtin: ('cyan', 'brightcyan'), - Name.Function: ('green', 'brightgreen'), - Name.Namespace: ('_cyan_', '_brightcyan_'), - Name.Class: ('_green_', '_brightgreen_'), - Name.Exception: ('cyan', 'brightcyan'), - Name.Decorator: ('brightblack', 'gray'), - Name.Variable: ('red', 'brightred'), - Name.Constant: ('red', 'brightred'), - Name.Attribute: ('cyan', 'brightcyan'), - Name.Tag: ('brightblue', 'brightblue'), - String: ('yellow', 'yellow'), - Number: ('blue', 'brightblue'), - - Generic.Deleted: ('brightred', 'brightred'), - Generic.Inserted: ('green', 'brightgreen'), - Generic.Heading: ('**', '**'), - Generic.Subheading: ('*magenta*', '*brightmagenta*'), - Generic.Error: ('brightred', 'brightred'), - - Error: ('_brightred_', '_brightred_'), -} - - -IRC_COLOR_MAP = { - 'white': 0, - 'black': 1, - 'blue': 2, - 'brightgreen': 3, - 'brightred': 4, - 'yellow': 5, - 'magenta': 6, - 'orange': 7, - 'green': 7, #compat w/ ansi - 'brightyellow': 8, - 'lightgreen': 9, - 'brightcyan': 9, # compat w/ ansi - 'cyan': 10, - 'lightblue': 11, - 'red': 11, # compat w/ ansi - 'brightblue': 12, - 'brightmagenta': 13, - 'brightblack': 14, - 'gray': 15, -} - -def ircformat(color, text): - if len(color) < 1: - return text - add = sub = '' - if '_' in color: # italic - add += '\x1D' - sub = '\x1D' + sub - color = color.strip('_') - if '*' in color: # bold - add += '\x02' - sub = '\x02' + sub - color = color.strip('*') - # underline (\x1F) not supported - # backgrounds (\x03FF,BB) not supported - if len(color) > 0: # actual color - may have issues with ircformat("red", "blah")+"10" type stuff - add += '\x03' + str(IRC_COLOR_MAP[color]).zfill(2) - sub = '\x03' + sub - return add + text + sub - return '<'+add+'>'+text+'' - - -class IRCFormatter(Formatter): - r""" - Format tokens with IRC color sequences - - The `get_style_defs()` method doesn't do anything special since there is - no support for common styles. - - Options accepted: - - `bg` - Set to ``"light"`` or ``"dark"`` depending on the terminal's background - (default: ``"light"``). - - `colorscheme` - A dictionary mapping token types to (lightbg, darkbg) color names or - ``None`` (default: ``None`` = use builtin colorscheme). - - `linenos` - Set to ``True`` to have line numbers in the output as well - (default: ``False`` = no line numbers). 
- """ - name = 'IRC' - aliases = ['irc', 'IRC'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.darkbg = get_choice_opt(options, 'bg', - ['light', 'dark'], 'light') == 'dark' - self.colorscheme = options.get('colorscheme', None) or IRC_COLORS - self.linenos = options.get('linenos', False) - self._lineno = 0 - - def _write_lineno(self, outfile): - if self.linenos: - self._lineno += 1 - outfile.write("%04d: " % self._lineno) - - def format_unencoded(self, tokensource, outfile): - self._write_lineno(outfile) - - for ttype, value in tokensource: - color = self.colorscheme.get(ttype) - while color is None: - ttype = ttype[:-1] - color = self.colorscheme.get(ttype) - if color: - color = color[self.darkbg] - spl = value.split('\n') - for line in spl[:-1]: - if line: - outfile.write(ircformat(color, line)) - outfile.write('\n') - self._write_lineno(outfile) - if spl[-1]: - outfile.write(ircformat(color, spl[-1])) - else: - outfile.write(value) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_functools.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_functools.py deleted file mode 100644 index e7053bac12fdb7b2cc50448f88318cd93f62cc0e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_functools.py +++ /dev/null @@ -1,20 +0,0 @@ -import functools - - -# from jaraco.functools 3.5 -def pass_none(func): - """ - Wrap func so it's not called if its first param is None - - >>> print_text = pass_none(print) - >>> print_text('text') - text - >>> print_text(None) - """ - - @functools.wraps(func) - def wrapper(param, *args, **kwargs): - if param is not None: - return func(param, *args, **kwargs) - - return wrapper diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/training/infer_demo.py b/spaces/Audio-AGI/AudioSep/models/CLAP/training/infer_demo.py deleted file mode 100644 index 6a1bcc1fd8cf89ba30773d3479b2a78e8dc06d9f..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/training/infer_demo.py +++ /dev/null @@ -1,109 +0,0 @@ -import sys - -sys.path.append( - "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/src" -) - -import os -import torch -import librosa -from open_clip import create_model -from training.data import get_audio_features -from training.data import int16_to_float32, float32_to_int16 -from transformers import RobertaTokenizer - -tokenize = RobertaTokenizer.from_pretrained("roberta-base") - - -def tokenizer(text): - result = tokenize( - text, - padding="max_length", - truncation=True, - max_length=77, - return_tensors="pt", - ) - return {k: v.squeeze(0) for k, v in result.items()} - - -PRETRAINED_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/checkpoints/epoch_top_0_audioset_no_fusion.pt" -WAVE_48k_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/audio/machine.wav" - - -def infer_text(): - device = "cuda:0" if torch.cuda.is_available() else "cpu" - precision = "fp32" - amodel = "HTSAT-tiny" # or 'PANN-14' - tmodel = "roberta" # the best text encoder in our training - enable_fusion = False # False if you do not want to use the fusion model - fusion_type = "aff_2d" - pretrained = PRETRAINED_PATH - - model, model_cfg = create_model( - amodel, - tmodel, - pretrained, - precision=precision, - device=device, - 
enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - # load the text, can be a list (i.e. batch size) - text_data = ["I love the contrastive learning", "I love the pretrain model"] - # tokenize for roberta, if you want to tokenize for another text encoder, please refer to data.py#L43-90 - text_data = tokenizer(text_data) - - text_embed = model.get_text_embedding(text_data) - print(text_embed.size()) - - -def infer_audio(): - - device = "cuda:0" if torch.cuda.is_available() else "cpu" - precision = "fp32" - amodel = "HTSAT-tiny" # or 'PANN-14' - tmodel = "roberta" # the best text encoder in our training - enable_fusion = False # False if you do not want to use the fusion model - fusion_type = "aff_2d" - pretrained = PRETRAINED_PATH - - model, model_cfg = create_model( - amodel, - tmodel, - pretrained, - precision=precision, - device=device, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - - # load the waveform of the shape (T,), should resample to 48000 - audio_waveform, sr = librosa.load(WAVE_48k_PATH, sr=48000) - # quantize - audio_waveform = int16_to_float32(float32_to_int16(audio_waveform)) - audio_waveform = torch.from_numpy(audio_waveform).float() - audio_dict = {} - - # the 'fusion' truncate mode can be changed to 'rand_trunc' if run in unfusion mode - import ipdb - - ipdb.set_trace() - audio_dict = get_audio_features( - audio_dict, - audio_waveform, - 480000, - data_truncating="fusion", - data_filling="repeatpad", - audio_cfg=model_cfg["audio_cfg"], - ) - # can send a list to the model, to process many audio tracks in one time (i.e. batch size) - audio_embed = model.get_audio_embedding([audio_dict]) - print(audio_embed.size()) - import ipdb - - ipdb.set_trace() - - -if __name__ == "__main__": - infer_text() - infer_audio() diff --git a/spaces/Ayya/anime-remove-background/README.md b/spaces/Ayya/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/Ayya/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Cooking.md b/spaces/Benson/text-generation/Examples/Cooking.md deleted file mode 100644 index b33058f77610ef700dbebb07ccc903d68077ec7e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cooking.md +++ /dev/null @@ -1,49 +0,0 @@ -
-

Cooking as a Hobby: Benefits, Challenges, and How to Get Started

Cooking is a fundamental part of life. It is an activity that brings families together and carries cultural and social meaning around the world. But cooking can also be more than a necessity or a chore. It can be a hobby you enjoy and find rewarding.

cooking

Download File ››› https://bltlly.com/2v6IY1

Cooking as a hobby means preparing fresh meals from your own kitchen for yourself or for others. It means exploring new recipes, trying new cuisines, learning new skills, and expressing your creativity. It also means saving money, eating healthier, and having fun.

In this article, we will discuss some of the benefits and challenges of cooking as a hobby and give you some tips on how to start your own culinary adventure.

Benefits of cooking as a hobby

Cooking as a hobby has many advantages that can improve your well-being in several ways. Here are some of them:

• Save money: Cooking your own meals can help you cut your food expenses. Eating out or ordering takeout can be expensive and often unhealthy. By cooking at home, you can control the quality and quantity of your food and avoid paying extra for service or delivery fees.
• Eat healthier: Cooking your own meals can also help you eat more nutritious, balanced food. You can choose fresh, healthy ingredients, avoid processed and artificial additives, and adjust the seasoning and portion size to your preference. You can also meet your dietary needs or preferences, such as vegetarian, vegan, gluten-free, or low-carb.
• Explore different cuisines: Cooking as a hobby can help you discover the diversity and richness of different cultures and their food. You can try recipes from around the world, such as Mexican tacos, Italian pasta, French soufflés, Chinese dumplings, Indian curry, and more. You can also learn about the history, traditions, and customs behind each dish and appreciate its flavors, aromas, and textures.
• Express creativity: Cooking as a hobby can help you unleash your creativity and imagination. You can experiment with different ingredients, methods, and techniques and create your own dishes. You can also decorate your food, plate it beautifully, and present it with style. You can express your personality, mood, and taste through your cooking.

Challenges of cooking as a hobby

Cooking as a hobby can also come with challenges that make it difficult or frustrating at times. Here are some of them:

• Finding time: Cooking as a hobby takes time for planning, shopping, cooking, and cleaning up, which can be hard to fit around work and other commitments.
• Budgeting: Cooking as a hobby can also get expensive if you are not careful. You may be tempted to buy expensive or exotic ingredients, tools, or appliances that you do not really need or use. You may also end up wasting food if you buy or cook too much. Set a realistic budget for your cooking hobby and stick to it. You can also look for ways to save money, such as buying in bulk, using coupons, shopping at farmers' markets, or growing your own herbs and vegetables.
• Shopping: Cooking as a hobby involves buying food and supplies. You need to find reliable sources for your ingredients, tools, and appliances. You need to compare prices, quality, and availability. You also need to transport your purchases safely and store them properly. Shopping can be time-consuming, tiring, and stressful if you do not plan ahead or do not keep a list.
• Cleaning: Cooking as a hobby involves cleaning up after yourself. You need to wash your dishes, utensils, pots, pans, and appliances. You need to wipe down your counters, tables, stove, oven, and sink. You need to take out the trash, compost, and recycling. You need to sanitize your kitchen and keep out pests and germs. Cleaning can be tedious, boring, and unpleasant if you do not do it regularly or do not have help.
• Dealing with failures: Cooking as a hobby can also end in failures and disappointments. You may run into problems such as burnt, undercooked, overcooked, soggy, dry, bland, overly spicy, or spoiled food. You may also make mistakes such as forgetting an ingredient, adding too much or too little of something, mixing up measurements, or following the wrong steps. You may also face criticism or rejection from yourself or from others who taste your food. You need to accept your failures, learn from them, and try again.

How to start cooking as a hobby

If you are interested in cooking as a hobby but do not know where to start, do not worry. You do not need to be a professional chef or have a fancy kitchen to enjoy cooking. You just need a few basic tools, some ingredients, and a bit of motivation. Here are some tips on how to start cooking as a hobby:

• Stock up on basic tools and ingredients: Invest in a few basic tools and ingredients that you will use often in your kitchen. Essential tools include a chef's knife, a cutting board, measuring cups and spoons, a mixing bowl, a wooden spoon, a spatula, a whisk, a colander, a baking sheet, a saucepan, a frying pan, and an oven mitt. Essential ingredients include salt, pepper, oil, butter, flour, sugar, eggs, milk, cheese, bread, pasta, rice, beans, canned tomatoes, onions, garlic, carrots, potatoes, chicken stock, vinegar, soy sauce, and herbs and spices.
• Follow online tutorials and cookbooks: Learn from online tutorials and cookbooks that walk you through the cooking process step by step. You can watch videos on YouTube channels such as [Tasty], [Bon Appétit], [Food Wishes], or [Jamie Oliver]. You can also read blogs or websites such as [The Kitchn], [Serious Eats], [Simply Recipes], or [Smitten Kitchen]. You can buy or borrow cookbooks that suit your level, interests, and cooking style. Popular cookbooks for beginners include [How to Cook Everything], [The Joy of Cooking], [The New Basics Cookbook], and [Salt, Fat, Acid, Heat].
• Join cooking classes or clubs: Learn from other people who share your passion for cooking by joining cooking classes or clubs in your area or online. You can find classes or clubs for different skill levels, cuisines, topics, or occasions. You can learn from professional chefs, experts, or instructors who can teach you tips and tricks, answer your questions, and give you feedback. You can also meet new friends, swap recipes, and have fun.

Conclusion

Cooking as a hobby is a great way to enjoy yourself and improve your well-being. It can help you save money, eat healthier, learn new skills, explore different cuisines, and express your creativity. It also comes with some challenges, such as finding time, budgeting, planning, shopping, cleaning, and dealing with failures. But with a few tips and some guidance, you can overcome these challenges and start your own culinary adventure.

If you are interested in cooking as a hobby, do not hesitate to give it a try. You do not need to be perfect or follow any rules. You just need to have fun and be yourself. Cooking as a hobby is not just about making food. It is about creating memories, connections, and happiness.

Frequently asked questions

Here are some common questions and answers about cooking as a hobby:

1. How do I find recipes?: You can find recipes from many sources, such as websites, blogs, videos, cookbooks, magazines, newspapers, or friends and family. You can also search for recipes by ingredient, cuisine, occasion, or keyword. You can even create your own recipes by experimenting with different combinations of ingredients, methods, and techniques.
2. How do I store food?: You can store food in different ways depending on its type, quantity, and shelf life. You can keep food in the refrigerator, freezer, pantry, or on the counter. You can also use different containers or bags, such as glass jars, plastic containers, zip-top bags, or foil wraps. Always label and date your food and check for signs of spoilage before eating it.
3. How do I cook for special diets?: You can cook for special diets by adapting your recipes to meet your own or others' dietary needs or preferences. You can substitute or omit ingredients that are not allowed or wanted in the diet, such as meat, dairy, gluten, sugar, or salt. You can also add or increase ingredients that are beneficial or recommended, such as vegetables, fruits, grains, nuts, or seeds. Always check the labels of the ingredients you use and consult a doctor or nutritionist if you have any doubts or questions.
4. How do I measure ingredients?: You can measure ingredients with different tools or methods depending on the type and amount of the ingredient. Use measuring cups and spoons to measure dry or liquid ingredients by volume, and a kitchen scale to measure ingredients by weight. You can also use visual cues or estimates to measure ingredients by size or count. Always follow the measurements given in the recipe, or use a conversion chart if you need to change units. Level off your measuring cups and spoons, and avoid packing or heaping ingredients unless the recipe says otherwise.
64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat/src/lib/utils/streamToAsyncIterable.ts b/spaces/BetterAPI/BetterChat/src/lib/utils/streamToAsyncIterable.ts deleted file mode 100644 index e935d719c8c29eb5e4efc30812f61b5f44716923..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/lib/utils/streamToAsyncIterable.ts +++ /dev/null @@ -1,15 +0,0 @@ -// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of#iterating_over_async_generators -export async function* streamToAsyncIterable( - stream: ReadableStream -): AsyncIterableIterator { - const reader = stream.getReader(); - try { - while (true) { - const { done, value } = await reader.read(); - if (done) return; - yield value; - } - } finally { - reader.releaseLock(); - } -} diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/stores/errors.ts b/spaces/BetterAPI/BetterChat_new/src/lib/stores/errors.ts deleted file mode 100644 index c7dd124ff03c1845237213b6c22ec7afefcd18e8..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/lib/stores/errors.ts +++ /dev/null @@ -1,7 +0,0 @@ -import { writable } from "svelte/store"; - -export const ERROR_MESSAGES = { - default: "Oops, something went wrong.", -}; - -export const error = writable(null); diff --git a/spaces/Boilin/URetinex-Net/network/illumination_adjustment.py b/spaces/Boilin/URetinex-Net/network/illumination_adjustment.py deleted file mode 100644 index 3967cca3a24f09bc979907a411e64ccb012d77ad..0000000000000000000000000000000000000000 --- a/spaces/Boilin/URetinex-Net/network/illumination_adjustment.py +++ /dev/null @@ -1,24 +0,0 @@ -import torch -import numpy as np -import torch.nn as nn -from network.architecture import get_conv2d_layer -import torch.nn.functional as F - -class Adjust_naive(nn.Module): - def __init__(self, opt): - super().__init__() - self.conv1 = get_conv2d_layer(in_c=2, out_c=32, k=5, s=1, p=2) - self.conv2 = get_conv2d_layer(in_c=32, out_c=32, k=5, s=1, p=2) - self.conv3 = get_conv2d_layer(in_c=32, out_c=32, k=5, s=1, p=2) - self.conv4 = get_conv2d_layer(in_c=32, out_c=1, k=5, s=1, p=2) - self.leaky_relu = nn.LeakyReLU(0.2) - self.relu = nn.ReLU() - def forward(self, l, alpha): - input = torch.cat([l, alpha], dim=1) - x = self.conv1(input) - x = self.conv2(self.leaky_relu(x)) - x = self.conv3(self.leaky_relu(x)) - x = self.conv4(self.leaky_relu(x)) - x = self.relu(x) - return x - diff --git a/spaces/CGMatter/modelscope-text-to-video-synthesis/style.css b/spaces/CGMatter/modelscope-text-to-video-synthesis/style.css deleted file mode 100644 index 2f399e973fe275a7299ae85c6a85bd5d35eb64cb..0000000000000000000000000000000000000000 --- a/spaces/CGMatter/modelscope-text-to-video-synthesis/style.css +++ /dev/null @@ -1,191 +0,0 @@ -/* -This CSS file is copied from here: -https://huggingface.co/spaces/stabilityai/stable-diffusion/blob/2794a3c3ba66115c307075098e713f572b08bf80/app.py -*/ - -h1 { - text-align: center; -} - -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} - -.gr-button { - color: white; - border-color: black; - background: black; -} - -input[type='range'] { - accent-color: black; -} - -.dark input[type='range'] { - accent-color: #dfdfdf; -} - -.container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; -} - -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} - -#gallery>div>.h-full { - 
min-height: 20rem; -} - -.details:hover { - text-decoration: underline; -} - -.gr-button { - white-space: nowrap; -} - -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} - -#advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; -} - -#advanced-options { - display: none; - margin-bottom: 20px; -} - -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} - -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} - -.dark .footer { - border-color: #303030; -} - -.dark .footer>p { - background: #0b0f19; -} - -.acknowledgments h4 { - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { - transform: rotate(0deg); - } - - to { - transform: rotate(360deg); - } -} - -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - width: 13rem; - margin-top: 10px; - margin-left: auto; -} - -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor: pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - right: 0; -} - -#share-btn * { - all: unset; -} - -#share-btn-container div:nth-child(-n+2) { - width: auto !important; - min-height: 0px !important; -} - -#share-btn-container .wrap { - display: none !important; -} - -.gr-form { - flex: 1 1 50%; - border-top-right-radius: 0; - border-bottom-right-radius: 0; -} - -#prompt-container { - gap: 0; -} - -#prompt-text-input, -#negative-prompt-text-input { - padding: .45rem 0.625rem -} - -#component-16 { - border-top-width: 1px !important; - margin-top: 1em -} - -.image_duplication { - position: absolute; - width: 100px; - left: 50px -} - -#component-0 { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; -} diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/__init__.py deleted file mode 100644 index 2fcdeb45a03d3835b3c2498ca8021a11d8cb4758..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -from .config import add_tridentnet_config -from .trident_backbone import ( - TridentBottleneckBlock, - build_trident_resnet_backbone, - make_trident_stage, -) -from .trident_rpn import TridentRPN -from .trident_rcnn import TridentRes5ROIHeads, TridentStandardROIHeads diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/sort.h deleted file mode 100644 index 0900743d8d4f3106afe19a2373bac45657b41247..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/sort.h +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file sort.h - * \brief Sequential implementations of sort algorithms. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -template -__host__ __device__ -void stable_sort(sequential::execution_policy &exec, - RandomAccessIterator first, - RandomAccessIterator last, - StrictWeakOrdering comp); - - -template -__host__ __device__ -void stable_sort_by_key(sequential::execution_policy &exec, - RandomAccessIterator1 first1, - RandomAccessIterator1 last1, - RandomAccessIterator2 first2, - StrictWeakOrdering comp); - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/models/losses/__init__.py b/spaces/CVPR/WALT/mmdet/models/losses/__init__.py deleted file mode 100644 index 297aa228277768eb0ba0e8a377f19704d1feeca8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/losses/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -from .accuracy import Accuracy, accuracy -from .ae_loss import AssociativeEmbeddingLoss -from .balanced_l1_loss import BalancedL1Loss, balanced_l1_loss -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .focal_loss import FocalLoss, sigmoid_focal_loss -from .gaussian_focal_loss import GaussianFocalLoss -from .gfocal_loss import DistributionFocalLoss, QualityFocalLoss -from .ghm_loss import GHMC, GHMR -from .iou_loss import (BoundedIoULoss, CIoULoss, DIoULoss, GIoULoss, IoULoss, - bounded_iou_loss, iou_loss) -from .kd_loss import KnowledgeDistillationKLDivLoss -from .mse_loss import MSELoss, mse_loss -from .pisa_loss import carl_loss, isr_p -from .smooth_l1_loss import L1Loss, SmoothL1Loss, l1_loss, smooth_l1_loss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss -from .varifocal_loss import VarifocalLoss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'sigmoid_focal_loss', - 'FocalLoss', 'smooth_l1_loss', 'SmoothL1Loss', 'balanced_l1_loss', - 'BalancedL1Loss', 'mse_loss', 'MSELoss', 'iou_loss', 'bounded_iou_loss', - 'IoULoss', 'BoundedIoULoss', 'GIoULoss', 
'DIoULoss', 'CIoULoss', 'GHMC', - 'GHMR', 'reduce_loss', 'weight_reduce_loss', 'weighted_loss', 'L1Loss', - 'l1_loss', 'isr_p', 'carl_loss', 'AssociativeEmbeddingLoss', - 'GaussianFocalLoss', 'QualityFocalLoss', 'DistributionFocalLoss', - 'VarifocalLoss', 'KnowledgeDistillationKLDivLoss' -] diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/docs/intro.md b/spaces/CVPR/unicl-zero-shot-img-recog/docs/intro.md deleted file mode 100644 index dbd256115061cd13df6290bb980abd961e60a8ba..0000000000000000000000000000000000000000 --- a/spaces/CVPR/unicl-zero-shot-img-recog/docs/intro.md +++ /dev/null @@ -1,6 +0,0 @@ - -["**Unifiled Contrastive Learning in Image-Text-Label Space. CVPR 2022**"](https://arxiv.org/abs/2204.03610) by [Jianwei Yang*](https://jwyang.github.io/), [Chunyuan Li*](https://chunyuan.li/), [Pengchuan Zhang*](https://pzzhang.github.io/pzzhang/), [Bin Xiao*](https://www.microsoft.com/en-us/research/people/bixi/), [Ce Liu](http://people.csail.mit.edu/celiu/), [Lu Yuan](https://scholar.google.com/citations?user=k9TsUVsAAAAJ&hl=en) and [Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Fjfgao%2F). - -In this paper, we introduce a new perspective on commonly used image-label and image-text data by residing them in an image-text-label space. In this space, a new learning paradigm, called **Unified Contrastive Learning (UniCL)** with a single learning objective is proposed to seamlessly prompt the synergy of two data types. We demonstrate that UniCL is an effective way of learning **semantically rich yet discriminative representations**, universally for image recognition in zero-shot, linear-probe, fully finetuning and transfer learning scenarios. When scaled up to billions of data, UniCL can exclusively learn a powerful visual-semantic representation supporting dozens of downstream tasks shown in [Florence](https://arxiv.org/pdf/2111.11432v1.pdf). 
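To make the objective described above concrete, here is a minimal PyTorch sketch of a unified image-text-label contrastive loss. This is an illustration under simplifying assumptions, not code from the UniCL repository; the function name `unified_contrastive_loss`, the batch shapes, and the `temperature` default are placeholders chosen for the example.

```python
# Minimal sketch of a unified image-text-label contrastive objective
# (illustrative only; not the UniCL repository's implementation).
import torch
import torch.nn.functional as F


def unified_contrastive_loss(image_feats, text_feats, labels, temperature=0.07):
    """image_feats, text_feats: (B, D) embeddings; labels: (B,) integer class ids."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature        # (B, B) similarity matrix
    # In image-text-label space, every pair that shares a label is a positive,
    # not just the matching (i, i) pair on the diagonal.
    targets = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    targets = targets / targets.sum(dim=1, keepdim=True)       # row-normalise target distribution
    loss_i2t = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    # targets is symmetric, so the same matrix serves the text-to-image direction
    loss_t2i = -(targets * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)


# Toy usage with random features
feats_i = torch.randn(8, 128)
feats_t = torch.randn(8, 128)
labels = torch.randint(0, 4, (8,))
print(unified_contrastive_loss(feats_i, feats_t, labels))
```

The only difference from a standard CLIP-style bidirectional loss is the target matrix: pairs sharing a label count as positives, which is what lets image-label and image-text data be trained under one objective.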
- -Code: https://github.com/microsoft/unicl \ No newline at end of file diff --git a/spaces/CZ5624/anime-remove-background/app.py b/spaces/CZ5624/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/CZ5624/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/grab/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/grab/__init__.py deleted file mode 100644 index d40c0c88560d21fb947e730fbd9980e23a49d34d..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/grab/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -from pathlib import Path -from typing import List - -from meme_generator import add_meme -from pil_utils import BuildImage - -img_dir = Path(__file__).parent / "images" - - -def grab(images: List[BuildImage], texts, args): - frame = BuildImage.open(img_dir / "0.png") - frame.paste( - images[0].convert("RGBA").resize((500, 500), keep_ratio=True), below=True - ) - return frame.save_jpg() - - -add_meme("grab", grab, min_images=1, max_images=1, keywords=["抓"]) diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Easychat.py b/spaces/CofAI/chat/g4f/Provider/Providers/Easychat.py deleted file mode 100644 index eb740da991eb8f740489f6bc76a1ad55f006663b..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/Easychat.py +++ /dev/null @@ -1,55 +0,0 @@ -import requests -import os -import 
json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://free.easychat.work' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'authority': 'free.easychat.work', - 'accept': 'text/event-stream', - 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'content-type': 'application/json', - 'endpoint': '', - 'origin': 'https://free.easychat.work', - 'plugins': '0', - 'referer': 'https://free.easychat.work/', - 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - 'usesearch': 'false', - 'x-requested-with': 'XMLHttpRequest', - } - - json_data = { - 'messages': messages, - 'stream': True, - 'model': model, - 'temperature': 0.5, - 'presence_penalty': 0, - 'frequency_penalty': 0, - 'top_p': 1, - } - - response = requests.post('https://free.easychat.work/api/openai/v1/chat/completions', - headers=headers, json=json_data) - - for chunk in response.iter_lines(): - if b'content' in chunk: - data = json.loads(chunk.decode().split('data: ')[1]) - yield (data['choices'][0]['delta']['content']) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/client_ws.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/client_ws.py deleted file mode 100644 index 9a8ba84ca5082ad6d672c3837d4810e467a8080e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/client_ws.py +++ /dev/null @@ -1,300 +0,0 @@ -"""WebSocket client for asyncio.""" - -import asyncio -from typing import Any, Optional, cast - -import async_timeout - -from .client_exceptions import ClientError -from .client_reqrep import ClientResponse -from .helpers import call_later, set_result -from .http import ( - WS_CLOSED_MESSAGE, - WS_CLOSING_MESSAGE, - WebSocketError, - WSCloseCode, - WSMessage, - WSMsgType, -) -from .http_websocket import WebSocketWriter # WSMessage -from .streams import EofStream, FlowControlDataQueue -from .typedefs import ( - DEFAULT_JSON_DECODER, - DEFAULT_JSON_ENCODER, - JSONDecoder, - JSONEncoder, -) - - -class ClientWebSocketResponse: - def __init__( - self, - reader: "FlowControlDataQueue[WSMessage]", - writer: WebSocketWriter, - protocol: Optional[str], - response: ClientResponse, - timeout: float, - autoclose: bool, - autoping: bool, - loop: asyncio.AbstractEventLoop, - *, - receive_timeout: Optional[float] = None, - heartbeat: Optional[float] = None, - compress: int = 0, - client_notakeover: bool = False, - ) -> None: - self._response = response - self._conn = response.connection - - self._writer = writer - self._reader = reader - self._protocol = protocol - self._closed = False - self._closing = False - self._close_code: Optional[int] = None - self._timeout = timeout - self._receive_timeout = 
receive_timeout - self._autoclose = autoclose - self._autoping = autoping - self._heartbeat = heartbeat - self._heartbeat_cb: Optional[asyncio.TimerHandle] = None - if heartbeat is not None: - self._pong_heartbeat = heartbeat / 2.0 - self._pong_response_cb: Optional[asyncio.TimerHandle] = None - self._loop = loop - self._waiting: Optional[asyncio.Future[bool]] = None - self._exception: Optional[BaseException] = None - self._compress = compress - self._client_notakeover = client_notakeover - - self._reset_heartbeat() - - def _cancel_heartbeat(self) -> None: - if self._pong_response_cb is not None: - self._pong_response_cb.cancel() - self._pong_response_cb = None - - if self._heartbeat_cb is not None: - self._heartbeat_cb.cancel() - self._heartbeat_cb = None - - def _reset_heartbeat(self) -> None: - self._cancel_heartbeat() - - if self._heartbeat is not None: - self._heartbeat_cb = call_later( - self._send_heartbeat, self._heartbeat, self._loop - ) - - def _send_heartbeat(self) -> None: - if self._heartbeat is not None and not self._closed: - # fire-and-forget a task is not perfect but maybe ok for - # sending ping. Otherwise we need a long-living heartbeat - # task in the class. - self._loop.create_task(self._writer.ping()) - - if self._pong_response_cb is not None: - self._pong_response_cb.cancel() - self._pong_response_cb = call_later( - self._pong_not_received, self._pong_heartbeat, self._loop - ) - - def _pong_not_received(self) -> None: - if not self._closed: - self._closed = True - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = asyncio.TimeoutError() - self._response.close() - - @property - def closed(self) -> bool: - return self._closed - - @property - def close_code(self) -> Optional[int]: - return self._close_code - - @property - def protocol(self) -> Optional[str]: - return self._protocol - - @property - def compress(self) -> int: - return self._compress - - @property - def client_notakeover(self) -> bool: - return self._client_notakeover - - def get_extra_info(self, name: str, default: Any = None) -> Any: - """extra info from connection transport""" - conn = self._response.connection - if conn is None: - return default - transport = conn.transport - if transport is None: - return default - return transport.get_extra_info(name, default) - - def exception(self) -> Optional[BaseException]: - return self._exception - - async def ping(self, message: bytes = b"") -> None: - await self._writer.ping(message) - - async def pong(self, message: bytes = b"") -> None: - await self._writer.pong(message) - - async def send_str(self, data: str, compress: Optional[int] = None) -> None: - if not isinstance(data, str): - raise TypeError("data argument must be str (%r)" % type(data)) - await self._writer.send(data, binary=False, compress=compress) - - async def send_bytes(self, data: bytes, compress: Optional[int] = None) -> None: - if not isinstance(data, (bytes, bytearray, memoryview)): - raise TypeError("data argument must be byte-ish (%r)" % type(data)) - await self._writer.send(data, binary=True, compress=compress) - - async def send_json( - self, - data: Any, - compress: Optional[int] = None, - *, - dumps: JSONEncoder = DEFAULT_JSON_ENCODER, - ) -> None: - await self.send_str(dumps(data), compress=compress) - - async def close(self, *, code: int = WSCloseCode.OK, message: bytes = b"") -> bool: - # we need to break `receive()` cycle first, - # `close()` may be called from different task - if self._waiting is not None and not self._closed: - 
self._reader.feed_data(WS_CLOSING_MESSAGE, 0) - await self._waiting - - if not self._closed: - self._cancel_heartbeat() - self._closed = True - try: - await self._writer.close(code, message) - except asyncio.CancelledError: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._response.close() - raise - except Exception as exc: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = exc - self._response.close() - return True - - if self._closing: - self._response.close() - return True - - while True: - try: - async with async_timeout.timeout(self._timeout): - msg = await self._reader.read() - except asyncio.CancelledError: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._response.close() - raise - except Exception as exc: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = exc - self._response.close() - return True - - if msg.type == WSMsgType.CLOSE: - self._close_code = msg.data - self._response.close() - return True - else: - return False - - async def receive(self, timeout: Optional[float] = None) -> WSMessage: - while True: - if self._waiting is not None: - raise RuntimeError("Concurrent call to receive() is not allowed") - - if self._closed: - return WS_CLOSED_MESSAGE - elif self._closing: - await self.close() - return WS_CLOSED_MESSAGE - - try: - self._waiting = self._loop.create_future() - try: - async with async_timeout.timeout(timeout or self._receive_timeout): - msg = await self._reader.read() - self._reset_heartbeat() - finally: - waiter = self._waiting - self._waiting = None - set_result(waiter, True) - except (asyncio.CancelledError, asyncio.TimeoutError): - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - raise - except EofStream: - self._close_code = WSCloseCode.OK - await self.close() - return WSMessage(WSMsgType.CLOSED, None, None) - except ClientError: - self._closed = True - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - return WS_CLOSED_MESSAGE - except WebSocketError as exc: - self._close_code = exc.code - await self.close(code=exc.code) - return WSMessage(WSMsgType.ERROR, exc, None) - except Exception as exc: - self._exception = exc - self._closing = True - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - await self.close() - return WSMessage(WSMsgType.ERROR, exc, None) - - if msg.type == WSMsgType.CLOSE: - self._closing = True - self._close_code = msg.data - if not self._closed and self._autoclose: - await self.close() - elif msg.type == WSMsgType.CLOSING: - self._closing = True - elif msg.type == WSMsgType.PING and self._autoping: - await self.pong(msg.data) - continue - elif msg.type == WSMsgType.PONG and self._autoping: - continue - - return msg - - async def receive_str(self, *, timeout: Optional[float] = None) -> str: - msg = await self.receive(timeout) - if msg.type != WSMsgType.TEXT: - raise TypeError(f"Received message {msg.type}:{msg.data!r} is not str") - return cast(str, msg.data) - - async def receive_bytes(self, *, timeout: Optional[float] = None) -> bytes: - msg = await self.receive(timeout) - if msg.type != WSMsgType.BINARY: - raise TypeError(f"Received message {msg.type}:{msg.data!r} is not bytes") - return cast(bytes, msg.data) - - async def receive_json( - self, - *, - loads: JSONDecoder = DEFAULT_JSON_DECODER, - timeout: Optional[float] = None, - ) -> Any: - data = await self.receive_str(timeout=timeout) - return loads(data) - - def __aiter__(self) -> "ClientWebSocketResponse": - return self - - async def __anext__(self) -> WSMessage: - msg = await self.receive() - if msg.type in (WSMsgType.CLOSE, 
WSMsgType.CLOSING, WSMsgType.CLOSED): - raise StopAsyncIteration - return msg diff --git a/spaces/Dimalker/Faceswapper/roop/processors/frame/face_enhancer.py b/spaces/Dimalker/Faceswapper/roop/processors/frame/face_enhancer.py deleted file mode 100644 index e4c2dec05f834f7732ac62f0db6dcde416ed0b30..0000000000000000000000000000000000000000 --- a/spaces/Dimalker/Faceswapper/roop/processors/frame/face_enhancer.py +++ /dev/null @@ -1,81 +0,0 @@ -from typing import Any, List, Callable -import cv2 -import threading -import gfpgan - -import roop.globals -import roop.processors.frame.core -from roop.core import update_status -from roop.face_analyser import get_one_face -from roop.typing import Frame, Face -from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video - -FACE_ENHANCER = None -THREAD_SEMAPHORE = threading.Semaphore() -THREAD_LOCK = threading.Lock() -NAME = 'ROOP.FACE-ENHANCER' - - -def get_face_enhancer() -> Any: - global FACE_ENHANCER - - with THREAD_LOCK: - if FACE_ENHANCER is None: - model_path = resolve_relative_path('../models/GFPGANv1.4.pth') - # todo: set models path https://github.com/TencentARC/GFPGAN/issues/399 - FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined] - return FACE_ENHANCER - - -def pre_check() -> bool: - download_directory_path = resolve_relative_path('../models') - conditional_download(download_directory_path, ['https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth']) - return True - - -def pre_start() -> bool: - if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path): - update_status('Select an image or video for target path.', NAME) - return False - return True - - -def post_process() -> None: - global FACE_ENHANCER - - FACE_ENHANCER = None - - -def enhance_face(temp_frame: Frame) -> Frame: - with THREAD_SEMAPHORE: - _, _, temp_frame = get_face_enhancer().enhance( - temp_frame, - paste_back=True - ) - return temp_frame - - -def process_frame(source_face: Face, temp_frame: Frame) -> Frame: - target_face = get_one_face(temp_frame) - if target_face: - temp_frame = enhance_face(temp_frame) - return temp_frame - - -def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None: - for temp_frame_path in temp_frame_paths: - temp_frame = cv2.imread(temp_frame_path) - result = process_frame(None, temp_frame) - cv2.imwrite(temp_frame_path, result) - if update: - update() - - -def process_image(source_path: str, target_path: str, output_path: str) -> None: - target_frame = cv2.imread(target_path) - result = process_frame(None, target_frame) - cv2.imwrite(output_path, result) - - -def process_video(source_path: str, temp_frame_paths: List[str]) -> None: - roop.processors.frame.core.process_video(None, temp_frame_paths, process_frames) diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/train.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/train.py deleted file mode 100644 index 7295f159b0427aef89a5944a0d1eb4c23ee85a7f..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/train.py +++ /dev/null @@ -1,413 +0,0 @@ -import argparse -import math -import random -import os - -import numpy as np -import torch -from torch import nn, autograd, optim -from torch.nn import functional as F -from torch.utils import data -import torch.distributed as dist -from torchvision import transforms, utils -from 
tqdm import tqdm - -try: - import wandb - -except ImportError: - wandb = None - -from model import Generator, Discriminator -from dataset import MultiResolutionDataset -from distributed import ( - get_rank, - synchronize, - reduce_loss_dict, - reduce_sum, - get_world_size, -) - - -def data_sampler(dataset, shuffle, distributed): - if distributed: - return data.distributed.DistributedSampler(dataset, shuffle=shuffle) - - if shuffle: - return data.RandomSampler(dataset) - - else: - return data.SequentialSampler(dataset) - - -def requires_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - -def accumulate(model1, model2, decay=0.999): - par1 = dict(model1.named_parameters()) - par2 = dict(model2.named_parameters()) - - for k in par1.keys(): - par1[k].data.mul_(decay).add_(1 - decay, par2[k].data) - - -def sample_data(loader): - while True: - for batch in loader: - yield batch - - -def d_logistic_loss(real_pred, fake_pred): - real_loss = F.softplus(-real_pred) - fake_loss = F.softplus(fake_pred) - - return real_loss.mean() + fake_loss.mean() - - -def d_r1_loss(real_pred, real_img): - grad_real, = autograd.grad( - outputs=real_pred.sum(), inputs=real_img, create_graph=True - ) - grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean() - - return grad_penalty - - -def g_nonsaturating_loss(fake_pred): - loss = F.softplus(-fake_pred).mean() - - return loss - - -def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01): - noise = torch.randn_like(fake_img) / math.sqrt( - fake_img.shape[2] * fake_img.shape[3] - ) - grad, = autograd.grad( - outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True - ) - path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1)) - - path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length) - - path_penalty = (path_lengths - path_mean).pow(2).mean() - - return path_penalty, path_mean.detach(), path_lengths - - -def make_noise(batch, latent_dim, n_noise, device): - if n_noise == 1: - return torch.randn(batch, latent_dim, device=device) - - noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0) - - return noises - - -def mixing_noise(batch, latent_dim, prob, device): - if prob > 0 and random.random() < prob: - return make_noise(batch, latent_dim, 2, device) - - else: - return [make_noise(batch, latent_dim, 1, device)] - - -def set_grad_none(model, targets): - for n, p in model.named_parameters(): - if n in targets: - p.grad = None - - -def train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device): - loader = sample_data(loader) - - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01) - - mean_path_length = 0 - - d_loss_val = 0 - r1_loss = torch.tensor(0.0, device=device) - g_loss_val = 0 - path_loss = torch.tensor(0.0, device=device) - path_lengths = torch.tensor(0.0, device=device) - mean_path_length_avg = 0 - loss_dict = {} - - if args.distributed: - g_module = generator.module - d_module = discriminator.module - - else: - g_module = generator - d_module = discriminator - - accum = 0.5 ** (32 / (10 * 1000)) - - sample_z = torch.randn(args.n_sample, args.latent, device=device) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - - break - - real_img = next(loader) - real_img = real_img.to(device) - - requires_grad(generator, False) - requires_grad(discriminator, True) - - noise = mixing_noise(args.batch, args.latent, args.mixing, 
device) - fake_img, _ = generator(noise) - fake_pred = discriminator(fake_img) - - real_pred = discriminator(real_img) - d_loss = d_logistic_loss(real_pred, fake_pred) - - loss_dict["d"] = d_loss - loss_dict["real_score"] = real_pred.mean() - loss_dict["fake_score"] = fake_pred.mean() - - discriminator.zero_grad() - d_loss.backward() - d_optim.step() - - d_regularize = i % args.d_reg_every == 0 - - if d_regularize: - real_img.requires_grad = True - real_pred = discriminator(real_img) - r1_loss = d_r1_loss(real_pred, real_img) - - discriminator.zero_grad() - (args.r1 / 2 * r1_loss * args.d_reg_every + 0 * real_pred[0]).backward() - - d_optim.step() - - loss_dict["r1"] = r1_loss - - requires_grad(generator, True) - requires_grad(discriminator, False) - - noise = mixing_noise(args.batch, args.latent, args.mixing, device) - fake_img, _ = generator(noise) - fake_pred = discriminator(fake_img) - g_loss = g_nonsaturating_loss(fake_pred) - - loss_dict["g"] = g_loss - - generator.zero_grad() - g_loss.backward() - g_optim.step() - - g_regularize = i % args.g_reg_every == 0 - - if g_regularize: - path_batch_size = max(1, args.batch // args.path_batch_shrink) - noise = mixing_noise(path_batch_size, args.latent, args.mixing, device) - fake_img, latents = generator(noise, return_latents=True) - - path_loss, mean_path_length, path_lengths = g_path_regularize( - fake_img, latents, mean_path_length - ) - - generator.zero_grad() - weighted_path_loss = args.path_regularize * args.g_reg_every * path_loss - - if args.path_batch_shrink: - weighted_path_loss += 0 * fake_img[0, 0, 0, 0] - - weighted_path_loss.backward() - - g_optim.step() - - mean_path_length_avg = ( - reduce_sum(mean_path_length).item() / get_world_size() - ) - - loss_dict["path"] = path_loss - loss_dict["path_length"] = path_lengths.mean() - - accumulate(g_ema, g_module, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - d_loss_val = loss_reduced["d"].mean().item() - g_loss_val = loss_reduced["g"].mean().item() - r1_val = loss_reduced["r1"].mean().item() - path_loss_val = loss_reduced["path"].mean().item() - real_score_val = loss_reduced["real_score"].mean().item() - fake_score_val = loss_reduced["fake_score"].mean().item() - path_length_val = loss_reduced["path_length"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"d: {d_loss_val:.4f}; g: {g_loss_val:.4f}; r1: {r1_val:.4f}; " - f"path: {path_loss_val:.4f}; mean path: {mean_path_length_avg:.4f}" - ) - ) - - if wandb and args.wandb: - wandb.log( - { - "Generator": g_loss_val, - "Discriminator": d_loss_val, - "R1": r1_val, - "Path Length Regularization": path_loss_val, - "Mean Path Length": mean_path_length, - "Real Score": real_score_val, - "Fake Score": fake_score_val, - "Path Length": path_length_val, - } - ) - - if i % 100 == 0: - with torch.no_grad(): - g_ema.eval() - sample, _ = g_ema([sample_z]) - utils.save_image( - sample, - f"sample/{str(i).zfill(6)}.png", - nrow=int(args.n_sample ** 0.5), - normalize=True, - range=(-1, 1), - ) - - if i % 10000 == 0: - torch.save( - { - "g": g_module.state_dict(), - "d": d_module.state_dict(), - "g_ema": g_ema.state_dict(), - "g_optim": g_optim.state_dict(), - "d_optim": d_optim.state_dict(), - }, - f"checkpoint/{str(i).zfill(6)}.pt", - ) - - -if __name__ == "__main__": - device = "cuda" - - parser = argparse.ArgumentParser() - - parser.add_argument("path", type=str) - parser.add_argument("--iter", type=int, default=800000) - parser.add_argument("--batch", type=int, default=16) - parser.add_argument("--n_sample", type=int, 
default=64) - parser.add_argument("--size", type=int, default=256) - parser.add_argument("--r1", type=float, default=10) - parser.add_argument("--path_regularize", type=float, default=2) - parser.add_argument("--path_batch_shrink", type=int, default=2) - parser.add_argument("--d_reg_every", type=int, default=16) - parser.add_argument("--g_reg_every", type=int, default=4) - parser.add_argument("--mixing", type=float, default=0.9) - parser.add_argument("--ckpt", type=str, default=None) - parser.add_argument("--lr", type=float, default=0.002) - parser.add_argument("--channel_multiplier", type=int, default=2) - parser.add_argument("--wandb", action="store_true") - parser.add_argument("--local_rank", type=int, default=0) - - args = parser.parse_args() - - n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1 - args.distributed = n_gpu > 1 - - if args.distributed: - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://") - synchronize() - - args.latent = 512 - args.n_mlp = 8 - - args.start_iter = 0 - - generator = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - discriminator = Discriminator( - args.size, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema.eval() - accumulate(g_ema, generator, 0) - - g_reg_ratio = args.g_reg_every / (args.g_reg_every + 1) - d_reg_ratio = args.d_reg_every / (args.d_reg_every + 1) - - g_optim = optim.Adam( - generator.parameters(), - lr=args.lr * g_reg_ratio, - betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio), - ) - d_optim = optim.Adam( - discriminator.parameters(), - lr=args.lr * d_reg_ratio, - betas=(0 ** d_reg_ratio, 0.99 ** d_reg_ratio), - ) - - if args.ckpt is not None: - print("load model:", args.ckpt) - - ckpt = torch.load(args.ckpt, map_location=lambda storage, loc: storage) - - try: - ckpt_name = os.path.basename(args.ckpt) - args.start_iter = int(os.path.splitext(ckpt_name)[0]) - - except ValueError: - pass - - generator.load_state_dict(ckpt["g"]) - discriminator.load_state_dict(ckpt["d"]) - g_ema.load_state_dict(ckpt["g_ema"]) - - g_optim.load_state_dict(ckpt["g_optim"]) - d_optim.load_state_dict(ckpt["d_optim"]) - - if args.distributed: - generator = nn.parallel.DistributedDataParallel( - generator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - ) - - discriminator = nn.parallel.DistributedDataParallel( - discriminator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - ) - - transform = transforms.Compose( - [ - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True), - ] - ) - - dataset = MultiResolutionDataset(args.path, transform, args.size) - loader = data.DataLoader( - dataset, - batch_size=args.batch, - sampler=data_sampler(dataset, shuffle=True, distributed=args.distributed), - drop_last=True, - ) - - if get_rank() == 0 and wandb is not None and args.wandb: - wandb.init(project="stylegan 2") - - train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device) diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/diffusionmodules/model.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/diffusionmodules/model.py deleted file mode 100644 index 
d3a5db6aa2ef915e270f1ae135e4a9918fdd884c..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/diffusionmodules/model.py +++ /dev/null @@ -1,776 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = 
self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - 
block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, t=None): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else 
z_channels, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x): - #assert x.shape[2] == x.shape[3] == self.resolution, "{}, {}, {}".format(x.shape[2], x.shape[3], self.resolution) - - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, **ignorekwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = 
nonlinearity(h) - h = self.conv_out(h) - return h - - -class VUNet(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - in_channels, c_channels, - resolution, z_channels, use_timestep=False, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(c_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - self.z_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=1, - stride=1, - padding=0) - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=2*block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, z): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != 
self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - z = self.z_in(z) - h = torch.cat((h,z),dim=1) - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - diff --git a/spaces/EsoCode/text-generation-webui/css/chat.js b/spaces/EsoCode/text-generation-webui/css/chat.js deleted file mode 100644 index e304f1254732e475bf177ee849ac51d4f3e30f46..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/css/chat.js +++ /dev/null @@ -1,4 +0,0 @@ -document.getElementById("main").childNodes[0].style = "max-width: 800px; margin-left: auto; margin-right: auto"; -document.getElementById("extensions").style.setProperty("max-width", 
"800px"); -document.getElementById("extensions").style.setProperty("margin-left", "auto"); -document.getElementById("extensions").style.setProperty("margin-right", "auto"); diff --git a/spaces/Fawis/Awooga_xd/README.md b/spaces/Fawis/Awooga_xd/README.md deleted file mode 100644 index 8f7132589ea447189b044ea758df38a68bfe8793..0000000000000000000000000000000000000000 --- a/spaces/Fawis/Awooga_xd/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Awooga Xd -emoji: 📊 -colorFrom: pink -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/labels.py b/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/labels.py deleted file mode 100644 index a08714462a696c36bb5299ce9715c00b7331dbb2..0000000000000000000000000000000000000000 --- a/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/labels.py +++ /dev/null @@ -1,345 +0,0 @@ -coco_category_index = { - 1: {'id': 1, 'name': 'person'}, - 2: {'id': 2, 'name': 'bicycle'}, - 3: {'id': 3, 'name': 'car'}, - 4: {'id': 4, 'name': 'motorcycle'}, - 5: {'id': 5, 'name': 'airplane'}, - 6: {'id': 6, 'name': 'bus'}, - 7: {'id': 7, 'name': 'train'}, - 8: {'id': 8, 'name': 'truck'}, - 9: {'id': 9, 'name': 'boat'}, - 10: {'id': 10, 'name': 'traffic light'}, - 11: {'id': 11, 'name': 'fire hydrant'}, - 13: {'id': 13, 'name': 'stop sign'}, - 14: {'id': 14, 'name': 'parking meter'}, - 15: {'id': 15, 'name': 'bench'}, - 16: {'id': 16, 'name': 'bird'}, - 17: {'id': 17, 'name': 'cat'}, - 18: {'id': 18, 'name': 'dog'}, - 19: {'id': 19, 'name': 'horse'}, - 20: {'id': 20, 'name': 'sheep'}, - 21: {'id': 21, 'name': 'cow'}, - 22: {'id': 22, 'name': 'elephant'}, - 23: {'id': 23, 'name': 'bear'}, - 24: {'id': 24, 'name': 'zebra'}, - 25: {'id': 25, 'name': 'giraffe'}, - 27: {'id': 27, 'name': 'backpack'}, - 28: {'id': 28, 'name': 'umbrella'}, - 31: {'id': 31, 'name': 'handbag'}, - 32: {'id': 32, 'name': 'tie'}, - 33: {'id': 33, 'name': 'suitcase'}, - 34: {'id': 34, 'name': 'frisbee'}, - 35: {'id': 35, 'name': 'skis'}, - 36: {'id': 36, 'name': 'snowboard'}, - 37: {'id': 37, 'name': 'sports ball'}, - 38: {'id': 38, 'name': 'kite'}, - 39: {'id': 39, 'name': 'baseball bat'}, - 40: {'id': 40, 'name': 'baseball glove'}, - 41: {'id': 41, 'name': 'skateboard'}, - 42: {'id': 42, 'name': 'surfboard'}, - 43: {'id': 43, 'name': 'tennis racket'}, - 44: {'id': 44, 'name': 'bottle'}, - 46: {'id': 46, 'name': 'wine glass'}, - 47: {'id': 47, 'name': 'cup'}, - 48: {'id': 48, 'name': 'fork'}, - 49: {'id': 49, 'name': 'knife'}, - 50: {'id': 50, 'name': 'spoon'}, - 51: {'id': 51, 'name': 'bowl'}, - 52: {'id': 52, 'name': 'banana'}, - 53: {'id': 53, 'name': 'apple'}, - 54: {'id': 54, 'name': 'sandwich'}, - 55: {'id': 55, 'name': 'orange'}, - 56: {'id': 56, 'name': 'broccoli'}, - 57: {'id': 57, 'name': 'carrot'}, - 58: {'id': 58, 'name': 'hot dog'}, - 59: {'id': 59, 'name': 'pizza'}, - 60: {'id': 60, 'name': 'donut'}, - 61: {'id': 61, 'name': 'cake'}, - 62: {'id': 62, 'name': 'chair'}, - 63: {'id': 63, 'name': 'couch'}, - 64: {'id': 64, 'name': 'potted plant'}, - 65: {'id': 65, 'name': 'bed'}, - 67: {'id': 67, 'name': 'dining table'}, - 70: {'id': 70, 'name': 'toilet'}, - 72: {'id': 72, 'name': 'tv'}, - 73: {'id': 73, 'name': 'laptop'}, - 74: {'id': 74, 'name': 'mouse'}, - 75: {'id': 75, 'name': 'remote'}, - 76: {'id': 76, 'name': 'keyboard'}, - 77: {'id': 77, 'name': 'cell phone'}, - 78: {'id': 78, 'name': 
'microwave'}, - 79: {'id': 79, 'name': 'oven'}, - 80: {'id': 80, 'name': 'toaster'}, - 81: {'id': 81, 'name': 'sink'}, - 82: {'id': 82, 'name': 'refrigerator'}, - 84: {'id': 84, 'name': 'book'}, - 85: {'id': 85, 'name': 'clock'}, - 86: {'id': 86, 'name': 'vase'}, - 87: {'id': 87, 'name': 'scissors'}, - 88: {'id': 88, 'name': 'teddy bear'}, - 89: {'id': 89, 'name': 'hair drier'}, - 90: {'id': 90, 'name': 'toothbrush'}, -} - -rgb_colors = { - 1: (240, 248, 255), - 2: (250, 235, 215), - 3: (0, 255, 255), - 4: (127, 255, 212), - 5: (240, 255, 255), - 6: (245, 245, 220), - 7: (255, 228, 196), - 8: (255, 255, 255), - 9: (255, 235, 205), - 10: (0, 0, 255), - 11: (138, 43, 226), - 12: (165, 42, 42), - 13: (222, 184, 135), - 14: (95, 158, 160), - 15: (127, 255, 0), - 16: (210, 105, 30), - 17: (255, 127, 80), - 18: (100, 149, 237), - 19: (255, 248, 220), - 20: (220, 20, 60), - 21: (0, 255, 255), - 22: (0, 0, 139), - 23: (0, 139, 139), - 24: (184, 134, 11), - 25: (169, 169, 169), - 26: (0, 100, 0), - 27: (169, 169, 169), - 28: (189, 183, 107), - 29: (139, 0, 139), - 30: (85, 107, 47), - 31: (255, 140, 0), - 32: (153, 50, 204), - 33: (139, 0, 0), - 34: (233, 150, 122), - 35: (143, 188, 143), - 36: (72, 61, 139), - 37: (47, 79, 79), - 38: (47, 79, 79), - 39: (0, 206, 209), - 40: (148, 0, 211), - 41: (255, 20, 147), - 42: (0, 191, 255), - 43: (105, 105, 105), - 44: (105, 105, 105), - 45: (30, 144, 255), - 46: (178, 34, 34), - 47: (255, 250, 240), - 48: (34, 139, 34), - 49: (255, 0, 255), - 50: (220, 220, 220), - 51: (248, 248, 255), - 52: (255, 215, 0), - 53: (218, 165, 32), - 54: (128, 128, 128), - 55: (0, 128, 0), - 56: (173, 255, 47), - 57: (128, 128, 128), - 58: (240, 255, 240), - 59: (255, 105, 180), - 60: (205, 92, 92), - 61: (75, 0, 130), - 62: (255, 0, 122), - 63: (240, 230, 140), - 64: (230, 230, 250), - 65: (255, 240, 245), - 66: (124, 252, 0), - 67: (255, 250, 205), - 68: (173, 216, 230), - 69: (240, 128, 128), - 70: (224, 255, 255), - 71: (250, 250, 210), - 72: (211, 211, 211), - 73: (144, 238, 144), - 74: (211, 211, 211), - 75: (255, 182, 193), - 76: (255, 160, 122), - 77: (32, 178, 170), - 78: (135, 206, 250), - 79: (119, 136, 153), - 80: (119, 136, 153), - 81: (176, 196, 222), - 82: (255, 255, 224), - 83: (0, 255, 0), - 84: (50, 205, 50), - 85: (250, 240, 230), - 86: (255, 0, 255), - 87: (128, 0, 0), - 88: (102, 205, 170), - 89: (0, 0, 205), - 90: (186, 85, 211), -} - -color_pose = { # BGR - "purple": (255, 0, 100), - "light_pink": (80, 0, 255), - "dark_pink": (220, 0, 255), - "light_orange": (255, 80, 0), - "dark_orange": (255, 220, 0.), - "yellow": (0, 220, 255), - "blue": (255, 0, 0), - "green": (0,255,0), -} - -color_pose_rgb= { # RGB - "purple": (100, 0, 255), - "light_pink": (255, 0, 80), - "dark_pink": (255, 0, 220), - "light_orange": (0, 80, 255), - "dark_orange": (0, 220, 255.), - "yellow": (255, 220, 0), - "blue": (0, 0, 255), - "green": (0,255,0), -} - - -color_pose_normalized = { - "purple": (100/255., 0/255., 255/255.), - "light_pink": (255/255., 0/255., 80/255.), - "dark_pink": (255/255., 0/255., 220/255.), - "light_orange": (255/255., 80/255., 0/255.), - "dark_orange": (255/255., 220/255., 0/255.), - "blue": (0/255., 0/255., 255/255.) 
-} - -pose_id_part = { - 0: "Nose",# purple - 1: "LEye",#light_pink - 2: "REye",#dark_pink - 3: "LEar",#light_orange - 4: "REar",#yellow - 5: "LShoulder", - 6: "RShoulder", - 7: "LElbow", - 8: "RElbow", - 9: "LWrist", - 10: "RWrist", - 11: "LHip", - 12: "RHip", - 13: "LKnee", - 14: "RKnee", - 15: "LAnkle", - 16: "RAnkle" -} - -rev_pose_id_part = {value: key for key, value in pose_id_part.items()} - -pose_id_part_openpose = { - 0: "Nose", - 1: "Neck", - 2: "RShoulder", - 3: "RElbow", - 4: "RWrist", - 5: "LShoulder", - 6: "LElbow", - 7: "LWrist", - 8: "MidHip", - 9: "RHip", - 10: "RKnee", - 11: "RAnkle", - 12: "LHip", - 13: "LKnee", - 14: "LAnkle", - 15: "REye", - 16: "LEye", - 17: "REar", - 18: "LEar", - 19: "LBigToe", - 20: "LSmallToe", - 21: "LHeel", - 22: "RBigToe", - 23: "RSmallToe", - 24: "RHeel", - 25: "Background" -} - -pose_id_part_zedcam = { - 0: "Nose", - 1: "Neck", - 2: "RShoulder", - 3: "RElbow", - 4: "RWrist", - 5: "LShoulder", - 6: "LElbow", - 7: "LWrist", - 8: "RHip", - 9: "RKnee", - 10: "RAnkle", - 11: "LHip", - 12: "LKnee", - 13: "LAnkle", - 14: "REye", - 15: "LEye", - 16: "REar", - 17: "LEar", -} -pose_id_part_centernet = { - 0: "Nose", - 1: "Neck", - 2: "RShoulder", - 3: "RElbow", - 4: "RWrist", - 5: "LShoulder", - 6: "LElbow", - 7: "LWrist", - 8: "MidHip", - 9: "RHip", - 10: "RKnee", - 11: "RAnkle", - 12: "LHip", - 13: "LKnee", - 14: "LAnkle", - 15: "REye", - 16: "LEye", - 17: "REar", - 18: "LEar", - 19: "LBigToe", - 20: "LSmallToe", - 21: "LHeel", - 22: "RBigToe", - 23: "RSmallToe", - 24: "RHeel", - 25: "Background" -} - -rev_pose_id_part_openpose = {value: key for key, value in pose_id_part_openpose.items()} - -face_category_index = { - 1: {'id': 1, 'name': 'Face'}, -} - -tracking_colors = { - 0: (255, 0, 0), - 1: (0, 255, 0), - 2: (0, 0, 255), - 3: (255, 0, 255), - 4: (255, 255, 0), - 5: (0, 255, 255), - 6: (255, 255, 255), - 7: (0, 0, 0), - 8: (128, 128, 128), - 9: (128, 0, 0), - 10: (0, 128, 0), - 11: (0, 0, 128), - 12: (128, 128, 0), - 13: (128, 0, 128), - 14: (0, 128, 128), -} - -body_parts = [(5, 6), (5, 7), (6, 8), (7, 9), (8, 10), (11, 12), (5, 11), (6, 12), (11, 13), (12, 14), (13, 15), (14, 16)] - -body_parts_openpose = [(5, 2), (5, 6), (2, 3), (6, 7), (3, 4), (12, 9), (5, 12), (2, 9), (12, 13), (9, 10), (13, 14), - (10, 11)] - -body_parts_zedcam = [(5, 2), (5, 6), (2, 3), (6, 7), (3, 4), (11, 8), (5, 11), (2, 8), (11, 12), (8, 9), (12, 13), - (9, 10)] - -face_points = [0, 1, 2, 3, 4] - -face_points_openpose = [0, 16, 15, 18, 17] - -face_points_zedcam = [0, 14, 15, 16, 17] \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/train/extract/extract_f0_print.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/train/extract/extract_f0_print.py deleted file mode 100644 index 14ef598d73b807974204664f100c828918199816..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/train/extract/extract_f0_print.py +++ /dev/null @@ -1,298 +0,0 @@ -import os -import sys -import traceback - -import parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging -from LazyImport import lazyload - -import numpy as np -import pyworld -torchcrepe = lazyload("torchcrepe") # Fork Feature. Crepe algo for training and preprocess -torch = lazyload("torch") -#from torch import Tensor # Fork Feature. Used for pitch prediction for torch crepe. 
-tqdm = lazyload("tqdm") -from infer.lib.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) -from multiprocessing import Process - -exp_dir = sys.argv[1] -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - -DoFormant = False -Quefrency = 1.0 -Timbre = 1.0 - -def printt(strr): - print(strr) - f.write(f"{strr}\n") - f.flush() - - -n_p = int(sys.argv[2]) -f0method = sys.argv[3] -extraction_crepe_hop_length = 0 -try: - extraction_crepe_hop_length = int(sys.argv[4]) -except: - print("Temp Issue. echl is not being passed with argument!") - extraction_crepe_hop_length = 128 - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def mncrepe(self, method, x, p_len, crepe_hop_length): - f0 = None - torch_device_index = 0 - torch_device = torch.device( - f"cuda:{torch_device_index % torch.cuda.device_count()}" - ) if torch.cuda.is_available() \ - else torch.device("mps") if torch.backends.mps.is_available() \ - else torch.device("cpu") - - audio = torch.from_numpy(x.astype(np.float32)).to(torch_device, copy=True) - audio /= torch.quantile(torch.abs(audio), 0.999) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - - if method == 'mangio-crepe': - pitch: torch.Tensor = torchcrepe.predict( - audio, - self.fs, - crepe_hop_length, - self.f0_min, - self.f0_max, - "full", - batch_size=crepe_hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // crepe_hop_length - # Resize the pitch - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - - elif method == 'crepe': - batch_size = 512 - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.fs, - 160, - self.f0_min, - self.f0_max, - "full", - batch_size=batch_size, - device=torch_device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 = f0[1:] # Get rid of extra first frame - - return f0 - - def get_pm(self, x, p_len): - f0 = parselmouth.Sound(x, self.fs).to_pitch_ac( - time_step=160 / 16000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ).selected_array["frequency"] - - return np.pad( - f0, - [[max(0, (p_len - len(f0) + 1) // 2), max(0, p_len - len(f0) - (p_len - len(f0) + 1) // 2)]], - mode="constant" - ) - - def get_harvest(self, x): - f0_spectral = pyworld.harvest( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs) - - def get_dio(self, x): - f0_spectral = pyworld.dio( - x.astype(np.double), - fs=self.fs, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop / self.fs, - ) - return pyworld.stonemask(x.astype(np.double), *f0_spectral, self.fs) - - def get_rmvpe(self, x): - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - print("Loading rmvpe 
model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=False, device="cpu" - ) - return self.model_rmvpe.infer_from_audio(x, thred=0.03) - - def get_rmvpe_dml(self, x): - ... - - def get_f0_method_dict(self): - return { - "pm": self.get_pm, - "harvest": self.get_harvest, - "dio": self.get_dio, - "rmvpe": self.get_rmvpe - } - - def get_f0_hybrid_computation( - self, - methods_str, - x, - p_len, - crepe_hop_length, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - for method in methods: - if method in self.f0_method_dict: - f0 = self.f0_method_dict[method](x, p_len) if method == 'pm' else self.f0_method_dict[method](x) - f0_computation_stack.append(f0) - elif method == 'crepe' or method == 'mangio-crepe': - self.the_other_complex_function(x, method, crepe_hop_length) - - if len(f0_computation_stack) != 0: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) if len(f0_computation_stack)>1 else f0_computation_stack[0] - return f0_median_hybrid - else: - raise ValueError("No valid methods were provided") - - def compute_f0(self, path, f0_method, crepe_hop_length): - x = load_audio(path, self.fs, DoFormant, Quefrency, Timbre) - p_len = x.shape[0] // self.hop - - if f0_method in self.f0_method_dict: - f0 = self.f0_method_dict[f0_method](x, p_len) if f0_method == 'pm' else self.f0_method_dict[f0_method](x) - elif f0_method in ['crepe', 'mangio-crepe']: - f0 = self.mncrepe(f0_method, x, p_len, crepe_hop_length) - elif "hybrid" in f0_method: # EXPERIMENTAL - # Perform hybrid median pitch estimation - f0 = self.get_f0_hybrid_computation( - f0_method, - x, - p_len, - crepe_hop_length, - ) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method, crepe_hop_length, thread_n): - if len(paths) == 0: - printt("no-f0-todo") - return - with tqdm.tqdm(total=len(paths), leave=True, position=thread_n) as pbar: - description = f"thread:{thread_n}, f0ing, Hop-Length:{crepe_hop_length}" - pbar.set_description(description) - - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if ( - os.path.exists(opt_path1 + ".npy") - and os.path.exists(opt_path2 + ".npy") - ): - pbar.update(1) - continue - - featur_pit = self.compute_f0(inp_path, f0_method, crepe_hop_length) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - pbar.update(1) - except Exception as e: - printt(f"f0fail-{idx}-{inp_path}-{traceback.format_exc()}") - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in 
sorted(list(os.listdir(inp_root))):
-        inp_path = "%s/%s" % (inp_root, name)
-        if "spec" in inp_path:
-            continue
-        opt_path1 = "%s/%s" % (opt_root1, name)
-        opt_path2 = "%s/%s" % (opt_root2, name)
-        paths.append([inp_path, opt_path1, opt_path2])
-
-    ps = []
-    print("Using f0 method: " + f0method)
-    for i in range(n_p):
-        p = Process(
-            target=featureInput.go,
-            args=(paths[i::n_p], f0method, extraction_crepe_hop_length, i),
-        )
-        ps.append(p)
-        p.start()
-    for i in range(n_p):
-        ps[i].join()
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/stack_block_pyramid.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/stack_block_pyramid.py
deleted file mode 100644
index d528b32cbac3b789e4cb9a0b099a640cf63811f8..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/stack_block_pyramid.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import pybullet as p
-
-class StackBlockPyramid(Task):
-    """Build a pyramid of colored blocks in a color sequence"""
-
-    def __init__(self):
-        super().__init__()
-        self.max_steps = 12
-        self.lang_template = "make the {row} row with {blocks}"
-        self.task_completed_desc = "done stacking block pyramid."
-        self.additional_reset()
-
-    def reset(self, env):
-        super().reset(env)
-
-        # Add base.
-        base_size = (0.05, 0.15, 0.005)
-        base_urdf = 'stacking/stand.urdf'
-        base_pose = self.get_random_pose(env, base_size)
-        env.add_object(base_urdf, base_pose, category='fixed')
-
-        # Block colors.
-        colors = [
-            utils.COLORS['purple'], utils.COLORS['blue'], utils.COLORS['green'],
-            utils.COLORS['yellow'], utils.COLORS['orange'], utils.COLORS['red']
-        ]
-
-        # Add blocks.
-        block_size = (0.04, 0.04, 0.04)
-        block_urdf = 'stacking/block.urdf'
-
-        objs = []
-        for i in range(6):
-            block_pose = self.get_random_pose(env, block_size)
-            block_id = env.add_object(block_urdf, block_pose, color=colors[i])
-            objs.append(block_id)
-
-        # IMPORTANT Associate placement locations for goals.
-        place_pos = [(0, -0.05, 0.03), (0, 0, 0.03),
-                     (0, 0.05, 0.03), (0, -0.025, 0.08),
-                     (0, 0.025, 0.08), (0, 0, 0.13)]
-        targs = [(utils.apply(base_pose, i), base_pose[1]) for i in place_pos]
-
-        # Goal: blocks are stacked in a pyramid (bottom row: green, blue, purple).
-        language_goal = self.lang_template.format(blocks="the green, blue and purple blocks", row="bottom")
-        self.add_goal(objs=objs[:3], matches=np.ones((3, 3)), targ_poses=targs[:3], replace=False,
-                      rotations=True, metric='pose', params=None, step_max_reward=1 / 2, symmetries=[np.pi/2]*3, language_goal=language_goal)
-
-        # Goal: blocks are stacked in a pyramid (middle row: yellow, orange).
-        language_goal = self.lang_template.format(blocks="the yellow and orange blocks", row="middle")
-        self.add_goal(objs=objs[3:5], matches=np.ones((2, 2)), targ_poses=targs[3:5], replace=False,
-                      rotations=True, metric='pose', params=None, step_max_reward=1 / 3, symmetries=[np.pi/2]*2, language_goal=language_goal)
-
-        # Goal: blocks are stacked in a pyramid (top row: red).
-        language_goal = self.lang_template.format(blocks="the red block", row="top")
-        self.add_goal(objs=objs[5:], matches=np.ones((1, 1)), targ_poses=targs[5:], replace=False,
-                      rotations=True, metric='pose', params=None, step_max_reward=1 / 6, symmetries=[np.pi/2]*1, language_goal=language_goal)
\ No newline at end of file
diff --git a/spaces/Gifted030/movie_reviews_prediction/README.md b/spaces/Gifted030/movie_reviews_prediction/README.md
deleted file mode 100644
index 2e42d0162f5b2d277ec890f26246e56b75e0fd17..0000000000000000000000000000000000000000
--- a/spaces/Gifted030/movie_reviews_prediction/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Movie Reviews Prediction
-emoji: 📉
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h
deleted file mode 100644
index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h
+++ /dev/null
@@ -1,25 +0,0 @@
-#pragma once
-
-#include 
-
-#include "libipc/def.h"
-#include "libipc/prod_cons.h"
-
-#include "libipc/circ/elem_array.h"
-
-namespace ipc {
-namespace policy {
-
-template