diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Asoftech Automation Crack Serial 11 Pros and Cons of the Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Asoftech Automation Crack Serial 11 Pros and Cons of the Software.md deleted file mode 100644 index 96fc2713b3d2d39007b79e1c6dcb293256205872..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Asoftech Automation Crack Serial 11 Pros and Cons of the Software.md +++ /dev/null @@ -1,127 +0,0 @@ - -

Asoftech Automation Crack Serial 11: What You Need to Know

-

If you are looking for a way to automate repetitive and tedious tasks on your computer, you might have heard of Asoftech Automation, a software tool that can help you create and run automation scripts with ease. But what if you don't want to pay for the software's license? You might be tempted to use a crack serial number that can unlock the full features of Asoftech Automation without any cost. However, before you do that, you should know what a crack serial number is, how it works, and what the potential consequences of using it are. In this article, we will explain everything you need to know about Asoftech Automation Crack Serial 11, including how to download, install, and use it.

-

asoftech automation crack serial 11


Download File ››››› https://byltly.com/2uKvJL



-

What is Asoftech Automation?

-

Asoftech Automation is a software tool that allows you to automate any combination of tasks on your computer. You can use it to record mouse movements and clicks, keyboard keystrokes, and other computer activities, and then replay them as many times as you want. You can also edit and customize your automation scripts with variables, loops, conditions, and other commands. With Asoftech Automation, you can save time and effort by automating tasks such as:

- -

Asoftech Automation has many features and benefits that make it a powerful and user-friendly automation tool. Some of them are:

- -

What is a crack serial number?

-

A crack serial number is a code that bypasses the software's registration and activation process. Normally, when you buy a software product, you need to enter a serial number or a license key that verifies your purchase and unlocks the full features of the software. However, some people use illegal methods to generate or obtain fake serial numbers or license keys that can trick the software into thinking that it is registered and activated. These fake codes are called crack serial numbers or cracks.

-

A crack serial number can be obtained from various sources on the internet, such as websites, forums, torrents, or peer-to-peer networks. However, using a crack serial number has many risks and disadvantages that outweigh any perceived benefits. Some of them are:

- -

How to download and install Asoftech Automation Crack Serial 11

-

If you still want to download and install Asoftech Automation Crack Serial 11 despite knowing its risks and disadvantages, here are the sources and steps for doing so:

- - - - - - - -
SourceSteps
  1. Go to https://mokuchinyu.tistory.com/34
  2. Click on the "Download" button at the bottom of the page.
  3. Extract the zip file to your desired location.
  4. Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.
  5. Follow the instructions on the screen.
  1. Go to https://selsoft.net/cracked/asoftech-automation-242-/99508.html
  2. Click on one of the "Download Link" buttons at the bottom of the page.
  3. Select one of the available servers to download from.
  4. Extract the zip file to your desired location.
  5. Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.
  6. Follow the instructions on the screen.
  1. Go to https://new.c.mi.com/my/post/470635/Asoftech_Automation_Crack_Serial_11_CRACKED
  2. Click on one of the "Download" buttons at the bottom of the page.
  3. Select one of the available servers to download from.
  4. Extract the zip file to your desired location.
  5. Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.
  6. Follow the instructions on the screen.
  1. Go to https://dreamlandit.com/wp-content/uploads/2022/10/immgard.pdf
  2. Click on one of the "Download" buttons at the bottom of the page.
  3. Select one of the available servers to download from.
  4. Extract the zip file to your desired location.
  5. Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.
  6. Follow the instructions on the screen.
  1. Go to https://sway.office.com/NZfBouy5VcpopbaF
  2. Click on one of the "Download" buttons at the bottom of the page.
  3. Select one of the available servers to download from.
  4. Extract the zip file to your desired location.
  5. Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.
  6. Follow the instructions on the screen.
-

How to use Asoftech Automation Crack Serial 11

-

After you have downloaded and installed Asoftech Automation Crack Serial 11, you can start using it to automate tasks on your computer. Here are the basic functions and operations of Asoftech Automation:

- -

Here are some tips and tricks for creating and running automation scripts with Asoftech Automation:

- -

Conclusion

-

Asoftech Automation Crack Serial 11 is a software tool that can help you automate tasks on your computer without paying for its license. However, using a crack serial number is illegal, unsafe, and unfair. You could face legal actions or penalties from the software developer or owner, expose your computer to viruses or malware, and deprive the software developer or owner of their income and recognition. Therefore, we do not recommend using Asoftech Automation Crack Serial 11. Instead, we suggest you buy a legitimate license for Asoftech Automation from its official website: https://www.asoftech.com/auto-clicker/

-


-

FAQs

-

Q1: Is Asoftech Automation safe to use?

-

A1: Asoftech Automation is safe to use if you buy a legitimate license from its official website. However, if you use a crack serial number to activate Asoftech Automation, you could expose your computer to viruses or malware that could harm your system or steal your personal information.

-

Q2: Is Asoftech Automation legal to use?

-

A2: Asoftech Automation is legal to use if you buy a legitimate license from its official website. However, if you use a crack serial number to activate Asoftech Automation, you are violating the software's terms of service and infringing its intellectual property rights. You could face legal actions or penalties from the software developer or owner.

-

Q3: How can I get a legitimate license for Asoftech Automation?

-

A3: You can get a legitimate license for Asoftech Automation by visiting its official website: https://www.asoftech.com/auto-clicker/ and clicking on the "Buy Now" button. You can choose between a single-user license ($39.95) and a multi-user license ($99.95). You can pay with PayPal or credit card. After you complete your payment, you will receive an email with your license key and download link.

-

Q4: What are the alternatives to Asoftech Automation?

-

A4: There are many other software tools that can help you automate tasks on your computer. Some of them are:

- -

Q5: How can I contact Asoftech for support?

-

A5: You can contact Asoftech for support by visiting their website: https://www.asoftech.com/support.html and filling out their online form. You can also email them at support@asoftech.com or call them at +1-800-928-0387.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut the cable and stream live TV with these awesome apps.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut the cable and stream live TV with these awesome apps.md deleted file mode 100644 index aadff7aaa42e7809a1358dd7431ac52b1601c193..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut the cable and stream live TV with these awesome apps.md +++ /dev/null @@ -1,166 +0,0 @@ - -

How to Watch Live TV on Your Smartphone with These Apps

-

Do you love watching live TV but hate paying for cable or satellite? Do you want to watch your favorite shows, sports, news, and movies anytime, anywhere? If you answered yes to these questions, then you might want to try out some of these live TV apps that let you stream live TV channels on your smartphone.

-

Live TV apps are apps that allow you to watch live TV over the internet without a cable or satellite subscription. They offer a variety of channels from different genres and categories, such as entertainment, lifestyle, sports, news, kids, etc. Some of them also offer on-demand content, cloud DVR, multiple accounts, and other features that enhance your viewing experience.

-

live tv app download


DOWNLOAD » https://urlin.us/2uSYyv



-

Some of the benefits of watching live TV on your smartphone are:

- -

In this article, we will review four of the best live TV apps that you can download on your smartphone. We will compare their features, benefits, pricing, and availability. We will also provide a table that shows a side-by-side comparison of the four live TV apps based on key criteria such as number of channels, DVR storage, simultaneous streams, etc. We will also give a recommendation based on our personal preference or experience with any of these apps.

-

YouTube TV: The Best Overall Live TV App

-

YouTube TV is one of the most popular and well-rounded live TV apps that you can download on your smartphone. It offers cable-free live TV from over 85 networks, including ABC, CBS, FOX, NBC, ESPN, CNN, HGTV, Disney Channel, and more. You can also access YouTube Originals and YouTube videos with your subscription.

-

Some of the features and benefits of YouTube TV are:

- -

Here is a screenshot of the YouTube TV app interface:

-YouTube TV app interface -

The pricing and availability of YouTube TV are:

- -

FuboTV: The Best Live TV App for Sports and Spanish-Language Channels

-

FuboTV is another great live TV app that you can download on your smartphone. It offers over 100 networks, including 40+ sports channels like NFL Network, NBA TV, MLB Network, beIN Sports, and more. It also has a large selection of Spanish-language channels like Univision, Telemundo, Galavision, and more.

-

Some of the features and benefits of FuboTV are:

-


- -

Here is a screenshot of the FuboTV app interface:

-FuboTV app interface -

The pricing and availability of FuboTV are:

- -

Sling TV: The Most Affordable Live TV App with a Good Lineup

Sling TV is another live TV app that you can download on your smartphone. It offers customizable packages that let you choose the channels you want to watch. It has two base plans: Sling Orange and Sling Blue, each with a different channel lineup. You can also combine both plans or add extra channels with various add-ons.

-

Some of the features and benefits of Sling TV are:

- -

Here is a screenshot of the Sling TV app interface:

-Sling TV app interface -

The pricing and availability of Sling TV are:

- -

Philo TV: The Cheapest Live TV App for Entertainment and Lifestyle Channels

-

Philo TV is the cheapest live TV app that you can download on your smartphone. It offers 61 channels from various genres such as entertainment, lifestyle, comedy, reality, news, and more. Some of the channels include A&E, AMC, BET, Comedy Central, Discovery, Food Network, Hallmark Channel, MTV, Nickelodeon, TLC, and more.

-

Some of the features and benefits of Philo TV are:

- -

Here is a screenshot of the Philo TV app interface:

-Philo TV app interface -

The pricing and availability of Philo TV are:

- -

Conclusion

-

In this article, we have reviewed four of the best live TV apps that you can download on your smartphone. We have compared their features, benefits, pricing, and availability. We have also provided a table that shows a side-by-side comparison of the four live TV apps based on key criteria such as number of channels, DVR storage, simultaneous streams, etc.

| Live TV App | Number of Channels | DVR Storage | Simultaneous Streams | Monthly Cost |
| --- | --- | --- | --- | --- |
| YouTube TV | 85+ | Unlimited (9 months) | 3 | $64.99 |
| FuboTV | 100+ | 250 hours | 3 | $64.99 |
| Sling TV | 50+ | 50 hours (200 hours with add-on) | 1 (Sling Orange) or 3 (Sling Blue) | $35 (Sling Orange or Sling Blue) or $50 (both) |
| Philo TV | 61 | Unlimited (30 days) | 3 | $25 |
-

Based on our comparison, we can say that each live TV app has its own strengths and weaknesses. There is no one-size-fits-all solution for everyone. The best live TV app for you depends on your preferences, budget, and viewing habits.

-

However, if we had to give a recommendation, we would say that YouTube TV is the best overall live TV app for most people. It has a good balance of features, benefits, pricing, and availability. It offers a wide range of channels from different genres and categories, including local and national networks. It also has a generous cloud DVR, multiple accounts, and no contracts. It is available nationwide in the US and supports most devices. It also has a free trial option that lets you try it out before you commit.

-

Of course, you can also try out the other live TV apps and see which one suits you better. You can take advantage of their free trial options and compare them yourself. You might find that one of them meets your needs better than YouTube TV.

-

The bottom line is that watching live TV on your smartphone is possible and convenient with these live TV apps. You can enjoy your favorite shows, sports, news, and movies anytime, anywhere without paying for cable or satellite. You can also save money by paying only for what you want to watch and canceling anytime without any fees or penalties.

-

We hope that this article has helped you learn more about the best live TV apps that you can download on your smartphone. We also hope that you have found the best live TV app for you or at least have a better idea of what to look for. Happy streaming!

-

Frequently Asked Questions

-

Here are some of the frequently asked questions about live TV apps:

-

What is the difference between live TV apps and streaming services?

-

Live TV apps are apps that allow you to watch live TV over the internet without a cable or satellite subscription. They offer a variety of channels from different genres and categories, such as entertainment, lifestyle, sports, news, kids, etc. Some of them also offer on-demand content, cloud DVR, multiple accounts, and other features that enhance your viewing experience.

-

Streaming services are apps that allow you to watch on-demand content over the internet without a cable or satellite subscription. They offer a library of movies, shows, documentaries, originals, and more that you can watch at your own pace. Some of them also offer live TV channels as an add-on option.

-

How much internet speed do I need to watch live TV on my smartphone?

-

The internet speed that you need to watch live TV on your smartphone depends on the quality of the video that you want to watch. Generally speaking, the higher the quality, the more bandwidth you need. Here are some of the recommended internet speeds for different video qualities:

- -

You can check your internet speed using online tools like Speedtest or Fast.com. You can also contact your internet service provider (ISP) if you have any issues with your internet speed or connection.
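If you want a rough sense of how those speeds translate into mobile data, you can estimate usage per hour from the stream's bitrate. The short Python sketch below is only an illustration: the bitrates are assumed typical values, not figures published by any of these apps.

```python
# Rough data-per-hour estimate from an assumed streaming bitrate (illustrative only).
assumed_bitrates_mbps = {"SD (480p)": 3, "HD (720p/1080p)": 5, "4K (2160p)": 25}

for quality, mbps in assumed_bitrates_mbps.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # megabits per second -> gigabytes per hour
    print(f"{quality}: roughly {gb_per_hour:.1f} GB per hour")
```

By that estimate, an hour of HD streaming uses a little over 2 GB, which is why lowering the video quality is the quickest way to cut data use on a mobile plan.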

-

Can I watch live TV on my smartphone when I travel?

-

The answer to this question depends on the live TV app that you use and the location that you travel to. Some live TV apps are available only in certain countries or regions and may not work when you travel outside of those areas. Some live TV apps may also have geo-restrictions or blackouts on some channels or content depending on your location.

-

To avoid any issues when you travel, you should check the availability and terms of service of the live TV app that you use before you travel. You should also check the local laws and regulations regarding streaming content over the internet in the country or region that you travel to.

-

How can I watch live TV on my smartphone without using too much data?

Watching live TV on your smartphone can use a lot of data, especially if you watch in high quality or for a long time. To avoid using too much data, you can do the following:

- -

What are some of the drawbacks or challenges of watching live TV on my smartphone?

-

Watching live TV on your smartphone can be a convenient and enjoyable way to watch your favorite shows, sports, news, and movies. However, it can also have some drawbacks or challenges, such as:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Among Us Imposter Hack APK A Free and Easy Way to Be the Imposter in Every Game.md b/spaces/1phancelerku/anime-remove-background/Among Us Imposter Hack APK A Free and Easy Way to Be the Imposter in Every Game.md deleted file mode 100644 index 930b116ef32227d6a1b986af804fa7eeb7bdbdfa..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Among Us Imposter Hack APK A Free and Easy Way to Be the Imposter in Every Game.md +++ /dev/null @@ -1,130 +0,0 @@ -
-

Among Us Imposter Hack Apk: Is It Worth It?

-

Among Us is a popular online multiplayer game that has taken the internet by storm. In this game, you can either be a crewmate or an impostor on a spaceship or a base. As a crewmate, your goal is to complete tasks and find the impostor. As an impostor, your goal is to kill the crewmates and sabotage their mission.

-

But what if you want to always be the impostor and have more fun in the game? That's where the imposter hack apk comes in. This is a modified version of the game that allows you to always be the impostor, as well as use various cheats and mods to make the game easier or more interesting. But is it worth using this hack? What are the pros and cons of it? And are there any alternatives to it? In this article, we will answer these questions and more.

-

among us imposter hack apk


DOWNLOADhttps://jinyurl.com/2uNQ8Q



-

Pros and Cons of Using Imposter Hack Apk

-

The imposter hack apk can be tempting for many players who want to experience the thrill of being the impostor every time they play. However, before you download and install this hack, you should be aware of the advantages and disadvantages of using it.

-

Pros

- -

Cons

- -

Alternatives to Imposter Hack Apk

-

If you are looking for a way to play as impostor without using the hack apk, there are some alternatives that you can try. Here are some of them:

-

Use Legit Mods from GitHub or Other Sources

-

There are some legit mods for Among Us that you can download from GitHub or other sources. These mods are created by fans of the game who want to add new features or modes to it. For example, there are mods that add roles like sheriff, doctor, jester, etc. to the game. There are also mods that change the map, graphics, sounds, etc. of the game. These mods are usually safe and compatible with the original game, as long as you follow the instructions on how to install them.

-

Play with Friends Who Agree to Use Mods

-

If you want to use mods with other players, you should make sure that they agree to use them as well. This way, you can avoid getting reported or banned for using mods. You can also have more fun and variety in your games. You can create a private lobby with your friends and use a mod menu to select which mods you want to use. You can also join public lobbies that use mods by looking for codes on Discord or Reddit.

-


Practice as Impostor in Freeplay Mode

-

If you want to improve your skills as impostor without using any hacks or mods, you can practice in the Freeplay mode. This mode allows you to play as impostor on any map with dummy crewmates. You can kill, vent, sabotage, and lie as much as you want without any consequences. You can also customize the game settings to make it easier or harder for yourself. This mode is a great way to learn the map layout, vent locations, task locations, etc. You can also practice your deception and persuasion skills by talking to yourself or recording your gameplay.

-

Conclusion

-

Among Us is a fun and exciting game that can be enjoyed by anyone who likes social deduction and deception games. However, some players may want to always be the impostor and use hacks or mods to achieve that. While this may seem like a good idea at first, it can also have some drawbacks and risks. Therefore, before you decide to use the imposter hack apk, you should weigh its pros and cons and consider the alternatives. You may find that playing as the impostor without hacks or mods can be more rewarding and satisfying in the long run.

-

-

Here are some tips on how to play as impostor without hacks or mods:

- -

FAQs

-

How to Download and Install Imposter Hack Apk?

-

If you still want to try the imposter hack apk, here are the steps on how to download and install it:

-
    -
1. Find a reliable source that offers the imposter hack apk file. You can search on Google or YouTube for reviews or recommendations.
2. Download the apk file to your device. Make sure you have enough storage space and a good internet connection.
3. Enable the installation of unknown sources on your device. Go to Settings > Security > Unknown Sources and toggle it on.
4. Locate the apk file on your device and tap on it to install it.
5. Wait for the installation to finish and launch the game.
-

How to Use Imposter Hack Apk in Among Us?

-

Once you have installed the imposter hack apk, you can use it in Among Us by following these steps:

-
    -
1. Open the game and tap on the mod menu icon on the top left corner of the screen.
2. Select the cheats and mods that you want to use from the list. You can toggle them on or off as you wish.
3. Join or create a lobby and start the game. You will always be the impostor and have access to the cheats and mods that you selected.
4. Enjoy the game and try not to get caught or banned.
-

How to Avoid Getting Banned for Using Imposter Hack Apk?

-

If you use the imposter hack apk, you run the risk of getting banned from the game by the developers or other players. To avoid this, you should follow these tips:

- -

How to Remove Imposter Hack Apk from Your Device?

-

If you want to uninstall the imposter hack apk from your device, you can do so by following these steps:

-
    -
1. Go to Settings > Apps > Among Us and tap on Uninstall.
2. Confirm your action and wait for the uninstallation to finish.
3. Delete any leftover files or folders related to the hack apk from your device storage.
4. Restart your device and check if the game is completely removed.
-

How to Report Someone Who is Using Imposter Hack Apk?

-

If you encounter someone who is using the imposter hack apk in Among Us, you can report them by following these steps:

-
    -
1. Gather evidence of their hacking or modding, such as screenshots, videos, or chat logs that show their cheating behavior.
2. Go to the game settings and tap on the report button next to their name.
3. Select the reason for your report and attach your evidence if possible.
4. Submit your report and wait for the developers to review it and take action.
-

I hope this article has helped you understand more about the imposter hack apk and how to use it or avoid it in Among Us. If you have any questions or feedback, please leave a comment below. Thank you for reading and have a great day!

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download and Stream Brazil Zonal LP 2019 ft. Tawinji - The Best Afrobeat Song of 2022.md b/spaces/1phancelerku/anime-remove-background/Download and Stream Brazil Zonal LP 2019 ft. Tawinji - The Best Afrobeat Song of 2022.md deleted file mode 100644 index 13bfcfe5257c1d9d1ebf0ebd74f899a95d138d3c..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download and Stream Brazil Zonal LP 2019 ft. Tawinji - The Best Afrobeat Song of 2022.md +++ /dev/null @@ -1,108 +0,0 @@ - -

Download Brazil Zonal LP 2019 MP3: A Guide for Music Lovers

-

If you are a fan of Afrobeat music, you might have heard of Brazil Zonal LP 2019, a popular album by NBM, a group of Nigerian artists. This album features 31 minutes of energetic and catchy songs that showcase the diversity and richness of African culture. In this article, we will show you how to download Brazil Zonal LP 2019 MP3 for free and legally, and how to enjoy it to the fullest.

-

What is Brazil Zonal LP 2019?

-

Brazil Zonal LP 2019 is an album by NBM, which stands for Neo Black Movement of Africa. NBM is a group of Nigerian musicians who are also members of a social movement that promotes African unity, solidarity, and liberation. The group was founded in 1977 at the University of Benin in Nigeria, and has since grown into a global network of chapters and zones.

-

download brazil zonal lp 2019 mp3


Download Filehttps://jinyurl.com/2uNNLS



-

The album was released in 2020, and consists of one track that lasts for 31 minutes. The track is a compilation of various songs that were performed by NBM members at their Brazil Zone Jollification event in 2019. The songs are in different languages, such as English, Yoruba, Igbo, Hausa, and Portuguese, and feature elements of Afrobeat, reggae, highlife, juju, and funk. The songs are upbeat, lively, and inspiring, and convey messages of freedom, justice, brotherhood, and love.

-

Why You Should Download Brazil Zonal LP 2019 MP3?

-

There are many reasons why you should download Brazil Zonal LP 2019 MP3. Here are some of them:

- -


How to Download Brazil Zonal LP 2019 MP3?

-

There are several ways to download Brazil Zonal LP 2019 MP3 for free and legally. Here are some of the most common and reliable methods:

-

Download from YouTube

-

One of the easiest ways to download the album is from YouTube, where you can find the official video of the album uploaded by NBM Brazil Zone. To download the album from YouTube, you need to use a YouTube to MP3 converter, which is a tool that can convert any YouTube video into an MP3 file. There are many YouTube to MP3 converters available online, such as YTMP3, 4K Video Downloader, and Online Video Converter. Here are the steps to download the album from YouTube using YTMP3:

-
    -
1. Go to the YouTube video of the album and copy its URL.
2. Go to YTMP3 website and paste the URL in the input box.
3. Select MP3 as the output format and click Convert.
4. Wait for the conversion to finish and click Download.
5. Save the MP3 file to your device and enjoy.
-

Download from SoundCloud

-

Another way to download the album is from SoundCloud, where you can find the official track of the album uploaded by NBM Brazil Zone. To download the album from SoundCloud, you need to use a SoundCloud to MP3 converter, which is a tool that can convert any SoundCloud track into an MP3 file. There are many SoundCloud to MP3 converters available online, such as SCDL, SoundCloud Downloader, and KlickAud. Here are the steps to download the album from SoundCloud using SCDL:

-
    -
1. Go to the SoundCloud track of the album and copy its URL.
2. Go to SCDL website and paste the URL in the input box.
3. Click Download and wait for the process to complete.
4. Save the MP3 file to your device and enjoy.
-

Download from Other Websites

-

A third way to download the album is from other websites that offer free music downloads. These websites usually have a large collection of songs and albums that you can download without any registration or payment. However, you need to be careful when using these websites, as some of them might contain viruses, malware, or illegal content. Some of the websites that you can try are Mp3Juices, Free Music Archive, and Jamendo. Here are the steps to download the album from Mp3Juices:

-
    -
1. Go to Mp3Juices website and type Brazil Zonal LP 2019 in the search box.
2. Select the result that matches the album and click Download.
3. Choose a server and wait for the download to start.
4. Save the MP3 file to your device and enjoy.
-

How to Enjoy Brazil Zonal LP 2019 MP3?

-

Now that you have downloaded Brazil Zonal LP 2019 MP3, you might wonder how to enjoy it to the fullest. Here are some tips on how to listen to and appreciate the album:

-


Use a Good MP3 Player

-

To play the album with high quality and features, you need a good MP3 player that can support different formats, bitrates, and playlists. You can use either a dedicated MP3 player device or an app on your smartphone or tablet. Some of the best MP3 players that you can use are VLC Media Player, Winamp, and Poweramp. These players have advanced settings that allow you to adjust the volume, equalizer, bass, treble, and other aspects of the sound. They also have features that let you create playlists, shuffle songs, repeat songs, and more.

-

Use a Good Headphone or Speaker

-

To experience the best sound and experience, you need a good headphone or speaker that can deliver clear, balanced, and immersive sound. You can choose either a wired or wireless headphone or speaker, depending on your preference and budget. Some of the best headphones that you can use are Sony WH-1000XM4, Bose QuietComfort 35 II, and Sennheiser HD 650. These headphones have noise-canceling technology that blocks out any external noise and lets you focus on the music. They also have comfortable design and long battery life. Some of the best speakers that you can use are JBL Flip 5, Sonos One, and Bose SoundLink Revolve. These speakers have wireless connectivity, water resistance, and 360-degree sound. They also have compact design and long battery life.

-

Learn More About the Album and the Artists

-

To appreciate the album more, you can learn more about the album and the artists behind it. You can find more information and background about the album and the artists on their official websites, social media pages, and online articles. You can also watch their interviews, documentaries, and live performances on YouTube and other platforms. By learning more about the album and the artists, you can understand their vision, inspiration, and message better. You can also discover more of their songs and albums that you might like.

-

Conclusion

-

Brazil Zonal LP 2019 is a great album for music lovers who enjoy Afrobeat music and African culture. It is an album that showcases the talent, diversity, and spirit of NBM, a group of Nigerian artists and activists. In this article, we have shown you how to download Brazil Zonal LP 2019 MP3 for free and legally, and how to enjoy it to the fullest. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and family who might be interested in this topic. Thank you for reading and happy listening!

-

FAQs

-

Here are some of the frequently asked questions and answers about Brazil Zonal LP 2019 MP3:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/ui/icons.tsx b/spaces/2023Liu2023/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - 
IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/dataset.py b/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from . import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, 
cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/AB-TW/team-ai/agents/tools/smart_domain/db_entity_repository.py b/spaces/AB-TW/team-ai/agents/tools/smart_domain/db_entity_repository.py deleted file mode 100644 index 8edf3e0fe0a5e9518daa5838ee5258319c313ba2..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/agents/tools/smart_domain/db_entity_repository.py +++ /dev/null @@ -1,101 +0,0 @@ -from langchain.prompts import PromptTemplate -from agents.tools.smart_domain.common import getPrefix -from langchain.chains import LLMChain -from langchain.agents import tool -from models import llm - -db_entity_tech_stack = """Java17、reactor、lombok、Junit5、reactor test、Mockito、 Spring Data Reactive Couchbase、Couchbase""" - -db_entity_architecture = """ -* DbEntity: This component is use to define data structure that save to DB. ----eaxmple code: - @Document - public class FeatureDb {{ - @Version - private long version; - - @Id - @GeneratedValue(strategy = GenerationStrategy.UNIQUE) - private String id; - - private String featureKey; - - private Feature.FeatureDescription description; - }} ----end of eaxmple code -* Repository: This component is use to define the interface to access DB. 
- ---eaxmple code: - public interface FeatureDbRepository extends ReactiveCrudRepository {{ - Mono findByFeatureKey(String featureKey); - }} - ---end of eaxmple code -""" - -db_entity_test_strategy = """For the DbEntity And Repository, we can write component test to test the actual implementation of database operations, test class should extends RepositoryTestBase to use Testcontainers ability. ----eaxmple code: - class FeatureDbRepositoryTest extends RepositoryTestBase {{ - @Autowired - FeatureDbRepository repository; - - @BeforeEach - void setUp() {{ - repository.deleteAll().block(); - }} - - @AfterEach - void tearDown() {{ - repository.deleteAll().block(); - }} - - @Test - void should_save_Feature_success() {{ - var featureKey = "featureKey1"; - repository.save(FeatureTestUtil.createFeatureDb(featureKey)) - .as(StepVerifier::create) - .expectNextCount(1) - .verifyComplete(); - }} - - @Test - void should_add_same_featureKey_fail() {{ - var featureKey = "featureKey1"; - repository.save(FeatureTestUtil.createFeatureDb(featureKey)).block(); - - repository.save(FeatureTestUtil.createFeatureDb(featureKey)) - .as(StepVerifier::create) - .expectError() - .verify(); - }} - }} ----end of eaxmple code -""" - -db_entity_task = """Your task is to generate the DbEntity and Repository tests and product code.""" - -DB_ENTITY = getPrefix(db_entity_task, db_entity_tech_stack, db_entity_architecture, db_entity_test_strategy) + """ - -Use the following format: -request: the request that you need to fulfill - -Entity: -``` -the Entity code that you write to fulfill the request, follow TechStack and Architecture -``` - -Test: -``` -the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy -``` - -request: {input}""" - -DB_ENTITY_PROMPT = PromptTemplate(input_variables=["input"], template=DB_ENTITY,) - -db_entity_Repository_chain = LLMChain(llm = llm(temperature=0.1), prompt=DB_ENTITY_PROMPT) - - -@tool("Generate DBEntity and Repository Code", return_direct=True) -def dbEntityRepositoryCodeGenerator(input: str) -> str: - '''useful for when you need to generate DBEntity and Repository code''' - response = db_entity_Repository_chain.run(input) - return response \ No newline at end of file diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123812KB .py deleted file mode 100644 index 9835dc0f0dd66a7ef3517101180ec2c54eb6011d..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123812KB .py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - 
) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/model_param_init.py b/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == 
pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/diffusion_schedule.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/diffusion_schedule.py deleted file mode 100644 index 74ca6e3f2e7c4ff904d96dade315b0b46856778d..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/modules/diffusion_schedule.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Functions for Noise Schedule, defines diffusion process, reverse process and data processor. -""" - -from collections import namedtuple -import random -import typing as tp -import julius -import torch - -TrainingItem = namedtuple("TrainingItem", "noisy noise step") - - -def betas_from_alpha_bar(alpha_bar): - alphas = torch.cat([torch.Tensor([alpha_bar[0]]), alpha_bar[1:]/alpha_bar[:-1]]) - return 1 - alphas - - -class SampleProcessor(torch.nn.Module): - def project_sample(self, x: torch.Tensor): - """Project the original sample to the 'space' where the diffusion will happen.""" - return x - - def return_sample(self, z: torch.Tensor): - """Project back from diffusion space to the actual sample space.""" - return z - - -class MultiBandProcessor(SampleProcessor): - """ - MultiBand sample processor. The input audio is splitted across - frequency bands evenly distributed in mel-scale. - - Each band will be rescaled to match the power distribution - of Gaussian noise in that band, using online metrics - computed on the first few samples. - - Args: - n_bands (int): Number of mel-bands to split the signal over. - sample_rate (int): Sample rate of the audio. - num_samples (int): Number of samples to use to fit the rescaling - for each band. The processor won't be stable - until it has seen that many samples. - power_std (float or list/tensor): The rescaling factor computed to match the - power of Gaussian noise in each band is taken to - that power, i.e. `1.` means full correction of the energy - in each band, and values less than `1` means only partial - correction. Can be used to balance the relative importance - of low vs. high freq in typical audio signals. 
- """ - def __init__(self, n_bands: int = 8, sample_rate: float = 24_000, - num_samples: int = 10_000, power_std: tp.Union[float, tp.List[float], torch.Tensor] = 1.): - super().__init__() - self.n_bands = n_bands - self.split_bands = julius.SplitBands(sample_rate, n_bands=n_bands) - self.num_samples = num_samples - self.power_std = power_std - if isinstance(power_std, list): - assert len(power_std) == n_bands - power_std = torch.tensor(power_std) - self.register_buffer('counts', torch.zeros(1)) - self.register_buffer('sum_x', torch.zeros(n_bands)) - self.register_buffer('sum_x2', torch.zeros(n_bands)) - self.register_buffer('sum_target_x2', torch.zeros(n_bands)) - self.counts: torch.Tensor - self.sum_x: torch.Tensor - self.sum_x2: torch.Tensor - self.sum_target_x2: torch.Tensor - - @property - def mean(self): - mean = self.sum_x / self.counts - return mean - - @property - def std(self): - std = (self.sum_x2 / self.counts - self.mean**2).clamp(min=0).sqrt() - return std - - @property - def target_std(self): - target_std = self.sum_target_x2 / self.counts - return target_std - - def project_sample(self, x: torch.Tensor): - assert x.dim() == 3 - bands = self.split_bands(x) - if self.counts.item() < self.num_samples: - ref_bands = self.split_bands(torch.randn_like(x)) - self.counts += len(x) - self.sum_x += bands.mean(dim=(2, 3)).sum(dim=1) - self.sum_x2 += bands.pow(2).mean(dim=(2, 3)).sum(dim=1) - self.sum_target_x2 += ref_bands.pow(2).mean(dim=(2, 3)).sum(dim=1) - rescale = (self.target_std / self.std.clamp(min=1e-12)) ** self.power_std # same output size - bands = (bands - self.mean.view(-1, 1, 1, 1)) * rescale.view(-1, 1, 1, 1) - return bands.sum(dim=0) - - def return_sample(self, x: torch.Tensor): - assert x.dim() == 3 - bands = self.split_bands(x) - rescale = (self.std / self.target_std) ** self.power_std - bands = bands * rescale.view(-1, 1, 1, 1) + self.mean.view(-1, 1, 1, 1) - return bands.sum(dim=0) - - -class NoiseSchedule: - """Noise schedule for diffusion. - - Args: - beta_t0 (float): Variance of the first diffusion step. - beta_t1 (float): Variance of the last diffusion step. - beta_exp (float): Power schedule exponent - num_steps (int): Number of diffusion step. - variance (str): choice of the sigma value for the denoising eq. 
Choices: "beta" or "beta_tilde" - clip (float): clipping value for the denoising steps - rescale (float): rescaling value to avoid vanishing signals unused by default (i.e 1) - repartition (str): shape of the schedule only power schedule is supported - sample_processor (SampleProcessor): Module that normalize data to match better the gaussian distribution - noise_scale (float): Scaling factor for the noise - """ - def __init__(self, beta_t0: float = 1e-4, beta_t1: float = 0.02, num_steps: int = 1000, variance: str = 'beta', - clip: float = 5., rescale: float = 1., device='cuda', beta_exp: float = 1, - repartition: str = "power", alpha_sigmoid: dict = {}, n_bands: tp.Optional[int] = None, - sample_processor: SampleProcessor = SampleProcessor(), noise_scale: float = 1.0, **kwargs): - - self.beta_t0 = beta_t0 - self.beta_t1 = beta_t1 - self.variance = variance - self.num_steps = num_steps - self.clip = clip - self.sample_processor = sample_processor - self.rescale = rescale - self.n_bands = n_bands - self.noise_scale = noise_scale - assert n_bands is None - if repartition == "power": - self.betas = torch.linspace(beta_t0 ** (1 / beta_exp), beta_t1 ** (1 / beta_exp), num_steps, - device=device, dtype=torch.float) ** beta_exp - else: - raise RuntimeError('Not implemented') - self.rng = random.Random(1234) - - def get_beta(self, step: tp.Union[int, torch.Tensor]): - if self.n_bands is None: - return self.betas[step] - else: - return self.betas[:, step] # [n_bands, len(step)] - - def get_initial_noise(self, x: torch.Tensor): - if self.n_bands is None: - return torch.randn_like(x) - return torch.randn((x.size(0), self.n_bands, x.size(2))) - - def get_alpha_bar(self, step: tp.Optional[tp.Union[int, torch.Tensor]] = None) -> torch.Tensor: - """Return 'alpha_bar', either for a given step, or as a tensor with its value for each step.""" - if step is None: - return (1 - self.betas).cumprod(dim=-1) # works for simgle and multi bands - if type(step) is int: - return (1 - self.betas[:step + 1]).prod() - else: - return (1 - self.betas).cumprod(dim=0)[step].view(-1, 1, 1) - - def get_training_item(self, x: torch.Tensor, tensor_step: bool = False) -> TrainingItem: - """Create a noisy data item for diffusion model training: - - Args: - x (torch.Tensor): clean audio data torch.tensor(bs, 1, T) - tensor_step (bool): If tensor_step = false, only one step t is sample, - the whole batch is diffused to the same step and t is int. - If tensor_step = true, t is a tensor of size (x.size(0),) - every element of the batch is diffused to a independently sampled. - """ - step: tp.Union[int, torch.Tensor] - if tensor_step: - bs = x.size(0) - step = torch.randint(0, self.num_steps, size=(bs,), device=x.device) - else: - step = self.rng.randrange(self.num_steps) - alpha_bar = self.get_alpha_bar(step) # [batch_size, n_bands, 1] - - x = self.sample_processor.project_sample(x) - noise = torch.randn_like(x) - noisy = (alpha_bar.sqrt() / self.rescale) * x + (1 - alpha_bar).sqrt() * noise * self.noise_scale - return TrainingItem(noisy, noise, step) - - def generate(self, model: torch.nn.Module, initial: tp.Optional[torch.Tensor] = None, - condition: tp.Optional[torch.Tensor] = None, return_list: bool = False): - """Full ddpm reverse process. - - Args: - model (nn.Module): Diffusion model. - initial (tensor): Initial Noise. - condition (tensor): Input conditionning Tensor (e.g. encodec compressed representation). - return_list (bool): Whether to return the whole process or only the sampled point. 
- """ - alpha_bar = self.get_alpha_bar(step=self.num_steps - 1) - current = initial - iterates = [initial] - for step in range(self.num_steps)[::-1]: - with torch.no_grad(): - estimate = model(current, step, condition=condition).sample - alpha = 1 - self.betas[step] - previous = (current - (1 - alpha) / (1 - alpha_bar).sqrt() * estimate) / alpha.sqrt() - previous_alpha_bar = self.get_alpha_bar(step=step - 1) - if step == 0: - sigma2 = 0 - elif self.variance == 'beta': - sigma2 = 1 - alpha - elif self.variance == 'beta_tilde': - sigma2 = (1 - previous_alpha_bar) / (1 - alpha_bar) * (1 - alpha) - elif self.variance == 'none': - sigma2 = 0 - else: - raise ValueError(f'Invalid variance type {self.variance}') - - if sigma2 > 0: - previous += sigma2**0.5 * torch.randn_like(previous) * self.noise_scale - if self.clip: - previous = previous.clamp(-self.clip, self.clip) - current = previous - alpha_bar = previous_alpha_bar - if step == 0: - previous *= self.rescale - if return_list: - iterates.append(previous.cpu()) - - if return_list: - return iterates - else: - return self.sample_processor.return_sample(previous) - - def generate_subsampled(self, model: torch.nn.Module, initial: torch.Tensor, step_list: tp.Optional[list] = None, - condition: tp.Optional[torch.Tensor] = None, return_list: bool = False): - """Reverse process that only goes through Markov chain states in step_list.""" - if step_list is None: - step_list = list(range(1000))[::-50] + [0] - alpha_bar = self.get_alpha_bar(step=self.num_steps - 1) - alpha_bars_subsampled = (1 - self.betas).cumprod(dim=0)[list(reversed(step_list))].cpu() - betas_subsampled = betas_from_alpha_bar(alpha_bars_subsampled) - current = initial * self.noise_scale - iterates = [current] - for idx, step in enumerate(step_list[:-1]): - with torch.no_grad(): - estimate = model(current, step, condition=condition).sample * self.noise_scale - alpha = 1 - betas_subsampled[-1 - idx] - previous = (current - (1 - alpha) / (1 - alpha_bar).sqrt() * estimate) / alpha.sqrt() - previous_alpha_bar = self.get_alpha_bar(step_list[idx + 1]) - if step == step_list[-2]: - sigma2 = 0 - previous_alpha_bar = torch.tensor(1.0) - else: - sigma2 = (1 - previous_alpha_bar) / (1 - alpha_bar) * (1 - alpha) - if sigma2 > 0: - previous += sigma2**0.5 * torch.randn_like(previous) * self.noise_scale - if self.clip: - previous = previous.clamp(-self.clip, self.clip) - current = previous - alpha_bar = previous_alpha_bar - if step == 0: - previous *= self.rescale - if return_list: - iterates.append(previous.cpu()) - if return_list: - return iterates - else: - return self.sample_processor.return_sample(previous) diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_smpl.sh b/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_smpl.sh deleted file mode 100644 index 411325b509e891d96b859bf28f7b983005ca360a..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_smpl.sh +++ /dev/null @@ -1,13 +0,0 @@ - -mkdir -p body_models -cd body_models/ - -echo -e "The smpl files will be stored in the 'body_models/smpl/' folder\n" -gdown 1INYlGA76ak_cKGzvpOV2Pe6RkYTlXTW2 -rm -rf smpl - -unzip smpl.zip -echo -e "Cleaning\n" -rm smpl.zip - -echo -e "Downloading done!" 
\ No newline at end of file diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_nodes.py b/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_nodes.py deleted file mode 100644 index 9857c8221b7f6fb8530699bdf5593f8f0b74e152..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_nodes.py +++ /dev/null @@ -1,124 +0,0 @@ -import numpy as np -import pytest -from trimesh import transformations - -from pyrender import (DirectionalLight, PerspectiveCamera, Mesh, Node) - - -def test_nodes(): - - x = Node() - assert x.name is None - assert x.camera is None - assert x.children == [] - assert x.skin is None - assert np.allclose(x.matrix, np.eye(4)) - assert x.mesh is None - assert np.allclose(x.rotation, [0,0,0,1]) - assert np.allclose(x.scale, np.ones(3)) - assert np.allclose(x.translation, np.zeros(3)) - assert x.weights is None - assert x.light is None - - x.name = 'node' - - # Test node light/camera/mesh tests - c = PerspectiveCamera(yfov=2.0) - m = Mesh([]) - d = DirectionalLight() - x.camera = c - assert x.camera == c - with pytest.raises(TypeError): - x.camera = m - x.camera = d - x.camera = None - x.mesh = m - assert x.mesh == m - with pytest.raises(TypeError): - x.mesh = c - x.mesh = d - x.light = d - assert x.light == d - with pytest.raises(TypeError): - x.light = m - x.light = c - - # Test transformations getters/setters/etc... - # Set up test values - x = np.array([1.0, 0.0, 0.0]) - y = np.array([0.0, 1.0, 0.0]) - t = np.array([1.0, 2.0, 3.0]) - s = np.array([0.5, 2.0, 1.0]) - - Mx = transformations.rotation_matrix(np.pi / 2.0, x) - qx = np.roll(transformations.quaternion_about_axis(np.pi / 2.0, x), -1) - Mxt = Mx.copy() - Mxt[:3,3] = t - S = np.eye(4) - S[:3,:3] = np.diag(s) - Mxts = Mxt.dot(S) - - My = transformations.rotation_matrix(np.pi / 2.0, y) - qy = np.roll(transformations.quaternion_about_axis(np.pi / 2.0, y), -1) - Myt = My.copy() - Myt[:3,3] = t - - x = Node(matrix=Mx) - assert np.allclose(x.matrix, Mx) - assert np.allclose(x.rotation, qx) - assert np.allclose(x.translation, np.zeros(3)) - assert np.allclose(x.scale, np.ones(3)) - - x.matrix = My - assert np.allclose(x.matrix, My) - assert np.allclose(x.rotation, qy) - assert np.allclose(x.translation, np.zeros(3)) - assert np.allclose(x.scale, np.ones(3)) - x.translation = t - assert np.allclose(x.matrix, Myt) - assert np.allclose(x.rotation, qy) - x.rotation = qx - assert np.allclose(x.matrix, Mxt) - x.scale = s - assert np.allclose(x.matrix, Mxts) - - x = Node(matrix=Mxt) - assert np.allclose(x.matrix, Mxt) - assert np.allclose(x.rotation, qx) - assert np.allclose(x.translation, t) - assert np.allclose(x.scale, np.ones(3)) - - x = Node(matrix=Mxts) - assert np.allclose(x.matrix, Mxts) - assert np.allclose(x.rotation, qx) - assert np.allclose(x.translation, t) - assert np.allclose(x.scale, s) - - # Individual element getters - x.scale[0] = 0 - assert np.allclose(x.scale[0], 0) - - x.translation[0] = 0 - assert np.allclose(x.translation[0], 0) - - x.matrix = np.eye(4) - x.matrix[0,0] = 500 - assert x.matrix[0,0] == 1.0 - - # Failures - with pytest.raises(ValueError): - x.matrix = 5 * np.eye(4) - with pytest.raises(ValueError): - x.matrix = np.eye(5) - with pytest.raises(ValueError): - x.matrix = np.eye(4).dot([5,1,1,1]) - with pytest.raises(ValueError): - x.rotation = np.array([1,2]) - with pytest.raises(ValueError): - x.rotation = np.array([1,2,3]) - with pytest.raises(ValueError): - x.rotation = np.array([1,2,3,4]) - with 
pytest.raises(ValueError): - x.translation = np.array([1,2,3,4]) - with pytest.raises(ValueError): - x.scale = np.array([1,2,3,4]) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py deleted file mode 100644 index 0980d729dd3b579fee0380d0b9d7055e6843ba12..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py +++ /dev/null @@ -1,179 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchlibrosa.stft import Spectrogram, LogmelFilterBank - -def get_audio_encoder(name: str): - if name == "Cnn14": - return Cnn14 - else: - raise Exception('The audio encoder name {} is incorrect or not supported'.format(name)) - - -class ConvBlock(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.conv2 = nn.Conv2d(in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - self.bn2 = nn.BatchNorm2d(out_channels) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - x = F.relu_(self.bn2(self.conv2(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class ConvBlock5x5(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock5x5, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(5, 5), stride=(1, 1), - padding=(2, 2), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class AttBlock(nn.Module): - def __init__(self, n_in, n_out, activation='linear', temperature=1.): - super(AttBlock, self).__init__() - - self.activation = activation - self.temperature = temperature - self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True) - self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True) - - self.bn_att = nn.BatchNorm1d(n_out) - - def forward(self, x): - # x: (n_samples, n_in, n_time) - norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1) - cla = self.nonlinear_transform(self.cla(x)) - x = torch.sum(norm_att * cla, dim=2) - return x, norm_att, cla - - def nonlinear_transform(self, x): - if self.activation == 'linear': - return x - elif self.activation == 'sigmoid': - return torch.sigmoid(x) - - -class Cnn14(nn.Module): - def __init__(self, 
sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num, out_emb): - - super(Cnn14, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.bn0 = nn.BatchNorm2d(64) - - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024) - self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048) - - # out_emb is 2048 for best Cnn14 - self.fc1 = nn.Linear(2048, out_emb, bias=True) - self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True) - - def forward(self, input, mixup_lambda=None): - """ - Input: (batch_size, data_length) - """ - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = torch.mean(x, dim=3) - - (x1, _) = torch.max(x, dim=2) - x2 = torch.mean(x, dim=2) - x = x1 + x2 - x = F.dropout(x, p=0.5, training=self.training) - x = F.relu_(self.fc1(x)) - embedding = F.dropout(x, p=0.5, training=self.training) - clipwise_output = torch.sigmoid(self.fc_audioset(x)) - - output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding} - - return output_dict \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/nar_tts_modules.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/nar_tts_modules.py deleted file mode 100644 index fd6b53cc488b3b4407e43c703bfc180c31b39607..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/nar_tts_modules.py +++ /dev/null @@ -1,138 +0,0 @@ -import torch -from torch import nn - -from text_to_speech.modules.commons.layers import LayerNorm -import torch.nn.functional as F - -class DurationPredictor(torch.nn.Module): - def __init__(self, idim, n_layers=2, n_chans=384, kernel_size=3, dropout_rate=0.1, offset=1.0): - super(DurationPredictor, self).__init__() - self.offset = offset - self.conv = torch.nn.ModuleList() - self.kernel_size = kernel_size - for idx in range(n_layers): - in_chans = idim if idx == 0 else n_chans - self.conv += [torch.nn.Sequential( - 
torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=kernel_size // 2), - torch.nn.ReLU(), - LayerNorm(n_chans, dim=1), - torch.nn.Dropout(dropout_rate) - )] - self.linear = nn.Sequential(torch.nn.Linear(n_chans, 1), nn.Softplus()) - - def forward(self, x, x_padding=None): - x = x.transpose(1, -1) # (B, idim, Tmax) - for f in self.conv: - x = f(x) # (B, C, Tmax) - if x_padding is not None: - x = x * (1 - x_padding.float())[:, None, :] - - x = self.linear(x.transpose(1, -1)) # [B, T, C] - x = x * (1 - x_padding.float())[:, :, None] # (B, T, C) - x = x[..., 0] # (B, Tmax) - return x - - -class SyntaDurationPredictor(torch.nn.Module): - def __init__(self, idim, n_layers=2, n_chans=384, kernel_size=3, dropout_rate=0.1, offset=1.0): - super(SyntaDurationPredictor, self).__init__() - from text_to_speech.modules.tts.syntaspeech.syntactic_graph_encoder import GraphAuxEnc - self.graph_encoder = GraphAuxEnc(in_dim=idim, hid_dim=idim, out_dim=idim) - self.offset = offset - self.conv = torch.nn.ModuleList() - self.kernel_size = kernel_size - for idx in range(n_layers): - in_chans = idim if idx == 0 else n_chans - self.conv += [torch.nn.Sequential( - torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=kernel_size // 2), - torch.nn.ReLU(), - LayerNorm(n_chans, dim=1), - torch.nn.Dropout(dropout_rate) - )] - self.linear = nn.Sequential(torch.nn.Linear(n_chans, 1), nn.Softplus()) - - def forward(self, x, x_padding=None, ph2word=None, graph_lst=None, etypes_lst=None): - x = x.transpose(1, -1) # (B, idim, Tmax) - assert ph2word is not None and graph_lst is not None and etypes_lst is not None - x_graph = self.graph_encoder(graph_lst, x, ph2word, etypes_lst) - x = x + x_graph * 1. - - for f in self.conv: - x = f(x) # (B, C, Tmax) - if x_padding is not None: - x = x * (1 - x_padding.float())[:, None, :] - - x = self.linear(x.transpose(1, -1)) # [B, T, C] - x = x * (1 - x_padding.float())[:, :, None] # (B, T, C) - x = x[..., 0] # (B, Tmax) - return x - - -class LengthRegulator(torch.nn.Module): - def __init__(self, pad_value=0.0): - super(LengthRegulator, self).__init__() - self.pad_value = pad_value - - def forward(self, dur, dur_padding=None, alpha=1.0): - """ - Example (no batch dim version): - 1. dur = [2,2,3] - 2. token_idx = [[1],[2],[3]], dur_cumsum = [2,4,7], dur_cumsum_prev = [0,2,4] - 3. token_mask = [[1,1,0,0,0,0,0], - [0,0,1,1,0,0,0], - [0,0,0,0,1,1,1]] - 4. token_idx * token_mask = [[1,1,0,0,0,0,0], - [0,0,2,2,0,0,0], - [0,0,0,0,3,3,3]] - 5. 
(token_idx * token_mask).sum(0) = [1,1,2,2,3,3,3] - - :param dur: Batch of durations of each frame (B, T_txt) - :param dur_padding: Batch of padding of each frame (B, T_txt) - :param alpha: duration rescale coefficient - :return: - mel2ph (B, T_speech) - assert alpha > 0 - """ - dur = torch.round(dur.float() * alpha).long() - if dur_padding is not None: - dur = dur * (1 - dur_padding.long()) - token_idx = torch.arange(1, dur.shape[1] + 1)[None, :, None].to(dur.device) - dur_cumsum = torch.cumsum(dur, 1) - dur_cumsum_prev = F.pad(dur_cumsum, [1, -1], mode='constant', value=0) - - pos_idx = torch.arange(dur.sum(-1).max())[None, None].to(dur.device) - token_mask = (pos_idx >= dur_cumsum_prev[:, :, None]) & (pos_idx < dur_cumsum[:, :, None]) - mel2token = (token_idx * token_mask.long()).sum(1) - return mel2token - - -class PitchPredictor(torch.nn.Module): - def __init__(self, idim, n_layers=5, n_chans=384, odim=2, kernel_size=5, dropout_rate=0.1): - super(PitchPredictor, self).__init__() - self.conv = torch.nn.ModuleList() - self.kernel_size = kernel_size - for idx in range(n_layers): - in_chans = idim if idx == 0 else n_chans - self.conv += [torch.nn.Sequential( - torch.nn.Conv1d(in_chans, n_chans, kernel_size, padding=kernel_size // 2), - torch.nn.ReLU(), - LayerNorm(n_chans, dim=1), - torch.nn.Dropout(dropout_rate) - )] - self.linear = torch.nn.Linear(n_chans, odim) - - def forward(self, x): - """ - - :param x: [B, T, H] - :return: [B, T, H] - """ - x = x.transpose(1, -1) # (B, idim, Tmax) - for f in self.conv: - x = f(x) # (B, C, Tmax) - x = self.linear(x.transpose(1, -1)) # (B, Tmax, H) - return x - - -class EnergyPredictor(PitchPredictor): - pass diff --git a/spaces/AIGuardians/SummarizeWikipediaDocument/app.py b/spaces/AIGuardians/SummarizeWikipediaDocument/app.py deleted file mode 100644 index 50ac072d3e121c1ada399ba9e349e6e79bd64470..0000000000000000000000000000000000000000 --- a/spaces/AIGuardians/SummarizeWikipediaDocument/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -import wikipedia -from transformers import pipeline -import os - -# Setting to use the 0th GPU -os.environ["CUDA_VISIBLE_DEVICES"] = "0" - - -def summarize(text): - # Setting to use the bart-large-cnn model for summarization - summarizer = pipeline("summarization") - - # To use the t5-base model for summarization: - # summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf") - - summary_text = summarizer(text, max_length=100, min_length=5, do_sample=False)[0]['summary_text'] - print(f'Length of initial text: {len(text)}') - print(f'Length of summary: {len(summary_text)}') - print(summary_text) - return summary_text - - -def greet(name): - return "Hello " + name.orig_name + "!" 
- - -def get_ocr(): - return '' - - -def search_wiki(text): - return wikipedia.search(text) - - -def get_wiki(search_term): - # text = wikipedia.summary(search_term) - orig_text_len = len(search_term) - text = summarize(search_term) - sum_length = len(text) - return [text, orig_text_len, sum_length] - - -# def inference(file): - # get_ocr() - # model = AutoModelForSeq2SeqLM.from_pretrained("sgugger/my-awesome-model") - -out_sum_text = gr.Textbox(label='Summarized Text', lines=15) -out_orig_test_len = gr.Number(label='Original Text Length') -out_sum_text_len = gr.Number(label='Summarized Text Length') - -iface = gr.Interface(fn=get_wiki, - inputs=gr.Textbox(lines=50, placeholder="Paste article here....", label='Article to Summarize'), - outputs=[out_sum_text, out_orig_test_len, out_sum_text_len], - title='Article Summary', - description='Paste in an article and it will be summarized.' - ) -iface.launch() # To create a public link, set `share=True` in `launch()`. diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptDuo.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptDuo.py deleted file mode 100644 index 119ff16b694866b52e0052e1710b4a9c530ef100..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptDuo.py +++ /dev/null @@ -1,57 +0,0 @@ -from __future__ import annotations - -from curl_cffi.requests import AsyncSession -from .base_provider import AsyncProvider, format_prompt - - -class ChatgptDuo(AsyncProvider): - url = "https://chatgptduo.com" - supports_gpt_35_turbo = True - working = True - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - timeout: int = 30, - **kwargs - ) -> str: - async with AsyncSession( - impersonate="chrome107", - proxies={"https": proxy}, - timeout=timeout - ) as session: - prompt = format_prompt(messages), - data = { - "prompt": prompt, - "search": prompt, - "purpose": "ask", - } - response = await session.post(f"{cls.url}/", data=data) - response.raise_for_status() - data = response.json() - - cls._sources = [{ - "title": source["title"], - "url": source["link"], - "snippet": source["snippet"] - } for source in data["results"]] - - return data["answer"] - - @classmethod - def get_sources(cls): - return cls._sources - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" \ No newline at end of file diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net.py deleted file mode 100644 index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net.py +++ /dev/null @@ -1,76 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, Interpolate, _make_encoder - - -class MidasNet(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256, non_negative=True): - """Init. 
- - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet, self).__init__() - - use_pretrained = False if path is None else True - - self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - ) - - if path: - self.load(path) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/prisoner.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/prisoner.py deleted file mode 100644 index a96352049a60ed8597783e178f017de056573a23..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/prisoner.py +++ /dev/null @@ -1,49 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING, Any, List - -from . 
import describer_registry as DescriberRegistry -from .base import BaseDescriber - -if TYPE_CHECKING: - from agentverse.environments import BaseEnvironment - - -@DescriberRegistry.register("prisoner") -class PrisonerDescriber(BaseDescriber): - switch_func = { - "Both Suspects": "Suspect2", - "Suspect1": "Suspect2", - "Suspect2": "Suspect1", - } - receiver: str = "Both Suspects" - - def get_env_description(self, environment: BaseEnvironment) -> List[str]: - if environment.cnt_turn == 0: - environment.agents[0].set_receiver({"all"}) - environment.agents[1].set_receiver({"Police", "Suspect1"}) - environment.agents[2].set_receiver({"Police", "Suspect2"}) - - # only police have to choose to talk to suspect1 or suspect - description = [] - for i, agent in enumerate(environment.agents): - if i == 0: - # police -> suspect1 -> police -> suspect2 - if environment.cnt_turn % 2 == 1: - description.append("") - continue - - # Police will have to choose talk to which suspect - description.append(f"You are now talking to {self.receiver}") - - receiver = "all" if self.receiver == "Both Suspects" else self.receiver - self.receiver = self.switch_func[self.receiver] - agent.set_receiver({receiver}) - - else: - description.append("") - - return description - - def reset(self) -> None: - pass diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cleaners.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cleaners.py deleted file mode 100644 index c80e113b2b81a66134800dbdaa29c7d96a0152a7..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cleaners.py +++ /dev/null @@ -1,146 +0,0 @@ -import re - - -def japanese_cleaners(text): - from text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - from text.korean import latin_to_hangul, number_to_hangul, divide_hangul - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - from text.mandarin import chinese_to_romaji - from text.japanese import japanese_to_romaji_with_accent - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if text[-1] != '।': - text += ' ।' - return text - - -def cjks_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_lazy_ipa - from text.sanskrit import devanagari_to_ipa - from text.english import english_to_lazy_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: 
chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - from text.mandarin import chinese_to_ipa - from text.japanese import japanese_to_ipa2 - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - from text.thai import num_to_thai, latin_to_thai - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - from text.shanghainese import shanghainese_to_ipa - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - from text.mandarin import chinese_to_ipa2 - from text.japanese import japanese_to_ipa3 - from text.shanghainese import shanghainese_to_ipa - from text.cantonese import cantonese_to_ipa - from text.english import english_to_lazy_ipa2 - from text.ngu_dialect import ngu_dialect_to_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) - text = re.sub(r'\s+$', '', 
text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/__init__.py deleted file mode 100644 index fd78ddb3aae232a734bd911e92d8c9a07019945d..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .pipeline_consistency_models import ConsistencyModelPipeline diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py deleted file mode 100644 index 271d20a2bc7d306139871bd3eb406ad9b005a00a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py +++ /dev/null @@ -1,645 +0,0 @@ -import copy -from dataclasses import dataclass -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -from torch.nn.functional import grid_sample -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline, StableDiffusionSafetyChecker -from diffusers.schedulers import KarrasDiffusionSchedulers -from diffusers.utils import BaseOutput - - -def rearrange_0(tensor, f): - F, C, H, W = tensor.size() - tensor = torch.permute(torch.reshape(tensor, (F // f, f, C, H, W)), (0, 2, 1, 3, 4)) - return tensor - - -def rearrange_1(tensor): - B, C, F, H, W = tensor.size() - return torch.reshape(torch.permute(tensor, (0, 2, 1, 3, 4)), (B * F, C, H, W)) - - -def rearrange_3(tensor, f): - F, D, C = tensor.size() - return torch.reshape(tensor, (F // f, f, D, C)) - - -def rearrange_4(tensor): - B, F, D, C = tensor.size() - return torch.reshape(tensor, (B * F, D, C)) - - -class CrossFrameAttnProcessor: - """ - Cross frame attention processor. Each frame attends the first frame. - - Args: - batch_size: The number that represents actual batch size, other than the frames. - For example, calling unet with a single prompt and num_images_per_prompt=1, batch_size should be equal to - 2, due to classifier-free guidance. 
- """ - - def __init__(self, batch_size=2): - self.batch_size = batch_size - - def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = hidden_states.shape - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - query = attn.to_q(hidden_states) - - is_cross_attention = encoder_hidden_states is not None - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - # Cross Frame Attention - if not is_cross_attention: - video_length = key.size()[0] // self.batch_size - first_frame_index = [0] * video_length - - # rearrange keys to have batch and frames in the 1st and 2nd dims respectively - key = rearrange_3(key, video_length) - key = key[:, first_frame_index] - # rearrange values to have batch and frames in the 1st and 2nd dims respectively - value = rearrange_3(value, video_length) - value = value[:, first_frame_index] - - # rearrange back to original shape - key = rearrange_4(key) - value = rearrange_4(value) - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - return hidden_states - - -class CrossFrameAttnProcessor2_0: - """ - Cross frame attention processor with scaled_dot_product attention of Pytorch 2.0. - - Args: - batch_size: The number that represents actual batch size, other than the frames. - For example, calling unet with a single prompt and num_images_per_prompt=1, batch_size should be equal to - 2, due to classifier-free guidance. 
- """ - - def __init__(self, batch_size=2): - if not hasattr(F, "scaled_dot_product_attention"): - raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.") - self.batch_size = batch_size - - def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = ( - hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape - ) - inner_dim = hidden_states.shape[-1] - - if attention_mask is not None: - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - # scaled_dot_product_attention expects attention_mask shape to be - # (batch, heads, source_length, target_length) - attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1]) - - query = attn.to_q(hidden_states) - - is_cross_attention = encoder_hidden_states is not None - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - # Cross Frame Attention - if not is_cross_attention: - video_length = key.size()[0] // self.batch_size - first_frame_index = [0] * video_length - - # rearrange keys to have batch and frames in the 1st and 2nd dims respectively - key = rearrange_3(key, video_length) - key = key[:, first_frame_index] - # rearrange values to have batch and frames in the 1st and 2nd dims respectively - value = rearrange_3(value, video_length) - value = value[:, first_frame_index] - - # rearrange back to original shape - key = rearrange_4(key) - value = rearrange_4(value) - - head_dim = inner_dim // attn.heads - query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - - # the output of sdp = (batch, num_heads, seq_len, head_dim) - # TODO: add support for attn.scale when we move to Torch 2.1 - hidden_states = F.scaled_dot_product_attention( - query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False - ) - - hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim) - hidden_states = hidden_states.to(query.dtype) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - return hidden_states - - -@dataclass -class TextToVideoPipelineOutput(BaseOutput): - r""" - Output class for zero-shot text-to-video pipeline. - - Args: - images (`[List[PIL.Image.Image]`, `np.ndarray`]): - List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, - num_channels)`. - nsfw_content_detected (`[List[bool]]`): - List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or - `None` if safety checking could not be performed. 
- """ - images: Union[List[PIL.Image.Image], np.ndarray] - nsfw_content_detected: Optional[List[bool]] - - -def coords_grid(batch, ht, wd, device): - # Adapted from https://github.com/princeton-vl/RAFT/blob/master/core/utils/utils.py - coords = torch.meshgrid(torch.arange(ht, device=device), torch.arange(wd, device=device)) - coords = torch.stack(coords[::-1], dim=0).float() - return coords[None].repeat(batch, 1, 1, 1) - - -def warp_single_latent(latent, reference_flow): - """ - Warp latent of a single frame with given flow - - Args: - latent: latent code of a single frame - reference_flow: flow which to warp the latent with - - Returns: - warped: warped latent - """ - _, _, H, W = reference_flow.size() - _, _, h, w = latent.size() - coords0 = coords_grid(1, H, W, device=latent.device).to(latent.dtype) - - coords_t0 = coords0 + reference_flow - coords_t0[:, 0] /= W - coords_t0[:, 1] /= H - - coords_t0 = coords_t0 * 2.0 - 1.0 - coords_t0 = F.interpolate(coords_t0, size=(h, w), mode="bilinear") - coords_t0 = torch.permute(coords_t0, (0, 2, 3, 1)) - - warped = grid_sample(latent, coords_t0, mode="nearest", padding_mode="reflection") - return warped - - -def create_motion_field(motion_field_strength_x, motion_field_strength_y, frame_ids, device, dtype): - """ - Create translation motion field - - Args: - motion_field_strength_x: motion strength along x-axis - motion_field_strength_y: motion strength along y-axis - frame_ids: indexes of the frames the latents of which are being processed. - This is needed when we perform chunk-by-chunk inference - device: device - dtype: dtype - - Returns: - - """ - seq_length = len(frame_ids) - reference_flow = torch.zeros((seq_length, 2, 512, 512), device=device, dtype=dtype) - for fr_idx in range(seq_length): - reference_flow[fr_idx, 0, :, :] = motion_field_strength_x * (frame_ids[fr_idx]) - reference_flow[fr_idx, 1, :, :] = motion_field_strength_y * (frame_ids[fr_idx]) - return reference_flow - - -def create_motion_field_and_warp_latents(motion_field_strength_x, motion_field_strength_y, frame_ids, latents): - """ - Creates translation motion and warps the latents accordingly - - Args: - motion_field_strength_x: motion strength along x-axis - motion_field_strength_y: motion strength along y-axis - frame_ids: indexes of the frames the latents of which are being processed. - This is needed when we perform chunk-by-chunk inference - latents: latent codes of frames - - Returns: - warped_latents: warped latents - """ - motion_field = create_motion_field( - motion_field_strength_x=motion_field_strength_x, - motion_field_strength_y=motion_field_strength_y, - frame_ids=frame_ids, - device=latents.device, - dtype=latents.dtype, - ) - warped_latents = latents.clone().detach() - for i in range(len(warped_latents)): - warped_latents[i] = warp_single_latent(latents[i][None], motion_field[i][None]) - return warped_latents - - -class TextToVideoZeroPipeline(StableDiffusionPipeline): - r""" - Pipeline for zero-shot text-to-video generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). 
- tokenizer (`CLIPTokenizer`): - A [`~transformers.CLIPTokenizer`] to tokenize text. - unet ([`UNet2DConditionModel`]): - A [`UNet3DConditionModel`] to denoise the encoded video latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`CLIPImageProcessor`]): - A [`CLIPImageProcessor`] to extract features from generated images; used as inputs to the `safety_checker`. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__( - vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker - ) - processor = ( - CrossFrameAttnProcessor2_0(batch_size=2) - if hasattr(F, "scaled_dot_product_attention") - else CrossFrameAttnProcessor(batch_size=2) - ) - self.unet.set_attn_processor(processor) - - def forward_loop(self, x_t0, t0, t1, generator): - """ - Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. - - Args: - x_t0: - Latent code at time t0. - t0: - Timestep at t0. - t1: - Timestamp at t1. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - - Returns: - x_t1: - Forward process applied to x_t0 from time t0 to t1. - """ - eps = torch.randn(x_t0.size(), generator=generator, dtype=x_t0.dtype, device=x_t0.device) - alpha_vec = torch.prod(self.scheduler.alphas[t0:t1]) - x_t1 = torch.sqrt(alpha_vec) * x_t0 + torch.sqrt(1 - alpha_vec) * eps - return x_t1 - - def backward_loop( - self, - latents, - timesteps, - prompt_embeds, - guidance_scale, - callback, - callback_steps, - num_warmup_steps, - extra_step_kwargs, - cross_attention_kwargs=None, - ): - """ - Perform backward process given list of time steps. - - Args: - latents: - Latents at time timesteps[0]. - timesteps: - Time steps along which to perform backward process. - prompt_embeds: - Pre-generated text embeddings. - guidance_scale: - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - extra_step_kwargs: - Extra_step_kwargs. 
- cross_attention_kwargs: - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - num_warmup_steps: - number of warmup steps. - - Returns: - latents: - Latents of backward process output at time timesteps[-1]. - """ - do_classifier_free_guidance = guidance_scale > 1.0 - num_steps = (len(timesteps) - num_warmup_steps) // self.scheduler.order - with self.progress_bar(total=num_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - return latents.clone().detach() - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - video_length: Optional[int] = 8, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_videos_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - motion_field_strength_x: float = 12, - motion_field_strength_y: float = 12, - output_type: Optional[str] = "tensor", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - t0: int = 44, - t1: int = 47, - frame_ids: Optional[List[int]] = None, - ): - """ - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - video_length (`int`, *optional*, defaults to 8): - The number of generated video frames. - height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. 
-            negative_prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts to guide what not to include in video generation. If not defined, you need to
-                pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale <= 1`).
-            num_videos_per_prompt (`int`, *optional*, defaults to 1):
-                The number of videos to generate per prompt.
-            eta (`float`, *optional*, defaults to 0.0):
-                Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
-                to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
-            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
-                generation deterministic.
-            latents (`torch.FloatTensor`, *optional*):
-                Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
-                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor is generated by sampling using the supplied random `generator`.
-            output_type (`str`, *optional*, defaults to `"tensor"`):
-                The output format of the generated video. Choose `"latent"` to return the latent codes; any other
-                value returns the decoded video as a NumPy `ndarray`.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a
-                [`~pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput`] instead of
-                a plain tuple.
-            callback (`Callable`, *optional*):
-                A function that is called every `callback_steps` steps during inference. The function is called with
-                the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-            callback_steps (`int`, *optional*, defaults to 1):
-                The frequency at which the `callback` function is called. If not specified, the callback is called at
-                every step.
-            motion_field_strength_x (`float`, *optional*, defaults to 12):
-                Strength of motion in generated video along x-axis. See the [paper](https://arxiv.org/abs/2303.13439),
-                Sect. 3.3.1.
-            motion_field_strength_y (`float`, *optional*, defaults to 12):
-                Strength of motion in generated video along y-axis. See the [paper](https://arxiv.org/abs/2303.13439),
-                Sect. 3.3.1.
-            t0 (`int`, *optional*, defaults to 44):
-                Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the
-                [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1.
-            t1 (`int`, *optional*, defaults to 47):
-                Timestep t1. Should be in the range [t0 + 1, num_inference_steps - 1]. See the
-                [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1.
-            frame_ids (`List[int]`, *optional*):
-                Indexes of the frames that are being generated. This is used when generating longer videos
-                chunk-by-chunk.
-
-        Returns:
-            [`~pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput`]:
-                When `output_type` is not `"latent"`, the output contains an `ndarray` of the generated video;
-                otherwise it contains the latent codes of the generated video. It also contains a list of `bool`s
-                indicating whether the corresponding generated frame contains "not-safe-for-work" (nsfw) content.
- """ - assert video_length > 0 - if frame_ids is None: - frame_ids = list(range(video_length)) - assert len(frame_ids) == video_length - - assert num_videos_per_prompt == 1 - - if isinstance(prompt, str): - prompt = [prompt] - if isinstance(negative_prompt, str): - negative_prompt = [negative_prompt] - - # Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, device, num_videos_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_videos_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - # Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - - # Perform the first backward process up to time T_1 - x_1_t1 = self.backward_loop( - timesteps=timesteps[: -t1 - 1], - prompt_embeds=prompt_embeds, - latents=latents, - guidance_scale=guidance_scale, - callback=callback, - callback_steps=callback_steps, - extra_step_kwargs=extra_step_kwargs, - num_warmup_steps=num_warmup_steps, - ) - scheduler_copy = copy.deepcopy(self.scheduler) - - # Perform the second backward process up to time T_0 - x_1_t0 = self.backward_loop( - timesteps=timesteps[-t1 - 1 : -t0 - 1], - prompt_embeds=prompt_embeds, - latents=x_1_t1, - guidance_scale=guidance_scale, - callback=callback, - callback_steps=callback_steps, - extra_step_kwargs=extra_step_kwargs, - num_warmup_steps=0, - ) - - # Propagate first frame latents at time T_0 to remaining frames - x_2k_t0 = x_1_t0.repeat(video_length - 1, 1, 1, 1) - - # Add motion in latents at time T_0 - x_2k_t0 = create_motion_field_and_warp_latents( - motion_field_strength_x=motion_field_strength_x, - motion_field_strength_y=motion_field_strength_y, - latents=x_2k_t0, - frame_ids=frame_ids[1:], - ) - - # Perform forward process up to time T_1 - x_2k_t1 = self.forward_loop( - x_t0=x_2k_t0, - t0=timesteps[-t0 - 1].item(), - t1=timesteps[-t1 - 1].item(), - generator=generator, - ) - - # Perform backward process from time T_1 to 0 - x_1k_t1 = torch.cat([x_1_t1, x_2k_t1]) - b, l, d = prompt_embeds.size() - prompt_embeds = prompt_embeds[:, None].repeat(1, video_length, 1, 1).reshape(b * video_length, l, d) - - self.scheduler = scheduler_copy - x_1k_0 = self.backward_loop( - timesteps=timesteps[-t1 - 1 :], - prompt_embeds=prompt_embeds, - latents=x_1k_t1, - guidance_scale=guidance_scale, - callback=callback, - callback_steps=callback_steps, - extra_step_kwargs=extra_step_kwargs, - num_warmup_steps=0, - ) - latents = x_1k_0 - - # manually 
for max memory savings - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.unet.to("cpu") - torch.cuda.empty_cache() - - if output_type == "latent": - image = latents - has_nsfw_concept = None - else: - image = self.decode_latents(latents) - # Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return TextToVideoPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py deleted file mode 100644 index ddf663e4f0e1525490a493674b32b3dc4c781bb2..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - roi_head=dict( - bbox_head=dict( - reg_decoded_bbox=True, - loss_bbox=dict(type='IoULoss', loss_weight=10.0)))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco.py deleted file mode 100644 index 1b48a2104baf0df935954897ae4a991b38684d78..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py' -# learning policy -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/README.md deleted file mode 100644 index a843d355b6c95946517b50b6867d53f1ffcaf869..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/README.md +++ /dev/null @@ -1,28 +0,0 @@ -# Searching for MobileNetV3 - -## Introduction - - - -```latex -@inproceedings{Howard_2019_ICCV, - title={Searching for MobileNetV3}, - author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and Le, Quoc V. 
and Adam, Hartwig}, - booktitle={The IEEE International Conference on Computer Vision (ICCV)}, - pages={1314-1324}, - month={October}, - year={2019}, - doi={10.1109/ICCV.2019.00140}} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ------------------ | --------- | ------: | -------: | -------------- | ----: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| LRASPP | M-V3-D8 | 512x1024 | 320000 | 8.9 | 15.22 | 69.54 | 70.89 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes/lraspp_m-v3-d8_512x1024_320k_cityscapes_20201224_220337-cfe8fb07.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes/lraspp_m-v3-d8_512x1024_320k_cityscapes-20201224_220337.log.json) | -| LRASPP | M-V3-D8 (scratch) | 512x1024 | 320000 | 8.9 | 14.77 | 67.87 | 69.78 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes_20201224_220337-9f29cd72.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes-20201224_220337.log.json) | -| LRASPP | M-V3s-D8 | 512x1024 | 320000 | 5.3 | 23.64 | 64.11 | 66.42 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes/lraspp_m-v3s-d8_512x1024_320k_cityscapes_20201224_223935-61565b34.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes/lraspp_m-v3s-d8_512x1024_320k_cityscapes-20201224_223935.log.json) | -| LRASPP | M-V3s-D8 (scratch) | 512x1024 | 320000 | 5.3 | 24.50 | 62.74 | 65.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes_20201224_223935-03daeabb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes-20201224_223935.log.json) | diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py 
b/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py deleted file mode 100644 index e3924ad679cb3d7ba731322f9cdb67410baae59a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = '../deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py' -model = dict( - pretrained='open-mmlab://resnest101', - backbone=dict( - type='ResNeSt', - stem_channels=128, - radix=2, - reduction_factor=4, - avg_down_stride=True)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_readable_style.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_readable_style.css deleted file mode 100644 index 2cfa6f2b1fe54d2e3b73341903c8626a321706e9..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_readable_style.css +++ /dev/null @@ -1,33 +0,0 @@ -.container { - max-width: 600px; - margin-left: auto; - margin-right: auto; - background-color: rgb(31, 41, 55); - padding: 3em; - word-break: break-word; - overflow-wrap: anywhere; - color: #efefef !important; -} - -.container p, .container li { - font-size: 16px !important; - color: #efefef !important; - margin-bottom: 22px; - line-height: 1.4 !important; -} - -.container li > p { - display: inline !important; -} - -.container code { - overflow-x: auto; -} - -.container :not(pre) > code { - white-space: normal !important; -} - -.container .hoverable { - font-size: 14px; -} \ No newline at end of file diff --git a/spaces/Apex-X/Tm/roop/ui.py b/spaces/Apex-X/Tm/roop/ui.py deleted file mode 100644 index ba693dac116bd416b91518734fa550e9dfb95c7b..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/Tm/roop/ui.py +++ /dev/null @@ -1,231 +0,0 @@ -import os -import webbrowser -import customtkinter as ctk -from typing import Callable, Tuple -import cv2 -from PIL import Image, ImageOps - -import roop.globals -import roop.metadata -from roop.face_analyser import get_one_face -from roop.capturer import get_video_frame, get_video_frame_total -from roop.predicter import predict_frame -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import is_image, is_video, resolve_relative_path - -ROOT = None -ROOT_HEIGHT = 700 -ROOT_WIDTH = 600 - -PREVIEW = None -PREVIEW_MAX_HEIGHT = 700 -PREVIEW_MAX_WIDTH = 1200 - -RECENT_DIRECTORY_SOURCE = None -RECENT_DIRECTORY_TARGET = None -RECENT_DIRECTORY_OUTPUT = None - -preview_label = None -preview_slider = None -source_label = None -target_label = None -status_label = None - - -def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk: - global ROOT, PREVIEW - - ROOT = create_root(start, destroy) - PREVIEW = create_preview(ROOT) - - return ROOT - - -def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk: - global source_label, target_label, status_label - - ctk.deactivate_automatic_dpi_awareness() - ctk.set_appearance_mode('system') - ctk.set_default_color_theme(resolve_relative_path('ui.json')) - - root = ctk.CTk() - root.minsize(ROOT_WIDTH, ROOT_HEIGHT) - root.title(f'{roop.metadata.name} {roop.metadata.version}') - root.configure() - root.protocol('WM_DELETE_WINDOW', lambda: destroy()) - - source_label = ctk.CTkLabel(root, text=None) - source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25) - - target_label = ctk.CTkLabel(root, text=None) - target_label.place(relx=0.6, 
rely=0.1, relwidth=0.3, relheight=0.25) - - source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path()) - source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1) - - target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path()) - target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1) - - keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps) - keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', not roop.globals.keep_fps)) - keep_fps_checkbox.place(relx=0.1, rely=0.6) - - keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames) - keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get())) - keep_frames_switch.place(relx=0.1, rely=0.65) - - keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio) - keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get())) - keep_audio_switch.place(relx=0.6, rely=0.6) - - many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces) - many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get())) - many_faces_switch.place(relx=0.6, rely=0.65) - - start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start)) - start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05) - - stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy()) - stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05) - - preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview()) - preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05) - - status_label = ctk.CTkLabel(root, text=None, justify='center') - status_label.place(relx=0.1, rely=0.9, relwidth=0.8) - - donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2') - donate_label.place(relx=0.1, rely=0.95, relwidth=0.8) - donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color')) - donate_label.bind('