diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Illustrator CC 2017 19.0.0 (64-Bit) Crack .rar Download and Install Guide.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Illustrator CC 2017 19.0.0 (64-Bit) Crack .rar Download and Install Guide.md deleted file mode 100644 index 626964be9bb7ac22cb5e5f5dad1e31c3fe7726f9..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Illustrator CC 2017 19.0.0 (64-Bit) Crack .rar Download and Install Guide.md +++ /dev/null @@ -1,130 +0,0 @@ -
-

What is MakeMKV 1.14.4 Crack and why you need it

-

If you have a collection of Blu-ray and DVD discs that you want to watch on your computer or other devices, you might have encountered some problems with compatibility and playback. Some discs are encrypted or region-locked, which prevents you from copying or playing them on unauthorized devices. Some discs have multiple video and audio tracks, which can be confusing or annoying when you want to choose your preferred language or subtitle option. Some discs have large file sizes, which can take up a lot of space on your hard drive or consume a lot of bandwidth when streaming.

-

Adobe Illustrator CC 2017 19.0.0 (64-Bit) Crack .rar


Download: https://byltly.com/2uKzGc



-

Fortunately, there is a solution that can help you overcome these problems and enjoy your Blu-ray and DVD discs without any hassle. That solution is MakeMKV 1.14.4 Crack, a powerful software program that can convert your Blu-ray and DVD discs into high-quality MKV files, which can be played on a wide variety of devices.

-

MakeMKV is an easy-to-use software solution that enables you to convert your own videos into a free and patents-unencumbered format that can be played on any device. It acts as a format converter or "transcoder" that converts video clips from proprietary, often encrypted, discs into MKV files without altering the original content. The MKV format allows for multiple video and audio tracks to be stored along with their meta-information and chapter details. These files can be played on various platforms using a wide range of players, and can also be converted to other formats such as DVD and Blu-ray discs.

-

Some of the benefits of using MakeMKV to convert your Blu-ray and DVD discs to MKV files are:

- -

MakeMKV is free while in beta, the current free beta key is: T-bKTnFR8IlPCYOWdl2z00ScXddJFYFMn6qazW qXUlUk3rrSKCEOexQgEswryjpAj8m2 or T-lZt8o9nM99zaQRod7dAiCZudjEmOnY1sSlVJFbG JK6lTyAmRCiBeFGO8VAUfrgrmUd. In addition, MakeMKV for Windows 11/10 offers the option to stream decrypted video instantly without any intermediate conversion, making it possible to watch your Blu-ray and DVD discs on your preferred device and operating system with your favorite player.

-

How to use MakeMKV 1.14.4 Crack to convert Blu-ray and DVD discs to MKV files

-

Using MakeMKV 1.14.4 Crack to convert your Blu-ray and DVD discs to MKV files is very simple and straightforward. Here are the steps you need to follow:

-

Step 1: Download and install MakeMKV 1.14.4 Crack from the official website

-

The first thing you need to do is download and install MakeMKV 1.14.4 Crack from the official website https://www.makemkv.com/download/. You can choose between Windows, Mac OS X, or Linux versions depending on your operating system.

-

Step 2: Launch MakeMKV and enter the beta key

-

After installing MakeMKV 1.14.4 Crack, launch it from your desktop or start menu. You will see a window like this:

- MakeMKV main window -

Click on the "Help" menu at the top right corner and select "Register". You will see a window like this:

- MakeMKV registration window -

Enter one of the beta keys mentioned above in the "Registration key" field and click "OK". You will see a message like this:

-


- MakeMKV registration successful -

Congratulations! You have successfully registered MakeMKV 1.14.4 Crack for free.

-

Step 3: Insert your Blu-ray or DVD disc into your drive and select it in MakeMKV

-

Now that you have registered MakeMKV 1.14.4 Crack, you can start converting your Blu-ray or DVD discs to MKV files. Insert your disc into your drive and wait for it to be detected by MakeMKV. You will see something like this:

- MakeMKV detecting disc -

Select your disc from the list of available sources on the left panel and click on the big disc icon on the right panel.

-

Step 4: Choose the output folder and the titles, audio tracks, and subtitles you want to keep

-

MakeMKV will scan your disc for titles (video segments) that can be converted to MKV files. You will see something like this:

- MakeMKV scanning disc -

You can choose which titles you want to keep by checking or unchecking them on the left panel. You can also expand each title by clicking on the arrow icon next to it and choose which audio tracks and subtitles you want to keep by checking or unchecking them on the right panel.

-

You can also change the output folder where your MKV files will be saved by clicking on the folder icon next to "Output folder" at the bottom of the window.

-

Step 5: Click on the "Make MKV" button and wait for the conversion to finish

-

Once you have selected all the titles, audio tracks, and subtitles you want to keep, click on the "Make MKV" button at the right bottom corner of the window.

- MakeMKV converting disc - MakeMKV will start converting the selected titles into MKV files. You can see the progress and the estimated time remaining at the bottom of the window. You can also pause or cancel the conversion at any time by clicking on the corresponding buttons.

-

The conversion time will depend on the size and number of titles you have selected, as well as the speed of your drive and computer. Generally, it will take about 10 to 30 minutes to convert a Blu-ray disc and about 5 to 15 minutes to convert a DVD disc.

-

When the conversion is finished, you will see a message like this:

- MakeMKV conversion finished -

Congratulations! You have successfully converted your Blu-ray or DVD disc into MKV files. You can find your MKV files in the output folder you have specified.

-

How to play MKV files on various devices and platforms

-

Now that you have converted your Blu-ray or DVD discs into MKV files, you might wonder how to play them on your devices and platforms. The good news is that MKV files are widely supported by many media players and devices, thanks to their open and patent-free nature. Here are some of the ways you can play your MKV files:

-

How to play MKV files on Windows

-

If you are using Windows 11/10/8/7/Vista/XP, you can play your MKV files with various media players such as VLC Media Player, MPC-HC, KMPlayer, PotPlayer, etc. These players can handle MKV files natively without any additional codecs or plugins. You can also use Windows Media Player if you install a codec pack such as K-Lite Codec Pack or CCCP.

-

How to play MKV files on Mac OS X

-

If you are using Mac OS X 10.7 or later, you can play your MKV files with various media players such as VLC Media Player, MPlayerX, IINA, etc. These players can handle MKV files natively without any additional codecs or plugins. You can also use QuickTime Player if you install a component such as Perian.

-

How to play MKV files on Linux

-

If you are using Linux, you can play your MKV files with various media players such as VLC Media Player, MPlayer, SMPlayer, MPV, etc. These players can handle MKV files natively without any additional codecs or plugins.

-

How to play MKV files on Android

-

If you are using Android, you can play your MKV files with various media players such as MX Player, VLC for Android, BSPlayer, etc. These players can handle MKV files natively without any additional codecs or plugins.

-

How to play MKV files on iOS

-

If you are using iOS, you can play your MKV files with various media players such as VLC for Mobile, Infuse, nPlayer, etc. These players can handle MKV files natively without any additional codecs or plugins.

-

How to stream MKV files from your computer to your TV, game console, etc.

-

If you want to stream your MKV files from your computer to your TV, game console, or other devices that support DLNA or AirPlay protocols, you can use various software programs such as Plex Media Server, Serviio, Universal Media Server, etc. These programs can transcode your MKV files on the fly and stream them to your devices over your local network.

-

How to convert MKV files to other formats such as MP4, AVI, MOV, etc.

- If you want to convert your MKV files to other formats, you can use various software programs such as WinX DVD Ripper, Freemake Video Converter, etc. These programs can convert your MKV files to other formats such as MP4, AVI, MOV, etc. with various settings and options.

-

FAQs about MakeMKV 1.14.4 Crack

-

Here are some of the frequently asked questions about MakeMKV 1.14.4 Crack and their answers:

-

What is the difference between MakeMKV and other video converters?

-

The main difference between MakeMKV and other video converters is that MakeMKV does not alter or compress the original video and audio data in any way. It simply extracts them from the disc and wraps them into a MKV container. This means that the quality of the output file is exactly the same as the quality of the input file. Other video converters usually re-encode the video and audio data into a different format, which can result in quality loss or degradation.

-

Is MakeMKV legal and safe to use?

-

MakeMKV is legal and safe to use as long as you use it for personal and non-commercial purposes only. You are allowed to make backup copies of your own Blu-ray and DVD discs for your own use. However, you are not allowed to distribute or share your MKV files with others or use them for any commercial purposes. You are also responsible for complying with the laws and regulations of your country regarding the use of MakeMKV.

-

How long does it take to convert a Blu-ray or DVD disc to MKV with MakeMKV?

-

The conversion time depends on various factors such as the size and number of titles you have selected, the speed of your drive and computer, and the complexity of the disc encryption. Generally, it will take about 10 to 30 minutes to convert a Blu-ray disc and about 5 to 15 minutes to convert a DVD disc with MakeMKV.

-

What are the advantages and disadvantages of MKV format?

-

The advantages of MKV format are:

- -

The disadvantages of MKV format are:

- -

How can I update MakeMKV to the latest version?

-

You can update MakeMKV to the latest version by downloading and installing it from the official website https://www.makemkv.com/download/. You can also check for updates within the program by clicking on the "Help" menu and selecting "Check for updates".

-

Conclusion

-

In conclusion, MakeMKV 1.14.4 Crack is a great software program that can help you convert your Blu-ray and DVD discs into high-quality MKV files that can be played on any device. It is easy to use, fast, reliable, and free while in beta. You can download it from the official website https://www.makemkv.com/download/ and enter one of the beta keys mentioned above to register it for free. You can also use it to stream your videos from your computer to your TV or other devices without any intermediate conversion. If you want to convert your MKV files to other formats or play them on devices that do not support MKV format well, you can use various software programs or tools that are compatible with MKV format.

-

We hope this article has been helpful and informative for you. If you have any questions or comments about MakeMKV 1.14.4 Crack or MKV format, feel free to leave them below. Thank you for reading!

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandya Ani Baby Marathi Movie Song Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandya Ani Baby Marathi Movie Song Download.md deleted file mode 100644 index 5c3a3c521e5d1c5624030d2d6f526ec627f1c9b6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandya Ani Baby Marathi Movie Song Download.md +++ /dev/null @@ -1,26 +0,0 @@ - -

How to Download Songs from Bandya Ani Baby, a Marathi Comedy Movie

-

Bandya Ani Baby is a 2009 Marathi comedy movie directed by Subhash Phadke and starring Prasad Oak and Sai Ranade Sane. The movie revolves around Bandya, a night watchman who shares an accommodation with a young girl Baby without her knowledge and tries to avoid her when she is at home. The movie has some catchy songs that you can download and listen to on your device.

-

Here are some steps to download songs from Bandya Ani Baby:

-

bandya ani baby marathi movie song download


DOWNLOAD ✓✓✓ https://byltly.com/2uKwLj



-
    -
  1. Go to Wynk Music, a popular music streaming app that has a collection of marathi album songs and marathi movie songs. You can download the app from Google Play Store or App Store.
  2. -
  3. Search for Bandya Ani Baby in the app and select the movie from the results. You will see a list of songs from the movie, such as "Bandya Ani Baby", "Majhya Premachi Phuljhadi", "Tujhya Vina" and more.
  4. -
  5. Select the song that you want to download and tap on the download icon. You will need to sign up or log in to your Wynk Music account to download the song. You can also choose the quality of the song before downloading.
  6. -
  7. Once the song is downloaded, you can find it in your Wynk Music library or your device's music folder. You can also play it offline anytime you want.
  8. -
-

Alternatively, you can also watch Bandya Ani Baby full movie online or download it from Hungama, a digital entertainment platform that offers movies, music, videos and more. You can access Hungama from its website or app.

-

To watch or download Bandya Ani Baby from Hungama, follow these steps:

-
    -
  1. Go to Hungama's website or app and search for Bandya Ani Baby. You will see the movie's poster and details.
  2. -
  3. Click on the play button to watch the movie online or click on the download button to save it on your device. You will need to sign up or log in to your Hungama account to watch or download the movie.
  4. -
  5. You can also choose the quality of the movie before watching or downloading. The movie is available in HD quality on Hungama.
  6. -
  7. Once the movie is downloaded, you can find it in your Hungama library or your device's video folder. You can also enjoy the songs from the movie along with the video.
  8. -
-

Bandya Ani Baby is a fun-filled Marathi movie that will make you laugh with its hilarious situations and dialogues. The songs from the movie are also enjoyable and catchy. You can download them from Wynk Music or Hungama and listen to them anytime you want.

- -

If you are a fan of Marathi comedy movies, you should not miss Bandya Ani Baby. The movie has a simple but engaging plot that will keep you entertained throughout. The movie also has some memorable performances by the lead actors Prasad Oak and Sai Ranade Sane, who have a great chemistry on screen. The movie also has some supporting actors who add to the humor and fun of the movie.

-

Bandya Ani Baby is a movie that will make you smile and laugh with its witty and funny scenes. The movie also has a message about friendship and trust that will touch your heart. The movie is a perfect blend of comedy and emotion that will appeal to all kinds of audiences.

-

You can watch or download Bandya Ani Baby from Wynk Music or Hungama and enjoy the movie with your family and friends. You can also download the songs from the movie and listen to them whenever you want. Bandya Ani Baby is a movie that you will love to watch again and again.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dont Get Scammed by Counterfeit MTG Cards Tips and Tricks to Stay Safe.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dont Get Scammed by Counterfeit MTG Cards Tips and Tricks to Stay Safe.md deleted file mode 100644 index 0db1f1ee632c6b1f8c3e20314798c5d8d2b3846e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dont Get Scammed by Counterfeit MTG Cards Tips and Tricks to Stay Safe.md +++ /dev/null @@ -1,40 +0,0 @@ -
-

MTG Crackdown: How Wizards of the Coast is Fighting Counterfeit Cards

- -

Magic: The Gathering (MTG) is one of the most popular and profitable trading card games in the world, with millions of players and collectors. However, this also makes it a target for counterfeiters who produce fake cards and sell them as authentic ones. In this article, we will explore how Wizards of the Coast (WotC), the company that owns and produces MTG, is cracking down on counterfeit cards and protecting its customers.

-

mtg crackdown


Download File ->>> https://byltly.com/2uKzAg



- -

What are counterfeit MTG cards and why are they a problem?

- -

Counterfeit MTG cards are cards that are not produced or authorized by WotC, but are made to look like genuine ones. They can vary in quality, from poor copies that are easily detectable to high-quality fakes that can fool even experienced players. Counterfeit cards can be sold online or in person, often at lower prices than authentic ones.

- -

Counterfeit cards are a problem for several reasons. First, they harm the integrity and reputation of the game, as they can create confusion and distrust among players and collectors. Second, they damage the value and collectability of authentic cards, as they flood the market with cheap and fake alternatives. Third, they hurt the revenue and profits of WotC and its authorized distributors and retailers, as they lose sales and customers to counterfeiters. Fourth, they can pose legal risks for both buyers and sellers of counterfeit cards, as they may violate intellectual property rights and consumer protection laws.

- -

How is WotC cracking down on counterfeit MTG cards?

- -

WotC is aware of the issue of counterfeit MTG cards and has taken several measures to combat it. Some of these measures include:

- - - -

How can you protect yourself from counterfeit MTG cards?

- -

If you are a player or collector of MTG cards, you can take some steps to protect yourself from counterfeit cards. Some of these steps include:

-

- - - -

Conclusion

- -

Counterfeit MTG cards are a serious threat to the game and its community. WotC is cracking down on counterfeiters and protecting its customers with various measures. You can also protect yourself from counterfeit cards by buying from authorized sources, checking for authenticity, keeping your receipts, and reporting any fraud. By doing so, you can enjoy playing and collecting MTG cards with confidence.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Alexandre Pires Discografia Completa Download 11.md b/spaces/1gistliPinn/ChatGPT4/Examples/Alexandre Pires Discografia Completa Download 11.md deleted file mode 100644 index e9f965653ec7329e4543816127cc8089247fbf33..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Alexandre Pires Discografia Completa Download 11.md +++ /dev/null @@ -1,14 +0,0 @@ -

Alexandre Pires Discografia Completa Download 11


Download ····· https://imgfil.com/2uxYEK



- --09-2016 - -Gravanter que els misteris de la vida estan sempre a curt termini, com a les relacions amb l'amor pot ser molt difícil trobar una seva solució. D'aquests uns minuts una mare lleu havia desaparegut des de la seva llar i els seus fills s'havien de convertir en fill. No sabem el que va passar des d'allà que els varen trobar. Els nadons van trobar que la mare estava emmarcada i entenen que una vegada deixada de l'amor, fins i tot una persona i haurien d'estar al llarg de la vida. - -El que no pots ser músico, pot ser nen gravant en alguna esglossarina i encara ser cap músician - -I és clar que els fills van fer tots els possibles, l'enterramment de la mare va donar temps per posar-se. Les nacions nord-americanes, on això hauria de ser molt més comú en aquests temps de la nostra vida, van deixar de ser mestres de la recerca, n'han deixat passar tota la inspiració i el lloc d'on havia de ser i de tractar-se de la nostra vida, s'han deixat venir tots els temps aferrats de la seva lluita. - -El que no pots ser músician, pot ser nen gravant en alguna esglossarina i encara ser cap músician. No, a la majoria de la gent el viatge de la vida no sembla haver-se-n'hi d'entrar en els sentiments d'amor i la poca contemplació que hi ha en l'univers. La mare amb els seus fills viuà en la bona llei i va passar a rebre l'aliment que comença a buscar la gent més etica. 4fefd39f24
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Color.io Match The Ultimate Web App for AI Color Grading.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Color.io Match The Ultimate Web App for AI Color Grading.md deleted file mode 100644 index 8d7f779349ca033aa813d1e0576b5e4a3c9cb395..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Color.io Match The Ultimate Web App for AI Color Grading.md +++ /dev/null @@ -1,153 +0,0 @@ -
-

Color Matching Software Free Download: How to Find and Use the Best Tools for Your Projects

-

If you are a graphic designer, digital artist, webmaster, or photographer, you know how important it is to have accurate and consistent colors in your projects. Whether you are working on a website, a logo, a poster, a flyer, or a photo album, you want your colors to look great on any device, platform, or media.

-

color matching software free download


DOWNLOAD: https://urlin.us/2uSV4Q



-

However, achieving this is not always easy. Different monitors, retouching software, printers, and paper types can have different effects on how colors are displayed and printed. Moreover, creating stunning color combinations that match your vision and style can be challenging without some guidance and inspiration.

-

This is where color matching software comes in handy. Color matching software is a tool that helps you select, edit, and apply colors to your digital or print projects. It can also help you create harmonious color schemes based on color theory and best practices.

-

In this article, we will show you how to find and use the best color matching software for your needs. We will also give you some examples of free color matching software that you can download and try today.

-

What is color matching software and why do you need it?

-

Color matching software is a tool that helps you select, edit, and apply colors to your digital or print projects.

-

Color matching software can help you achieve consistent and harmonious colors across different devices, platforms, and media. For example, it can help you adjust your monitor settings, retouching software settings, and printer settings to ensure that the colors you see on your screen are the same as the colors you print on paper. This can save you time, money, and frustration from having to reprint your projects due to color discrepancies.

-


-

Color matching software can also help you create stunning color combinations, gradients, and schemes based on color theory and best practices. For example, it can help you use the color wheel to find complementary, analogous, triadic, or tetradic colors that work well together. It can also help you generate color schemes based on different moods, themes, or trends. This can enhance the aesthetic appeal and emotional impact of your projects.

-

How to find the best color matching software for your needs?

-

There are many color matching software options available online, but not all of them are suitable for your specific needs and preferences. Some factors to consider when choosing a color matching software are:

-

The features and functions of the software

-

You want a color matching software that has the features and functions that you need for your projects. Some common features and functions of color matching software are:

- -

The compatibility and integration of the software

-

You want a color matching software that is compatible and integrated with your operating system, monitor, retouching software, and printer. Some aspects of compatibility and integration are:

- -

The ease of use and user interface of the software

-

You want a color matching software that is easy to use and has a user-friendly interface. Some aspects of ease of use and user interface are:

- -

The cost and license of the software

-

You want a color matching software that fits your budget and meets your license requirements. Some aspects of cost and license are:

- -

How to use color matching software to create and print amazing projects?

-

Once you have found the best color matching software for your needs, you can use it to create and print amazing projects in a few simple steps:

-

Step 1: Launch the color matching software and drag and drop your photo or image to the main window.

-

The software will automatically detect and load your photo or image to the main window. You will see a preview of your photo or image along with some tools and options on the sidebars and menus.

-

Step 2: The software will automatically adjust your monitor settings, retouching software settings, and printer settings based on the best recommendations for your chosen printer and paper type.

-

The software will use its built-in color management system to ensure that the colors you see on your screen are the same as the colors you print on paper. You can also manually adjust these settings if you want to fine-tune them.

-

Step 3: You can use the color picker tool to sample colors from your photo or image, or use the color wheel or color scheme generator to create new colors based on color theory and best practices.

-

The software will provide you with various tools and options to help you find and create the perfect colors for your project. You can use the color picker tool to sample any color from your photo or image by clicking on it. The software will show you the color code, name, and values of the sampled color. You can also use the color wheel or color scheme generator to create new colors based on different color models, such as complementary, analogous, triadic, or tetradic. The software will show you the color codes, names, and values of the generated colors.
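The color-wheel relationships mentioned above (complementary, analogous, triadic, tetradic) amount to rotating a color's hue by fixed angles. As a rough illustration only, and not code from any of the color matching programs discussed here, the sketch below uses Python's standard colorsys module to derive such scheme colors from an RGB value; the base hex color is just an arbitrary example.

```python
# Minimal sketch of color-wheel math: rotate the hue of a base color by fixed
# angles to get complementary (180°), triadic (±120°), and analogous (±30°) colors.
# Uses only the Python standard library; the base color below is an arbitrary example.
import colorsys

def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip("#")
    return tuple(int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#" + "".join(f"{round(c * 255):02x}" for c in rgb)

def rotate_hue(hex_color, degrees):
    r, g, b = hex_to_rgb(hex_color)
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # hue, lightness, saturation, each in 0..1
    h = (h + degrees / 360.0) % 1.0          # rotate the hue around the wheel
    return rgb_to_hex(colorsys.hls_to_rgb(h, l, s))

base = "#3498db"                             # arbitrary example color
print("complementary:", rotate_hue(base, 180))
print("triadic:      ", [rotate_hue(base, d) for d in (120, 240)])
print("analogous:    ", [rotate_hue(base, d) for d in (-30, 30)])
```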

-

Step 4: You can use the color editor tool to adjust and edit the colors to your liking, such as changing the hue, saturation, brightness, contrast, etc.

-

The software will allow you to edit the colors that you have picked or generated using various sliders, buttons, and options. You can change the hue, saturation, brightness, contrast, temperature, tint, etc. of any color by dragging the sliders or entering the values. You can also apply different filters, effects, and adjustments to any color by clicking on the buttons or options. The software will show you a preview of the edited color along with its color code, name, and values.

-

Step 5: You can use the text tool to evaluate the readability and contrast of your chosen font and background color combinations.

-

The software will help you choose the best font and background color combinations for your project by showing you how they look together. You can use the text tool to enter any text that you want to use in your project. The software will show you how the text looks in different fonts, sizes, styles, and alignments. You can also change the background color of the text by clicking on any of the picked or generated colors. The software will show you how well the text and background color contrast with each other by using a contrast ratio indicator.
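For reference, the "contrast ratio" such indicators usually report follows the WCAG 2.x definition: the ratio of the relative luminances of the lighter and darker colors, ranging from 1:1 to 21:1. The sketch below is a generic illustration of that formula in plain Python, not code taken from any particular color matching tool, and the sample colors are arbitrary.

```python
# Minimal sketch of the WCAG 2.x contrast-ratio formula often used by
# readability checkers: linearize each sRGB channel, compute relative
# luminance, then compare the lighter and darker colors.
def relative_luminance(rgb255):
    def linearize(channel):
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb255)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: dark gray text on a white background (arbitrary sample values).
ratio = contrast_ratio((51, 51, 51), (255, 255, 255))
print(f"contrast ratio: {ratio:.2f}:1")   # WCAG AA asks for at least 4.5:1 for body text
```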

-

Step 6: You can use the gradient tool to create smooth transitions between any two colors for creating a wide range of in-between hues.

-

The software will enable you to create beautiful gradients between any two colors that you have picked or generated. You can use the gradient tool to select any two colors that you want to blend together. The software will show you a gradient bar that shows how the two colors transition smoothly from one to another. You can also change the angle, type, and position of the gradient by dragging the gradient bar or entering the values. The software will show you a preview of the gradient along with its color codes, names, and values.
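Under the hood, a simple two-color gradient is just linear interpolation between the endpoint colors, channel by channel. The following sketch is a generic illustration of that idea (not the tool's actual code); the start and end colors are arbitrary examples.

```python
# Minimal sketch of a two-color gradient: linearly interpolate each RGB
# channel between the start and end colors to get the in-between hues.
def lerp_color(start, end, t):
    return tuple(round(s + (e - s) * t) for s, e in zip(start, end))

def gradient(start, end, steps):
    return [lerp_color(start, end, i / (steps - 1)) for i in range(steps)]

start, end = (255, 0, 0), (0, 0, 255)   # arbitrary example: red to blue
for rgb in gradient(start, end, 5):
    print("#%02x%02x%02x" % rgb)        # print each in-between color as hex
```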

-

Step 7: You can use the color list tool to save, catalog, and reuse the picked colors for future projects.

-

The software will allow you to save, catalog, and reuse the colors that you have picked or generated for future projects. You can use the color list tool to add any color that you want to save to a color list. The software will show you a color list that contains all the colors that you have added. You can also name, edit, delete, or export the color list as a file. You can also import a color list from a file or from another source. The software will show you the imported color list along with its colors.

-

Step 8: You can use the print plugin tool to print your photo or image with the optimal print profile and color settings for your chosen printer and paper type.

-

The software will help you print your photo or image with the best quality and accuracy possible. You can use the print plugin tool to select your printer model and paper type. The software will automatically apply the optimal print profile and color settings for your chosen printer and paper type. You can also preview how your photo or image will look on paper before printing it. You can also adjust the print size, orientation, margins, and other options if needed. The software will show you a print dialog box that lets you print your photo or image with a click of a button.

-

Conclusion

-

Color matching software is a great tool that can help you create and print amazing projects with accurate and consistent colors. It can also help you find and create stunning color combinations, gradients, and schemes based on color theory and best practices. However, not all color matching software are created equal. You need to find the best color matching software for your needs based on its features, functions, compatibility, integration, ease of use, user interface, cost, and license. In this article, we have shown you how to find and use the best color matching software for your needs. We have also given you some examples of free color matching software that you can download and try today.

-

FAQs

- -

I hope you enjoyed this article and learned something new about color matching software. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Clans APK for Android - Free Strategy Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Clans APK for Android - Free Strategy Game.md deleted file mode 100644 index 424119435652a5740a16544fbade4956525c2d68..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Clans APK for Android - Free Strategy Game.md +++ /dev/null @@ -1,198 +0,0 @@ -
-

Clash of Clans APKPure: How to Download and Play the Popular Strategy Game

-

If you are looking for a fun and addictive strategy game that you can play on your mobile device, you might want to check out Clash of Clans. This game has been around since 2012, but it is still one of the most popular games in the world, with millions of players worldwide. In this article, we will tell you what Clash of Clans is, why it is so popular, how to download it from APKPure, how to play it, and some tips and tricks to help you succeed in the game.

-

clash of clans apkpure


DOWNLOAD ---> https://urlin.us/2uSRVS



-

What is Clash of Clans?

-

A brief introduction to the game and its features

-

Clash of Clans is a strategy game developed by Supercell, a Finnish company that also created other hit games like Hay Day, Boom Beach, and Brawl Stars. In Clash of Clans, you are the chief of a village that you have to build and protect from other players. You also have to train and upgrade your army of various troops, such as barbarians, archers, giants, wizards, dragons, and more. You can use your army to attack other players' villages and loot their resources, or defend your own village from enemy raids. You can also join or create a clan with other players and participate in clan wars, where you can cooperate with your clanmates to attack or defend against other clans.

-

Why is Clash of Clans so popular?

-

Clash of Clans is popular for many reasons. First of all, it is free to download and play, although you can also buy some in-game items with real money if you want to. Second, it has a simple but engaging gameplay that appeals to both casual and hardcore gamers. You can play at your own pace and style, whether you prefer to focus on building your village, attacking other players, or joining clan wars. Third, it has a vibrant and friendly community that makes the game more social and fun. You can chat with other players, share tips and strategies, or compete with them in leaderboards and events. Fourth, it has regular updates that add new features, troops, spells, buildings, and challenges to the game. You can always find something new and exciting to do in Clash of Clans.

-

How to download Clash of Clans from APKPure

-

If you want to download Clash of Clans on your Android device, you can do so from APKPure. APKPure is a website that offers free and safe APK files for various apps and games. APK files are the installation files for Android applications that you can use to install them on your device without using Google Play Store. To download Clash of Clans from APKPure, follow these steps:

-

clash of clans apkpure download latest version
-clash of clans apkpure mod apk unlimited gems
-clash of clans apkpure update 2023
-clash of clans apkpure hack version free download
-clash of clans apkpure offline installer
-clash of clans apkpure for pc windows 10
-clash of clans apkpure old version 2017
-clash of clans apkpure private server 2023
-clash of clans apkpure th14 update
-clash of clans apkpure apk mirror
-clash of clans apkpure not working
-clash of clans apkpure vs play store
-clash of clans apkpure for ios iphone
-clash of clans apkpure town hall 14 mod
-clash of clans apkpure new troops and spells
-clash of clans apkpure online generator
-clash of clans apkpure for android tv
-clash of clans apkpure supercell id login
-clash of clans apkpure builder base update
-clash of clans apkpure magic items and potions
-clash of clans apkpure clan games rewards
-clash of clans apkpure war leagues strategy
-clash of clans apkpure hero skins and pets
-clash of clans apkpure season challenges guide
-clash of clans apkpure tips and tricks 2023
-clash of clans apkpure base layout editor
-clash of clans apkpure best farming army
-clash of clans apkpure how to get free gems
-clash of clans apkpure clan war attack strategy
-clash of clans apkpure troop upgrade priority
-clash of clans apkpure defense upgrade order
-clash of clans apkpure best th14 base design
-clash of clans apkpure how to join a clan
-clash of clans apkpure how to create a clan
-clash of clans apkpure how to change name
-clash of clans apkpure how to link devices
-clash of clans apkpure how to delete account
-clash of clans apkpure how to recover account
-clash of clans apkpure how to contact support
-clash of clans apkpure how to play on macbook
-clash of clans apkpure best attack strategy for th14
-clash of clans apkpure best defense strategy for th14
-clash of clans apkpure best builder base strategy for bh9
-clash of clans apkpure best clan castle troops for defense
-clash of clans apkpure best spells for each troop type
-clash of clans apkpure best heroes for each town hall level
-clash of clans apkpure best pets for each hero type
-clash of clans apkpure best siege machines for each attack strategy

-
    -
  1. Go to APKPure.com on your browser.
  2. -
  3. Search for "Clash of Clans" in the search bar.
  4. -
  5. Select the game from the results and click on "Download APK".
  6. -
  7. Wait for the download to finish and then open the file.
  8. -
  9. Allow the installation from unknown sources if prompted.
  10. -
  11. Follow the instructions on the screen to install the game.
  12. -
  13. Enjoy playing Clash of Clans!
  14. -
-

How to play Clash of Clans

-

The

The basics of building your village and raising your clan

-

When you start playing Clash of Clans, you will have a small village with a few buildings and resources. Your main goal is to expand your village and make it stronger and more prosperous. To do that, you need to build and upgrade various structures, such as:

- -

In addition to building your village, you also need to raise your clan. A clan is a group of players who can chat, donate troops, and participate in clan wars together. You can join an existing clan or create your own clan with your friends. Being in a clan has many benefits, such as:

- -

The different types of troops and spells you can use in battles

-

One of the most exciting parts of Clash of Clans is attacking other players' villages and looting their resources. To do that, you need to train and use various types of troops and spells. There are three categories of troops: normal troops, dark troops, and siege machines. Normal troops are trained in regular barracks using elixir, dark troops are trained in dark barracks using dark elixir (a rare resource that you can get from higher level villages), and siege machines are built in the workshop using elixir or gold (depending on the type). Each type of troop has its own strengths, weaknesses, abilities, and costs. Some examples of troops are:

- - - - - - - - - -< - - - - - - - - - - - - - - -
| Troop | Description |
| --- | --- |
| Barbarian | A basic melee fighter that charges at the nearest target with his sword. |
| Archer | A ranged attacker that shoots arrows at any target within her range. |
| Giant | A large and strong unit that targets defenses first and can take a lot of damage. |
| Goblin | A fast and greedy unit that targets resources first and can deal extra damage to them. |
| Wall Breaker | A suicidal unit that carries a bomb and explodes on walls, opening gaps for other troops. |
| Balloon | A flying unit that drops bombs on ground targets from above. |
| Wizard | A magical unit that shoots fireballs at any target within his range. |
| Healer | A flying unit that heals other ground units within her range. |
| Dragon | A powerful flying unit that breathes fire on both ground and air targets. |
| P.E.K.K.A | A heavily armored unit that deals massive damage with her sword. |
| Minion | A dark troop that flies and shoots dark elixir at any target within his range. |
| Hog Rider | A dark troop that rides a hog and jumps over walls to target defenses first. |
| Valkyrie | A dark troop that swings her axe around, hitting multiple targets at once. |
| Golem | A dark troop that is very durable and splits into smaller golemites when destroyed. |
| Witch | A dark troop that summons skeletons to fight for her and can revive fallen skeletons. |
| Lava Hound | A dark troop that flies and targets air defenses first. It splits into smaller lava pups when destroyed. |
| Bowler | A dark troop that throws large rocks that bounce and hit multiple targets. |
| Wall Wrecker | A siege machine that plows through walls and carries your clan castle troops to the enemy town hall. |
| Battle Blimp | A siege machine that flies over defenses and drops your clan castle troops near the enemy town hall. |
| Stone Slammer | A siege machine that flies and targets defenses first. It drops rocks that deal splash damage and carries your clan castle troops. |
| Siege Barracks | A siege machine that deploys on the ground and spawns troops over time. It also carries your clan castle troops. |
| Log Launcher | A siege machine that rolls logs that deal damage and knock back enemy buildings and troops. It also carries your clan castle troops. |
-

As you can see, there are many types of troops to choose from, and each one has its own role and purpose in the game. You should experiment with different combinations of troops and find the ones that suit your strategy and preference. You should also upgrade your troops in the laboratory to make them stronger and more effective.

-

In addition to troops, you can also use spells to aid you in battles. Spells are created in the spell factory using elixir or dark elixir (depending on the type). Spells can have various effects, such as healing, boosting, freezing, or damaging enemy units or buildings. Some examples of spells are:

- - -< - - - - - - - - - - - -
| Spell | Description |
| --- | --- |
| Lightning Spell | A spell that strikes a target with bolts of lightning, dealing damage and destroying any traps in the area. |
| Healing Spell | A spell that creates a ring of healing that restores the health of your troops within it. |
| Rage Spell | A spell that creates a ring of rage that boosts the damage and speed of your troops within it. |
| Jump Spell | A spell that creates a ring of jump that allows your troops to leap over walls within it. |
| Freeze Spell | A spell that freezes enemy units and buildings within its radius, preventing them from moving or attacking. |
| Clone Spell | A spell that creates copies of your troops within its radius, with the same level and health as the original ones. |
| Invisibility Spell | A spell that makes your troops invisible to enemy defenses within its radius, allowing them to bypass them or sneak behind them. |
| Poison Spell | A dark spell that creates a cloud of poison that damages and slows down enemy troops within it. |
| Earthquake Spell | A dark spell that creates a series of tremors that damage buildings based on their current health. |
| Haste Spell | A dark spell that creates a ring of haste that boosts the speed of your troops within it, without affecting their damage. |
| Skeleton Spell | A dark spell that summons a group of skeletons to fight for you on the battlefield. |
| Bat Spell | A dark spell that summons a swarm of bats to attack enemy buildings and troops. |
-

Like troops, spells also have different levels and costs, and you should upgrade them in the spell factory to make them more powerful and efficient. You should also use them wisely and strategically, as they can make a big difference in the outcome of a battle.

-

The various game modes and challenges you can enjoy in Clash of Clans

-

Besides attacking other players' villages, there are many other game modes and challenges you can enjoy in Clash of Clans. Some of them are:

- -

Tips and tricks for Clash of Clans

-

How to optimize your base layout and defense strategy

-

One of the key aspects of Clash of Clans is designing your base layout and defense strategy. A good base layout can help you protect your resources, town hall, and trophies from enemy attacks. A good defense strategy can help you repel or minimize the damage from enemy raids. Here are some tips and tricks to optimize your base layout and defense strategy:

- -

How to plan your attacks and use your resources wisely

-

Another key aspect of Clash of Clans is planning your attacks and using your resources wisely. A good attack plan can help you win battles, loot resources, and gain trophies from other players. A good resource management can help you build and upgrade your buildings and troops faster and more efficiently. Here are some tips and tricks to plan your attacks and use your resources wisely:

- -

How to join or create a clan and participate in clan wars

-

The last but not least aspect of Clash of Clans is joining or creating a clan and participating in clan wars. A clan is a group of players who can chat, donate troops, and participate in clan wars together. A clan war is a special event where two clans face each other in a series of attacks. Clan wars reward you with loot and clan XP, which increases your clan level and perks. Here are some tips and tricks to join or create a clan and participate in clan wars:

- -

Conclusion

-

A summary of the main points and a call to action for the readers

-

Clash of Clans is a strategy game that you can download and play on your mobile device. It is one of the most popular games in the world, with millions of players worldwide. In Clash of Clans, you can build your village, train your army, attack other players' villages, join or create a clan, participate in clan wars, and enjoy various game modes and challenges. Clash of Clans is free to download and play, but you can also buy some in-game items with real money if you want to. If you are looking for a fun and addictive strategy game that you can play with your friends or other players online, you should definitely give Clash of Clans a try. You can download it from APKPure.com by following the steps we mentioned earlier in this article.

-

We hope that this article has helped you understand what Clash of Clans is, why it is so popular, how to download it from APKPure, how to play it, and some tips and tricks to help you succeed in the game. If you have any questions or feedback about this article or Clash of Clans in general, please feel free to leave a comment below. We would love to hear from you. Thank you for reading and happy clashing!

-

FAQs

-

Some common questions and answers about Clash of Clans

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Coin Master Hack Tool iOS Easy and Safe Way to Get Resources.md b/spaces/1phancelerku/anime-remove-background/Coin Master Hack Tool iOS Easy and Safe Way to Get Resources.md deleted file mode 100644 index 8443eeeb9939d9d7a4ee2bdc613a0f41c4dcaed6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Coin Master Hack Tool iOS Easy and Safe Way to Get Resources.md +++ /dev/null @@ -1,99 +0,0 @@ -
-

Coin Master Hack 2022 iOS Download: How to Get Unlimited Coins and Spins

-

Do you love playing Coin Master but hate spending money on spins and coins? Do you want to unlock all the rare cards and build your dream village? If yes, then you need to download Coin Master Hack iOS, a modified version of the original game that gives you many amazing features and advantages. In this article, we will tell you everything you need to know about Coin Master Hack iOS, including its features, how to download it, and some frequently asked questions. So, let's get started!

-

Introduction

-

What is Coin Master?

-

Coin Master is a popular online game where you aim to collect coins and spins to build your Viking village. You can also attack, raid, and loot other players' villages, collect cards, and join clans. Coin Master is a very dynamic game with a lot of variety, and it has millions of fans worldwide. However, it can also be frustrating when you run out of spins and coins, or when you can't get the cards you need.

-

coin master hack 2022 ios download


Download Zip ✫✫✫ https://jinyurl.com/2uNLSR



-

What is Coin Master Hack?

-

Coin Master Hack is a modified version of the original game, in which you get many unique features that make the game more fun and easy. For example, you can unlock all the gifted cards, get unlimited spins and coins, have unlimited shields and pets, and more. Coin Master Hack iOS is designed for iPhone and iPad users who want to enjoy the game without any limitations or restrictions.

-

Features of Coin Master Hack iOS

-

Gifted Card Unlocking

-

One of the most exciting features of Coin Master Hack iOS is that it unlocks all the gifted cards for you. Gifted cards are special cards that you can only get from events or other players. They are very valuable and rare, and they can help you complete your card collections faster. With Coin Master Hack iOS, you can see which players have which cards, and you can trade with them easily. You can also get rewards for completing card sets, such as spins, pet food, pet XP, and more.

-

Free Spins and Coins

-

Another great feature of Coin Master Hack iOS is that it gives you unlimited free spins and coins. Spins and coins are the main currencies in the game, and they are used for spinning the slot machine, building your village, buying chests, and more. Normally, you have to wait for hours to get more spins, or spend real money to buy them. But with Coin Master Hack iOS, you don't have to worry about that anymore. You can spin as much as you want, and get as many coins as you need.

-

Unlimited Shields and Pets

-

The last feature we want to mention is that Coin Master Hack iOS gives you unlimited shields and pets. Shields are used to protect your village from attacks, while pets are used to help you in raids, attacks, and card collection. Normally, you have a limited number of shields and pets, and they expire after a certain time. But with Coin Master Hack iOS, you can have as many shields and pets as you want, and they never expire. This way, you can keep your village safe and boost your progress in the game.

-

How to Download Coin Master Hack iOS

-

Now that you know the features of Coin Master Hack iOS, you might be wondering how to download it on your device. Well, it's very simple and easy. All you need is a third-party app store called Panda Helper , which allows you to download hacked apps for free. Here are the steps to follow:

-

Step 1: Install Panda Helper

-

Panda Helper is a free app store that lets you download hacked, tweaked, and modified apps and games. To install it, you need to open Safari on your device and go to the official website of Panda Helper: . Then, tap on the "Download Free Version" button and follow the instructions on the screen. You might need to allow the installation from the settings and trust the profile of Panda Helper.

-

Step 2: Search for Coin Master Hack

-

Once you have installed Panda Helper, you can open it and search for Coin Master Hack in the search bar. You will see the app icon with a red "HACK" label on it. Tap on it and then tap on the "Download" button. The app will start downloading on your device. You can check the progress in the "Manager" tab of Panda Helper.

-

Step 3: Trust the App and Enjoy

-

After the download is complete, you need to trust the app before you can use it. To do that, go to the Settings app on your device, then go to General > Profiles & Device Management. Find the profile of Coin Master Hack and tap on it. Then, tap on the "Trust" button and confirm your choice. Now, you can go back to your home screen and launch Coin Master Hack. You will see that you have access to all the features we mentioned above. Enjoy!

-


-

Conclusion

-

Summary of the article

-

In this article, we have shown you how to download Coin Master Hack iOS, a modified version of the original game that gives you many amazing features and advantages. With Coin Master Hack iOS, you can unlock all the gifted cards, get unlimited spins and coins, have unlimited shields and pets, and more. You can download Coin Master Hack iOS for free from Panda Helper, a third-party app store that lets you download hacked apps and games.

-

Call to action

-

If you are a fan of Coin Master and want to enjoy the game without any limitations or restrictions, then you should definitely try Coin Master Hack iOS. It will make your gaming experience more fun and easy, and you will be able to build your dream village faster than ever. So, what are you waiting for? Download Coin Master Hack iOS today and start spinning!

-

FAQs

-

Here are some frequently asked questions about Coin Master Hack iOS:

| Question | Answer |
| --- | --- |
| Is Coin Master Hack iOS safe to use? | Yes, as long as you download it from a trusted source like Panda Helper. However, you should always be careful when using hacked apps and games, as they might violate the terms of service of the original game or cause issues with your device. |
| Will I get banned for using Coin Master Hack iOS? | No, it has anti-ban features that prevent detection by the game servers. Still, use it at your own risk and discretion, as there is no guarantee it will work forever. |
| Do I need to jailbreak my device to use Coin Master Hack iOS? | No, it works on both jailbroken and non-jailbroken devices. On a jailbroken device, you might need to install AppSync Unified from Cydia to make it work. |
| Can I update Coin Master Hack iOS? | No, you cannot update it from the App Store, as it is a modified version of the original game. To update, delete the old version and download the new one from Panda Helper. |
| Can I play online with Coin Master Hack iOS? | Yes, it connects to the same servers as the original game. You can also join clans, chat with other players, and trade cards with them. |

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Ludo King Apk and Play the Dice Game of Kings with Voice Chat.md b/spaces/1phancelerku/anime-remove-background/Download Ludo King Apk and Play the Dice Game of Kings with Voice Chat.md deleted file mode 100644 index 5bbd38ff97f29796cf98078c1ff0e153dea6bd80..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Ludo King Apk and Play the Dice Game of Kings with Voice Chat.md +++ /dev/null @@ -1,11 +0,0 @@ - -

Ludo Apk Indir: A Fun and Useful Game

-

What is ludo apk indir? How do you do it? What are the benefits of playing Ludo? If you are curious about the answers to these questions, keep reading this article.

-

Ludo Nedir?

-

Ludo is a strategy board game for two to four players, in which each player races their four tokens from start to finish according to the rolls of a single die. It is derived from the ancient Indian game Pachisi and is popular in many countries under different names, such as Parcheesi, Parchís, Parqués, Petits Chevaux, and Mensch ärgere Dich nicht. "Ludo apk indir" ("ludo apk download") is simply the search term people use to find Ludo games that can be downloaded and installed on Android devices; popular options include Ludo King™, Ludo Star, Ludo Club, and Ludo Classic, each offering different modes for playing online or offline with friends or strangers.

-

ludo apk indir


Download: https://jinyurl.com/2uNR1R



-


197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Clash of Kings with Mod APK Download the Latest Version and Get Unlimited Coins and Gems.md b/spaces/1phancelerku/anime-remove-background/Enjoy Clash of Kings with Mod APK Download the Latest Version and Get Unlimited Coins and Gems.md deleted file mode 100644 index 344a608d2ff2f1db2a0ec4d6b6710f340f24037d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Clash of Kings with Mod APK Download the Latest Version and Get Unlimited Coins and Gems.md +++ /dev/null @@ -1,100 +0,0 @@ -
-

Clash of Kings Mod APK Download Latest Version

-

If you are a fan of strategy games, you might have heard of Clash of Kings, one of the most popular and addictive games in this genre. In this game, you can build your own empire, fight epic battles, join alliances, and explore a vast fantasy world. But what if you want to enjoy the game without any limitations or restrictions? Well, that's where Clash of Kings Mod APK comes in handy. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, and how to download and install it on your device.

-

clash of kings mod apk download latest version


Download File 🗸🗸🗸 https://jinyurl.com/2uNMO7



-

What is Clash of Kings?

-

Clash of Kings is a real-time strategy game developed by Elex Wireless and released in 2014. The game has over 100 million downloads on Google Play Store and has received positive reviews from critics and players alike. The game is set in a medieval fantasy world where you can choose from seven different kingdoms to rule. You can build your own castle, train your army, recruit heroes, and upgrade your weapons and equipment. You can also fight against other players from around the world in PvP battles or cooperate with them in alliances. The game offers a rich and immersive gameplay experience that will keep you hooked for hours.

-

Features of Clash of Kings

-

Clash of Kings has many features that make it one of the best strategy games on the market. Here are some of them:

-

Build your own empire

-

You can create your own kingdom from scratch and customize it according to your preferences. You can choose from various buildings, such as farms, barracks, stables, workshops, and more. You can also decorate your castle with flags, statues, fountains, and other items. You can collect resources such as wood, food, iron, and gold to expand your territory and improve your economy. You can also research new technologies and skills to enhance your military and civil capabilities.

-

Fight epic battles

-

You can test your skills and strategies in various modes of combat. You can challenge other players in real-time PvP battles or join massive wars with thousands of players. You can also participate in events such as kingdom wars, dragon campaigns, wonder wars, and more. You can use different types of units, such as infantry, cavalry, archers, siege engines, and dragons. You can also summon legendary heroes to lead your army and use their special abilities to turn the tide of battle.

-

Join alliances and chat with players

-

You don't have to play alone in Clash of Kings. You can join or create an alliance with other players who share your goals and interests. You can cooperate with them in wars, quests, trade, and diplomacy. You can also chat with them in real-time using voice or text messages. You can make new friends or enemies in this game.

-

Explore a vast fantasy world

-

Clash of Kings has a huge map that you can explore and conquer. You can discover new lands, encounter different cultures, and encounter various creatures and monsters. You can also find hidden treasures, ancient ruins, and mysterious secrets. The game has stunning graphics and sound effects that will make you feel like you are in a real medieval fantasy world.

-

Why download Clash of Kings Mod APK?

-

While Clash of Kings is a free-to-play game, it also has some in-app purchases that can give you an edge over other players. For example, you can buy gold coins, gems, VIP levels, premium items, and other benefits that can make your gameplay easier and more enjoyable. However, these purchases can be quite expensive and not everyone can afford them. That's why some people prefer to download Clash of Kings Mod APK, a modified version of the game that gives you access to unlimited money and resources, free VIP and premium features, no ads, and no root required. Here are some of the advantages of using this mod:

Unlimited money and resources

-

With Clash of Kings Mod APK, you don't have to worry about running out of money and resources. You can get unlimited gold coins, gems, wood, food, iron, and other materials that you need to build your empire and army. You can use them to upgrade your buildings, units, heroes, and equipment without any limitations. You can also buy anything you want from the shop without spending real money.

-

Free VIP and premium features

-

With Clash of Kings Mod APK, you can also enjoy the benefits of being a VIP player without paying for it. You can get free VIP levels that can unlock various perks and bonuses, such as faster construction, research, training, healing, marching, and gathering. You can also get free premium items, such as chests, keys, scrolls, tokens, and more. You can use them to get rare and powerful rewards that can boost your gameplay.

-

clash of kings modded apk free download
-download clash of kings hack apk latest version
-clash of kings unlimited money mod apk download
-how to download clash of kings mod apk on android
-clash of kings mod apk download for pc
-clash of kings latest version mod apk offline
-clash of kings mod apk download no root
-clash of kings hack mod apk download 2023
-clash of kings mod apk download rexdl
-clash of kings mod apk download revdl
-clash of kings mod apk download unlimited everything
-clash of kings mod apk download android 1
-clash of kings mod apk download apkpure
-clash of kings mod apk download happymod
-clash of kings mod apk download ihackedit
-clash of kings mod apk download latest version 2023
-clash of kings mod apk download latest version android
-clash of kings mod apk download latest version ios
-clash of kings mod apk download latest version uptodown
-clash of kings mod apk download latest version 8.40.0
-clash of kings mod apk free shopping download
-clash of kings gold generator mod apk download
-clash of kings pvp king's war mod apk download
-clash of kings the west mod apk download
-clash of kings wonder falls mod apk download
-best site to download clash of kings mod apk
-safe way to download clash of kings mod apk
-easy steps to download clash of kings mod apk
-benefits of downloading clash of kings mod apk
-features of clash of kings mod apk latest version
-reviews of clash of kings mod apk latest version
-ratings of clash of kings mod apk latest version
-comparison of clash of kings mod apk and original game
-tips and tricks for playing clash of kings mod apk
-guide for installing clash of kings mod apk on your device
-how to update clash of kings mod apk to the latest version
-how to uninstall clash of kings mod apk from your device
-how to fix crash issues with clash of kings mod apk
-how to play clash of kings mod apk online with friends
-how to play clash of kings mod apk offline without internet
-how to backup and restore your data in clash of kings mod apk
-how to get unlimited resources in clash of kings mod apk
-how to unlock all features in clash of kings mod apk
-how to customize your kingdom in clash of kings mod apk
-how to conquer other kingdoms in clash of kings mod apk
-how to join alliances and chat with other players in clash of kings mod apk
-how to participate in events and quests in clash of kings mod apk
-how to earn rewards and achievements in clash of kings mod apk
-how to level up and upgrade your buildings and troops in clash of kings mod apk

-

No ads and no root required

-

With Clash of Kings Mod APK, you can also play the game without any interruptions or hassles. You don't have to watch any annoying ads that can ruin your immersion and experience. You also don't have to root your device to install the mod, which can be risky and complicated. You just need to follow some simple steps that we will explain later in this article.

-

How to download and install Clash of Kings Mod APK?

-

If you are interested in downloading and installing Clash of Kings Mod APK on your device, you need to follow these steps:

-

Step 1: Download the APK file from a trusted source

-

The first thing you need to do is download the APK file of Clash of Kings Mod from a reliable, secure source. Many websites offer this mod, but not all of them are trustworthy; some may contain viruses, malware, or spyware that can harm your device or steal your personal information. That's why we recommend using the link below to download the APK file safely.
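Whatever source you end up using, one extra precaution is to verify the file's checksum when the download page publishes one. Below is a minimal, illustrative Python sketch; the file name and expected hash are placeholders rather than values from any real download page.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to keep memory use low."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values -- substitute your downloaded file and the hash the site publishes.
apk_path = "clash-of-kings-mod.apk"
expected = "paste-the-published-sha256-here"

actual = sha256_of(apk_path)
print("Checksum OK" if actual == expected.lower() else f"Checksum mismatch: {actual}")
```

If the hashes do not match, the file was corrupted or altered in transit and should not be installed.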

-

Download Clash of Kings Mod APK here

-

Step 2: Enable unknown sources on your device

-

The next thing you need to do is allow installation from unknown sources on your device. This security setting normally prevents you from installing apps that do not come from the official Google Play Store, and since Clash of Kings Mod APK is not on the Play Store, you need to allow it. To do this, go to Settings > Security > Unknown sources and toggle it on; on newer versions of Android, the permission is granted per app (for example, to your browser or file manager) under "Install unknown apps".

-

Step 3: Install the APK file and launch the game

-

The final step is to install the APK file and launch the game. Locate the downloaded APK file in your device storage and tap on it. A pop-up window will ask you to confirm the installation; tap Install, wait a few seconds until the process completes, then tap Open and enjoy the game.
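If you would rather sideload from a computer over USB, the same installation can be done with Android's adb tool, assuming USB debugging is enabled and adb is on your PATH. The small Python wrapper below is purely illustrative; the file path is a placeholder.

```python
import subprocess

def install_apk(apk_path: str) -> None:
    """Install (or reinstall) an APK on the connected Android device via adb."""
    # "-r" reinstalls the package while keeping its existing data.
    result = subprocess.run(["adb", "install", "-r", apk_path], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"adb install failed: {result.stderr.strip()}")
    print(result.stdout.strip())

install_apk("clash-of-kings-mod.apk")  # placeholder path
```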

-

Conclusion

-

Clash of Kings is one of the best strategy games that you can play on your device. It offers a lot of features and fun that will keep you entertained for hours. However, if you want to experience the game without any limitations or restrictions, you should try Clash of Kings Mod APK. This modded version of the game gives you unlimited money and resources, free VIP and premium features, no ads, and no root required. You can download it from our link below and follow our instructions to install it on your device.

-

We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about Clash of Kings Mod APK:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/FNAF 9 Mobile The Secrets of Freddy Fazbears Mega Pizzaplex.md b/spaces/1phancelerku/anime-remove-background/FNAF 9 Mobile The Secrets of Freddy Fazbears Mega Pizzaplex.md deleted file mode 100644 index d2bfe6ea4978e103e0fb0cebe82d718dbc8d0171..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/FNAF 9 Mobile The Secrets of Freddy Fazbears Mega Pizzaplex.md +++ /dev/null @@ -1,126 +0,0 @@ - -

Fnaf 9 Mobile: Everything You Need to Know About the Latest Five Nights at Freddy's Game

-

If you are a fan of horror games, you have probably heard of Five Nights at Freddy's, or fnaf for short. This is a series of games that puts you in the role of a night guard at a haunted pizzeria, where you have to survive the attacks of animatronic animals that come alive at night. The games are known for their jump scares, creepy atmosphere, and lore.

-

fnaf 9 mobile


Download »»» https://jinyurl.com/2uNQft



-

Fnaf 9 mobile is the latest game in the series, officially titled Five Nights at Freddy's: Security Breach. It was released on December 16, 2021, for PC, PS4, and PS5, with a possible mobile release in the future. It is developed by Steel Wool Studios and published by ScottGames, the creator of the original games.

-

In this article, we will tell you everything you need to know about fnaf 9 mobile, including its features, gameplay, tips and tricks, reviews, and download link. Read on if you dare!

-

What is fnaf 9 mobile and what are its features?

-

Fnaf 9 mobile is a horror game that takes place in Freddy Fazbear's Mega Pizzaplex, a huge entertainment center that features various attractions, such as Monty Golf, Roxy Raceway, Bonnie Bowl, and more. You play as Gregory, a young boy who gets trapped inside the pizzeria overnight. With the help of Freddy Fazbear himself, Gregory must uncover the secrets of the pizzeria, learn the truth, and survive until dawn.

-

Fnaf 9 mobile is different from the previous games in several ways. First of all, it is not a point-and-click game anymore. Instead, it is a free-roaming game that lets you explore the pizzeria in 3D. You can use security cameras, hiding spots, distractions, and other tools to avoid or escape from the animatronics that hunt you down.

-

Secondly, fnaf 9 mobile features new and reimagined characters that pose different threats to you. Some of them are Glamrock Chica, Roxanne Wolf, Montgomery Gator, Moon Drop, Sun Rise, Vanny, and Vanessa. Each character has its own personality, behavior, and weakness that you need to learn and exploit.

-

fnaf security breach mobile download
-fnaf 9 mobile edition github
-fnaf editors deviantart
-fnaf 9 mobile apk
-fnaf security breach mobile gameplay
-fnaf 9 mobile release date
-fnaf 9 mobile edition apk
-fnaf security breach mobile trailer
-fnaf 9 mobile fan game
-fnaf security breach mobile update
-fnaf 9 mobile free download
-fnaf 9 mobile edition download
-fnaf security breach mobile beta
-fnaf 9 mobile demo
-fnaf security breach mobile android
-fnaf 9 mobile gamejolt
-fnaf 9 mobile edition gamejolt
-fnaf security breach mobile ios
-fnaf 9 mobile teaser
-fnaf security breach mobile mod apk
-fnaf 9 mobile reddit
-fnaf 9 mobile edition beta
-fnaf security breach mobile release date
-fnaf 9 mobile news
-fnaf security breach mobile version
-fnaf 9 mobile leaks
-fnaf 9 mobile edition mod apk
-fnaf security breach mobile review
-fnaf 9 mobile wiki
-fnaf security breach mobile cheats
-fnaf 9 mobile rumors
-fnaf 9 mobile edition reddit
-fnaf security breach mobile tips and tricks
-fnaf 9 mobile trailer reaction
-fnaf security breach mobile system requirements
-fnaf 9 mobile speculation
-fnaf 9 mobile edition wiki
-fnaf security breach mobile easter eggs
-fnaf 9 mobile gameplay trailer
-fnaf security breach mobile glitches
-fnaf 9 mobile theory
-fnaf 9 mobile edition leaks
-fnaf security breach mobile walkthrough
-fnaf 9 mobile characters
-fnaf security breach mobile secrets
-fnaf 9 mobile lore
-fnaf 9 mobile edition rumors
-fnaf security breach mobile guide
-fnaf 9 mobile plot

-

Thirdly, fnaf 9 mobile has a rich story that reveals more about the lore of the fnaf universe. You will encounter various clues, secrets, easter eggs, and endings that will keep you hooked and curious. You will also face challenging boss battles that will test your skills and nerves.

-

How to play fnaf 9 mobile and what are the main objectives and challenges?

-

To play fnaf 9 mobile, you need to have a PC or a PS4/PS5 console that meets the minimum system requirements. You also need to buy the game from Steam or PlayStation Store for $39.99. There is no official mobile version of fnaf 9 yet, but there are some fan-made games that try to emulate it on Android devices.

-

The main objective of fnaf 9 mobile is to survive each night until 6 AM while avoiding or escaping from the animatronics that roam around the pizzeria. You can use Freddy Fazbear as your ally and protector. He can carry you inside his chest cavity and let you access his systems. He can also help you fight against some enemies.

-

The main challenge of fnaf 9 mobile is to manage your power supply. You have a limited amount of power that drains as you use Freddy's systems or other devices in the pizzeria, and if it runs out, you are vulnerable to attack from any animatronic nearby. A few tips can help you survive:
  • Listen before you look: Before checking the cameras or the doorways, listen carefully for sound cues that indicate an animatronic is near. If you hear something suspicious, confirm it on the cameras or at the doorways; if you hear nothing, you can save power and time by not looking.

  • Use distractions wisely: You can use various devices and items in the pizzeria to distract or lure away some animatronics. For example, you can use the music box to attract Glamrock Chica, the laser pointer to distract Roxanne Wolf, the arcade machines to confuse Montgomery Gator, and the flashlight to scare away Vanessa. However, you should be careful not to overuse them or attract unwanted attention.
  • -
  • Don't panic: Fnaf 9 mobile is a game that tries to scare you and make you panic. However, you should try to stay calm and focused at all times. If you panic, you might make mistakes or waste your power. You should also avoid looking at the jump scares or listening to the phone calls if they make you nervous.
  • - -

    Reviews: What are the critics and players saying about fnaf 9 mobile?

    -

    Fnaf 9 mobile has received mostly positive reviews from critics and players alike. The game has been praised for its graphics, gameplay, story, characters, and atmosphere. It has also been criticized for its bugs, glitches, difficulty, and lack of mobile support.

    -

    Here are some of the reviews from different sources:

| Source | Rating | Quote |
| --- | --- | --- |
| IGN | 8/10 | "Fnaf 9 mobile is a thrilling and terrifying horror game that delivers on its promise of a free-roaming fnaf experience. It has a rich and intriguing story, a diverse and memorable cast of characters, and a tense and immersive atmosphere. It also has some technical issues, a steep learning curve, and a lack of replay value." |
| GameSpot | 7/10 | "Fnaf 9 mobile is a bold and ambitious game that expands the fnaf universe in new and exciting ways. It offers a lot of freedom and exploration, as well as some intense and scary moments. However, it also suffers from some frustrating and unfair gameplay mechanics, as well as some bugs and glitches that can ruin the experience." |
| Metacritic | 79/100 | "Fnaf 9 mobile is a game that will please both fans and newcomers of the fnaf series. It has a lot of content and variety, as well as a compelling and mysterious story. It also has some flaws and limitations, such as its high difficulty level, its lack of polish, and its absence of mobile support." |
| Steam | Very Positive | "Fnaf 9 mobile is one of the best fnaf games ever made. It is scary, fun, challenging, and immersive. It has amazing graphics, sound effects, voice acting, and music. It also has some bugs, crashes, lag, and optimization issues that need to be fixed." |
| PlayStation Store | 4.5/5 | "Fnaf 9 mobile is a game that will keep you on the edge of your seat. It is a huge improvement over the previous games in terms of gameplay, visuals, story, and characters. It also has some problems with loading times, controls, performance, and compatibility that need to be improved." |
    -

    Download link: Where and how to download fnaf 9 mobile for your device?

    -

    If you want to play fnaf 9 mobile on your device, you have two options:

    -
      -
    1. If you have a PC or a PS4/PS5 console that meets the minimum system requirements, you can buy the game from Steam or the PlayStation Store for $39.99. You will need an internet connection to download and install the game.
    2. If you only have an Android device, or while there is no official mobile version of fnaf 9, you can try fan-made games that attempt to emulate it on Android. However, these games are not authorized by ScottGames or Steel Wool Studios and may not be accurate or safe.
    -

    Conclusion: A summary of the main points and a call to action for the readers.

    -

    Fnaf 9 mobile is a game that will appeal to anyone who loves horror, mystery, and adventure. It is a game that will challenge you, scare you, and surprise you. It is a game that will make you feel like you are inside a haunted pizzeria, trying to survive the night and uncover the secrets.

    -

    If you are ready to face your fears and have some fun, you should give fnaf 9 mobile a try. You can buy the game from Steam or PlayStation Store for $39.99, or you can wait for the official mobile release in the future. You can also check out some fan-made games that try to emulate it on Android devices, but be careful of their quality and safety.

    -

    Whatever you choose, we hope you enjoy fnaf 9 mobile and have a great time playing it. Don't forget to share your thoughts and experiences with us in the comments section below. We would love to hear from you!

    -

    FAQs: Some frequently asked questions and answers about fnaf 9 mobile.

    -

    Here are some of the most common questions and answers about fnaf 9 mobile that you might find helpful:

    -

    Q: Is fnaf 9 mobile scary?

    -

    A: Yes, fnaf 9 mobile is scary. It is a horror game that features jump scares, creepy atmosphere, and disturbing characters. It is not recommended for people who are easily scared or have heart problems.

    -

    Q: Is fnaf 9 mobile suitable for kids?

    -

    A: No, fnaf 9 mobile is not suitable for kids. It is a game that contains violence, blood, gore, and mature themes. It is rated M for Mature by ESRB and PEGI 16 by PEGI. It is only suitable for people who are 17 years old or older.

    -

    Q: How long is fnaf 9 mobile?

    -

    A: Fnaf 9 mobile is a game that can take anywhere from 6 to 10 hours to complete, depending on your skill level, play style, and choices. It also has multiple endings and secrets that can add replay value to the game.

    -

    Q: How many animatronics are there in fnaf 9 mobile?

    -

    A: Fnaf 9 mobile features a total of 10 animatronics that can pose a threat to you. They are Glamrock Chica, Roxanne Wolf, Montgomery Gator, Moon Drop, Sun Rise, Vanny, Vanessa, Freddy Fazbear, Chica the Chicken, and Foxy the Pirate.

    -

    Q: When will fnaf 9 mobile be released for Android devices?

    -

    A: There is no official release date for fnaf 9 mobile for Android devices yet. However, Scott Cawthon, the creator of the original games, has stated that he plans to release all the fnaf games on mobile platforms eventually. Therefore, we can expect fnaf 9 mobile to be released for Android devices sometime in the future.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/attention.py b/spaces/1toTree/lora_test/ppdiffusers/models/attention.py deleted file mode 100644 index da1fa843de3f3ed1f1977583e9fb9ce216930e5e..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/models/attention.py +++ /dev/null @@ -1,683 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import math -from dataclasses import dataclass -from typing import Optional - -import paddle -import paddle.nn.functional as F -from paddle import nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..modeling_utils import ModelMixin -from ..models.embeddings import ImagePositionalEmbeddings -from ..utils import BaseOutput -from .cross_attention import CrossAttention - - -@dataclass -class Transformer2DModelOutput(BaseOutput): - """ - Args: - sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete): - Hidden states conditioned on `encoder_hidden_states` input. If discrete, returns probability distributions - for the unnoised latent pixels. - """ - - sample: paddle.Tensor - - -class Transformer2DModel(ModelMixin, ConfigMixin): - """ - Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual - embeddings) inputs. - - When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard - transformer action. Finally, reshape to image. - - When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional - embeddings applied, see `ImagePositionalEmbeddings`. Then apply standard transformer action. Finally, predict - classes of unnoised image. - - Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised - image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. - in_channels (`int`, *optional*): - Pass if the input is continuous. The number of channels in the input and output. - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The number of encoder_hidden_states dimensions to use. - sample_size (`int`, *optional*): Pass if the input is discrete. The width of the latent images. - Note that this is fixed at training time as it is used for learning a number of position embeddings. 
See - `ImagePositionalEmbeddings`. - num_vector_embeds (`int`, *optional*): - Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. - Includes the class for the masked latent pixel. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm ( `int`, *optional*): Pass if at least one of the norm_layers is `AdaLayerNorm`. - The number of diffusion steps used during training. Note that this is fixed at training time as it is used - to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for - up to but not more than steps than `num_embeds_ada_norm`. - attention_bias (`bool`, *optional*): - Configure if the TransformerBlocks' attention should contain a bias parameter. - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - num_vector_embeds: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - use_linear_projection: bool = False, - only_cross_attention: bool = False, - upcast_attention: bool = False, - ): - super().__init__() - self.use_linear_projection = use_linear_projection - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - self.inner_dim = inner_dim = num_attention_heads * attention_head_dim - - # 1. Transformer2DModel can process both standard continous images of shape `(batch_size, num_channels, width, height)` as well as quantized image embeddings of shape `(batch_size, num_image_vectors)` - # Define whether input is continuous or discrete depending on configuration - self.is_input_continuous = in_channels is not None - self.is_input_vectorized = num_vector_embeds is not None - - if self.is_input_continuous and self.is_input_vectorized: - raise ValueError( - f"Cannot define both `in_channels`: {in_channels} and `num_vector_embeds`: {num_vector_embeds}. Make" - " sure that either `in_channels` or `num_vector_embeds` is None." - ) - elif not self.is_input_continuous and not self.is_input_vectorized: - raise ValueError( - f"Has to define either `in_channels`: {in_channels} or `num_vector_embeds`: {num_vector_embeds}. Make" - " sure that either `in_channels` or `num_vector_embeds` is not None." - ) - - # 2. Define input layers - if self.is_input_continuous: - self.in_channels = in_channels - - self.norm = nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, epsilon=1e-6) - if use_linear_projection: - self.proj_in = nn.Linear(in_channels, inner_dim) - else: - self.proj_in = nn.Conv2D(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - assert sample_size is not None, "Transformer2DModel over discrete input must provide sample_size" - assert num_vector_embeds is not None, "Transformer2DModel over discrete input must provide num_embed" - - self.height = sample_size - self.width = sample_size - self.num_vector_embeds = num_vector_embeds - self.num_latent_pixels = self.height * self.width - - self.latent_image_embedding = ImagePositionalEmbeddings( - num_embed=num_vector_embeds, embed_dim=inner_dim, height=self.height, width=self.width - ) - - # 3. 
Define transformers blocks - self.transformer_blocks = nn.LayerList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - attention_bias=attention_bias, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - for d in range(num_layers) - ] - ) - - # 4. Define output layers - if self.is_input_continuous: - if use_linear_projection: - self.proj_out = nn.Linear(in_channels, inner_dim) - else: - self.proj_out = nn.Conv2D(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - self.norm_out = nn.LayerNorm(inner_dim) - self.out = nn.Linear(inner_dim, self.num_vector_embeds - 1) - - def forward( - self, - hidden_states, - encoder_hidden_states=None, - timestep=None, - cross_attention_kwargs=None, - return_dict: bool = True, - ): - """ - Args: - hidden_states ( When discrete, `paddle.Tensor` of shape `(batch size, num latent pixels)`. - When continous, `paddle.Tensor` of shape `(batch size, channel, height, width)`): Input - hidden_states - encoder_hidden_states ( `paddle.Tensor` of shape `(batch size, encoder_hidden_states)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `paddle.Tensor`, *optional*): - Optional timestep to be applied as an embedding in AdaLayerNorm's. Used to indicate denoising step. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.attention.Transformer2DModelOutput`] or `tuple`: [`~models.attention.Transformer2DModelOutput`] - if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample - tensor. - """ - # 1. Input - if self.is_input_continuous: - _, _, height, width = hidden_states.shape - residual = hidden_states - hidden_states = self.norm(hidden_states) - if not self.use_linear_projection: - hidden_states = self.proj_in(hidden_states) - hidden_states = hidden_states.transpose([0, 2, 3, 1]).flatten(1, 2) - if self.use_linear_projection: - hidden_states = self.proj_in(hidden_states) - elif self.is_input_vectorized: - hidden_states = self.latent_image_embedding(hidden_states.cast("int64")) - - # 2. Blocks - for block in self.transformer_blocks: - hidden_states = block( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - ) - - # 3. Output - if self.is_input_continuous: - if self.use_linear_projection: - hidden_states = self.proj_out(hidden_states) - hidden_states = hidden_states.reshape([-1, height, width, self.inner_dim]).transpose([0, 3, 1, 2]) - if not self.use_linear_projection: - hidden_states = self.proj_out(hidden_states) - output = hidden_states + residual - elif self.is_input_vectorized: - hidden_states = self.norm_out(hidden_states) - logits = self.out(hidden_states) - # (batch, self.num_vector_embeds - 1, self.num_latent_pixels) - logits = logits.transpose([0, 2, 1]) - - # log(p(x_0)) - output = F.log_softmax(logits.cast("float64"), axis=1).cast("float32") - - if not return_dict: - return (output,) - - return Transformer2DModelOutput(sample=output) - - -class AttentionBlock(nn.Layer): - """ - An attention block that allows spatial positions to attend to each other. 
Originally ported from here, but adapted - to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - Uses three q, k, v linear layers to compute attention. - - Parameters: - channels (`int`): The number of channels in the input and output. - num_head_channels (`int`, *optional*): - The number of channels in each head. If None, then `num_heads` = 1. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for group norm. - rescale_output_factor (`float`, *optional*, defaults to 1.0): The factor to rescale the output by. - eps (`float`, *optional*, defaults to 1e-5): The epsilon value to use for group norm. - """ - - def __init__( - self, - channels: int, - num_head_channels: Optional[int] = None, - norm_num_groups: int = 32, - rescale_output_factor: float = 1.0, - eps: float = 1e-5, - ): - super().__init__() - self.channels = channels - self.num_heads = channels // num_head_channels if num_head_channels is not None else 1 - self.head_dim = self.channels // self.num_heads - self.scale = 1 / math.sqrt(self.channels / self.num_heads) - - self.group_norm = nn.GroupNorm(num_channels=channels, num_groups=norm_num_groups, epsilon=eps) - - # define q,k,v as linear layers - self.query = nn.Linear(channels, channels) - self.key = nn.Linear(channels, channels) - self.value = nn.Linear(channels, channels) - - self.rescale_output_factor = rescale_output_factor - self.proj_attn = nn.Linear(channels, channels) - - def reshape_heads_to_batch_dim(self, tensor): - tensor = tensor.reshape([0, 0, self.num_heads, self.head_dim]) - tensor = tensor.transpose([0, 2, 1, 3]) - return tensor - - def reshape_batch_dim_to_heads(self, tensor): - tensor = tensor.transpose([0, 2, 1, 3]) - tensor = tensor.reshape([0, 0, tensor.shape[2] * tensor.shape[3]]) - return tensor - - def forward(self, hidden_states): - residual = hidden_states - batch, channel, height, width = hidden_states.shape - - # norm - hidden_states = self.group_norm(hidden_states) - - hidden_states = hidden_states.reshape([batch, channel, height * width]).transpose([0, 2, 1]) - - # proj to q, k, v - query_proj = self.query(hidden_states) - key_proj = self.key(hidden_states) - value_proj = self.value(hidden_states) - - query_proj = self.reshape_heads_to_batch_dim(query_proj) - key_proj = self.reshape_heads_to_batch_dim(key_proj) - value_proj = self.reshape_heads_to_batch_dim(value_proj) - - # get scores - attention_scores = paddle.matmul(query_proj, key_proj, transpose_y=True) * self.scale - attention_probs = F.softmax(attention_scores.cast("float32"), axis=-1).cast(attention_scores.dtype) - - # compute attention output - hidden_states = paddle.matmul(attention_probs, value_proj) - - hidden_states = self.reshape_batch_dim_to_heads(hidden_states) - - # compute next hidden_states - hidden_states = self.proj_attn(hidden_states) - hidden_states = hidden_states.transpose([0, 2, 1]).reshape([batch, channel, height, width]) - - # res connect and rescale - hidden_states = (hidden_states + residual) / self.rescale_output_factor - return hidden_states - - -class BasicTransformerBlock(nn.Layer): - r""" - A basic Transformer block. - - Parameters: - dim (`int`): The number of channels in the input and output. - num_attention_heads (`int`): The number of heads to use for multi-head attention. - attention_head_dim (`int`): The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. 
- cross_attention_dim (`int`, *optional*): The size of the encoder_hidden_states vector for cross attention. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm (: - obj: `int`, *optional*): The number of diffusion steps used during training. See `Transformer2DModel`. - attention_bias (: - obj: `bool`, *optional*, defaults to `False`): Configure if the attentions should contain a bias parameter. - """ - - def __init__( - self, - dim: int, - num_attention_heads: int, - attention_head_dim: int, - dropout=0.0, - cross_attention_dim: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - attention_bias: bool = False, - only_cross_attention: bool = False, - upcast_attention: bool = False, - ): - super().__init__() - self.only_cross_attention = only_cross_attention - self.use_ada_layer_norm = num_embeds_ada_norm is not None - - # 1. Self-Attn - self.attn1 = CrossAttention( - query_dim=dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - cross_attention_dim=cross_attention_dim if only_cross_attention else None, - upcast_attention=upcast_attention, - ) - - self.ff = FeedForward(dim, dropout=dropout, activation_fn=activation_fn) - - # 2. Cross-Attn - if cross_attention_dim is not None: - self.attn2 = CrossAttention( - query_dim=dim, - cross_attention_dim=cross_attention_dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - upcast_attention=upcast_attention, - ) # is self-attn if encoder_hidden_states is none - else: - self.attn2 = None - - self.norm1 = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim) - - if cross_attention_dim is not None: - self.norm2 = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim) - else: - self.norm2 = None - - # 3. Feed-forward - self.norm3 = nn.LayerNorm(dim) - - def forward( - self, - hidden_states, - encoder_hidden_states=None, - timestep=None, - attention_mask=None, - cross_attention_kwargs=None, - ): - # 1. Self-Attention - norm_hidden_states = ( - self.norm1(hidden_states, timestep) if self.use_ada_layer_norm else self.norm1(hidden_states) - ) - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - attn_output = self.attn1( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - hidden_states = attn_output + hidden_states - - if self.attn2 is not None: - # 2. Cross-Attention - norm_hidden_states = ( - self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states) - ) - attn_output = self.attn2( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - hidden_states = attn_output + hidden_states - - # 3. Feed-forward - hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states - - return hidden_states - - -class FeedForward(nn.Layer): - r""" - A feed-forward layer. - - Parameters: - dim (`int`): The number of channels in the input. - dim_out (`int`, *optional*): The number of channels in the output. If not given, defaults to `dim`. - mult (`int`, *optional*, defaults to 4): The multiplier to use for the hidden dimension. 
- dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - """ - - def __init__( - self, - dim: int, - dim_out: Optional[int] = None, - mult: int = 4, - dropout: float = 0.0, - activation_fn: str = "geglu", - ): - super().__init__() - inner_dim = int(dim * mult) - dim_out = dim_out if dim_out is not None else dim - - if activation_fn == "gelu": - act_fn = GELU(dim, inner_dim) - elif activation_fn == "geglu": - act_fn = GEGLU(dim, inner_dim) - elif activation_fn == "geglu-approximate": - act_fn = ApproximateGELU(dim, inner_dim) - - self.net = nn.LayerList([]) - # project in - self.net.append(act_fn) - # project dropout - self.net.append(nn.Dropout(dropout)) - # project out - self.net.append(nn.Linear(inner_dim, dim_out)) - - def forward(self, hidden_states): - for module in self.net: - hidden_states = module(hidden_states) - return hidden_states - - -class GELU(nn.Layer): - r""" - GELU activation function - """ - - def __init__(self, dim_in: int, dim_out: int): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out) - - def forward(self, hidden_states): - hidden_states = self.proj(hidden_states) - hidden_states = F.gelu(hidden_states) - return hidden_states - - -# feedforward -class GEGLU(nn.Layer): - r""" - A variant of the gated linear unit activation function from https://arxiv.org/abs/2002.05202. - - Parameters: - dim_in (`int`): The number of channels in the input. - dim_out (`int`): The number of channels in the output. - """ - - def __init__(self, dim_in: int, dim_out: int): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, hidden_states): - hidden_states, gate = self.proj(hidden_states).chunk(2, axis=-1) - return hidden_states * F.gelu(gate) - - -class ApproximateGELU(nn.Layer): - """ - The approximate form of Gaussian Error Linear Unit (GELU) - - For more details, see section 2: https://arxiv.org/abs/1606.08415 - """ - - def __init__(self, dim_in: int, dim_out: int): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out) - - def forward(self, x): - x = self.proj(x) - return x * F.sigmoid(1.702 * x) - - -class AdaLayerNorm(nn.Layer): - """ - Norm layer modified to incorporate timestep embeddings. - """ - - def __init__(self, embedding_dim, num_embeddings): - super().__init__() - self.emb = nn.Embedding(num_embeddings, embedding_dim) - self.silu = nn.Silu() - self.linear = nn.Linear(embedding_dim, embedding_dim * 2) - self.norm = nn.LayerNorm(embedding_dim) # elementwise_affine=False - - def forward(self, x, timestep): - emb = self.linear(self.silu(self.emb(timestep))) - scale, shift = paddle.chunk(emb, 2, axis=-1) - x = self.norm(x) * (1 + scale) + shift - return x - - -class DualTransformer2DModel(nn.Layer): - """ - Dual transformer wrapper that combines two `Transformer2DModel`s for mixed inference. - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. - in_channels (`int`, *optional*): - Pass if the input is continuous. The number of channels in the input and output. - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.1): The dropout probability to use. 
- cross_attention_dim (`int`, *optional*): The number of encoder_hidden_states dimensions to use. - sample_size (`int`, *optional*): Pass if the input is discrete. The width of the latent images. - Note that this is fixed at training time as it is used for learning a number of position embeddings. See - `ImagePositionalEmbeddings`. - num_vector_embeds (`int`, *optional*): - Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. - Includes the class for the masked latent pixel. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm ( `int`, *optional*): Pass if at least one of the norm_layers is `AdaLayerNorm`. - The number of diffusion steps used during training. Note that this is fixed at training time as it is used - to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for - up to but not more than steps than `num_embeds_ada_norm`. - attention_bias (`bool`, *optional*): - Configure if the TransformerBlocks' attention should contain a bias parameter. - """ - - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - num_vector_embeds: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - ): - super().__init__() - self.transformers = nn.LayerList( - [ - Transformer2DModel( - num_attention_heads=num_attention_heads, - attention_head_dim=attention_head_dim, - in_channels=in_channels, - num_layers=num_layers, - dropout=dropout, - norm_num_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attention_bias=attention_bias, - sample_size=sample_size, - num_vector_embeds=num_vector_embeds, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - ) - for _ in range(2) - ] - ) - - # Variables that can be set by a pipeline: - - # The ratio of transformer1 to transformer2's output states to be combined during inference - self.mix_ratio = 0.5 - - # The shape of `encoder_hidden_states` is expected to be - # `(batch_size, condition_lengths[0]+condition_lengths[1], num_features)` - self.condition_lengths = [77, 257] - - # Which transformer to use to encode which condition. - # E.g. `(1, 0)` means that we'll use `transformers[1](conditions[0])` and `transformers[0](conditions[1])` - self.transformer_index_for_condition = [1, 0] - - def forward( - self, - hidden_states, - encoder_hidden_states, - timestep=None, - attention_mask=None, - cross_attention_kwargs=None, - return_dict: bool = True, - ): - """ - Args: - hidden_states ( When discrete, `torch.LongTensor` of shape `(batch size, num latent pixels)`. - When continuous, `torch.FloatTensor` of shape `(batch size, channel, height, width)`): Input - hidden_states - encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.long`, *optional*): - Optional timestep to be applied as an embedding in AdaLayerNorm's. Used to indicate denoising step. 
- attention_mask (`torch.FloatTensor`, *optional*): - Optional attention mask to be applied in CrossAttention - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.attention.Transformer2DModelOutput`] or `tuple`: [`~models.attention.Transformer2DModelOutput`] - if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample - tensor. - """ - input_states = hidden_states - - encoded_states = [] - tokens_start = 0 - # attention_mask is not used yet - for i in range(2): - # for each of the two transformers, pass the corresponding condition tokens - condition_state = encoder_hidden_states[:, tokens_start : tokens_start + self.condition_lengths[i]] - transformer_index = self.transformer_index_for_condition[i] - encoded_state = self.transformers[transformer_index]( - input_states, - encoder_hidden_states=condition_state, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - encoded_states.append(encoded_state - input_states) - tokens_start += self.condition_lengths[i] - - output_states = encoded_states[0] * self.mix_ratio + encoded_states[1] * (1 - self.mix_ratio) - output_states = output_states + input_states - - if not return_dict: - return (output_states,) - - return Transformer2DModelOutput(sample=output_states) diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/inference.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/inference.py deleted file mode 100644 index 15e6bf16ba9e551473cd6b179bb518f0704ac33d..0000000000000000000000000000000000000000 --- a/spaces/44ov41za8i/FreeVC/speaker_encoder/inference.py +++ /dev/null @@ -1,177 +0,0 @@ -from speaker_encoder.params_data import * -from speaker_encoder.model import SpeakerEncoder -from speaker_encoder.audio import preprocess_wav # We want to expose this function from here -from matplotlib import cm -from speaker_encoder import audio -from pathlib import Path -import matplotlib.pyplot as plt -import numpy as np -import torch - -_model = None # type: SpeakerEncoder -_device = None # type: torch.device - - -def load_model(weights_fpath: Path, device=None): - """ - Loads the model in memory. If this function is not explicitely called, it will be run on the - first call to embed_frames() with the default weights file. - - :param weights_fpath: the path to saved model weights. - :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). The - model will be loaded and will run on this device. Outputs will however always be on the cpu. - If None, will default to your GPU if it"s available, otherwise your CPU. - """ - # TODO: I think the slow loading of the encoder might have something to do with the device it - # was saved on. Worth investigating. - global _model, _device - if device is None: - _device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - elif isinstance(device, str): - _device = torch.device(device) - _model = SpeakerEncoder(_device, torch.device("cpu")) - checkpoint = torch.load(weights_fpath) - _model.load_state_dict(checkpoint["model_state"]) - _model.eval() - print("Loaded encoder \"%s\" trained to step %d" % (weights_fpath.name, checkpoint["step"])) - - -def is_loaded(): - return _model is not None - - -def embed_frames_batch(frames_batch): - """ - Computes embeddings for a batch of mel spectrogram. 
- - :param frames_batch: a batch mel of spectrogram as a numpy array of float32 of shape - (batch_size, n_frames, n_channels) - :return: the embeddings as a numpy array of float32 of shape (batch_size, model_embedding_size) - """ - if _model is None: - raise Exception("Model was not loaded. Call load_model() before inference.") - - frames = torch.from_numpy(frames_batch).to(_device) - embed = _model.forward(frames).detach().cpu().numpy() - return embed - - -def compute_partial_slices(n_samples, partial_utterance_n_frames=partials_n_frames, - min_pad_coverage=0.75, overlap=0.5): - """ - Computes where to split an utterance waveform and its corresponding mel spectrogram to obtain - partial utterances of each. Both the waveform and the mel - spectrogram slices are returned, so as to make each partial utterance waveform correspond to - its spectrogram. This function assumes that the mel spectrogram parameters used are those - defined in params_data.py. - - The returned ranges may be indexing further than the length of the waveform. It is - recommended that you pad the waveform with zeros up to wave_slices[-1].stop. - - :param n_samples: the number of samples in the waveform - :param partial_utterance_n_frames: the number of mel spectrogram frames in each partial - utterance - :param min_pad_coverage: when reaching the last partial utterance, it may or may not have - enough frames. If at least of are present, - then the last partial utterance will be considered, as if we padded the audio. Otherwise, - it will be discarded, as if we trimmed the audio. If there aren't enough frames for 1 partial - utterance, this parameter is ignored so that the function always returns at least 1 slice. - :param overlap: by how much the partial utterance should overlap. If set to 0, the partial - utterances are entirely disjoint. - :return: the waveform slices and mel spectrogram slices as lists of array slices. Index - respectively the waveform and the mel spectrogram with these slices to obtain the partial - utterances. - """ - assert 0 <= overlap < 1 - assert 0 < min_pad_coverage <= 1 - - samples_per_frame = int((sampling_rate * mel_window_step / 1000)) - n_frames = int(np.ceil((n_samples + 1) / samples_per_frame)) - frame_step = max(int(np.round(partial_utterance_n_frames * (1 - overlap))), 1) - - # Compute the slices - wav_slices, mel_slices = [], [] - steps = max(1, n_frames - partial_utterance_n_frames + frame_step + 1) - for i in range(0, steps, frame_step): - mel_range = np.array([i, i + partial_utterance_n_frames]) - wav_range = mel_range * samples_per_frame - mel_slices.append(slice(*mel_range)) - wav_slices.append(slice(*wav_range)) - - # Evaluate whether extra padding is warranted or not - last_wav_range = wav_slices[-1] - coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start) - if coverage < min_pad_coverage and len(mel_slices) > 1: - mel_slices = mel_slices[:-1] - wav_slices = wav_slices[:-1] - - return wav_slices, mel_slices - - -def embed_utterance(wav, using_partials=True, return_partials=False, **kwargs): - """ - Computes an embedding for a single utterance. - - # TODO: handle multiple wavs to benefit from batching on GPU - :param wav: a preprocessed (see audio.py) utterance waveform as a numpy array of float32 - :param using_partials: if True, then the utterance is split in partial utterances of - frames and the utterance embedding is computed from their - normalized average. 
If False, the utterance is instead computed from feeding the entire - spectogram to the network. - :param return_partials: if True, the partial embeddings will also be returned along with the - wav slices that correspond to the partial embeddings. - :param kwargs: additional arguments to compute_partial_splits() - :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If - is True, the partial utterances as a numpy array of float32 of shape - (n_partials, model_embedding_size) and the wav partials as a list of slices will also be - returned. If is simultaneously set to False, both these values will be None - instead. - """ - # Process the entire utterance if not using partials - if not using_partials: - frames = audio.wav_to_mel_spectrogram(wav) - embed = embed_frames_batch(frames[None, ...])[0] - if return_partials: - return embed, None, None - return embed - - # Compute where to split the utterance into partials and pad if necessary - wave_slices, mel_slices = compute_partial_slices(len(wav), **kwargs) - max_wave_length = wave_slices[-1].stop - if max_wave_length >= len(wav): - wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant") - - # Split the utterance into partials - frames = audio.wav_to_mel_spectrogram(wav) - frames_batch = np.array([frames[s] for s in mel_slices]) - partial_embeds = embed_frames_batch(frames_batch) - - # Compute the utterance embedding from the partial embeddings - raw_embed = np.mean(partial_embeds, axis=0) - embed = raw_embed / np.linalg.norm(raw_embed, 2) - - if return_partials: - return embed, partial_embeds, wave_slices - return embed - - -def embed_speaker(wavs, **kwargs): - raise NotImplemented() - - -def plot_embedding_as_heatmap(embed, ax=None, title="", shape=None, color_range=(0, 0.30)): - if ax is None: - ax = plt.gca() - - if shape is None: - height = int(np.sqrt(len(embed))) - shape = (height, -1) - embed = embed.reshape(shape) - - cmap = cm.get_cmap() - mappable = ax.imshow(embed, cmap=cmap) - cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04) - cbar.set_clim(*color_range) - - ax.set_xticks([]), ax.set_yticks([]) - ax.set_title(title) diff --git a/spaces/7hao/bingo/src/components/external-link.tsx b/spaces/7hao/bingo/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/801artistry/RVC801/lib/infer_pack/models_onnx.py b/spaces/801artistry/RVC801/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def 
__init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def 
remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 
0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to 
produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - 
upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # 
print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/osmesa.py b/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/osmesa.py 
deleted file mode 100644 index deaa5ff44031a107883913ae9a18fc425d650f3d..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/osmesa.py +++ /dev/null @@ -1,59 +0,0 @@ -from .base import Platform - - -__all__ = ['OSMesaPlatform'] - - -class OSMesaPlatform(Platform): - """Renders into a software buffer using OSMesa. Requires special versions - of OSMesa to be installed, plus PyOpenGL upgrade. - """ - - def __init__(self, viewport_width, viewport_height): - super(OSMesaPlatform, self).__init__(viewport_width, viewport_height) - self._context = None - self._buffer = None - - def init_context(self): - from OpenGL import arrays - from OpenGL.osmesa import ( - OSMesaCreateContextAttribs, OSMESA_FORMAT, - OSMESA_RGBA, OSMESA_PROFILE, OSMESA_CORE_PROFILE, - OSMESA_CONTEXT_MAJOR_VERSION, OSMESA_CONTEXT_MINOR_VERSION, - OSMESA_DEPTH_BITS - ) - - attrs = arrays.GLintArray.asArray([ - OSMESA_FORMAT, OSMESA_RGBA, - OSMESA_DEPTH_BITS, 24, - OSMESA_PROFILE, OSMESA_CORE_PROFILE, - OSMESA_CONTEXT_MAJOR_VERSION, 3, - OSMESA_CONTEXT_MINOR_VERSION, 3, - 0 - ]) - self._context = OSMesaCreateContextAttribs(attrs, None) - self._buffer = arrays.GLubyteArray.zeros( - (self.viewport_height, self.viewport_width, 4) - ) - - def make_current(self): - from OpenGL import GL as gl - from OpenGL.osmesa import OSMesaMakeCurrent - assert(OSMesaMakeCurrent( - self._context, self._buffer, gl.GL_UNSIGNED_BYTE, - self.viewport_width, self.viewport_height - )) - - def make_uncurrent(self): - """Make the OpenGL context uncurrent. - """ - pass - - def delete_context(self): - from OpenGL.osmesa import OSMesaDestroyContext - OSMesaDestroyContext(self._context) - self._context = None - self._buffer = None - - def supports_framebuffers(self): - return False diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/custom_openaimodel.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/custom_openaimodel.py deleted file mode 100644 index 4412eac52c294266dee21680f698b10a4614b4fa..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/custom_openaimodel.py +++ /dev/null @@ -1,368 +0,0 @@ -from abc import abstractmethod -from functools import partial -import math -from typing import Iterable - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from ldm.modules.attention import SpatialTransformer -from ldm.modules.diffusionmodules.openaimodel import convert_module_to_f16, convert_module_to_f32, AttentionPool2d, \ - TimestepBlock, TimestepEmbedSequential, Upsample, TransposedUpsample, Downsample, ResBlock, AttentionBlock, count_flops_attn, \ - QKVAttentionLegacy, QKVAttention - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. 
- :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. - """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - use_context_project=False, # custom text to audio support - use_context_attn=True # custom text to audio support - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None and not use_context_project: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' 
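        # Note on these text-to-audio flags: with use_context_project=True the (global) context
        # embedding is projected by self.context_project and added to the timestep embedding in
        # forward() instead of being fed to cross-attention; use_context_attn then decides whether
        # the per-token context is still passed to the attention blocks (see forward() below).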
- from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - 
num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - self.use_context_project = use_context_project - if use_context_project: - self.context_project = linear(context_dim, time_embed_dim) - self.use_context_attn = use_context_attn - - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None,**kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param context: conditioning plugged in via crossattn - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. 
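        Example (a minimal, illustrative sketch: the constructor arguments, tensor shapes, and the
        77x768 conditioning size are assumptions chosen for demonstration, not values taken from
        this repository)::

            model = UNetModel(image_size=32, in_channels=4, model_channels=320, out_channels=4,
                              num_res_blocks=2, attention_resolutions=(4, 2, 1),
                              channel_mult=(1, 2, 4, 4), num_heads=8,
                              use_spatial_transformer=True, transformer_depth=1, context_dim=768)
            x = th.randn(2, 4, 32, 32)                 # [N x C x H x W] noisy latents
            t = th.randint(0, 1000, (2,))              # one diffusion timestep per sample
            ctx = th.randn(2, 77, 768)                 # cross-attention conditioning
            eps = model(x, timesteps=t, context=ctx)   # [N x out_channels x H x W]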
- """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - # For text-to-audio using global CLIP - if self.use_context_project: - context = self.context_project(context) - emb = emb + context.squeeze(1) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context if self.use_context_attn else None) - hs.append(h) - h = self.middle_block(h, emb, context if self.use_context_attn else None) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, context if self.use_context_attn else None) - h = h.type(x.dtype) - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) diff --git a/spaces/AIGC-Audio/Make_An_Audio/app.py b/spaces/AIGC-Audio/Make_An_Audio/app.py deleted file mode 100644 index 9495634ea135382906ec5a4f07ffedf47de830aa..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import torch -import numpy as np -import gradio as gr -from PIL import Image -from omegaconf import OmegaConf -from pathlib import Path -from vocoder.bigvgan.models import VocoderBigVGAN -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.util import instantiate_from_config -from wav_evaluation.models.CLAPWrapper import CLAPWrapper - -SAMPLE_RATE = 16000 - -torch.set_grad_enabled(False) -device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - -def dur_to_size(duration): - latent_width = int(duration * 7.8) - if latent_width % 4 != 0: - latent_width = (latent_width // 4 + 1) * 4 - return latent_width - -def initialize_model(config, ckpt): - config = OmegaConf.load(config) - model = instantiate_from_config(config.model) - model.load_state_dict(torch.load(ckpt,map_location='cpu')["state_dict"], strict=False) - - model = model.to(device) - model.cond_stage_model.to(model.device) - model.cond_stage_model.device = model.device - print(model.device,device,model.cond_stage_model.device) - sampler = DDIMSampler(model) - - return sampler - -sampler = initialize_model('configs/text_to_audio/txt2audio_args.yaml', 'useful_ckpts/maa1_full.ckpt') -vocoder = VocoderBigVGAN('vocoder/logs/bigvnat',device=device) -clap_model = CLAPWrapper('useful_ckpts/CLAP/CLAP_weights_2022.pth','useful_ckpts/CLAP/config.yml',use_cuda=torch.cuda.is_available()) - -def select_best_audio(prompt,wav_list): - text_embeddings = clap_model.get_text_embeddings([prompt]) - score_list = [] - for data in wav_list: - sr,wav = data - audio_embeddings = clap_model.get_audio_embeddings([(torch.FloatTensor(wav),sr)], resample=True) - score = clap_model.compute_similarity(audio_embeddings, text_embeddings,use_logit_scale=False).squeeze().cpu().numpy() - score_list.append(score) - max_index = np.array(score_list).argmax() - print(score_list,max_index) - return wav_list[max_index] - -def txt2audio(sampler,vocoder,prompt, seed, scale, ddim_steps, n_samples=1, W=624, H=80): - prng = np.random.RandomState(seed) - start_code = prng.randn(n_samples, sampler.model.first_stage_model.embed_dim, H // 8, W // 8) - start_code = torch.from_numpy(start_code).to(device=device, dtype=torch.float32) - - uc = None - if scale != 1.0: - uc = 
sampler.model.get_learned_conditioning(n_samples * [""]) - c = sampler.model.get_learned_conditioning(n_samples * [prompt])# shape:[1,77,1280],即还没有变成句子embedding,仍是每个单词的embedding - shape = [sampler.model.first_stage_model.embed_dim, H//8, W//8] # (z_dim, 80//2^x, 848//2^x) - samples_ddim, _ = sampler.sample(S=ddim_steps, - conditioning=c, - batch_size=n_samples, - shape=shape, - verbose=False, - unconditional_guidance_scale=scale, - unconditional_conditioning=uc, - x_T=start_code) - - x_samples_ddim = sampler.model.decode_first_stage(samples_ddim) - - wav_list = [] - for idx,spec in enumerate(x_samples_ddim): - wav = vocoder.vocode(spec) - wav_list.append((SAMPLE_RATE,wav)) - best_wav = select_best_audio(prompt,wav_list) - return best_wav - - -def predict(prompt, ddim_steps, num_samples, scale, seed): - melbins,mel_len = 80,624 - with torch.no_grad(): - result = txt2audio( - sampler=sampler, - vocoder=vocoder, - prompt=prompt, - seed=seed, - scale=scale, - ddim_steps=ddim_steps, - n_samples=num_samples, - H=melbins, W=mel_len - ) - - return result - - -with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown("## Make-An-Audio: Text-to-Audio Generation") - - with gr.Row(): - with gr.Column(): - prompt = gr.Textbox(label="Prompt: Input your text here. ") - run_button = gr.Button(label="Run") - - - with gr.Accordion("Advanced options", open=False): - num_samples = gr.Slider( - label="Select from audios num.This number control the number of candidates \ - (e.g., generate three audios and choose the best to show you). A Larger value usually lead to \ - better quality with heavier computation", minimum=1, maximum=10, value=3, step=1) - # num_samples = 1 - ddim_steps = gr.Slider(label="Steps", minimum=1, - maximum=150, value=100, step=1) - scale = gr.Slider( - label="Guidance Scale:(Large => more relevant to text but the quality may drop)", minimum=0.1, maximum=8.0, value=3.0, step=0.1 - ) - seed = gr.Slider( - label="Seed:Change this value (any integer number) will lead to a different generation result.", - minimum=0, - maximum=2147483647, - step=1, - value=44, - ) - - with gr.Column(): - # audio_list = [] - # for i in range(int(num_samples)): - # audio_list.append(gr.outputs.Audio()) - outaudio = gr.Audio() - - - run_button.click(fn=predict, inputs=[ - prompt,ddim_steps, num_samples, scale, seed], outputs=[outaudio])# inputs的参数只能传gr.xxx - with gr.Row(): - with gr.Column(): - gr.Examples( - examples = [['a dog barking and a bird chirping',100,3,3,55],['Pigeons peck, coo, and flap their wings before a man speaks',100,3,3,55], - ['music of violin and piano',100,3,2,88],['wind thunder and rain falling',100,3,3,55],['music made by drum kit',100,3,3,55]], - inputs = [prompt,ddim_steps, num_samples, scale, seed], - outputs = [outaudio] - ) - with gr.Column(): - pass - -demo.launch() diff --git a/spaces/AP123/dreamgaussian/zero123.py b/spaces/AP123/dreamgaussian/zero123.py deleted file mode 100644 index 158e31ee4f877c11dda9118b382b2e226bf45e3a..0000000000000000000000000000000000000000 --- a/spaces/AP123/dreamgaussian/zero123.py +++ /dev/null @@ -1,666 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import math -import warnings -from typing import Any, Callable, Dict, List, Optional, Union - -import PIL -import torch -import torchvision.transforms.functional as TF -from diffusers.configuration_utils import ConfigMixin, FrozenDict, register_to_config -from diffusers.image_processor import VaeImageProcessor -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.models.modeling_utils import ModelMixin -from diffusers.pipelines.pipeline_utils import DiffusionPipeline -from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput -from diffusers.pipelines.stable_diffusion.safety_checker import ( - StableDiffusionSafetyChecker, -) -from diffusers.schedulers import KarrasDiffusionSchedulers -from diffusers.utils import deprecate, is_accelerate_available, logging -from diffusers.utils.torch_utils import randn_tensor -from packaging import version -from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class CLIPCameraProjection(ModelMixin, ConfigMixin): - """ - A Projection layer for CLIP embedding and camera embedding. - - Parameters: - embedding_dim (`int`, *optional*, defaults to 768): The dimension of the model input `clip_embed` - additional_embeddings (`int`, *optional*, defaults to 4): The number of additional tokens appended to the - projected `hidden_states`. The actual length of the used `hidden_states` is `num_embeddings + - additional_embeddings`. - """ - - @register_to_config - def __init__(self, embedding_dim: int = 768, additional_embeddings: int = 4): - super().__init__() - self.embedding_dim = embedding_dim - self.additional_embeddings = additional_embeddings - - self.input_dim = self.embedding_dim + self.additional_embeddings - self.output_dim = self.embedding_dim - - self.proj = torch.nn.Linear(self.input_dim, self.output_dim) - - def forward( - self, - embedding: torch.FloatTensor, - ): - """ - The [`PriorTransformer`] forward method. - - Args: - hidden_states (`torch.FloatTensor` of shape `(batch_size, input_dim)`): - The currently input embeddings. - - Returns: - The output embedding projection (`torch.FloatTensor` of shape `(batch_size, output_dim)`). - """ - proj_embedding = self.proj(embedding) - return proj_embedding - - -class Zero123Pipeline(DiffusionPipeline): - r""" - Pipeline to generate variations from an input image using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - image_encoder ([`CLIPVisionModelWithProjection`]): - Frozen CLIP image-encoder. 
Stable Diffusion Image Variation uses the vision portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection), - specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - # TODO: feature_extractor is required to encode images (if they are in PIL format), - # we should give a descriptive message if the pipeline doesn't have one. - _optional_components = ["safety_checker"] - - def __init__( - self, - vae: AutoencoderKL, - image_encoder: CLIPVisionModelWithProjection, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - clip_camera_projection: CLIPCameraProjection, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warn( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr( - unet.config, "_diffusers_version" - ) and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse( - "0.9.0.dev0" - ) - is_unet_sample_size_less_64 = ( - hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - ) - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. 
Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate( - "sample_size<64", "1.0.0", deprecation_message, standard_warn=False - ) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - image_encoder=image_encoder, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - clip_camera_projection=clip_camera_projection, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [ - self.unet, - self.image_encoder, - self.vae, - self.safety_checker, - ]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_image( - self, - image, - elevation, - azimuth, - distance, - device, - num_images_per_prompt, - do_classifier_free_guidance, - clip_image_embeddings=None, - image_camera_embeddings=None, - ): - dtype = next(self.image_encoder.parameters()).dtype - - if image_camera_embeddings is None: - if image is None: - assert clip_image_embeddings is not None - image_embeddings = clip_image_embeddings.to(device=device, dtype=dtype) - else: - if not isinstance(image, torch.Tensor): - image = self.feature_extractor( - images=image, return_tensors="pt" - ).pixel_values - - image = image.to(device=device, dtype=dtype) - image_embeddings = self.image_encoder(image).image_embeds - image_embeddings = image_embeddings.unsqueeze(1) - - bs_embed, seq_len, _ = image_embeddings.shape - - if isinstance(elevation, float): - elevation = torch.as_tensor( - [elevation] * bs_embed, dtype=dtype, device=device - ) - if isinstance(azimuth, float): - azimuth = torch.as_tensor( - [azimuth] * bs_embed, dtype=dtype, device=device - ) - if isinstance(distance, float): - distance = torch.as_tensor( - [distance] * bs_embed, dtype=dtype, device=device - ) - - camera_embeddings = torch.stack( - [ - torch.deg2rad(elevation), - torch.sin(torch.deg2rad(azimuth)), - torch.cos(torch.deg2rad(azimuth)), - distance, - ], - dim=-1, - )[:, None, :] - - image_embeddings = torch.cat([image_embeddings, camera_embeddings], dim=-1) - - # project (image, camera) embeddings to the same dimension as clip embeddings - image_embeddings = self.clip_camera_projection(image_embeddings) - else: - image_embeddings = image_camera_embeddings.to(device=device, dtype=dtype) - bs_embed, seq_len, _ = image_embeddings.shape - - # duplicate image embeddings for each generation per prompt, using mps friendly method - image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) - image_embeddings = image_embeddings.view( - bs_embed * num_images_per_prompt, seq_len, -1 - ) - - if do_classifier_free_guidance: - negative_prompt_embeds = torch.zeros_like(image_embeddings) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - image_embeddings = torch.cat([negative_prompt_embeds, image_embeddings]) - - return image_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess( - image, output_type="pil" - ) - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor( - feature_extractor_input, return_tensors="pt" - ).to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - warnings.warn( - "The decode_latents method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor instead", - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set( - inspect.signature(self.scheduler.step).parameters.keys() - ) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set( - inspect.signature(self.scheduler.step).parameters.keys() - ) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, image, height, width, callback_steps): - # TODO: check image size or adjust image size to (height, width) - - if height % 8 != 0 or width % 8 != 0: - raise ValueError( - f"`height` and `width` have to be divisible by 8 but are {height} and {width}." - ) - - if (callback_steps is None) or ( - callback_steps is not None - and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." 
- ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents( - self, - batch_size, - num_channels_latents, - height, - width, - dtype, - device, - generator, - latents=None, - ): - shape = ( - batch_size, - num_channels_latents, - height // self.vae_scale_factor, - width // self.vae_scale_factor, - ) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor( - shape, generator=generator, device=device, dtype=dtype - ) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def _get_latent_model_input( - self, - latents: torch.FloatTensor, - image: Optional[ - Union[PIL.Image.Image, List[PIL.Image.Image], torch.FloatTensor] - ], - num_images_per_prompt: int, - do_classifier_free_guidance: bool, - image_latents: Optional[torch.FloatTensor] = None, - ): - if isinstance(image, PIL.Image.Image): - image_pt = TF.to_tensor(image).unsqueeze(0).to(latents) - elif isinstance(image, list): - image_pt = torch.stack([TF.to_tensor(img) for img in image], dim=0).to( - latents - ) - elif isinstance(image, torch.Tensor): - image_pt = image - else: - image_pt = None - - if image_pt is None: - assert image_latents is not None - image_pt = image_latents.repeat_interleave(num_images_per_prompt, dim=0) - else: - image_pt = image_pt * 2.0 - 1.0 # scale to [-1, 1] - # FIXME: encoded latents should be multiplied with self.vae.config.scaling_factor - # but zero123 was not trained this way - image_pt = self.vae.encode(image_pt).latent_dist.mode() - image_pt = image_pt.repeat_interleave(num_images_per_prompt, dim=0) - if do_classifier_free_guidance: - latent_model_input = torch.cat( - [ - torch.cat([latents, latents], dim=0), - torch.cat([torch.zeros_like(image_pt), image_pt], dim=0), - ], - dim=1, - ) - else: - latent_model_input = torch.cat([latents, image_pt], dim=1) - - return latent_model_input - - @torch.no_grad() - def __call__( - self, - image: Optional[ - Union[PIL.Image.Image, List[PIL.Image.Image], torch.FloatTensor] - ] = None, - elevation: Optional[Union[float, torch.FloatTensor]] = None, - azimuth: Optional[Union[float, torch.FloatTensor]] = None, - distance: Optional[Union[float, torch.FloatTensor]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 3.0, - num_images_per_prompt: int = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - clip_image_embeddings: Optional[torch.FloatTensor] = None, - image_camera_embeddings: Optional[torch.FloatTensor] = None, - image_latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - Function invoked when calling the pipeline for generation. 
- - Args: - image (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`): - The image or images to guide the image generation. If you provide a tensor, it needs to comply with the - configuration of - [this](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json) - `CLIPImageProcessor` - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 3.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages generating images that are closely linked to the conditioning `image`, - usually at the expense of lower image quality. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will be generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. 
Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - # TODO: check input elevation, azimuth, and distance - # TODO: check image, clip_image_embeddings, image_latents - self.check_inputs(image, height, width, callback_steps) - - # 2. Define call parameters - if isinstance(image, PIL.Image.Image): - batch_size = 1 - elif isinstance(image, list): - batch_size = len(image) - elif isinstance(image, torch.Tensor): - batch_size = image.shape[0] - else: - assert image_latents is not None - assert ( - clip_image_embeddings is not None or image_camera_embeddings is not None - ) - batch_size = image_latents.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input image - if isinstance(image, PIL.Image.Image) or isinstance(image, list): - pil_image = image - elif isinstance(image, torch.Tensor): - pil_image = [TF.to_pil_image(image[i]) for i in range(image.shape[0])] - else: - pil_image = None - image_embeddings = self._encode_image( - pil_image, - elevation, - azimuth, - distance, - device, - num_images_per_prompt, - do_classifier_free_guidance, - clip_image_embeddings, - image_camera_embeddings, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - # num_channels_latents = self.unet.config.in_channels - num_channels_latents = 4 # FIXME: hard-coded - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - image_embeddings.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = self._get_latent_model_input( - latents, - image, - num_images_per_prompt, - do_classifier_free_guidance, - image_latents, - ) - latent_model_input = self.scheduler.scale_model_input( - latent_model_input, t - ) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=image_embeddings, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * ( - noise_pred_text - noise_pred_uncond - ) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, t, latents, **extra_step_kwargs - ).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ( - (i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0 - ): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode( - latents / self.vae.config.scaling_factor, return_dict=False - )[0] - image, has_nsfw_concept = self.run_safety_checker( - image, device, image_embeddings.dtype - ) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess( - image, output_type=output_type, do_denormalize=do_denormalize - ) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput( - images=image, nsfw_content_detected=has_nsfw_concept - ) \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb16-mixup_cifar10.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb16-mixup_cifar10.py deleted file mode 100644 index 2420ebfeb0a34675a4b1b2a69c0b8a39e197ce35..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb16-mixup_cifar10.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/resnet50_cifar_mixup.py', - '../_base_/datasets/cifar10_bs16.py', - '../_base_/schedules/cifar10_bs128.py', '../_base_/default_runtime.py' -] diff --git a/spaces/Aaron299/bingo/README.md b/spaces/Aaron299/bingo/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/Aaron299/bingo/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo -emoji: 😊 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
    - -# Bingo - -Bingo, a New Bing that lets you breathe easy. - -A highly faithful reproduction of the main features of the New Bing web version; usable inside China, compatible with most Microsoft Bing AI features, and can be self-hosted. - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Github issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -For issue reports, please go to https://github.com/weaigc/bingo/issues -
    - - diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/FallingAllChess.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/FallingAllChess.js deleted file mode 100644 index 097d9d93cff45c870e2cf5ea104b81ad4d3c2785..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/FallingAllChess.js +++ /dev/null @@ -1,26 +0,0 @@ -/* -1. Falling down all chess -*/ - -var FallingAllChess = function (board, bejeweled) { - var tileZ = bejeweled.getChessTileZ(), - chess, moveTo; - - for (var tileY = (board.height - 1); tileY >= 0; tileY--) { // bottom to top - for (var tileX = 0, cnt = board.width; tileX < cnt; tileX++) { // left to right - chess = board.tileXYZToChess(tileX, tileY, tileZ); - if (chess === null) { - continue; - } - moveTo = bejeweled.getChessMoveTo(chess); - do { - moveTo.moveToward(1); - } while (moveTo.lastMoveResult) - if (moveTo.isRunning) { - bejeweled.waitEvent(moveTo, 'complete'); - } - } - } -} - -export default FallingAllChess; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Compat5006.pm b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Compat5006.pm deleted file mode 100644 index 7736fd8debcbb47cc38192798de6c3055222d473..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Compat5006.pm +++ /dev/null @@ -1,173 +0,0 @@ -package # This is JSON::backportPP - JSON::backportPP56; - -use 5.006; -use strict; - -my @properties; - -$JSON::PP56::VERSION = '1.08'; - -BEGIN { - - sub utf8::is_utf8 { - my $len = length $_[0]; # char length - { - use bytes; # byte length; - return $len != length $_[0]; # if !=, UTF8-flagged on. - } - } - - - sub utf8::upgrade { - ; # noop; - } - - - sub utf8::downgrade ($;$) { - return 1 unless ( utf8::is_utf8( $_[0] ) ); - - if ( _is_valid_utf8( $_[0] ) ) { - my $downgrade; - for my $c ( unpack( "U*", $_[0] ) ) { - if ( $c < 256 ) { - $downgrade .= pack("C", $c); - } - else { - $downgrade .= pack("U", $c); - } - } - $_[0] = $downgrade; - return 1; - } - else { - Carp::croak("Wide character in subroutine entry") unless ( $_[1] ); - 0; - } - } - - - sub utf8::encode ($) { # UTF8 flag off - if ( utf8::is_utf8( $_[0] ) ) { - $_[0] = pack( "C*", unpack( "C*", $_[0] ) ); - } - else { - $_[0] = pack( "U*", unpack( "C*", $_[0] ) ); - $_[0] = pack( "C*", unpack( "C*", $_[0] ) ); - } - } - - - sub utf8::decode ($) { # UTF8 flag on - if ( _is_valid_utf8( $_[0] ) ) { - utf8::downgrade( $_[0] ); - $_[0] = pack( "U*", unpack( "U*", $_[0] ) ); - } - } - - - *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii; - *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1; - *JSON::PP::JSON_PP_decode_surrogates = \&JSON::PP::_decode_surrogates; - *JSON::PP::JSON_PP_decode_unicode = \&JSON::PP::_decode_unicode; - - unless ( defined &B::SVp_NOK ) { # missing in B module. - eval q{ sub B::SVp_NOK () { 0x02000000; } }; - } - -} - - - -sub _encode_ascii { - join('', - map { - $_ <= 127 ? - chr($_) : - $_ <= 65535 ? - sprintf('\u%04x', $_) : sprintf('\u%x\u%x', JSON::PP::_encode_surrogates($_)); - } _unpack_emu($_[0]) - ); -} - - -sub _encode_latin1 { - join('', - map { - $_ <= 255 ? - chr($_) : - $_ <= 65535 ? - sprintf('\u%04x', $_) : sprintf('\u%x\u%x', JSON::PP::_encode_surrogates($_)); - } _unpack_emu($_[0]) - ); -} - - -sub _unpack_emu { # for Perl 5.6 unpack warnings - return !utf8::is_utf8($_[0]) ? 
unpack('C*', $_[0]) - : _is_valid_utf8($_[0]) ? unpack('U*', $_[0]) - : unpack('C*', $_[0]); -} - - -sub _is_valid_utf8 { - my $str = $_[0]; - my $is_utf8; - - while ($str =~ /(?: - ( - [\x00-\x7F] - |[\xC2-\xDF][\x80-\xBF] - |[\xE0][\xA0-\xBF][\x80-\xBF] - |[\xE1-\xEC][\x80-\xBF][\x80-\xBF] - |[\xED][\x80-\x9F][\x80-\xBF] - |[\xEE-\xEF][\x80-\xBF][\x80-\xBF] - |[\xF0][\x90-\xBF][\x80-\xBF][\x80-\xBF] - |[\xF1-\xF3][\x80-\xBF][\x80-\xBF][\x80-\xBF] - |[\xF4][\x80-\x8F][\x80-\xBF][\x80-\xBF] - ) - | (.) - )/xg) - { - if (defined $1) { - $is_utf8 = 1 if (!defined $is_utf8); - } - else { - $is_utf8 = 0 if (!defined $is_utf8); - if ($is_utf8) { # eventually, not utf8 - return; - } - } - } - - return $is_utf8; -} - - -1; -__END__ - -=pod - -=head1 NAME - -JSON::PP56 - Helper module in using JSON::PP in Perl 5.6 - -=head1 DESCRIPTION - -JSON::PP calls internally. - -=head1 AUTHOR - -Makamaka Hannyaharamitu, Emakamaka[at]cpan.orgE - - -=head1 COPYRIGHT AND LICENSE - -Copyright 2007-2012 by Makamaka Hannyaharamitu - -This library is free software; you can redistribute it and/or modify -it under the same terms as Perl itself. - -=cut - diff --git a/spaces/AlexWang/lama/models/ade20k/mobilenet.py b/spaces/AlexWang/lama/models/ade20k/mobilenet.py deleted file mode 100644 index f501266e56ee71cdf455744020f8fc1a58ec9fff..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/models/ade20k/mobilenet.py +++ /dev/null @@ -1,154 +0,0 @@ -""" -This MobileNetV2 implementation is modified from the following repository: -https://github.com/tonylins/pytorch-mobilenet-v2 -""" - -import torch.nn as nn -import math -from .utils import load_url -from .segm_lib.nn import SynchronizedBatchNorm2d - -BatchNorm2d = SynchronizedBatchNorm2d - - -__all__ = ['mobilenetv2'] - - -model_urls = { - 'mobilenetv2': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/mobilenet_v2.pth.tar', -} - - -def conv_bn(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - BatchNorm2d(oup), - nn.ReLU6(inplace=True) - ) - - -def conv_1x1_bn(inp, oup): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - BatchNorm2d(oup), - nn.ReLU6(inplace=True) - ) - - -class InvertedResidual(nn.Module): - def __init__(self, inp, oup, stride, expand_ratio): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2] - - hidden_dim = round(inp * expand_ratio) - self.use_res_connect = self.stride == 1 and inp == oup - - if expand_ratio == 1: - self.conv = nn.Sequential( - # dw - nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False), - BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - BatchNorm2d(oup), - ) - else: - self.conv = nn.Sequential( - # pw - nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False), - BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # dw - nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False), - BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - BatchNorm2d(oup), - ) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - -class MobileNetV2(nn.Module): - def __init__(self, n_class=1000, input_size=224, width_mult=1.): - super(MobileNetV2, self).__init__() - block = InvertedResidual - input_channel = 32 - last_channel = 1280 - interverted_residual_setting = [ - # t, c, n, s - [1, 16, 1, 1], - [6, 
24, 2, 2], - [6, 32, 3, 2], - [6, 64, 4, 2], - [6, 96, 3, 1], - [6, 160, 3, 2], - [6, 320, 1, 1], - ] - - # building first layer - assert input_size % 32 == 0 - input_channel = int(input_channel * width_mult) - self.last_channel = int(last_channel * width_mult) if width_mult > 1.0 else last_channel - self.features = [conv_bn(3, input_channel, 2)] - # building inverted residual blocks - for t, c, n, s in interverted_residual_setting: - output_channel = int(c * width_mult) - for i in range(n): - if i == 0: - self.features.append(block(input_channel, output_channel, s, expand_ratio=t)) - else: - self.features.append(block(input_channel, output_channel, 1, expand_ratio=t)) - input_channel = output_channel - # building last several layers - self.features.append(conv_1x1_bn(input_channel, self.last_channel)) - # make it nn.Sequential - self.features = nn.Sequential(*self.features) - - # building classifier - self.classifier = nn.Sequential( - nn.Dropout(0.2), - nn.Linear(self.last_channel, n_class), - ) - - self._initialize_weights() - - def forward(self, x): - x = self.features(x) - x = x.mean(3).mean(2) - x = self.classifier(x) - return x - - def _initialize_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - n = m.weight.size(1) - m.weight.data.normal_(0, 0.01) - m.bias.data.zero_() - - -def mobilenetv2(pretrained=False, **kwargs): - """Constructs a MobileNet_V2 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = MobileNetV2(n_class=1000, **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['mobilenetv2']), strict=False) - return model \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/losses/constants.py b/spaces/AlexWang/lama/saicinpainting/training/losses/constants.py deleted file mode 100644 index ae3e5e151342232be8e2c2a77fe6fd5798dc2a8c..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/losses/constants.py +++ /dev/null @@ -1,152 +0,0 @@ -weights = {"ade20k": - [6.34517766497462, - 9.328358208955224, - 11.389521640091116, - 16.10305958132045, - 20.833333333333332, - 22.22222222222222, - 25.125628140703515, - 43.29004329004329, - 50.5050505050505, - 54.6448087431694, - 55.24861878453038, - 60.24096385542168, - 62.5, - 66.2251655629139, - 84.74576271186442, - 90.90909090909092, - 91.74311926605505, - 96.15384615384616, - 96.15384615384616, - 97.08737864077669, - 102.04081632653062, - 135.13513513513513, - 149.2537313432836, - 153.84615384615384, - 163.93442622950818, - 166.66666666666666, - 188.67924528301887, - 192.30769230769232, - 217.3913043478261, - 227.27272727272725, - 227.27272727272725, - 227.27272727272725, - 303.03030303030306, - 322.5806451612903, - 333.3333333333333, - 370.3703703703703, - 384.61538461538464, - 416.6666666666667, - 416.6666666666667, - 434.7826086956522, - 434.7826086956522, - 454.5454545454545, - 454.5454545454545, - 500.0, - 526.3157894736842, - 526.3157894736842, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 
666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 769.2307692307693, - 769.2307692307693, - 769.2307692307693, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 909.090909090909, - 1000.0, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 5000.0, - 5000.0, - 5000.0] -} \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.h b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index dc6e713694d3fcca0e06cecfb9437ffb4932ffe6..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. - -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. 
- -template upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/Andy1621/uniformer_image_detection/configs/carafe/README.md b/spaces/Andy1621/uniformer_image_detection/configs/carafe/README.md deleted file mode 100644 index d9ca6644f997f88723210e0caa23e1bd70759f09..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/carafe/README.md +++ /dev/null @@ -1,32 +0,0 @@ -# CARAFE: Content-Aware ReAssembly of FEatures - -## Introduction - -[ALGORITHM] - -We provide config files to reproduce the object detection & instance segmentation results in the ICCV 2019 Oral paper for [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.02188). - -``` -@inproceedings{Wang_2019_ICCV, - title = {CARAFE: Content-Aware ReAssembly of FEatures}, - author = {Wang, Jiaqi and Chen, Kai and Xu, Rui and Liu, Ziwei and Loy, Chen Change and Lin, Dahua}, - booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, - month = {October}, - year = {2019} -} -``` - -## Results and Models - -The results on COCO 2017 val are shown in the table below. - -| Method | Backbone | Style | Lr schd | Test Proposal Num | Inf time (fps) | Box AP | Mask AP | Config | Download | -|:--------------------:|:--------:|:-------:|:-------:|:-----------------:|:--------------:|:------:|:-------:|:------:|:--------:| -| Faster R-CNN w/ CARAFE | R-50-FPN | pytorch | 1x | 1000 | 16.5 | 38.6 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_20200504_175733.log.json) | -| - | - | - | - | 2000 | | | | | -| Mask R-CNN w/ CARAFE | R-50-FPN | pytorch | 1x | 1000 | 14.0 | 39.3 | 35.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/carafe/mask_rcnn_r50_fpn_carafe_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.393__segm_mAP-0.358_20200503_135957-8687f195.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_20200503_135957.log.json) | -| - | - | - | - | 2000 | | | | | - -## Implementation - -The CUDA implementation of CARAFE can be found at https://github.com/myownskyW7/CARAFE. 
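As a point of reference for how a config/checkpoint pair like the ones in the table above is typically consumed, here is a minimal inference sketch using the MMDetection 2.x Python API. This is an illustration rather than part of the original repository; the local file paths are assumed placeholders, and the checkpoint file name is the one linked in the table.

```python
# Minimal sketch, assuming MMDetection 2.x is installed and the CARAFE config plus the
# checkpoint linked in the table above have been downloaded locally (paths are placeholders).
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py'
checkpoint_file = 'faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth'

# Build the detector from the config and load the pretrained weights (use 'cpu' if no GPU is available).
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run single-image inference; the result is a per-class list of detection arrays.
result = inference_detector(model, 'demo.jpg')
```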
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py deleted file mode 100644 index aee78089b9e32d3c0bcd6a29f51c22d1af96d2ce..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = '../htc/htc_r50_fpn_20e_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w32', - backbone=dict( - _delete_=True, - type='HRNet', - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(32, 64)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(32, 64, 128)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(32, 64, 128, 256)))), - neck=dict( - _delete_=True, - type='HRFPN', - in_channels=[32, 64, 128, 256], - out_channels=256)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py deleted file mode 100644 index 2fa2a807190427c857ddbea8ed7efd9434e5ef0f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = './sparse_rcnn_r50_fpn_1x_coco.py' - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -min_values = (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, value) for value in min_values], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] - -data = dict(train=dict(pipeline=train_pipeline)) -lr_config = dict(policy='step', step=[27, 33]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/upernet_r50.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/upernet_r50.py deleted file mode 100644 index 10974962fdd7136031fd06de1700f497d355ceaa..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/upernet_r50.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='UPerHead', - in_channels=[256, 512, 1024, 2048], - in_index=[0, 1, 2, 3], - pool_scales=(1, 2, 3, 6), - channels=512, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - 
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 267483d88ff25d75dc18c5c2d37375cd77c9639c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = '../deeplabv3/deeplabv3_r101-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='mmcls://mobilenet_v2', - backbone=dict( - _delete_=True, - type='MobileNetV2', - widen_factor=1., - strides=(1, 2, 2, 1, 1, 1, 1), - dilations=(1, 1, 1, 2, 2, 4, 4), - out_indices=(1, 2, 4, 6)), - decode_head=dict(in_channels=320), - auxiliary_head=dict(in_channels=96)) diff --git a/spaces/Anilegna/Colour-Personallity/README.md b/spaces/Anilegna/Colour-Personallity/README.md deleted file mode 100644 index 4ba793328f7d3920a01fe89e8bb369733bfdaeaa..0000000000000000000000000000000000000000 --- a/spaces/Anilegna/Colour-Personallity/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Colour Personallity -emoji: 💻 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap-icons.css b/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap-icons.css deleted file mode 100644 index 94f1940448a6fa8d0a3dfca318638285c83a1f18..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap-icons.css +++ /dev/null @@ -1,2018 +0,0 @@ -@font-face { - font-display: block; - font-family: "bootstrap-icons"; - src: -url("./bootstrap-icons.woff?2ab2cbbe07fcebb53bdaa7313bb290f2") format("woff"); -} - -.bi::before, -[class^="bi-"]::before, -[class*=" bi-"]::before { - display: inline-block; - font-family: bootstrap-icons !important; - font-style: normal; - font-weight: normal !important; - font-variant: normal; - text-transform: none; - line-height: 1; - vertical-align: -.125em; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; -} - -.bi-123::before { content: "\f67f"; } -.bi-alarm-fill::before { content: "\f101"; } -.bi-alarm::before { content: "\f102"; } -.bi-align-bottom::before { content: "\f103"; } -.bi-align-center::before { content: "\f104"; } -.bi-align-end::before { content: "\f105"; } -.bi-align-middle::before { content: "\f106"; } -.bi-align-start::before { content: "\f107"; } -.bi-align-top::before { content: "\f108"; } -.bi-alt::before { content: "\f109"; } -.bi-app-indicator::before { content: "\f10a"; } -.bi-app::before { content: "\f10b"; } -.bi-archive-fill::before { content: "\f10c"; } -.bi-archive::before { content: "\f10d"; } -.bi-arrow-90deg-down::before { content: "\f10e"; } -.bi-arrow-90deg-left::before { content: 
"\f10f"; } -.bi-arrow-90deg-right::before { content: "\f110"; } -.bi-arrow-90deg-up::before { content: "\f111"; } -.bi-arrow-bar-down::before { content: "\f112"; } -.bi-arrow-bar-left::before { content: "\f113"; } -.bi-arrow-bar-right::before { content: "\f114"; } -.bi-arrow-bar-up::before { content: "\f115"; } -.bi-arrow-clockwise::before { content: "\f116"; } -.bi-arrow-counterclockwise::before { content: "\f117"; } -.bi-arrow-down-circle-fill::before { content: "\f118"; } -.bi-arrow-down-circle::before { content: "\f119"; } -.bi-arrow-down-left-circle-fill::before { content: "\f11a"; } -.bi-arrow-down-left-circle::before { content: "\f11b"; } -.bi-arrow-down-left-square-fill::before { content: "\f11c"; } -.bi-arrow-down-left-square::before { content: "\f11d"; } -.bi-arrow-down-left::before { content: "\f11e"; } -.bi-arrow-down-right-circle-fill::before { content: "\f11f"; } -.bi-arrow-down-right-circle::before { content: "\f120"; } -.bi-arrow-down-right-square-fill::before { content: "\f121"; } -.bi-arrow-down-right-square::before { content: "\f122"; } -.bi-arrow-down-right::before { content: "\f123"; } -.bi-arrow-down-short::before { content: "\f124"; } -.bi-arrow-down-square-fill::before { content: "\f125"; } -.bi-arrow-down-square::before { content: "\f126"; } -.bi-arrow-down-up::before { content: "\f127"; } -.bi-arrow-down::before { content: "\f128"; } -.bi-arrow-left-circle-fill::before { content: "\f129"; } -.bi-arrow-left-circle::before { content: "\f12a"; } -.bi-arrow-left-right::before { content: "\f12b"; } -.bi-arrow-left-short::before { content: "\f12c"; } -.bi-arrow-left-square-fill::before { content: "\f12d"; } -.bi-arrow-left-square::before { content: "\f12e"; } -.bi-arrow-left::before { content: "\f12f"; } -.bi-arrow-repeat::before { content: "\f130"; } -.bi-arrow-return-left::before { content: "\f131"; } -.bi-arrow-return-right::before { content: "\f132"; } -.bi-arrow-right-circle-fill::before { content: "\f133"; } -.bi-arrow-right-circle::before { content: "\f134"; } -.bi-arrow-right-short::before { content: "\f135"; } -.bi-arrow-right-square-fill::before { content: "\f136"; } -.bi-arrow-right-square::before { content: "\f137"; } -.bi-arrow-right::before { content: "\f138"; } -.bi-arrow-up-circle-fill::before { content: "\f139"; } -.bi-arrow-up-circle::before { content: "\f13a"; } -.bi-arrow-up-left-circle-fill::before { content: "\f13b"; } -.bi-arrow-up-left-circle::before { content: "\f13c"; } -.bi-arrow-up-left-square-fill::before { content: "\f13d"; } -.bi-arrow-up-left-square::before { content: "\f13e"; } -.bi-arrow-up-left::before { content: "\f13f"; } -.bi-arrow-up-right-circle-fill::before { content: "\f140"; } -.bi-arrow-up-right-circle::before { content: "\f141"; } -.bi-arrow-up-right-square-fill::before { content: "\f142"; } -.bi-arrow-up-right-square::before { content: "\f143"; } -.bi-arrow-up-right::before { content: "\f144"; } -.bi-arrow-up-short::before { content: "\f145"; } -.bi-arrow-up-square-fill::before { content: "\f146"; } -.bi-arrow-up-square::before { content: "\f147"; } -.bi-arrow-up::before { content: "\f148"; } -.bi-arrows-angle-contract::before { content: "\f149"; } -.bi-arrows-angle-expand::before { content: "\f14a"; } -.bi-arrows-collapse::before { content: "\f14b"; } -.bi-arrows-expand::before { content: "\f14c"; } -.bi-arrows-fullscreen::before { content: "\f14d"; } -.bi-arrows-move::before { content: "\f14e"; } -.bi-aspect-ratio-fill::before { content: "\f14f"; } -.bi-aspect-ratio::before { content: "\f150"; } -.bi-asterisk::before { 
content: "\f151"; } -.bi-at::before { content: "\f152"; } -.bi-award-fill::before { content: "\f153"; } -.bi-award::before { content: "\f154"; } -.bi-back::before { content: "\f155"; } -.bi-backspace-fill::before { content: "\f156"; } -.bi-backspace-reverse-fill::before { content: "\f157"; } -.bi-backspace-reverse::before { content: "\f158"; } -.bi-backspace::before { content: "\f159"; } -.bi-badge-3d-fill::before { content: "\f15a"; } -.bi-badge-3d::before { content: "\f15b"; } -.bi-badge-4k-fill::before { content: "\f15c"; } -.bi-badge-4k::before { content: "\f15d"; } -.bi-badge-8k-fill::before { content: "\f15e"; } -.bi-badge-8k::before { content: "\f15f"; } -.bi-badge-ad-fill::before { content: "\f160"; } -.bi-badge-ad::before { content: "\f161"; } -.bi-badge-ar-fill::before { content: "\f162"; } -.bi-badge-ar::before { content: "\f163"; } -.bi-badge-cc-fill::before { content: "\f164"; } -.bi-badge-cc::before { content: "\f165"; } -.bi-badge-hd-fill::before { content: "\f166"; } -.bi-badge-hd::before { content: "\f167"; } -.bi-badge-tm-fill::before { content: "\f168"; } -.bi-badge-tm::before { content: "\f169"; } -.bi-badge-vo-fill::before { content: "\f16a"; } -.bi-badge-vo::before { content: "\f16b"; } -.bi-badge-vr-fill::before { content: "\f16c"; } -.bi-badge-vr::before { content: "\f16d"; } -.bi-badge-wc-fill::before { content: "\f16e"; } -.bi-badge-wc::before { content: "\f16f"; } -.bi-bag-check-fill::before { content: "\f170"; } -.bi-bag-check::before { content: "\f171"; } -.bi-bag-dash-fill::before { content: "\f172"; } -.bi-bag-dash::before { content: "\f173"; } -.bi-bag-fill::before { content: "\f174"; } -.bi-bag-plus-fill::before { content: "\f175"; } -.bi-bag-plus::before { content: "\f176"; } -.bi-bag-x-fill::before { content: "\f177"; } -.bi-bag-x::before { content: "\f178"; } -.bi-bag::before { content: "\f179"; } -.bi-bar-chart-fill::before { content: "\f17a"; } -.bi-bar-chart-line-fill::before { content: "\f17b"; } -.bi-bar-chart-line::before { content: "\f17c"; } -.bi-bar-chart-steps::before { content: "\f17d"; } -.bi-bar-chart::before { content: "\f17e"; } -.bi-basket-fill::before { content: "\f17f"; } -.bi-basket::before { content: "\f180"; } -.bi-basket2-fill::before { content: "\f181"; } -.bi-basket2::before { content: "\f182"; } -.bi-basket3-fill::before { content: "\f183"; } -.bi-basket3::before { content: "\f184"; } -.bi-battery-charging::before { content: "\f185"; } -.bi-battery-full::before { content: "\f186"; } -.bi-battery-half::before { content: "\f187"; } -.bi-battery::before { content: "\f188"; } -.bi-bell-fill::before { content: "\f189"; } -.bi-bell::before { content: "\f18a"; } -.bi-bezier::before { content: "\f18b"; } -.bi-bezier2::before { content: "\f18c"; } -.bi-bicycle::before { content: "\f18d"; } -.bi-binoculars-fill::before { content: "\f18e"; } -.bi-binoculars::before { content: "\f18f"; } -.bi-blockquote-left::before { content: "\f190"; } -.bi-blockquote-right::before { content: "\f191"; } -.bi-book-fill::before { content: "\f192"; } -.bi-book-half::before { content: "\f193"; } -.bi-book::before { content: "\f194"; } -.bi-bookmark-check-fill::before { content: "\f195"; } -.bi-bookmark-check::before { content: "\f196"; } -.bi-bookmark-dash-fill::before { content: "\f197"; } -.bi-bookmark-dash::before { content: "\f198"; } -.bi-bookmark-fill::before { content: "\f199"; } -.bi-bookmark-heart-fill::before { content: "\f19a"; } -.bi-bookmark-heart::before { content: "\f19b"; } -.bi-bookmark-plus-fill::before { content: "\f19c"; } 
-.bi-bookmark-plus::before { content: "\f19d"; } -.bi-bookmark-star-fill::before { content: "\f19e"; } -.bi-bookmark-star::before { content: "\f19f"; } -.bi-bookmark-x-fill::before { content: "\f1a0"; } -.bi-bookmark-x::before { content: "\f1a1"; } -.bi-bookmark::before { content: "\f1a2"; } -.bi-bookmarks-fill::before { content: "\f1a3"; } -.bi-bookmarks::before { content: "\f1a4"; } -.bi-bookshelf::before { content: "\f1a5"; } -.bi-bootstrap-fill::before { content: "\f1a6"; } -.bi-bootstrap-reboot::before { content: "\f1a7"; } -.bi-bootstrap::before { content: "\f1a8"; } -.bi-border-all::before { content: "\f1a9"; } -.bi-border-bottom::before { content: "\f1aa"; } -.bi-border-center::before { content: "\f1ab"; } -.bi-border-inner::before { content: "\f1ac"; } -.bi-border-left::before { content: "\f1ad"; } -.bi-border-middle::before { content: "\f1ae"; } -.bi-border-outer::before { content: "\f1af"; } -.bi-border-right::before { content: "\f1b0"; } -.bi-border-style::before { content: "\f1b1"; } -.bi-border-top::before { content: "\f1b2"; } -.bi-border-width::before { content: "\f1b3"; } -.bi-border::before { content: "\f1b4"; } -.bi-bounding-box-circles::before { content: "\f1b5"; } -.bi-bounding-box::before { content: "\f1b6"; } -.bi-box-arrow-down-left::before { content: "\f1b7"; } -.bi-box-arrow-down-right::before { content: "\f1b8"; } -.bi-box-arrow-down::before { content: "\f1b9"; } -.bi-box-arrow-in-down-left::before { content: "\f1ba"; } -.bi-box-arrow-in-down-right::before { content: "\f1bb"; } -.bi-box-arrow-in-down::before { content: "\f1bc"; } -.bi-box-arrow-in-left::before { content: "\f1bd"; } -.bi-box-arrow-in-right::before { content: "\f1be"; } -.bi-box-arrow-in-up-left::before { content: "\f1bf"; } -.bi-box-arrow-in-up-right::before { content: "\f1c0"; } -.bi-box-arrow-in-up::before { content: "\f1c1"; } -.bi-box-arrow-left::before { content: "\f1c2"; } -.bi-box-arrow-right::before { content: "\f1c3"; } -.bi-box-arrow-up-left::before { content: "\f1c4"; } -.bi-box-arrow-up-right::before { content: "\f1c5"; } -.bi-box-arrow-up::before { content: "\f1c6"; } -.bi-box-seam::before { content: "\f1c7"; } -.bi-box::before { content: "\f1c8"; } -.bi-braces::before { content: "\f1c9"; } -.bi-bricks::before { content: "\f1ca"; } -.bi-briefcase-fill::before { content: "\f1cb"; } -.bi-briefcase::before { content: "\f1cc"; } -.bi-brightness-alt-high-fill::before { content: "\f1cd"; } -.bi-brightness-alt-high::before { content: "\f1ce"; } -.bi-brightness-alt-low-fill::before { content: "\f1cf"; } -.bi-brightness-alt-low::before { content: "\f1d0"; } -.bi-brightness-high-fill::before { content: "\f1d1"; } -.bi-brightness-high::before { content: "\f1d2"; } -.bi-brightness-low-fill::before { content: "\f1d3"; } -.bi-brightness-low::before { content: "\f1d4"; } -.bi-broadcast-pin::before { content: "\f1d5"; } -.bi-broadcast::before { content: "\f1d6"; } -.bi-brush-fill::before { content: "\f1d7"; } -.bi-brush::before { content: "\f1d8"; } -.bi-bucket-fill::before { content: "\f1d9"; } -.bi-bucket::before { content: "\f1da"; } -.bi-bug-fill::before { content: "\f1db"; } -.bi-bug::before { content: "\f1dc"; } -.bi-building::before { content: "\f1dd"; } -.bi-bullseye::before { content: "\f1de"; } -.bi-calculator-fill::before { content: "\f1df"; } -.bi-calculator::before { content: "\f1e0"; } -.bi-calendar-check-fill::before { content: "\f1e1"; } -.bi-calendar-check::before { content: "\f1e2"; } -.bi-calendar-date-fill::before { content: "\f1e3"; } -.bi-calendar-date::before { content: 
"\f1e4"; } -.bi-calendar-day-fill::before { content: "\f1e5"; } -.bi-calendar-day::before { content: "\f1e6"; } -.bi-calendar-event-fill::before { content: "\f1e7"; } -.bi-calendar-event::before { content: "\f1e8"; } -.bi-calendar-fill::before { content: "\f1e9"; } -.bi-calendar-minus-fill::before { content: "\f1ea"; } -.bi-calendar-minus::before { content: "\f1eb"; } -.bi-calendar-month-fill::before { content: "\f1ec"; } -.bi-calendar-month::before { content: "\f1ed"; } -.bi-calendar-plus-fill::before { content: "\f1ee"; } -.bi-calendar-plus::before { content: "\f1ef"; } -.bi-calendar-range-fill::before { content: "\f1f0"; } -.bi-calendar-range::before { content: "\f1f1"; } -.bi-calendar-week-fill::before { content: "\f1f2"; } -.bi-calendar-week::before { content: "\f1f3"; } -.bi-calendar-x-fill::before { content: "\f1f4"; } -.bi-calendar-x::before { content: "\f1f5"; } -.bi-calendar::before { content: "\f1f6"; } -.bi-calendar2-check-fill::before { content: "\f1f7"; } -.bi-calendar2-check::before { content: "\f1f8"; } -.bi-calendar2-date-fill::before { content: "\f1f9"; } -.bi-calendar2-date::before { content: "\f1fa"; } -.bi-calendar2-day-fill::before { content: "\f1fb"; } -.bi-calendar2-day::before { content: "\f1fc"; } -.bi-calendar2-event-fill::before { content: "\f1fd"; } -.bi-calendar2-event::before { content: "\f1fe"; } -.bi-calendar2-fill::before { content: "\f1ff"; } -.bi-calendar2-minus-fill::before { content: "\f200"; } -.bi-calendar2-minus::before { content: "\f201"; } -.bi-calendar2-month-fill::before { content: "\f202"; } -.bi-calendar2-month::before { content: "\f203"; } -.bi-calendar2-plus-fill::before { content: "\f204"; } -.bi-calendar2-plus::before { content: "\f205"; } -.bi-calendar2-range-fill::before { content: "\f206"; } -.bi-calendar2-range::before { content: "\f207"; } -.bi-calendar2-week-fill::before { content: "\f208"; } -.bi-calendar2-week::before { content: "\f209"; } -.bi-calendar2-x-fill::before { content: "\f20a"; } -.bi-calendar2-x::before { content: "\f20b"; } -.bi-calendar2::before { content: "\f20c"; } -.bi-calendar3-event-fill::before { content: "\f20d"; } -.bi-calendar3-event::before { content: "\f20e"; } -.bi-calendar3-fill::before { content: "\f20f"; } -.bi-calendar3-range-fill::before { content: "\f210"; } -.bi-calendar3-range::before { content: "\f211"; } -.bi-calendar3-week-fill::before { content: "\f212"; } -.bi-calendar3-week::before { content: "\f213"; } -.bi-calendar3::before { content: "\f214"; } -.bi-calendar4-event::before { content: "\f215"; } -.bi-calendar4-range::before { content: "\f216"; } -.bi-calendar4-week::before { content: "\f217"; } -.bi-calendar4::before { content: "\f218"; } -.bi-camera-fill::before { content: "\f219"; } -.bi-camera-reels-fill::before { content: "\f21a"; } -.bi-camera-reels::before { content: "\f21b"; } -.bi-camera-video-fill::before { content: "\f21c"; } -.bi-camera-video-off-fill::before { content: "\f21d"; } -.bi-camera-video-off::before { content: "\f21e"; } -.bi-camera-video::before { content: "\f21f"; } -.bi-camera::before { content: "\f220"; } -.bi-camera2::before { content: "\f221"; } -.bi-capslock-fill::before { content: "\f222"; } -.bi-capslock::before { content: "\f223"; } -.bi-card-checklist::before { content: "\f224"; } -.bi-card-heading::before { content: "\f225"; } -.bi-card-image::before { content: "\f226"; } -.bi-card-list::before { content: "\f227"; } -.bi-card-text::before { content: "\f228"; } -.bi-caret-down-fill::before { content: "\f229"; } -.bi-caret-down-square-fill::before { content: 
"\f22a"; } -.bi-caret-down-square::before { content: "\f22b"; } -.bi-caret-down::before { content: "\f22c"; } -.bi-caret-left-fill::before { content: "\f22d"; } -.bi-caret-left-square-fill::before { content: "\f22e"; } -.bi-caret-left-square::before { content: "\f22f"; } -.bi-caret-left::before { content: "\f230"; } -.bi-caret-right-fill::before { content: "\f231"; } -.bi-caret-right-square-fill::before { content: "\f232"; } -.bi-caret-right-square::before { content: "\f233"; } -.bi-caret-right::before { content: "\f234"; } -.bi-caret-up-fill::before { content: "\f235"; } -.bi-caret-up-square-fill::before { content: "\f236"; } -.bi-caret-up-square::before { content: "\f237"; } -.bi-caret-up::before { content: "\f238"; } -.bi-cart-check-fill::before { content: "\f239"; } -.bi-cart-check::before { content: "\f23a"; } -.bi-cart-dash-fill::before { content: "\f23b"; } -.bi-cart-dash::before { content: "\f23c"; } -.bi-cart-fill::before { content: "\f23d"; } -.bi-cart-plus-fill::before { content: "\f23e"; } -.bi-cart-plus::before { content: "\f23f"; } -.bi-cart-x-fill::before { content: "\f240"; } -.bi-cart-x::before { content: "\f241"; } -.bi-cart::before { content: "\f242"; } -.bi-cart2::before { content: "\f243"; } -.bi-cart3::before { content: "\f244"; } -.bi-cart4::before { content: "\f245"; } -.bi-cash-stack::before { content: "\f246"; } -.bi-cash::before { content: "\f247"; } -.bi-cast::before { content: "\f248"; } -.bi-chat-dots-fill::before { content: "\f249"; } -.bi-chat-dots::before { content: "\f24a"; } -.bi-chat-fill::before { content: "\f24b"; } -.bi-chat-left-dots-fill::before { content: "\f24c"; } -.bi-chat-left-dots::before { content: "\f24d"; } -.bi-chat-left-fill::before { content: "\f24e"; } -.bi-chat-left-quote-fill::before { content: "\f24f"; } -.bi-chat-left-quote::before { content: "\f250"; } -.bi-chat-left-text-fill::before { content: "\f251"; } -.bi-chat-left-text::before { content: "\f252"; } -.bi-chat-left::before { content: "\f253"; } -.bi-chat-quote-fill::before { content: "\f254"; } -.bi-chat-quote::before { content: "\f255"; } -.bi-chat-right-dots-fill::before { content: "\f256"; } -.bi-chat-right-dots::before { content: "\f257"; } -.bi-chat-right-fill::before { content: "\f258"; } -.bi-chat-right-quote-fill::before { content: "\f259"; } -.bi-chat-right-quote::before { content: "\f25a"; } -.bi-chat-right-text-fill::before { content: "\f25b"; } -.bi-chat-right-text::before { content: "\f25c"; } -.bi-chat-right::before { content: "\f25d"; } -.bi-chat-square-dots-fill::before { content: "\f25e"; } -.bi-chat-square-dots::before { content: "\f25f"; } -.bi-chat-square-fill::before { content: "\f260"; } -.bi-chat-square-quote-fill::before { content: "\f261"; } -.bi-chat-square-quote::before { content: "\f262"; } -.bi-chat-square-text-fill::before { content: "\f263"; } -.bi-chat-square-text::before { content: "\f264"; } -.bi-chat-square::before { content: "\f265"; } -.bi-chat-text-fill::before { content: "\f266"; } -.bi-chat-text::before { content: "\f267"; } -.bi-chat::before { content: "\f268"; } -.bi-check-all::before { content: "\f269"; } -.bi-check-circle-fill::before { content: "\f26a"; } -.bi-check-circle::before { content: "\f26b"; } -.bi-check-square-fill::before { content: "\f26c"; } -.bi-check-square::before { content: "\f26d"; } -.bi-check::before { content: "\f26e"; } -.bi-check2-all::before { content: "\f26f"; } -.bi-check2-circle::before { content: "\f270"; } -.bi-check2-square::before { content: "\f271"; } -.bi-check2::before { content: "\f272"; } 
-.bi-chevron-bar-contract::before { content: "\f273"; } -.bi-chevron-bar-down::before { content: "\f274"; } -.bi-chevron-bar-expand::before { content: "\f275"; } -.bi-chevron-bar-left::before { content: "\f276"; } -.bi-chevron-bar-right::before { content: "\f277"; } -.bi-chevron-bar-up::before { content: "\f278"; } -.bi-chevron-compact-down::before { content: "\f279"; } -.bi-chevron-compact-left::before { content: "\f27a"; } -.bi-chevron-compact-right::before { content: "\f27b"; } -.bi-chevron-compact-up::before { content: "\f27c"; } -.bi-chevron-contract::before { content: "\f27d"; } -.bi-chevron-double-down::before { content: "\f27e"; } -.bi-chevron-double-left::before { content: "\f27f"; } -.bi-chevron-double-right::before { content: "\f280"; } -.bi-chevron-double-up::before { content: "\f281"; } -.bi-chevron-down::before { content: "\f282"; } -.bi-chevron-expand::before { content: "\f283"; } -.bi-chevron-left::before { content: "\f284"; } -.bi-chevron-right::before { content: "\f285"; } -.bi-chevron-up::before { content: "\f286"; } -.bi-circle-fill::before { content: "\f287"; } -.bi-circle-half::before { content: "\f288"; } -.bi-circle-square::before { content: "\f289"; } -.bi-circle::before { content: "\f28a"; } -.bi-clipboard-check::before { content: "\f28b"; } -.bi-clipboard-data::before { content: "\f28c"; } -.bi-clipboard-minus::before { content: "\f28d"; } -.bi-clipboard-plus::before { content: "\f28e"; } -.bi-clipboard-x::before { content: "\f28f"; } -.bi-clipboard::before { content: "\f290"; } -.bi-clock-fill::before { content: "\f291"; } -.bi-clock-history::before { content: "\f292"; } -.bi-clock::before { content: "\f293"; } -.bi-cloud-arrow-down-fill::before { content: "\f294"; } -.bi-cloud-arrow-down::before { content: "\f295"; } -.bi-cloud-arrow-up-fill::before { content: "\f296"; } -.bi-cloud-arrow-up::before { content: "\f297"; } -.bi-cloud-check-fill::before { content: "\f298"; } -.bi-cloud-check::before { content: "\f299"; } -.bi-cloud-download-fill::before { content: "\f29a"; } -.bi-cloud-download::before { content: "\f29b"; } -.bi-cloud-drizzle-fill::before { content: "\f29c"; } -.bi-cloud-drizzle::before { content: "\f29d"; } -.bi-cloud-fill::before { content: "\f29e"; } -.bi-cloud-fog-fill::before { content: "\f29f"; } -.bi-cloud-fog::before { content: "\f2a0"; } -.bi-cloud-fog2-fill::before { content: "\f2a1"; } -.bi-cloud-fog2::before { content: "\f2a2"; } -.bi-cloud-hail-fill::before { content: "\f2a3"; } -.bi-cloud-hail::before { content: "\f2a4"; } -.bi-cloud-haze-1::before { content: "\f2a5"; } -.bi-cloud-haze-fill::before { content: "\f2a6"; } -.bi-cloud-haze::before { content: "\f2a7"; } -.bi-cloud-haze2-fill::before { content: "\f2a8"; } -.bi-cloud-lightning-fill::before { content: "\f2a9"; } -.bi-cloud-lightning-rain-fill::before { content: "\f2aa"; } -.bi-cloud-lightning-rain::before { content: "\f2ab"; } -.bi-cloud-lightning::before { content: "\f2ac"; } -.bi-cloud-minus-fill::before { content: "\f2ad"; } -.bi-cloud-minus::before { content: "\f2ae"; } -.bi-cloud-moon-fill::before { content: "\f2af"; } -.bi-cloud-moon::before { content: "\f2b0"; } -.bi-cloud-plus-fill::before { content: "\f2b1"; } -.bi-cloud-plus::before { content: "\f2b2"; } -.bi-cloud-rain-fill::before { content: "\f2b3"; } -.bi-cloud-rain-heavy-fill::before { content: "\f2b4"; } -.bi-cloud-rain-heavy::before { content: "\f2b5"; } -.bi-cloud-rain::before { content: "\f2b6"; } -.bi-cloud-slash-fill::before { content: "\f2b7"; } -.bi-cloud-slash::before { content: "\f2b8"; } 
-.bi-cloud-sleet-fill::before { content: "\f2b9"; } -.bi-cloud-sleet::before { content: "\f2ba"; } -.bi-cloud-snow-fill::before { content: "\f2bb"; } -.bi-cloud-snow::before { content: "\f2bc"; } -.bi-cloud-sun-fill::before { content: "\f2bd"; } -.bi-cloud-sun::before { content: "\f2be"; } -.bi-cloud-upload-fill::before { content: "\f2bf"; } -.bi-cloud-upload::before { content: "\f2c0"; } -.bi-cloud::before { content: "\f2c1"; } -.bi-clouds-fill::before { content: "\f2c2"; } -.bi-clouds::before { content: "\f2c3"; } -.bi-cloudy-fill::before { content: "\f2c4"; } -.bi-cloudy::before { content: "\f2c5"; } -.bi-code-slash::before { content: "\f2c6"; } -.bi-code-square::before { content: "\f2c7"; } -.bi-code::before { content: "\f2c8"; } -.bi-collection-fill::before { content: "\f2c9"; } -.bi-collection-play-fill::before { content: "\f2ca"; } -.bi-collection-play::before { content: "\f2cb"; } -.bi-collection::before { content: "\f2cc"; } -.bi-columns-gap::before { content: "\f2cd"; } -.bi-columns::before { content: "\f2ce"; } -.bi-command::before { content: "\f2cf"; } -.bi-compass-fill::before { content: "\f2d0"; } -.bi-compass::before { content: "\f2d1"; } -.bi-cone-striped::before { content: "\f2d2"; } -.bi-cone::before { content: "\f2d3"; } -.bi-controller::before { content: "\f2d4"; } -.bi-cpu-fill::before { content: "\f2d5"; } -.bi-cpu::before { content: "\f2d6"; } -.bi-credit-card-2-back-fill::before { content: "\f2d7"; } -.bi-credit-card-2-back::before { content: "\f2d8"; } -.bi-credit-card-2-front-fill::before { content: "\f2d9"; } -.bi-credit-card-2-front::before { content: "\f2da"; } -.bi-credit-card-fill::before { content: "\f2db"; } -.bi-credit-card::before { content: "\f2dc"; } -.bi-crop::before { content: "\f2dd"; } -.bi-cup-fill::before { content: "\f2de"; } -.bi-cup-straw::before { content: "\f2df"; } -.bi-cup::before { content: "\f2e0"; } -.bi-cursor-fill::before { content: "\f2e1"; } -.bi-cursor-text::before { content: "\f2e2"; } -.bi-cursor::before { content: "\f2e3"; } -.bi-dash-circle-dotted::before { content: "\f2e4"; } -.bi-dash-circle-fill::before { content: "\f2e5"; } -.bi-dash-circle::before { content: "\f2e6"; } -.bi-dash-square-dotted::before { content: "\f2e7"; } -.bi-dash-square-fill::before { content: "\f2e8"; } -.bi-dash-square::before { content: "\f2e9"; } -.bi-dash::before { content: "\f2ea"; } -.bi-diagram-2-fill::before { content: "\f2eb"; } -.bi-diagram-2::before { content: "\f2ec"; } -.bi-diagram-3-fill::before { content: "\f2ed"; } -.bi-diagram-3::before { content: "\f2ee"; } -.bi-diamond-fill::before { content: "\f2ef"; } -.bi-diamond-half::before { content: "\f2f0"; } -.bi-diamond::before { content: "\f2f1"; } -.bi-dice-1-fill::before { content: "\f2f2"; } -.bi-dice-1::before { content: "\f2f3"; } -.bi-dice-2-fill::before { content: "\f2f4"; } -.bi-dice-2::before { content: "\f2f5"; } -.bi-dice-3-fill::before { content: "\f2f6"; } -.bi-dice-3::before { content: "\f2f7"; } -.bi-dice-4-fill::before { content: "\f2f8"; } -.bi-dice-4::before { content: "\f2f9"; } -.bi-dice-5-fill::before { content: "\f2fa"; } -.bi-dice-5::before { content: "\f2fb"; } -.bi-dice-6-fill::before { content: "\f2fc"; } -.bi-dice-6::before { content: "\f2fd"; } -.bi-disc-fill::before { content: "\f2fe"; } -.bi-disc::before { content: "\f2ff"; } -.bi-discord::before { content: "\f300"; } -.bi-display-fill::before { content: "\f301"; } -.bi-display::before { content: "\f302"; } -.bi-distribute-horizontal::before { content: "\f303"; } -.bi-distribute-vertical::before { content: 
"\f304"; } -.bi-door-closed-fill::before { content: "\f305"; } -.bi-door-closed::before { content: "\f306"; } -.bi-door-open-fill::before { content: "\f307"; } -.bi-door-open::before { content: "\f308"; } -.bi-dot::before { content: "\f309"; } -.bi-download::before { content: "\f30a"; } -.bi-droplet-fill::before { content: "\f30b"; } -.bi-droplet-half::before { content: "\f30c"; } -.bi-droplet::before { content: "\f30d"; } -.bi-earbuds::before { content: "\f30e"; } -.bi-easel-fill::before { content: "\f30f"; } -.bi-easel::before { content: "\f310"; } -.bi-egg-fill::before { content: "\f311"; } -.bi-egg-fried::before { content: "\f312"; } -.bi-egg::before { content: "\f313"; } -.bi-eject-fill::before { content: "\f314"; } -.bi-eject::before { content: "\f315"; } -.bi-emoji-angry-fill::before { content: "\f316"; } -.bi-emoji-angry::before { content: "\f317"; } -.bi-emoji-dizzy-fill::before { content: "\f318"; } -.bi-emoji-dizzy::before { content: "\f319"; } -.bi-emoji-expressionless-fill::before { content: "\f31a"; } -.bi-emoji-expressionless::before { content: "\f31b"; } -.bi-emoji-frown-fill::before { content: "\f31c"; } -.bi-emoji-frown::before { content: "\f31d"; } -.bi-emoji-heart-eyes-fill::before { content: "\f31e"; } -.bi-emoji-heart-eyes::before { content: "\f31f"; } -.bi-emoji-laughing-fill::before { content: "\f320"; } -.bi-emoji-laughing::before { content: "\f321"; } -.bi-emoji-neutral-fill::before { content: "\f322"; } -.bi-emoji-neutral::before { content: "\f323"; } -.bi-emoji-smile-fill::before { content: "\f324"; } -.bi-emoji-smile-upside-down-fill::before { content: "\f325"; } -.bi-emoji-smile-upside-down::before { content: "\f326"; } -.bi-emoji-smile::before { content: "\f327"; } -.bi-emoji-sunglasses-fill::before { content: "\f328"; } -.bi-emoji-sunglasses::before { content: "\f329"; } -.bi-emoji-wink-fill::before { content: "\f32a"; } -.bi-emoji-wink::before { content: "\f32b"; } -.bi-envelope-fill::before { content: "\f32c"; } -.bi-envelope-open-fill::before { content: "\f32d"; } -.bi-envelope-open::before { content: "\f32e"; } -.bi-envelope::before { content: "\f32f"; } -.bi-eraser-fill::before { content: "\f330"; } -.bi-eraser::before { content: "\f331"; } -.bi-exclamation-circle-fill::before { content: "\f332"; } -.bi-exclamation-circle::before { content: "\f333"; } -.bi-exclamation-diamond-fill::before { content: "\f334"; } -.bi-exclamation-diamond::before { content: "\f335"; } -.bi-exclamation-octagon-fill::before { content: "\f336"; } -.bi-exclamation-octagon::before { content: "\f337"; } -.bi-exclamation-square-fill::before { content: "\f338"; } -.bi-exclamation-square::before { content: "\f339"; } -.bi-exclamation-triangle-fill::before { content: "\f33a"; } -.bi-exclamation-triangle::before { content: "\f33b"; } -.bi-exclamation::before { content: "\f33c"; } -.bi-exclude::before { content: "\f33d"; } -.bi-eye-fill::before { content: "\f33e"; } -.bi-eye-slash-fill::before { content: "\f33f"; } -.bi-eye-slash::before { content: "\f340"; } -.bi-eye::before { content: "\f341"; } -.bi-eyedropper::before { content: "\f342"; } -.bi-eyeglasses::before { content: "\f343"; } -.bi-facebook::before { content: "\f344"; } -.bi-file-arrow-down-fill::before { content: "\f345"; } -.bi-file-arrow-down::before { content: "\f346"; } -.bi-file-arrow-up-fill::before { content: "\f347"; } -.bi-file-arrow-up::before { content: "\f348"; } -.bi-file-bar-graph-fill::before { content: "\f349"; } -.bi-file-bar-graph::before { content: "\f34a"; } -.bi-file-binary-fill::before { content: 
"\f34b"; } -.bi-file-binary::before { content: "\f34c"; } -.bi-file-break-fill::before { content: "\f34d"; } -.bi-file-break::before { content: "\f34e"; } -.bi-file-check-fill::before { content: "\f34f"; } -.bi-file-check::before { content: "\f350"; } -.bi-file-code-fill::before { content: "\f351"; } -.bi-file-code::before { content: "\f352"; } -.bi-file-diff-fill::before { content: "\f353"; } -.bi-file-diff::before { content: "\f354"; } -.bi-file-earmark-arrow-down-fill::before { content: "\f355"; } -.bi-file-earmark-arrow-down::before { content: "\f356"; } -.bi-file-earmark-arrow-up-fill::before { content: "\f357"; } -.bi-file-earmark-arrow-up::before { content: "\f358"; } -.bi-file-earmark-bar-graph-fill::before { content: "\f359"; } -.bi-file-earmark-bar-graph::before { content: "\f35a"; } -.bi-file-earmark-binary-fill::before { content: "\f35b"; } -.bi-file-earmark-binary::before { content: "\f35c"; } -.bi-file-earmark-break-fill::before { content: "\f35d"; } -.bi-file-earmark-break::before { content: "\f35e"; } -.bi-file-earmark-check-fill::before { content: "\f35f"; } -.bi-file-earmark-check::before { content: "\f360"; } -.bi-file-earmark-code-fill::before { content: "\f361"; } -.bi-file-earmark-code::before { content: "\f362"; } -.bi-file-earmark-diff-fill::before { content: "\f363"; } -.bi-file-earmark-diff::before { content: "\f364"; } -.bi-file-earmark-easel-fill::before { content: "\f365"; } -.bi-file-earmark-easel::before { content: "\f366"; } -.bi-file-earmark-excel-fill::before { content: "\f367"; } -.bi-file-earmark-excel::before { content: "\f368"; } -.bi-file-earmark-fill::before { content: "\f369"; } -.bi-file-earmark-font-fill::before { content: "\f36a"; } -.bi-file-earmark-font::before { content: "\f36b"; } -.bi-file-earmark-image-fill::before { content: "\f36c"; } -.bi-file-earmark-image::before { content: "\f36d"; } -.bi-file-earmark-lock-fill::before { content: "\f36e"; } -.bi-file-earmark-lock::before { content: "\f36f"; } -.bi-file-earmark-lock2-fill::before { content: "\f370"; } -.bi-file-earmark-lock2::before { content: "\f371"; } -.bi-file-earmark-medical-fill::before { content: "\f372"; } -.bi-file-earmark-medical::before { content: "\f373"; } -.bi-file-earmark-minus-fill::before { content: "\f374"; } -.bi-file-earmark-minus::before { content: "\f375"; } -.bi-file-earmark-music-fill::before { content: "\f376"; } -.bi-file-earmark-music::before { content: "\f377"; } -.bi-file-earmark-person-fill::before { content: "\f378"; } -.bi-file-earmark-person::before { content: "\f379"; } -.bi-file-earmark-play-fill::before { content: "\f37a"; } -.bi-file-earmark-play::before { content: "\f37b"; } -.bi-file-earmark-plus-fill::before { content: "\f37c"; } -.bi-file-earmark-plus::before { content: "\f37d"; } -.bi-file-earmark-post-fill::before { content: "\f37e"; } -.bi-file-earmark-post::before { content: "\f37f"; } -.bi-file-earmark-ppt-fill::before { content: "\f380"; } -.bi-file-earmark-ppt::before { content: "\f381"; } -.bi-file-earmark-richtext-fill::before { content: "\f382"; } -.bi-file-earmark-richtext::before { content: "\f383"; } -.bi-file-earmark-ruled-fill::before { content: "\f384"; } -.bi-file-earmark-ruled::before { content: "\f385"; } -.bi-file-earmark-slides-fill::before { content: "\f386"; } -.bi-file-earmark-slides::before { content: "\f387"; } -.bi-file-earmark-spreadsheet-fill::before { content: "\f388"; } -.bi-file-earmark-spreadsheet::before { content: "\f389"; } -.bi-file-earmark-text-fill::before { content: "\f38a"; } 
-.bi-file-earmark-text::before { content: "\f38b"; } -.bi-file-earmark-word-fill::before { content: "\f38c"; } -.bi-file-earmark-word::before { content: "\f38d"; } -.bi-file-earmark-x-fill::before { content: "\f38e"; } -.bi-file-earmark-x::before { content: "\f38f"; } -.bi-file-earmark-zip-fill::before { content: "\f390"; } -.bi-file-earmark-zip::before { content: "\f391"; } -.bi-file-earmark::before { content: "\f392"; } -.bi-file-easel-fill::before { content: "\f393"; } -.bi-file-easel::before { content: "\f394"; } -.bi-file-excel-fill::before { content: "\f395"; } -.bi-file-excel::before { content: "\f396"; } -.bi-file-fill::before { content: "\f397"; } -.bi-file-font-fill::before { content: "\f398"; } -.bi-file-font::before { content: "\f399"; } -.bi-file-image-fill::before { content: "\f39a"; } -.bi-file-image::before { content: "\f39b"; } -.bi-file-lock-fill::before { content: "\f39c"; } -.bi-file-lock::before { content: "\f39d"; } -.bi-file-lock2-fill::before { content: "\f39e"; } -.bi-file-lock2::before { content: "\f39f"; } -.bi-file-medical-fill::before { content: "\f3a0"; } -.bi-file-medical::before { content: "\f3a1"; } -.bi-file-minus-fill::before { content: "\f3a2"; } -.bi-file-minus::before { content: "\f3a3"; } -.bi-file-music-fill::before { content: "\f3a4"; } -.bi-file-music::before { content: "\f3a5"; } -.bi-file-person-fill::before { content: "\f3a6"; } -.bi-file-person::before { content: "\f3a7"; } -.bi-file-play-fill::before { content: "\f3a8"; } -.bi-file-play::before { content: "\f3a9"; } -.bi-file-plus-fill::before { content: "\f3aa"; } -.bi-file-plus::before { content: "\f3ab"; } -.bi-file-post-fill::before { content: "\f3ac"; } -.bi-file-post::before { content: "\f3ad"; } -.bi-file-ppt-fill::before { content: "\f3ae"; } -.bi-file-ppt::before { content: "\f3af"; } -.bi-file-richtext-fill::before { content: "\f3b0"; } -.bi-file-richtext::before { content: "\f3b1"; } -.bi-file-ruled-fill::before { content: "\f3b2"; } -.bi-file-ruled::before { content: "\f3b3"; } -.bi-file-slides-fill::before { content: "\f3b4"; } -.bi-file-slides::before { content: "\f3b5"; } -.bi-file-spreadsheet-fill::before { content: "\f3b6"; } -.bi-file-spreadsheet::before { content: "\f3b7"; } -.bi-file-text-fill::before { content: "\f3b8"; } -.bi-file-text::before { content: "\f3b9"; } -.bi-file-word-fill::before { content: "\f3ba"; } -.bi-file-word::before { content: "\f3bb"; } -.bi-file-x-fill::before { content: "\f3bc"; } -.bi-file-x::before { content: "\f3bd"; } -.bi-file-zip-fill::before { content: "\f3be"; } -.bi-file-zip::before { content: "\f3bf"; } -.bi-file::before { content: "\f3c0"; } -.bi-files-alt::before { content: "\f3c1"; } -.bi-files::before { content: "\f3c2"; } -.bi-film::before { content: "\f3c3"; } -.bi-filter-circle-fill::before { content: "\f3c4"; } -.bi-filter-circle::before { content: "\f3c5"; } -.bi-filter-left::before { content: "\f3c6"; } -.bi-filter-right::before { content: "\f3c7"; } -.bi-filter-square-fill::before { content: "\f3c8"; } -.bi-filter-square::before { content: "\f3c9"; } -.bi-filter::before { content: "\f3ca"; } -.bi-flag-fill::before { content: "\f3cb"; } -.bi-flag::before { content: "\f3cc"; } -.bi-flower1::before { content: "\f3cd"; } -.bi-flower2::before { content: "\f3ce"; } -.bi-flower3::before { content: "\f3cf"; } -.bi-folder-check::before { content: "\f3d0"; } -.bi-folder-fill::before { content: "\f3d1"; } -.bi-folder-minus::before { content: "\f3d2"; } -.bi-folder-plus::before { content: "\f3d3"; } -.bi-folder-symlink-fill::before { 
content: "\f3d4"; } -.bi-folder-symlink::before { content: "\f3d5"; } -.bi-folder-x::before { content: "\f3d6"; } -.bi-folder::before { content: "\f3d7"; } -.bi-folder2-open::before { content: "\f3d8"; } -.bi-folder2::before { content: "\f3d9"; } -.bi-fonts::before { content: "\f3da"; } -.bi-forward-fill::before { content: "\f3db"; } -.bi-forward::before { content: "\f3dc"; } -.bi-front::before { content: "\f3dd"; } -.bi-fullscreen-exit::before { content: "\f3de"; } -.bi-fullscreen::before { content: "\f3df"; } -.bi-funnel-fill::before { content: "\f3e0"; } -.bi-funnel::before { content: "\f3e1"; } -.bi-gear-fill::before { content: "\f3e2"; } -.bi-gear-wide-connected::before { content: "\f3e3"; } -.bi-gear-wide::before { content: "\f3e4"; } -.bi-gear::before { content: "\f3e5"; } -.bi-gem::before { content: "\f3e6"; } -.bi-geo-alt-fill::before { content: "\f3e7"; } -.bi-geo-alt::before { content: "\f3e8"; } -.bi-geo-fill::before { content: "\f3e9"; } -.bi-geo::before { content: "\f3ea"; } -.bi-gift-fill::before { content: "\f3eb"; } -.bi-gift::before { content: "\f3ec"; } -.bi-github::before { content: "\f3ed"; } -.bi-globe::before { content: "\f3ee"; } -.bi-globe2::before { content: "\f3ef"; } -.bi-google::before { content: "\f3f0"; } -.bi-graph-down::before { content: "\f3f1"; } -.bi-graph-up::before { content: "\f3f2"; } -.bi-grid-1x2-fill::before { content: "\f3f3"; } -.bi-grid-1x2::before { content: "\f3f4"; } -.bi-grid-3x2-gap-fill::before { content: "\f3f5"; } -.bi-grid-3x2-gap::before { content: "\f3f6"; } -.bi-grid-3x2::before { content: "\f3f7"; } -.bi-grid-3x3-gap-fill::before { content: "\f3f8"; } -.bi-grid-3x3-gap::before { content: "\f3f9"; } -.bi-grid-3x3::before { content: "\f3fa"; } -.bi-grid-fill::before { content: "\f3fb"; } -.bi-grid::before { content: "\f3fc"; } -.bi-grip-horizontal::before { content: "\f3fd"; } -.bi-grip-vertical::before { content: "\f3fe"; } -.bi-hammer::before { content: "\f3ff"; } -.bi-hand-index-fill::before { content: "\f400"; } -.bi-hand-index-thumb-fill::before { content: "\f401"; } -.bi-hand-index-thumb::before { content: "\f402"; } -.bi-hand-index::before { content: "\f403"; } -.bi-hand-thumbs-down-fill::before { content: "\f404"; } -.bi-hand-thumbs-down::before { content: "\f405"; } -.bi-hand-thumbs-up-fill::before { content: "\f406"; } -.bi-hand-thumbs-up::before { content: "\f407"; } -.bi-handbag-fill::before { content: "\f408"; } -.bi-handbag::before { content: "\f409"; } -.bi-hash::before { content: "\f40a"; } -.bi-hdd-fill::before { content: "\f40b"; } -.bi-hdd-network-fill::before { content: "\f40c"; } -.bi-hdd-network::before { content: "\f40d"; } -.bi-hdd-rack-fill::before { content: "\f40e"; } -.bi-hdd-rack::before { content: "\f40f"; } -.bi-hdd-stack-fill::before { content: "\f410"; } -.bi-hdd-stack::before { content: "\f411"; } -.bi-hdd::before { content: "\f412"; } -.bi-headphones::before { content: "\f413"; } -.bi-headset::before { content: "\f414"; } -.bi-heart-fill::before { content: "\f415"; } -.bi-heart-half::before { content: "\f416"; } -.bi-heart::before { content: "\f417"; } -.bi-heptagon-fill::before { content: "\f418"; } -.bi-heptagon-half::before { content: "\f419"; } -.bi-heptagon::before { content: "\f41a"; } -.bi-hexagon-fill::before { content: "\f41b"; } -.bi-hexagon-half::before { content: "\f41c"; } -.bi-hexagon::before { content: "\f41d"; } -.bi-hourglass-bottom::before { content: "\f41e"; } -.bi-hourglass-split::before { content: "\f41f"; } -.bi-hourglass-top::before { content: "\f420"; } -.bi-hourglass::before 
{ content: "\f421"; } -.bi-house-door-fill::before { content: "\f422"; } -.bi-house-door::before { content: "\f423"; } -.bi-house-fill::before { content: "\f424"; } -.bi-house::before { content: "\f425"; } -.bi-hr::before { content: "\f426"; } -.bi-hurricane::before { content: "\f427"; } -.bi-image-alt::before { content: "\f428"; } -.bi-image-fill::before { content: "\f429"; } -.bi-image::before { content: "\f42a"; } -.bi-images::before { content: "\f42b"; } -.bi-inbox-fill::before { content: "\f42c"; } -.bi-inbox::before { content: "\f42d"; } -.bi-inboxes-fill::before { content: "\f42e"; } -.bi-inboxes::before { content: "\f42f"; } -.bi-info-circle-fill::before { content: "\f430"; } -.bi-info-circle::before { content: "\f431"; } -.bi-info-square-fill::before { content: "\f432"; } -.bi-info-square::before { content: "\f433"; } -.bi-info::before { content: "\f434"; } -.bi-input-cursor-text::before { content: "\f435"; } -.bi-input-cursor::before { content: "\f436"; } -.bi-instagram::before { content: "\f437"; } -.bi-intersect::before { content: "\f438"; } -.bi-journal-album::before { content: "\f439"; } -.bi-journal-arrow-down::before { content: "\f43a"; } -.bi-journal-arrow-up::before { content: "\f43b"; } -.bi-journal-bookmark-fill::before { content: "\f43c"; } -.bi-journal-bookmark::before { content: "\f43d"; } -.bi-journal-check::before { content: "\f43e"; } -.bi-journal-code::before { content: "\f43f"; } -.bi-journal-medical::before { content: "\f440"; } -.bi-journal-minus::before { content: "\f441"; } -.bi-journal-plus::before { content: "\f442"; } -.bi-journal-richtext::before { content: "\f443"; } -.bi-journal-text::before { content: "\f444"; } -.bi-journal-x::before { content: "\f445"; } -.bi-journal::before { content: "\f446"; } -.bi-journals::before { content: "\f447"; } -.bi-joystick::before { content: "\f448"; } -.bi-justify-left::before { content: "\f449"; } -.bi-justify-right::before { content: "\f44a"; } -.bi-justify::before { content: "\f44b"; } -.bi-kanban-fill::before { content: "\f44c"; } -.bi-kanban::before { content: "\f44d"; } -.bi-key-fill::before { content: "\f44e"; } -.bi-key::before { content: "\f44f"; } -.bi-keyboard-fill::before { content: "\f450"; } -.bi-keyboard::before { content: "\f451"; } -.bi-ladder::before { content: "\f452"; } -.bi-lamp-fill::before { content: "\f453"; } -.bi-lamp::before { content: "\f454"; } -.bi-laptop-fill::before { content: "\f455"; } -.bi-laptop::before { content: "\f456"; } -.bi-layer-backward::before { content: "\f457"; } -.bi-layer-forward::before { content: "\f458"; } -.bi-layers-fill::before { content: "\f459"; } -.bi-layers-half::before { content: "\f45a"; } -.bi-layers::before { content: "\f45b"; } -.bi-layout-sidebar-inset-reverse::before { content: "\f45c"; } -.bi-layout-sidebar-inset::before { content: "\f45d"; } -.bi-layout-sidebar-reverse::before { content: "\f45e"; } -.bi-layout-sidebar::before { content: "\f45f"; } -.bi-layout-split::before { content: "\f460"; } -.bi-layout-text-sidebar-reverse::before { content: "\f461"; } -.bi-layout-text-sidebar::before { content: "\f462"; } -.bi-layout-text-window-reverse::before { content: "\f463"; } -.bi-layout-text-window::before { content: "\f464"; } -.bi-layout-three-columns::before { content: "\f465"; } -.bi-layout-wtf::before { content: "\f466"; } -.bi-life-preserver::before { content: "\f467"; } -.bi-lightbulb-fill::before { content: "\f468"; } -.bi-lightbulb-off-fill::before { content: "\f469"; } -.bi-lightbulb-off::before { content: "\f46a"; } -.bi-lightbulb::before { 
content: "\f46b"; } -.bi-lightning-charge-fill::before { content: "\f46c"; } -.bi-lightning-charge::before { content: "\f46d"; } -.bi-lightning-fill::before { content: "\f46e"; } -.bi-lightning::before { content: "\f46f"; } -.bi-link-45deg::before { content: "\f470"; } -.bi-link::before { content: "\f471"; } -.bi-linkedin::before { content: "\f472"; } -.bi-list-check::before { content: "\f473"; } -.bi-list-nested::before { content: "\f474"; } -.bi-list-ol::before { content: "\f475"; } -.bi-list-stars::before { content: "\f476"; } -.bi-list-task::before { content: "\f477"; } -.bi-list-ul::before { content: "\f478"; } -.bi-list::before { content: "\f479"; } -.bi-lock-fill::before { content: "\f47a"; } -.bi-lock::before { content: "\f47b"; } -.bi-mailbox::before { content: "\f47c"; } -.bi-mailbox2::before { content: "\f47d"; } -.bi-map-fill::before { content: "\f47e"; } -.bi-map::before { content: "\f47f"; } -.bi-markdown-fill::before { content: "\f480"; } -.bi-markdown::before { content: "\f481"; } -.bi-mask::before { content: "\f482"; } -.bi-megaphone-fill::before { content: "\f483"; } -.bi-megaphone::before { content: "\f484"; } -.bi-menu-app-fill::before { content: "\f485"; } -.bi-menu-app::before { content: "\f486"; } -.bi-menu-button-fill::before { content: "\f487"; } -.bi-menu-button-wide-fill::before { content: "\f488"; } -.bi-menu-button-wide::before { content: "\f489"; } -.bi-menu-button::before { content: "\f48a"; } -.bi-menu-down::before { content: "\f48b"; } -.bi-menu-up::before { content: "\f48c"; } -.bi-mic-fill::before { content: "\f48d"; } -.bi-mic-mute-fill::before { content: "\f48e"; } -.bi-mic-mute::before { content: "\f48f"; } -.bi-mic::before { content: "\f490"; } -.bi-minecart-loaded::before { content: "\f491"; } -.bi-minecart::before { content: "\f492"; } -.bi-moisture::before { content: "\f493"; } -.bi-moon-fill::before { content: "\f494"; } -.bi-moon-stars-fill::before { content: "\f495"; } -.bi-moon-stars::before { content: "\f496"; } -.bi-moon::before { content: "\f497"; } -.bi-mouse-fill::before { content: "\f498"; } -.bi-mouse::before { content: "\f499"; } -.bi-mouse2-fill::before { content: "\f49a"; } -.bi-mouse2::before { content: "\f49b"; } -.bi-mouse3-fill::before { content: "\f49c"; } -.bi-mouse3::before { content: "\f49d"; } -.bi-music-note-beamed::before { content: "\f49e"; } -.bi-music-note-list::before { content: "\f49f"; } -.bi-music-note::before { content: "\f4a0"; } -.bi-music-player-fill::before { content: "\f4a1"; } -.bi-music-player::before { content: "\f4a2"; } -.bi-newspaper::before { content: "\f4a3"; } -.bi-node-minus-fill::before { content: "\f4a4"; } -.bi-node-minus::before { content: "\f4a5"; } -.bi-node-plus-fill::before { content: "\f4a6"; } -.bi-node-plus::before { content: "\f4a7"; } -.bi-nut-fill::before { content: "\f4a8"; } -.bi-nut::before { content: "\f4a9"; } -.bi-octagon-fill::before { content: "\f4aa"; } -.bi-octagon-half::before { content: "\f4ab"; } -.bi-octagon::before { content: "\f4ac"; } -.bi-option::before { content: "\f4ad"; } -.bi-outlet::before { content: "\f4ae"; } -.bi-paint-bucket::before { content: "\f4af"; } -.bi-palette-fill::before { content: "\f4b0"; } -.bi-palette::before { content: "\f4b1"; } -.bi-palette2::before { content: "\f4b2"; } -.bi-paperclip::before { content: "\f4b3"; } -.bi-paragraph::before { content: "\f4b4"; } -.bi-patch-check-fill::before { content: "\f4b5"; } -.bi-patch-check::before { content: "\f4b6"; } -.bi-patch-exclamation-fill::before { content: "\f4b7"; } -.bi-patch-exclamation::before { 
content: "\f4b8"; } -.bi-patch-minus-fill::before { content: "\f4b9"; } -.bi-patch-minus::before { content: "\f4ba"; } -.bi-patch-plus-fill::before { content: "\f4bb"; } -.bi-patch-plus::before { content: "\f4bc"; } -.bi-patch-question-fill::before { content: "\f4bd"; } -.bi-patch-question::before { content: "\f4be"; } -.bi-pause-btn-fill::before { content: "\f4bf"; } -.bi-pause-btn::before { content: "\f4c0"; } -.bi-pause-circle-fill::before { content: "\f4c1"; } -.bi-pause-circle::before { content: "\f4c2"; } -.bi-pause-fill::before { content: "\f4c3"; } -.bi-pause::before { content: "\f4c4"; } -.bi-peace-fill::before { content: "\f4c5"; } -.bi-peace::before { content: "\f4c6"; } -.bi-pen-fill::before { content: "\f4c7"; } -.bi-pen::before { content: "\f4c8"; } -.bi-pencil-fill::before { content: "\f4c9"; } -.bi-pencil-square::before { content: "\f4ca"; } -.bi-pencil::before { content: "\f4cb"; } -.bi-pentagon-fill::before { content: "\f4cc"; } -.bi-pentagon-half::before { content: "\f4cd"; } -.bi-pentagon::before { content: "\f4ce"; } -.bi-people-fill::before { content: "\f4cf"; } -.bi-people::before { content: "\f4d0"; } -.bi-percent::before { content: "\f4d1"; } -.bi-person-badge-fill::before { content: "\f4d2"; } -.bi-person-badge::before { content: "\f4d3"; } -.bi-person-bounding-box::before { content: "\f4d4"; } -.bi-person-check-fill::before { content: "\f4d5"; } -.bi-person-check::before { content: "\f4d6"; } -.bi-person-circle::before { content: "\f4d7"; } -.bi-person-dash-fill::before { content: "\f4d8"; } -.bi-person-dash::before { content: "\f4d9"; } -.bi-person-fill::before { content: "\f4da"; } -.bi-person-lines-fill::before { content: "\f4db"; } -.bi-person-plus-fill::before { content: "\f4dc"; } -.bi-person-plus::before { content: "\f4dd"; } -.bi-person-square::before { content: "\f4de"; } -.bi-person-x-fill::before { content: "\f4df"; } -.bi-person-x::before { content: "\f4e0"; } -.bi-person::before { content: "\f4e1"; } -.bi-phone-fill::before { content: "\f4e2"; } -.bi-phone-landscape-fill::before { content: "\f4e3"; } -.bi-phone-landscape::before { content: "\f4e4"; } -.bi-phone-vibrate-fill::before { content: "\f4e5"; } -.bi-phone-vibrate::before { content: "\f4e6"; } -.bi-phone::before { content: "\f4e7"; } -.bi-pie-chart-fill::before { content: "\f4e8"; } -.bi-pie-chart::before { content: "\f4e9"; } -.bi-pin-angle-fill::before { content: "\f4ea"; } -.bi-pin-angle::before { content: "\f4eb"; } -.bi-pin-fill::before { content: "\f4ec"; } -.bi-pin::before { content: "\f4ed"; } -.bi-pip-fill::before { content: "\f4ee"; } -.bi-pip::before { content: "\f4ef"; } -.bi-play-btn-fill::before { content: "\f4f0"; } -.bi-play-btn::before { content: "\f4f1"; } -.bi-play-circle-fill::before { content: "\f4f2"; } -.bi-play-circle::before { content: "\f4f3"; } -.bi-play-fill::before { content: "\f4f4"; } -.bi-play::before { content: "\f4f5"; } -.bi-plug-fill::before { content: "\f4f6"; } -.bi-plug::before { content: "\f4f7"; } -.bi-plus-circle-dotted::before { content: "\f4f8"; } -.bi-plus-circle-fill::before { content: "\f4f9"; } -.bi-plus-circle::before { content: "\f4fa"; } -.bi-plus-square-dotted::before { content: "\f4fb"; } -.bi-plus-square-fill::before { content: "\f4fc"; } -.bi-plus-square::before { content: "\f4fd"; } -.bi-plus::before { content: "\f4fe"; } -.bi-power::before { content: "\f4ff"; } -.bi-printer-fill::before { content: "\f500"; } -.bi-printer::before { content: "\f501"; } -.bi-puzzle-fill::before { content: "\f502"; } -.bi-puzzle::before { content: "\f503"; 
} -.bi-question-circle-fill::before { content: "\f504"; } -.bi-question-circle::before { content: "\f505"; } -.bi-question-diamond-fill::before { content: "\f506"; } -.bi-question-diamond::before { content: "\f507"; } -.bi-question-octagon-fill::before { content: "\f508"; } -.bi-question-octagon::before { content: "\f509"; } -.bi-question-square-fill::before { content: "\f50a"; } -.bi-question-square::before { content: "\f50b"; } -.bi-question::before { content: "\f50c"; } -.bi-rainbow::before { content: "\f50d"; } -.bi-receipt-cutoff::before { content: "\f50e"; } -.bi-receipt::before { content: "\f50f"; } -.bi-reception-0::before { content: "\f510"; } -.bi-reception-1::before { content: "\f511"; } -.bi-reception-2::before { content: "\f512"; } -.bi-reception-3::before { content: "\f513"; } -.bi-reception-4::before { content: "\f514"; } -.bi-record-btn-fill::before { content: "\f515"; } -.bi-record-btn::before { content: "\f516"; } -.bi-record-circle-fill::before { content: "\f517"; } -.bi-record-circle::before { content: "\f518"; } -.bi-record-fill::before { content: "\f519"; } -.bi-record::before { content: "\f51a"; } -.bi-record2-fill::before { content: "\f51b"; } -.bi-record2::before { content: "\f51c"; } -.bi-reply-all-fill::before { content: "\f51d"; } -.bi-reply-all::before { content: "\f51e"; } -.bi-reply-fill::before { content: "\f51f"; } -.bi-reply::before { content: "\f520"; } -.bi-rss-fill::before { content: "\f521"; } -.bi-rss::before { content: "\f522"; } -.bi-rulers::before { content: "\f523"; } -.bi-save-fill::before { content: "\f524"; } -.bi-save::before { content: "\f525"; } -.bi-save2-fill::before { content: "\f526"; } -.bi-save2::before { content: "\f527"; } -.bi-scissors::before { content: "\f528"; } -.bi-screwdriver::before { content: "\f529"; } -.bi-search::before { content: "\f52a"; } -.bi-segmented-nav::before { content: "\f52b"; } -.bi-server::before { content: "\f52c"; } -.bi-share-fill::before { content: "\f52d"; } -.bi-share::before { content: "\f52e"; } -.bi-shield-check::before { content: "\f52f"; } -.bi-shield-exclamation::before { content: "\f530"; } -.bi-shield-fill-check::before { content: "\f531"; } -.bi-shield-fill-exclamation::before { content: "\f532"; } -.bi-shield-fill-minus::before { content: "\f533"; } -.bi-shield-fill-plus::before { content: "\f534"; } -.bi-shield-fill-x::before { content: "\f535"; } -.bi-shield-fill::before { content: "\f536"; } -.bi-shield-lock-fill::before { content: "\f537"; } -.bi-shield-lock::before { content: "\f538"; } -.bi-shield-minus::before { content: "\f539"; } -.bi-shield-plus::before { content: "\f53a"; } -.bi-shield-shaded::before { content: "\f53b"; } -.bi-shield-slash-fill::before { content: "\f53c"; } -.bi-shield-slash::before { content: "\f53d"; } -.bi-shield-x::before { content: "\f53e"; } -.bi-shield::before { content: "\f53f"; } -.bi-shift-fill::before { content: "\f540"; } -.bi-shift::before { content: "\f541"; } -.bi-shop-window::before { content: "\f542"; } -.bi-shop::before { content: "\f543"; } -.bi-shuffle::before { content: "\f544"; } -.bi-signpost-2-fill::before { content: "\f545"; } -.bi-signpost-2::before { content: "\f546"; } -.bi-signpost-fill::before { content: "\f547"; } -.bi-signpost-split-fill::before { content: "\f548"; } -.bi-signpost-split::before { content: "\f549"; } -.bi-signpost::before { content: "\f54a"; } -.bi-sim-fill::before { content: "\f54b"; } -.bi-sim::before { content: "\f54c"; } -.bi-skip-backward-btn-fill::before { content: "\f54d"; } -.bi-skip-backward-btn::before { 
content: "\f54e"; } -.bi-skip-backward-circle-fill::before { content: "\f54f"; } -.bi-skip-backward-circle::before { content: "\f550"; } -.bi-skip-backward-fill::before { content: "\f551"; } -.bi-skip-backward::before { content: "\f552"; } -.bi-skip-end-btn-fill::before { content: "\f553"; } -.bi-skip-end-btn::before { content: "\f554"; } -.bi-skip-end-circle-fill::before { content: "\f555"; } -.bi-skip-end-circle::before { content: "\f556"; } -.bi-skip-end-fill::before { content: "\f557"; } -.bi-skip-end::before { content: "\f558"; } -.bi-skip-forward-btn-fill::before { content: "\f559"; } -.bi-skip-forward-btn::before { content: "\f55a"; } -.bi-skip-forward-circle-fill::before { content: "\f55b"; } -.bi-skip-forward-circle::before { content: "\f55c"; } -.bi-skip-forward-fill::before { content: "\f55d"; } -.bi-skip-forward::before { content: "\f55e"; } -.bi-skip-start-btn-fill::before { content: "\f55f"; } -.bi-skip-start-btn::before { content: "\f560"; } -.bi-skip-start-circle-fill::before { content: "\f561"; } -.bi-skip-start-circle::before { content: "\f562"; } -.bi-skip-start-fill::before { content: "\f563"; } -.bi-skip-start::before { content: "\f564"; } -.bi-slack::before { content: "\f565"; } -.bi-slash-circle-fill::before { content: "\f566"; } -.bi-slash-circle::before { content: "\f567"; } -.bi-slash-square-fill::before { content: "\f568"; } -.bi-slash-square::before { content: "\f569"; } -.bi-slash::before { content: "\f56a"; } -.bi-sliders::before { content: "\f56b"; } -.bi-smartwatch::before { content: "\f56c"; } -.bi-snow::before { content: "\f56d"; } -.bi-snow2::before { content: "\f56e"; } -.bi-snow3::before { content: "\f56f"; } -.bi-sort-alpha-down-alt::before { content: "\f570"; } -.bi-sort-alpha-down::before { content: "\f571"; } -.bi-sort-alpha-up-alt::before { content: "\f572"; } -.bi-sort-alpha-up::before { content: "\f573"; } -.bi-sort-down-alt::before { content: "\f574"; } -.bi-sort-down::before { content: "\f575"; } -.bi-sort-numeric-down-alt::before { content: "\f576"; } -.bi-sort-numeric-down::before { content: "\f577"; } -.bi-sort-numeric-up-alt::before { content: "\f578"; } -.bi-sort-numeric-up::before { content: "\f579"; } -.bi-sort-up-alt::before { content: "\f57a"; } -.bi-sort-up::before { content: "\f57b"; } -.bi-soundwave::before { content: "\f57c"; } -.bi-speaker-fill::before { content: "\f57d"; } -.bi-speaker::before { content: "\f57e"; } -.bi-speedometer::before { content: "\f57f"; } -.bi-speedometer2::before { content: "\f580"; } -.bi-spellcheck::before { content: "\f581"; } -.bi-square-fill::before { content: "\f582"; } -.bi-square-half::before { content: "\f583"; } -.bi-square::before { content: "\f584"; } -.bi-stack::before { content: "\f585"; } -.bi-star-fill::before { content: "\f586"; } -.bi-star-half::before { content: "\f587"; } -.bi-star::before { content: "\f588"; } -.bi-stars::before { content: "\f589"; } -.bi-stickies-fill::before { content: "\f58a"; } -.bi-stickies::before { content: "\f58b"; } -.bi-sticky-fill::before { content: "\f58c"; } -.bi-sticky::before { content: "\f58d"; } -.bi-stop-btn-fill::before { content: "\f58e"; } -.bi-stop-btn::before { content: "\f58f"; } -.bi-stop-circle-fill::before { content: "\f590"; } -.bi-stop-circle::before { content: "\f591"; } -.bi-stop-fill::before { content: "\f592"; } -.bi-stop::before { content: "\f593"; } -.bi-stoplights-fill::before { content: "\f594"; } -.bi-stoplights::before { content: "\f595"; } -.bi-stopwatch-fill::before { content: "\f596"; } -.bi-stopwatch::before { content: 
"\f597"; } -.bi-subtract::before { content: "\f598"; } -.bi-suit-club-fill::before { content: "\f599"; } -.bi-suit-club::before { content: "\f59a"; } -.bi-suit-diamond-fill::before { content: "\f59b"; } -.bi-suit-diamond::before { content: "\f59c"; } -.bi-suit-heart-fill::before { content: "\f59d"; } -.bi-suit-heart::before { content: "\f59e"; } -.bi-suit-spade-fill::before { content: "\f59f"; } -.bi-suit-spade::before { content: "\f5a0"; } -.bi-sun-fill::before { content: "\f5a1"; } -.bi-sun::before { content: "\f5a2"; } -.bi-sunglasses::before { content: "\f5a3"; } -.bi-sunrise-fill::before { content: "\f5a4"; } -.bi-sunrise::before { content: "\f5a5"; } -.bi-sunset-fill::before { content: "\f5a6"; } -.bi-sunset::before { content: "\f5a7"; } -.bi-symmetry-horizontal::before { content: "\f5a8"; } -.bi-symmetry-vertical::before { content: "\f5a9"; } -.bi-table::before { content: "\f5aa"; } -.bi-tablet-fill::before { content: "\f5ab"; } -.bi-tablet-landscape-fill::before { content: "\f5ac"; } -.bi-tablet-landscape::before { content: "\f5ad"; } -.bi-tablet::before { content: "\f5ae"; } -.bi-tag-fill::before { content: "\f5af"; } -.bi-tag::before { content: "\f5b0"; } -.bi-tags-fill::before { content: "\f5b1"; } -.bi-tags::before { content: "\f5b2"; } -.bi-telegram::before { content: "\f5b3"; } -.bi-telephone-fill::before { content: "\f5b4"; } -.bi-telephone-forward-fill::before { content: "\f5b5"; } -.bi-telephone-forward::before { content: "\f5b6"; } -.bi-telephone-inbound-fill::before { content: "\f5b7"; } -.bi-telephone-inbound::before { content: "\f5b8"; } -.bi-telephone-minus-fill::before { content: "\f5b9"; } -.bi-telephone-minus::before { content: "\f5ba"; } -.bi-telephone-outbound-fill::before { content: "\f5bb"; } -.bi-telephone-outbound::before { content: "\f5bc"; } -.bi-telephone-plus-fill::before { content: "\f5bd"; } -.bi-telephone-plus::before { content: "\f5be"; } -.bi-telephone-x-fill::before { content: "\f5bf"; } -.bi-telephone-x::before { content: "\f5c0"; } -.bi-telephone::before { content: "\f5c1"; } -.bi-terminal-fill::before { content: "\f5c2"; } -.bi-terminal::before { content: "\f5c3"; } -.bi-text-center::before { content: "\f5c4"; } -.bi-text-indent-left::before { content: "\f5c5"; } -.bi-text-indent-right::before { content: "\f5c6"; } -.bi-text-left::before { content: "\f5c7"; } -.bi-text-paragraph::before { content: "\f5c8"; } -.bi-text-right::before { content: "\f5c9"; } -.bi-textarea-resize::before { content: "\f5ca"; } -.bi-textarea-t::before { content: "\f5cb"; } -.bi-textarea::before { content: "\f5cc"; } -.bi-thermometer-half::before { content: "\f5cd"; } -.bi-thermometer-high::before { content: "\f5ce"; } -.bi-thermometer-low::before { content: "\f5cf"; } -.bi-thermometer-snow::before { content: "\f5d0"; } -.bi-thermometer-sun::before { content: "\f5d1"; } -.bi-thermometer::before { content: "\f5d2"; } -.bi-three-dots-vertical::before { content: "\f5d3"; } -.bi-three-dots::before { content: "\f5d4"; } -.bi-toggle-off::before { content: "\f5d5"; } -.bi-toggle-on::before { content: "\f5d6"; } -.bi-toggle2-off::before { content: "\f5d7"; } -.bi-toggle2-on::before { content: "\f5d8"; } -.bi-toggles::before { content: "\f5d9"; } -.bi-toggles2::before { content: "\f5da"; } -.bi-tools::before { content: "\f5db"; } -.bi-tornado::before { content: "\f5dc"; } -.bi-trash-fill::before { content: "\f5dd"; } -.bi-trash::before { content: "\f5de"; } -.bi-trash2-fill::before { content: "\f5df"; } -.bi-trash2::before { content: "\f5e0"; } -.bi-tree-fill::before { content: 
"\f5e1"; } -.bi-tree::before { content: "\f5e2"; } -.bi-triangle-fill::before { content: "\f5e3"; } -.bi-triangle-half::before { content: "\f5e4"; } -.bi-triangle::before { content: "\f5e5"; } -.bi-trophy-fill::before { content: "\f5e6"; } -.bi-trophy::before { content: "\f5e7"; } -.bi-tropical-storm::before { content: "\f5e8"; } -.bi-truck-flatbed::before { content: "\f5e9"; } -.bi-truck::before { content: "\f5ea"; } -.bi-tsunami::before { content: "\f5eb"; } -.bi-tv-fill::before { content: "\f5ec"; } -.bi-tv::before { content: "\f5ed"; } -.bi-twitch::before { content: "\f5ee"; } -.bi-twitter::before { content: "\f5ef"; } -.bi-type-bold::before { content: "\f5f0"; } -.bi-type-h1::before { content: "\f5f1"; } -.bi-type-h2::before { content: "\f5f2"; } -.bi-type-h3::before { content: "\f5f3"; } -.bi-type-italic::before { content: "\f5f4"; } -.bi-type-strikethrough::before { content: "\f5f5"; } -.bi-type-underline::before { content: "\f5f6"; } -.bi-type::before { content: "\f5f7"; } -.bi-ui-checks-grid::before { content: "\f5f8"; } -.bi-ui-checks::before { content: "\f5f9"; } -.bi-ui-radios-grid::before { content: "\f5fa"; } -.bi-ui-radios::before { content: "\f5fb"; } -.bi-umbrella-fill::before { content: "\f5fc"; } -.bi-umbrella::before { content: "\f5fd"; } -.bi-union::before { content: "\f5fe"; } -.bi-unlock-fill::before { content: "\f5ff"; } -.bi-unlock::before { content: "\f600"; } -.bi-upc-scan::before { content: "\f601"; } -.bi-upc::before { content: "\f602"; } -.bi-upload::before { content: "\f603"; } -.bi-vector-pen::before { content: "\f604"; } -.bi-view-list::before { content: "\f605"; } -.bi-view-stacked::before { content: "\f606"; } -.bi-vinyl-fill::before { content: "\f607"; } -.bi-vinyl::before { content: "\f608"; } -.bi-voicemail::before { content: "\f609"; } -.bi-volume-down-fill::before { content: "\f60a"; } -.bi-volume-down::before { content: "\f60b"; } -.bi-volume-mute-fill::before { content: "\f60c"; } -.bi-volume-mute::before { content: "\f60d"; } -.bi-volume-off-fill::before { content: "\f60e"; } -.bi-volume-off::before { content: "\f60f"; } -.bi-volume-up-fill::before { content: "\f610"; } -.bi-volume-up::before { content: "\f611"; } -.bi-vr::before { content: "\f612"; } -.bi-wallet-fill::before { content: "\f613"; } -.bi-wallet::before { content: "\f614"; } -.bi-wallet2::before { content: "\f615"; } -.bi-watch::before { content: "\f616"; } -.bi-water::before { content: "\f617"; } -.bi-whatsapp::before { content: "\f618"; } -.bi-wifi-1::before { content: "\f619"; } -.bi-wifi-2::before { content: "\f61a"; } -.bi-wifi-off::before { content: "\f61b"; } -.bi-wifi::before { content: "\f61c"; } -.bi-wind::before { content: "\f61d"; } -.bi-window-dock::before { content: "\f61e"; } -.bi-window-sidebar::before { content: "\f61f"; } -.bi-window::before { content: "\f620"; } -.bi-wrench::before { content: "\f621"; } -.bi-x-circle-fill::before { content: "\f622"; } -.bi-x-circle::before { content: "\f623"; } -.bi-x-diamond-fill::before { content: "\f624"; } -.bi-x-diamond::before { content: "\f625"; } -.bi-x-octagon-fill::before { content: "\f626"; } -.bi-x-octagon::before { content: "\f627"; } -.bi-x-square-fill::before { content: "\f628"; } -.bi-x-square::before { content: "\f629"; } -.bi-x::before { content: "\f62a"; } -.bi-youtube::before { content: "\f62b"; } -.bi-zoom-in::before { content: "\f62c"; } -.bi-zoom-out::before { content: "\f62d"; } -.bi-bank::before { content: "\f62e"; } -.bi-bank2::before { content: "\f62f"; } -.bi-bell-slash-fill::before { content: "\f630"; } 
-.bi-bell-slash::before { content: "\f631"; } -.bi-cash-coin::before { content: "\f632"; } -.bi-check-lg::before { content: "\f633"; } -.bi-coin::before { content: "\f634"; } -.bi-currency-bitcoin::before { content: "\f635"; } -.bi-currency-dollar::before { content: "\f636"; } -.bi-currency-euro::before { content: "\f637"; } -.bi-currency-exchange::before { content: "\f638"; } -.bi-currency-pound::before { content: "\f639"; } -.bi-currency-yen::before { content: "\f63a"; } -.bi-dash-lg::before { content: "\f63b"; } -.bi-exclamation-lg::before { content: "\f63c"; } -.bi-file-earmark-pdf-fill::before { content: "\f63d"; } -.bi-file-earmark-pdf::before { content: "\f63e"; } -.bi-file-pdf-fill::before { content: "\f63f"; } -.bi-file-pdf::before { content: "\f640"; } -.bi-gender-ambiguous::before { content: "\f641"; } -.bi-gender-female::before { content: "\f642"; } -.bi-gender-male::before { content: "\f643"; } -.bi-gender-trans::before { content: "\f644"; } -.bi-headset-vr::before { content: "\f645"; } -.bi-info-lg::before { content: "\f646"; } -.bi-mastodon::before { content: "\f647"; } -.bi-messenger::before { content: "\f648"; } -.bi-piggy-bank-fill::before { content: "\f649"; } -.bi-piggy-bank::before { content: "\f64a"; } -.bi-pin-map-fill::before { content: "\f64b"; } -.bi-pin-map::before { content: "\f64c"; } -.bi-plus-lg::before { content: "\f64d"; } -.bi-question-lg::before { content: "\f64e"; } -.bi-recycle::before { content: "\f64f"; } -.bi-reddit::before { content: "\f650"; } -.bi-safe-fill::before { content: "\f651"; } -.bi-safe2-fill::before { content: "\f652"; } -.bi-safe2::before { content: "\f653"; } -.bi-sd-card-fill::before { content: "\f654"; } -.bi-sd-card::before { content: "\f655"; } -.bi-skype::before { content: "\f656"; } -.bi-slash-lg::before { content: "\f657"; } -.bi-translate::before { content: "\f658"; } -.bi-x-lg::before { content: "\f659"; } -.bi-safe::before { content: "\f65a"; } -.bi-apple::before { content: "\f65b"; } -.bi-microsoft::before { content: "\f65d"; } -.bi-windows::before { content: "\f65e"; } -.bi-behance::before { content: "\f65c"; } -.bi-dribbble::before { content: "\f65f"; } -.bi-line::before { content: "\f660"; } -.bi-medium::before { content: "\f661"; } -.bi-paypal::before { content: "\f662"; } -.bi-pinterest::before { content: "\f663"; } -.bi-signal::before { content: "\f664"; } -.bi-snapchat::before { content: "\f665"; } -.bi-spotify::before { content: "\f666"; } -.bi-stack-overflow::before { content: "\f667"; } -.bi-strava::before { content: "\f668"; } -.bi-wordpress::before { content: "\f669"; } -.bi-vimeo::before { content: "\f66a"; } -.bi-activity::before { content: "\f66b"; } -.bi-easel2-fill::before { content: "\f66c"; } -.bi-easel2::before { content: "\f66d"; } -.bi-easel3-fill::before { content: "\f66e"; } -.bi-easel3::before { content: "\f66f"; } -.bi-fan::before { content: "\f670"; } -.bi-fingerprint::before { content: "\f671"; } -.bi-graph-down-arrow::before { content: "\f672"; } -.bi-graph-up-arrow::before { content: "\f673"; } -.bi-hypnotize::before { content: "\f674"; } -.bi-magic::before { content: "\f675"; } -.bi-person-rolodex::before { content: "\f676"; } -.bi-person-video::before { content: "\f677"; } -.bi-person-video2::before { content: "\f678"; } -.bi-person-video3::before { content: "\f679"; } -.bi-person-workspace::before { content: "\f67a"; } -.bi-radioactive::before { content: "\f67b"; } -.bi-webcam-fill::before { content: "\f67c"; } -.bi-webcam::before { content: "\f67d"; } -.bi-yin-yang::before { content: 
"\f67e"; } -.bi-bandaid-fill::before { content: "\f680"; } -.bi-bandaid::before { content: "\f681"; } -.bi-bluetooth::before { content: "\f682"; } -.bi-body-text::before { content: "\f683"; } -.bi-boombox::before { content: "\f684"; } -.bi-boxes::before { content: "\f685"; } -.bi-dpad-fill::before { content: "\f686"; } -.bi-dpad::before { content: "\f687"; } -.bi-ear-fill::before { content: "\f688"; } -.bi-ear::before { content: "\f689"; } -.bi-envelope-check-1::before { content: "\f68a"; } -.bi-envelope-check-fill::before { content: "\f68b"; } -.bi-envelope-check::before { content: "\f68c"; } -.bi-envelope-dash-1::before { content: "\f68d"; } -.bi-envelope-dash-fill::before { content: "\f68e"; } -.bi-envelope-dash::before { content: "\f68f"; } -.bi-envelope-exclamation-1::before { content: "\f690"; } -.bi-envelope-exclamation-fill::before { content: "\f691"; } -.bi-envelope-exclamation::before { content: "\f692"; } -.bi-envelope-plus-fill::before { content: "\f693"; } -.bi-envelope-plus::before { content: "\f694"; } -.bi-envelope-slash-1::before { content: "\f695"; } -.bi-envelope-slash-fill::before { content: "\f696"; } -.bi-envelope-slash::before { content: "\f697"; } -.bi-envelope-x-1::before { content: "\f698"; } -.bi-envelope-x-fill::before { content: "\f699"; } -.bi-envelope-x::before { content: "\f69a"; } -.bi-explicit-fill::before { content: "\f69b"; } -.bi-explicit::before { content: "\f69c"; } -.bi-git::before { content: "\f69d"; } -.bi-infinity::before { content: "\f69e"; } -.bi-list-columns-reverse::before { content: "\f69f"; } -.bi-list-columns::before { content: "\f6a0"; } -.bi-meta::before { content: "\f6a1"; } -.bi-mortorboard-fill::before { content: "\f6a2"; } -.bi-mortorboard::before { content: "\f6a3"; } -.bi-nintendo-switch::before { content: "\f6a4"; } -.bi-pc-display-horizontal::before { content: "\f6a5"; } -.bi-pc-display::before { content: "\f6a6"; } -.bi-pc-horizontal::before { content: "\f6a7"; } -.bi-pc::before { content: "\f6a8"; } -.bi-playstation::before { content: "\f6a9"; } -.bi-plus-slash-minus::before { content: "\f6aa"; } -.bi-projector-fill::before { content: "\f6ab"; } -.bi-projector::before { content: "\f6ac"; } -.bi-qr-code-scan::before { content: "\f6ad"; } -.bi-qr-code::before { content: "\f6ae"; } -.bi-quora::before { content: "\f6af"; } -.bi-quote::before { content: "\f6b0"; } -.bi-robot::before { content: "\f6b1"; } -.bi-send-check-fill::before { content: "\f6b2"; } -.bi-send-check::before { content: "\f6b3"; } -.bi-send-dash-fill::before { content: "\f6b4"; } -.bi-send-dash::before { content: "\f6b5"; } -.bi-send-exclamation-1::before { content: "\f6b6"; } -.bi-send-exclamation-fill::before { content: "\f6b7"; } -.bi-send-exclamation::before { content: "\f6b8"; } -.bi-send-fill::before { content: "\f6b9"; } -.bi-send-plus-fill::before { content: "\f6ba"; } -.bi-send-plus::before { content: "\f6bb"; } -.bi-send-slash-fill::before { content: "\f6bc"; } -.bi-send-slash::before { content: "\f6bd"; } -.bi-send-x-fill::before { content: "\f6be"; } -.bi-send-x::before { content: "\f6bf"; } -.bi-send::before { content: "\f6c0"; } -.bi-steam::before { content: "\f6c1"; } -.bi-terminal-dash-1::before { content: "\f6c2"; } -.bi-terminal-dash::before { content: "\f6c3"; } -.bi-terminal-plus::before { content: "\f6c4"; } -.bi-terminal-split::before { content: "\f6c5"; } -.bi-ticket-detailed-fill::before { content: "\f6c6"; } -.bi-ticket-detailed::before { content: "\f6c7"; } -.bi-ticket-fill::before { content: "\f6c8"; } -.bi-ticket-perforated-fill::before 
{ content: "\f6c9"; } -.bi-ticket-perforated::before { content: "\f6ca"; } -.bi-ticket::before { content: "\f6cb"; } -.bi-tiktok::before { content: "\f6cc"; } -.bi-window-dash::before { content: "\f6cd"; } -.bi-window-desktop::before { content: "\f6ce"; } -.bi-window-fullscreen::before { content: "\f6cf"; } -.bi-window-plus::before { content: "\f6d0"; } -.bi-window-split::before { content: "\f6d1"; } -.bi-window-stack::before { content: "\f6d2"; } -.bi-window-x::before { content: "\f6d3"; } -.bi-xbox::before { content: "\f6d4"; } -.bi-ethernet::before { content: "\f6d5"; } -.bi-hdmi-fill::before { content: "\f6d6"; } -.bi-hdmi::before { content: "\f6d7"; } -.bi-usb-c-fill::before { content: "\f6d8"; } -.bi-usb-c::before { content: "\f6d9"; } -.bi-usb-fill::before { content: "\f6da"; } -.bi-usb-plug-fill::before { content: "\f6db"; } -.bi-usb-plug::before { content: "\f6dc"; } -.bi-usb-symbol::before { content: "\f6dd"; } -.bi-usb::before { content: "\f6de"; } -.bi-boombox-fill::before { content: "\f6df"; } -.bi-displayport-1::before { content: "\f6e0"; } -.bi-displayport::before { content: "\f6e1"; } -.bi-gpu-card::before { content: "\f6e2"; } -.bi-memory::before { content: "\f6e3"; } -.bi-modem-fill::before { content: "\f6e4"; } -.bi-modem::before { content: "\f6e5"; } -.bi-motherboard-fill::before { content: "\f6e6"; } -.bi-motherboard::before { content: "\f6e7"; } -.bi-optical-audio-fill::before { content: "\f6e8"; } -.bi-optical-audio::before { content: "\f6e9"; } -.bi-pci-card::before { content: "\f6ea"; } -.bi-router-fill::before { content: "\f6eb"; } -.bi-router::before { content: "\f6ec"; } -.bi-ssd-fill::before { content: "\f6ed"; } -.bi-ssd::before { content: "\f6ee"; } -.bi-thunderbolt-fill::before { content: "\f6ef"; } -.bi-thunderbolt::before { content: "\f6f0"; } -.bi-usb-drive-fill::before { content: "\f6f1"; } -.bi-usb-drive::before { content: "\f6f2"; } -.bi-usb-micro-fill::before { content: "\f6f3"; } -.bi-usb-micro::before { content: "\f6f4"; } -.bi-usb-mini-fill::before { content: "\f6f5"; } -.bi-usb-mini::before { content: "\f6f6"; } -.bi-cloud-haze2::before { content: "\f6f7"; } -.bi-device-hdd-fill::before { content: "\f6f8"; } -.bi-device-hdd::before { content: "\f6f9"; } -.bi-device-ssd-fill::before { content: "\f6fa"; } -.bi-device-ssd::before { content: "\f6fb"; } -.bi-displayport-fill::before { content: "\f6fc"; } -.bi-mortarboard-fill::before { content: "\f6fd"; } -.bi-mortarboard::before { content: "\f6fe"; } -.bi-terminal-x::before { content: "\f6ff"; } -.bi-arrow-through-heart-fill::before { content: "\f700"; } -.bi-arrow-through-heart::before { content: "\f701"; } -.bi-badge-sd-fill::before { content: "\f702"; } -.bi-badge-sd::before { content: "\f703"; } -.bi-bag-heart-fill::before { content: "\f704"; } -.bi-bag-heart::before { content: "\f705"; } -.bi-balloon-fill::before { content: "\f706"; } -.bi-balloon-heart-fill::before { content: "\f707"; } -.bi-balloon-heart::before { content: "\f708"; } -.bi-balloon::before { content: "\f709"; } -.bi-box2-fill::before { content: "\f70a"; } -.bi-box2-heart-fill::before { content: "\f70b"; } -.bi-box2-heart::before { content: "\f70c"; } -.bi-box2::before { content: "\f70d"; } -.bi-braces-asterisk::before { content: "\f70e"; } -.bi-calendar-heart-fill::before { content: "\f70f"; } -.bi-calendar-heart::before { content: "\f710"; } -.bi-calendar2-heart-fill::before { content: "\f711"; } -.bi-calendar2-heart::before { content: "\f712"; } -.bi-chat-heart-fill::before { content: "\f713"; } -.bi-chat-heart::before { 
content: "\f714"; } -.bi-chat-left-heart-fill::before { content: "\f715"; } -.bi-chat-left-heart::before { content: "\f716"; } -.bi-chat-right-heart-fill::before { content: "\f717"; } -.bi-chat-right-heart::before { content: "\f718"; } -.bi-chat-square-heart-fill::before { content: "\f719"; } -.bi-chat-square-heart::before { content: "\f71a"; } -.bi-clipboard-check-fill::before { content: "\f71b"; } -.bi-clipboard-data-fill::before { content: "\f71c"; } -.bi-clipboard-fill::before { content: "\f71d"; } -.bi-clipboard-heart-fill::before { content: "\f71e"; } -.bi-clipboard-heart::before { content: "\f71f"; } -.bi-clipboard-minus-fill::before { content: "\f720"; } -.bi-clipboard-plus-fill::before { content: "\f721"; } -.bi-clipboard-pulse::before { content: "\f722"; } -.bi-clipboard-x-fill::before { content: "\f723"; } -.bi-clipboard2-check-fill::before { content: "\f724"; } -.bi-clipboard2-check::before { content: "\f725"; } -.bi-clipboard2-data-fill::before { content: "\f726"; } -.bi-clipboard2-data::before { content: "\f727"; } -.bi-clipboard2-fill::before { content: "\f728"; } -.bi-clipboard2-heart-fill::before { content: "\f729"; } -.bi-clipboard2-heart::before { content: "\f72a"; } -.bi-clipboard2-minus-fill::before { content: "\f72b"; } -.bi-clipboard2-minus::before { content: "\f72c"; } -.bi-clipboard2-plus-fill::before { content: "\f72d"; } -.bi-clipboard2-plus::before { content: "\f72e"; } -.bi-clipboard2-pulse-fill::before { content: "\f72f"; } -.bi-clipboard2-pulse::before { content: "\f730"; } -.bi-clipboard2-x-fill::before { content: "\f731"; } -.bi-clipboard2-x::before { content: "\f732"; } -.bi-clipboard2::before { content: "\f733"; } -.bi-emoji-kiss-fill::before { content: "\f734"; } -.bi-emoji-kiss::before { content: "\f735"; } -.bi-envelope-heart-fill::before { content: "\f736"; } -.bi-envelope-heart::before { content: "\f737"; } -.bi-envelope-open-heart-fill::before { content: "\f738"; } -.bi-envelope-open-heart::before { content: "\f739"; } -.bi-envelope-paper-fill::before { content: "\f73a"; } -.bi-envelope-paper-heart-fill::before { content: "\f73b"; } -.bi-envelope-paper-heart::before { content: "\f73c"; } -.bi-envelope-paper::before { content: "\f73d"; } -.bi-filetype-aac::before { content: "\f73e"; } -.bi-filetype-ai::before { content: "\f73f"; } -.bi-filetype-bmp::before { content: "\f740"; } -.bi-filetype-cs::before { content: "\f741"; } -.bi-filetype-css::before { content: "\f742"; } -.bi-filetype-csv::before { content: "\f743"; } -.bi-filetype-doc::before { content: "\f744"; } -.bi-filetype-docx::before { content: "\f745"; } -.bi-filetype-exe::before { content: "\f746"; } -.bi-filetype-gif::before { content: "\f747"; } -.bi-filetype-heic::before { content: "\f748"; } -.bi-filetype-html::before { content: "\f749"; } -.bi-filetype-java::before { content: "\f74a"; } -.bi-filetype-jpg::before { content: "\f74b"; } -.bi-filetype-js::before { content: "\f74c"; } -.bi-filetype-jsx::before { content: "\f74d"; } -.bi-filetype-key::before { content: "\f74e"; } -.bi-filetype-m4p::before { content: "\f74f"; } -.bi-filetype-md::before { content: "\f750"; } -.bi-filetype-mdx::before { content: "\f751"; } -.bi-filetype-mov::before { content: "\f752"; } -.bi-filetype-mp3::before { content: "\f753"; } -.bi-filetype-mp4::before { content: "\f754"; } -.bi-filetype-otf::before { content: "\f755"; } -.bi-filetype-pdf::before { content: "\f756"; } -.bi-filetype-php::before { content: "\f757"; } -.bi-filetype-png::before { content: "\f758"; } -.bi-filetype-ppt-1::before { content: 
"\f759"; } -.bi-filetype-ppt::before { content: "\f75a"; } -.bi-filetype-psd::before { content: "\f75b"; } -.bi-filetype-py::before { content: "\f75c"; } -.bi-filetype-raw::before { content: "\f75d"; } -.bi-filetype-rb::before { content: "\f75e"; } -.bi-filetype-sass::before { content: "\f75f"; } -.bi-filetype-scss::before { content: "\f760"; } -.bi-filetype-sh::before { content: "\f761"; } -.bi-filetype-svg::before { content: "\f762"; } -.bi-filetype-tiff::before { content: "\f763"; } -.bi-filetype-tsx::before { content: "\f764"; } -.bi-filetype-ttf::before { content: "\f765"; } -.bi-filetype-txt::before { content: "\f766"; } -.bi-filetype-wav::before { content: "\f767"; } -.bi-filetype-woff::before { content: "\f768"; } -.bi-filetype-xls-1::before { content: "\f769"; } -.bi-filetype-xls::before { content: "\f76a"; } -.bi-filetype-xml::before { content: "\f76b"; } -.bi-filetype-yml::before { content: "\f76c"; } -.bi-heart-arrow::before { content: "\f76d"; } -.bi-heart-pulse-fill::before { content: "\f76e"; } -.bi-heart-pulse::before { content: "\f76f"; } -.bi-heartbreak-fill::before { content: "\f770"; } -.bi-heartbreak::before { content: "\f771"; } -.bi-hearts::before { content: "\f772"; } -.bi-hospital-fill::before { content: "\f773"; } -.bi-hospital::before { content: "\f774"; } -.bi-house-heart-fill::before { content: "\f775"; } -.bi-house-heart::before { content: "\f776"; } -.bi-incognito::before { content: "\f777"; } -.bi-magnet-fill::before { content: "\f778"; } -.bi-magnet::before { content: "\f779"; } -.bi-person-heart::before { content: "\f77a"; } -.bi-person-hearts::before { content: "\f77b"; } -.bi-phone-flip::before { content: "\f77c"; } -.bi-plugin::before { content: "\f77d"; } -.bi-postage-fill::before { content: "\f77e"; } -.bi-postage-heart-fill::before { content: "\f77f"; } -.bi-postage-heart::before { content: "\f780"; } -.bi-postage::before { content: "\f781"; } -.bi-postcard-fill::before { content: "\f782"; } -.bi-postcard-heart-fill::before { content: "\f783"; } -.bi-postcard-heart::before { content: "\f784"; } -.bi-postcard::before { content: "\f785"; } -.bi-search-heart-fill::before { content: "\f786"; } -.bi-search-heart::before { content: "\f787"; } -.bi-sliders2-vertical::before { content: "\f788"; } -.bi-sliders2::before { content: "\f789"; } -.bi-trash3-fill::before { content: "\f78a"; } -.bi-trash3::before { content: "\f78b"; } -.bi-valentine::before { content: "\f78c"; } -.bi-valentine2::before { content: "\f78d"; } -.bi-wrench-adjustable-circle-fill::before { content: "\f78e"; } -.bi-wrench-adjustable-circle::before { content: "\f78f"; } -.bi-wrench-adjustable::before { content: "\f790"; } -.bi-filetype-json::before { content: "\f791"; } -.bi-filetype-pptx::before { content: "\f792"; } -.bi-filetype-xlsx::before { content: "\f793"; } -.bi-1-circle-1::before { content: "\f794"; } -.bi-1-circle-fill-1::before { content: "\f795"; } -.bi-1-circle-fill::before { content: "\f796"; } -.bi-1-circle::before { content: "\f797"; } -.bi-1-square-fill::before { content: "\f798"; } -.bi-1-square::before { content: "\f799"; } -.bi-2-circle-1::before { content: "\f79a"; } -.bi-2-circle-fill-1::before { content: "\f79b"; } -.bi-2-circle-fill::before { content: "\f79c"; } -.bi-2-circle::before { content: "\f79d"; } -.bi-2-square-fill::before { content: "\f79e"; } -.bi-2-square::before { content: "\f79f"; } -.bi-3-circle-1::before { content: "\f7a0"; } -.bi-3-circle-fill-1::before { content: "\f7a1"; } -.bi-3-circle-fill::before { content: "\f7a2"; } -.bi-3-circle::before { 
content: "\f7a3"; } -.bi-3-square-fill::before { content: "\f7a4"; } -.bi-3-square::before { content: "\f7a5"; } -.bi-4-circle-1::before { content: "\f7a6"; } -.bi-4-circle-fill-1::before { content: "\f7a7"; } -.bi-4-circle-fill::before { content: "\f7a8"; } -.bi-4-circle::before { content: "\f7a9"; } -.bi-4-square-fill::before { content: "\f7aa"; } -.bi-4-square::before { content: "\f7ab"; } -.bi-5-circle-1::before { content: "\f7ac"; } -.bi-5-circle-fill-1::before { content: "\f7ad"; } -.bi-5-circle-fill::before { content: "\f7ae"; } -.bi-5-circle::before { content: "\f7af"; } -.bi-5-square-fill::before { content: "\f7b0"; } -.bi-5-square::before { content: "\f7b1"; } -.bi-6-circle-1::before { content: "\f7b2"; } -.bi-6-circle-fill-1::before { content: "\f7b3"; } -.bi-6-circle-fill::before { content: "\f7b4"; } -.bi-6-circle::before { content: "\f7b5"; } -.bi-6-square-fill::before { content: "\f7b6"; } -.bi-6-square::before { content: "\f7b7"; } -.bi-7-circle-1::before { content: "\f7b8"; } -.bi-7-circle-fill-1::before { content: "\f7b9"; } -.bi-7-circle-fill::before { content: "\f7ba"; } -.bi-7-circle::before { content: "\f7bb"; } -.bi-7-square-fill::before { content: "\f7bc"; } -.bi-7-square::before { content: "\f7bd"; } -.bi-8-circle-1::before { content: "\f7be"; } -.bi-8-circle-fill-1::before { content: "\f7bf"; } -.bi-8-circle-fill::before { content: "\f7c0"; } -.bi-8-circle::before { content: "\f7c1"; } -.bi-8-square-fill::before { content: "\f7c2"; } -.bi-8-square::before { content: "\f7c3"; } -.bi-9-circle-1::before { content: "\f7c4"; } -.bi-9-circle-fill-1::before { content: "\f7c5"; } -.bi-9-circle-fill::before { content: "\f7c6"; } -.bi-9-circle::before { content: "\f7c7"; } -.bi-9-square-fill::before { content: "\f7c8"; } -.bi-9-square::before { content: "\f7c9"; } -.bi-airplane-engines-fill::before { content: "\f7ca"; } -.bi-airplane-engines::before { content: "\f7cb"; } -.bi-airplane-fill::before { content: "\f7cc"; } -.bi-airplane::before { content: "\f7cd"; } -.bi-alexa::before { content: "\f7ce"; } -.bi-alipay::before { content: "\f7cf"; } -.bi-android::before { content: "\f7d0"; } -.bi-android2::before { content: "\f7d1"; } -.bi-box-fill::before { content: "\f7d2"; } -.bi-box-seam-fill::before { content: "\f7d3"; } -.bi-browser-chrome::before { content: "\f7d4"; } -.bi-browser-edge::before { content: "\f7d5"; } -.bi-browser-firefox::before { content: "\f7d6"; } -.bi-browser-safari::before { content: "\f7d7"; } -.bi-c-circle-1::before { content: "\f7d8"; } -.bi-c-circle-fill-1::before { content: "\f7d9"; } -.bi-c-circle-fill::before { content: "\f7da"; } -.bi-c-circle::before { content: "\f7db"; } -.bi-c-square-fill::before { content: "\f7dc"; } -.bi-c-square::before { content: "\f7dd"; } -.bi-capsule-pill::before { content: "\f7de"; } -.bi-capsule::before { content: "\f7df"; } -.bi-car-front-fill::before { content: "\f7e0"; } -.bi-car-front::before { content: "\f7e1"; } -.bi-cassette-fill::before { content: "\f7e2"; } -.bi-cassette::before { content: "\f7e3"; } -.bi-cc-circle-1::before { content: "\f7e4"; } -.bi-cc-circle-fill-1::before { content: "\f7e5"; } -.bi-cc-circle-fill::before { content: "\f7e6"; } -.bi-cc-circle::before { content: "\f7e7"; } -.bi-cc-square-fill::before { content: "\f7e8"; } -.bi-cc-square::before { content: "\f7e9"; } -.bi-cup-hot-fill::before { content: "\f7ea"; } -.bi-cup-hot::before { content: "\f7eb"; } -.bi-currency-rupee::before { content: "\f7ec"; } -.bi-dropbox::before { content: "\f7ed"; } -.bi-escape::before { content: "\f7ee"; } 
-.bi-fast-forward-btn-fill::before { content: "\f7ef"; } -.bi-fast-forward-btn::before { content: "\f7f0"; } -.bi-fast-forward-circle-fill::before { content: "\f7f1"; } -.bi-fast-forward-circle::before { content: "\f7f2"; } -.bi-fast-forward-fill::before { content: "\f7f3"; } -.bi-fast-forward::before { content: "\f7f4"; } -.bi-filetype-sql::before { content: "\f7f5"; } -.bi-fire::before { content: "\f7f6"; } -.bi-google-play::before { content: "\f7f7"; } -.bi-h-circle-1::before { content: "\f7f8"; } -.bi-h-circle-fill-1::before { content: "\f7f9"; } -.bi-h-circle-fill::before { content: "\f7fa"; } -.bi-h-circle::before { content: "\f7fb"; } -.bi-h-square-fill::before { content: "\f7fc"; } -.bi-h-square::before { content: "\f7fd"; } -.bi-indent::before { content: "\f7fe"; } -.bi-lungs-fill::before { content: "\f7ff"; } -.bi-lungs::before { content: "\f800"; } -.bi-microsoft-teams::before { content: "\f801"; } -.bi-p-circle-1::before { content: "\f802"; } -.bi-p-circle-fill-1::before { content: "\f803"; } -.bi-p-circle-fill::before { content: "\f804"; } -.bi-p-circle::before { content: "\f805"; } -.bi-p-square-fill::before { content: "\f806"; } -.bi-p-square::before { content: "\f807"; } -.bi-pass-fill::before { content: "\f808"; } -.bi-pass::before { content: "\f809"; } -.bi-prescription::before { content: "\f80a"; } -.bi-prescription2::before { content: "\f80b"; } -.bi-r-circle-1::before { content: "\f80c"; } -.bi-r-circle-fill-1::before { content: "\f80d"; } -.bi-r-circle-fill::before { content: "\f80e"; } -.bi-r-circle::before { content: "\f80f"; } -.bi-r-square-fill::before { content: "\f810"; } -.bi-r-square::before { content: "\f811"; } -.bi-repeat-1::before { content: "\f812"; } -.bi-repeat::before { content: "\f813"; } -.bi-rewind-btn-fill::before { content: "\f814"; } -.bi-rewind-btn::before { content: "\f815"; } -.bi-rewind-circle-fill::before { content: "\f816"; } -.bi-rewind-circle::before { content: "\f817"; } -.bi-rewind-fill::before { content: "\f818"; } -.bi-rewind::before { content: "\f819"; } -.bi-train-freight-front-fill::before { content: "\f81a"; } -.bi-train-freight-front::before { content: "\f81b"; } -.bi-train-front-fill::before { content: "\f81c"; } -.bi-train-front::before { content: "\f81d"; } -.bi-train-lightrail-front-fill::before { content: "\f81e"; } -.bi-train-lightrail-front::before { content: "\f81f"; } -.bi-truck-front-fill::before { content: "\f820"; } -.bi-truck-front::before { content: "\f821"; } -.bi-ubuntu::before { content: "\f822"; } -.bi-unindent::before { content: "\f823"; } -.bi-unity::before { content: "\f824"; } -.bi-universal-access-circle::before { content: "\f825"; } -.bi-universal-access::before { content: "\f826"; } -.bi-virus::before { content: "\f827"; } -.bi-virus2::before { content: "\f828"; } -.bi-wechat::before { content: "\f829"; } -.bi-yelp::before { content: "\f82a"; } -.bi-sign-stop-fill::before { content: "\f82b"; } -.bi-sign-stop-lights-fill::before { content: "\f82c"; } -.bi-sign-stop-lights::before { content: "\f82d"; } -.bi-sign-stop::before { content: "\f82e"; } -.bi-sign-turn-left-fill::before { content: "\f82f"; } -.bi-sign-turn-left::before { content: "\f830"; } -.bi-sign-turn-right-fill::before { content: "\f831"; } -.bi-sign-turn-right::before { content: "\f832"; } -.bi-sign-turn-slight-left-fill::before { content: "\f833"; } -.bi-sign-turn-slight-left::before { content: "\f834"; } -.bi-sign-turn-slight-right-fill::before { content: "\f835"; } -.bi-sign-turn-slight-right::before { content: "\f836"; } 
-.bi-sign-yield-fill::before { content: "\f837"; } -.bi-sign-yield::before { content: "\f838"; } -.bi-ev-station-fill::before { content: "\f839"; } -.bi-ev-station::before { content: "\f83a"; } -.bi-fuel-pump-diesel-fill::before { content: "\f83b"; } -.bi-fuel-pump-diesel::before { content: "\f83c"; } -.bi-fuel-pump-fill::before { content: "\f83d"; } -.bi-fuel-pump::before { content: "\f83e"; } -.bi-0-circle-fill::before { content: "\f83f"; } -.bi-0-circle::before { content: "\f840"; } -.bi-0-square-fill::before { content: "\f841"; } -.bi-0-square::before { content: "\f842"; } -.bi-rocket-fill::before { content: "\f843"; } -.bi-rocket-takeoff-fill::before { content: "\f844"; } -.bi-rocket-takeoff::before { content: "\f845"; } -.bi-rocket::before { content: "\f846"; } -.bi-stripe::before { content: "\f847"; } -.bi-subscript::before { content: "\f848"; } -.bi-superscript::before { content: "\f849"; } -.bi-trello::before { content: "\f84a"; } -.bi-envelope-at-fill::before { content: "\f84b"; } -.bi-envelope-at::before { content: "\f84c"; } -.bi-regex::before { content: "\f84d"; } -.bi-text-wrap::before { content: "\f84e"; } -.bi-sign-dead-end-fill::before { content: "\f84f"; } -.bi-sign-dead-end::before { content: "\f850"; } -.bi-sign-do-not-enter-fill::before { content: "\f851"; } -.bi-sign-do-not-enter::before { content: "\f852"; } -.bi-sign-intersection-fill::before { content: "\f853"; } -.bi-sign-intersection-side-fill::before { content: "\f854"; } -.bi-sign-intersection-side::before { content: "\f855"; } -.bi-sign-intersection-t-fill::before { content: "\f856"; } -.bi-sign-intersection-t::before { content: "\f857"; } -.bi-sign-intersection-y-fill::before { content: "\f858"; } -.bi-sign-intersection-y::before { content: "\f859"; } -.bi-sign-intersection::before { content: "\f85a"; } -.bi-sign-merge-left-fill::before { content: "\f85b"; } -.bi-sign-merge-left::before { content: "\f85c"; } -.bi-sign-merge-right-fill::before { content: "\f85d"; } -.bi-sign-merge-right::before { content: "\f85e"; } -.bi-sign-no-left-turn-fill::before { content: "\f85f"; } -.bi-sign-no-left-turn::before { content: "\f860"; } -.bi-sign-no-parking-fill::before { content: "\f861"; } -.bi-sign-no-parking::before { content: "\f862"; } -.bi-sign-no-right-turn-fill::before { content: "\f863"; } -.bi-sign-no-right-turn::before { content: "\f864"; } -.bi-sign-railroad-fill::before { content: "\f865"; } -.bi-sign-railroad::before { content: "\f866"; } -.bi-building-add::before { content: "\f867"; } -.bi-building-check::before { content: "\f868"; } -.bi-building-dash::before { content: "\f869"; } -.bi-building-down::before { content: "\f86a"; } -.bi-building-exclamation::before { content: "\f86b"; } -.bi-building-fill-add::before { content: "\f86c"; } -.bi-building-fill-check::before { content: "\f86d"; } -.bi-building-fill-dash::before { content: "\f86e"; } -.bi-building-fill-down::before { content: "\f86f"; } -.bi-building-fill-exclamation::before { content: "\f870"; } -.bi-building-fill-gear::before { content: "\f871"; } -.bi-building-fill-lock::before { content: "\f872"; } -.bi-building-fill-slash::before { content: "\f873"; } -.bi-building-fill-up::before { content: "\f874"; } -.bi-building-fill-x::before { content: "\f875"; } -.bi-building-fill::before { content: "\f876"; } -.bi-building-gear::before { content: "\f877"; } -.bi-building-lock::before { content: "\f878"; } -.bi-building-slash::before { content: "\f879"; } -.bi-building-up::before { content: "\f87a"; } -.bi-building-x::before { content: "\f87b"; } 
-.bi-buildings-fill::before { content: "\f87c"; } -.bi-buildings::before { content: "\f87d"; } -.bi-bus-front-fill::before { content: "\f87e"; } -.bi-bus-front::before { content: "\f87f"; } -.bi-ev-front-fill::before { content: "\f880"; } -.bi-ev-front::before { content: "\f881"; } -.bi-globe-americas::before { content: "\f882"; } -.bi-globe-asia-australia::before { content: "\f883"; } -.bi-globe-central-south-asia::before { content: "\f884"; } -.bi-globe-europe-africa::before { content: "\f885"; } -.bi-house-add-fill::before { content: "\f886"; } -.bi-house-add::before { content: "\f887"; } -.bi-house-check-fill::before { content: "\f888"; } -.bi-house-check::before { content: "\f889"; } -.bi-house-dash-fill::before { content: "\f88a"; } -.bi-house-dash::before { content: "\f88b"; } -.bi-house-down-fill::before { content: "\f88c"; } -.bi-house-down::before { content: "\f88d"; } -.bi-house-exclamation-fill::before { content: "\f88e"; } -.bi-house-exclamation::before { content: "\f88f"; } -.bi-house-gear-fill::before { content: "\f890"; } -.bi-house-gear::before { content: "\f891"; } -.bi-house-lock-fill::before { content: "\f892"; } -.bi-house-lock::before { content: "\f893"; } -.bi-house-slash-fill::before { content: "\f894"; } -.bi-house-slash::before { content: "\f895"; } -.bi-house-up-fill::before { content: "\f896"; } -.bi-house-up::before { content: "\f897"; } -.bi-house-x-fill::before { content: "\f898"; } -.bi-house-x::before { content: "\f899"; } -.bi-person-add::before { content: "\f89a"; } -.bi-person-down::before { content: "\f89b"; } -.bi-person-exclamation::before { content: "\f89c"; } -.bi-person-fill-add::before { content: "\f89d"; } -.bi-person-fill-check::before { content: "\f89e"; } -.bi-person-fill-dash::before { content: "\f89f"; } -.bi-person-fill-down::before { content: "\f8a0"; } -.bi-person-fill-exclamation::before { content: "\f8a1"; } -.bi-person-fill-gear::before { content: "\f8a2"; } -.bi-person-fill-lock::before { content: "\f8a3"; } -.bi-person-fill-slash::before { content: "\f8a4"; } -.bi-person-fill-up::before { content: "\f8a5"; } -.bi-person-fill-x::before { content: "\f8a6"; } -.bi-person-gear::before { content: "\f8a7"; } -.bi-person-lock::before { content: "\f8a8"; } -.bi-person-slash::before { content: "\f8a9"; } -.bi-person-up::before { content: "\f8aa"; } -.bi-scooter::before { content: "\f8ab"; } -.bi-taxi-front-fill::before { content: "\f8ac"; } -.bi-taxi-front::before { content: "\f8ad"; } -.bi-amd::before { content: "\f8ae"; } -.bi-database-add::before { content: "\f8af"; } -.bi-database-check::before { content: "\f8b0"; } -.bi-database-dash::before { content: "\f8b1"; } -.bi-database-down::before { content: "\f8b2"; } -.bi-database-exclamation::before { content: "\f8b3"; } -.bi-database-fill-add::before { content: "\f8b4"; } -.bi-database-fill-check::before { content: "\f8b5"; } -.bi-database-fill-dash::before { content: "\f8b6"; } -.bi-database-fill-down::before { content: "\f8b7"; } -.bi-database-fill-exclamation::before { content: "\f8b8"; } -.bi-database-fill-gear::before { content: "\f8b9"; } -.bi-database-fill-lock::before { content: "\f8ba"; } -.bi-database-fill-slash::before { content: "\f8bb"; } -.bi-database-fill-up::before { content: "\f8bc"; } -.bi-database-fill-x::before { content: "\f8bd"; } -.bi-database-fill::before { content: "\f8be"; } -.bi-database-gear::before { content: "\f8bf"; } -.bi-database-lock::before { content: "\f8c0"; } -.bi-database-slash::before { content: "\f8c1"; } -.bi-database-up::before { content: "\f8c2"; } 
-.bi-database-x::before { content: "\f8c3"; } -.bi-database::before { content: "\f8c4"; } -.bi-houses-fill::before { content: "\f8c5"; } -.bi-houses::before { content: "\f8c6"; } -.bi-nvidia::before { content: "\f8c7"; } -.bi-person-vcard-fill::before { content: "\f8c8"; } -.bi-person-vcard::before { content: "\f8c9"; } -.bi-sina-weibo::before { content: "\f8ca"; } -.bi-tencent-qq::before { content: "\f8cb"; } -.bi-wikipedia::before { content: "\f8cc"; } diff --git a/spaces/Anish13/fruit/README.md b/spaces/Anish13/fruit/README.md deleted file mode 100644 index 8e42ac2d1489e28913f92be2e092852ee66f68d2..0000000000000000000000000000000000000000 --- a/spaces/Anish13/fruit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fruit -emoji: ⚡ -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/image/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/image/__init__.py deleted file mode 100644 index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/spaces/Apex-X/Tm/roop/predictor.py b/spaces/Apex-X/Tm/roop/predictor.py deleted file mode 100644 index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/Tm/roop/predictor.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy -import opennsfw2 -from PIL import Image - -from roop.typing import Frame - -MAX_PROBABILITY = 0.85 - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - model = opennsfw2.make_open_nsfw_model() - views = numpy.expand_dims(image, axis=0) - _, probability = 
model.predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/Apex-X/nono/roop/predicter.py b/spaces/Apex-X/nono/roop/predicter.py deleted file mode 100644 index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/nono/roop/predicter.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy -import opennsfw2 -from PIL import Image - -from roop.typing import Frame - -MAX_PROBABILITY = 0.85 - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - model = opennsfw2.make_open_nsfw_model() - views = numpy.expand_dims(image, axis=0) - _, probability = model.predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/ArdaSaygan/PollGeneratorApp/README.md b/spaces/ArdaSaygan/PollGeneratorApp/README.md deleted file mode 100644 index 2d34414795f8867e2d0b01bc87150c6eb984e0d5..0000000000000000000000000000000000000000 --- a/spaces/ArdaSaygan/PollGeneratorApp/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PollGeneratorApp -emoji: 📉 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Arnaudding001/OpenAI_whisperLive/vad_test.py b/spaces/Arnaudding001/OpenAI_whisperLive/vad_test.py deleted file mode 100644 index 7b0bac0240bb8cfad3f2f3709315015aeec53a5f..0000000000000000000000000000000000000000 --- a/spaces/Arnaudding001/OpenAI_whisperLive/vad_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import pprint -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') - -from src.vad import AbstractTranscription, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment)) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { - "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class MockVadTranscription(AbstractTranscription): - def __init__(self): - super().__init__() - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - start_time_seconds = 
float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numppy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/training/train.py b/spaces/Audio-AGI/AudioSep/models/CLAP/training/train.py deleted file mode 100644 index f5759c4679d2ee9c0748444adf66b8453cf09728..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/training/train.py +++ /dev/null @@ -1,838 +0,0 @@ -import json -import logging -import math -import os -import time -from contextlib import suppress - -import numpy as np -import torch -import torch.nn.functional as F - -try: - import wandb -except ImportError: - wandb = None - -from open_clip import ClipLoss, gather_features -from .distributed import is_master -from .zero_shot import zero_shot_eval - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def unwrap_model(model): - if hasattr(model, "module"): - return model.module - else: - return model - - -def train_one_epoch( - model, data, epoch, optimizer, scaler, scheduler, args, tb_writer=None -): - device = torch.device(args.device) - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - model.train() - loss = ClipLoss( - local_loss=args.local_loss, - gather_with_grad=args.gather_with_grad, - cache_labels=True, - rank=args.rank, - world_size=args.world_size, - use_horovod=args.horovod, - mlp_loss=args.clap_mlploss, - weight_loss_kappa=args.kappa, - ) - - dataloader, sampler = data["train"].dataloader, data["train"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - num_batches_per_epoch = dataloader.num_batches - sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10)) - - # for toy dataset - if args.dataset_type == "toy": - dataloader.dataset.generate_queue() - - loss_m = AverageMeter() - batch_time_m = AverageMeter() - data_time_m = AverageMeter() - end = time.time() - - for i, batch in enumerate(dataloader): - # logging.info(f"batch {i} of {num_batches_per_epoch}") - step = num_batches_per_epoch * epoch + i - if isinstance(scheduler, dict): - for s in scheduler.values(): - s(step) - else: - scheduler(step) - audios = batch # contains mel_spec, wavform, and longer list - texts = batch["text"] - # audios = audios.to(device=device, non_blocking=True) - # texts = texts.to(device=device, non_blocking=True) - - data_time_m.update(time.time() - end) - if isinstance(optimizer, dict): - for o_ in optimizer.values(): - o_.zero_grad() - else: - optimizer.zero_grad() - - with autocast(): - ( - audio_features, - text_features, - audio_features_mlp, - text_features_mlp, - logit_scale_a, - logit_scale_t, - ) = model(audios, texts, device) - - if args.clap_mlploss: - total_loss = loss( - audio_features=audio_features, - text_features=text_features, - logit_scale_a=logit_scale_a, - logit_scale_t=logit_scale_t, - 
audio_features_mlp=audio_features_mlp, - text_features_mlp=text_features_mlp, - ) - else: - total_loss = loss( - audio_features=audio_features, - text_features=text_features, - logit_scale_a=logit_scale_a, - ) - if isinstance(optimizer, dict): - if scaler is not None: - scaler.scale(total_loss).backward() - for o_ in optimizer.values(): - if args.horovod: - o_.synchronize() - scaler.unscale_(o_) - with o_.skip_synchronize(): - scaler.step(o_) - else: - scaler.step(o_) - scaler.update() - else: - total_loss.backward() - for o_ in optimizer.values(): - o_.step() - else: - if scaler is not None: - scaler.scale(total_loss).backward() - if args.horovod: - optimizer.synchronize() - scaler.unscale_(optimizer) - with optimizer.skip_synchronize(): - scaler.step(optimizer) - else: - scaler.step(optimizer) - scaler.update() - else: - total_loss.backward() - optimizer.step() - - # Note: we clamp to 4.6052 = ln(100), as in the original paper. - with torch.no_grad(): - unwrap_model(model).logit_scale_a.clamp_(0, math.log(100)) - if args.clap_mlploss: - unwrap_model(model).logit_scale_t.clamp_(0, math.log(100)) - - batch_time_m.update(time.time() - end) - end = time.time() - batch_count = i + 1 - if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch): - if isinstance(audios, dict): - batch_size = len(audios["waveform"]) - else: - batch_size = len(audios) - num_samples = batch_count * batch_size * args.world_size - samples_per_epoch = dataloader.num_samples - percent_complete = 100.0 * batch_count / num_batches_per_epoch - - # NOTE loss is coarsely sampled, just master node and per log update - loss_m.update(total_loss.item(), batch_size) - logit_scale_scalar_a = logit_scale_a.item() - logit_scale_scalar_t = logit_scale_t.item() - if isinstance(optimizer, dict): - if args.clap_mlploss: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]} " - f"Logit Scale Audio: {logit_scale_scalar_a:.3f}" - f"Logit Scale Text: {logit_scale_scalar_t:.3f}" - ) - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "scale_audio": logit_scale_scalar_a, - "scale_text": logit_scale_scalar_t, - "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()], - } - else: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]} " - f"Logit Scale Audio: {logit_scale_scalar_a:.3f}" - ) - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "scale_audio": logit_scale_scalar_a, - "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()], - } - - else: - if args.clap_mlploss: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {optimizer.param_groups[0]['lr']:5f} " - f"Logit Scale Audio: {logit_scale_scalar_a:.3f}" - f"Logit Scale Text: {logit_scale_scalar_t:.3f}" - ) - - # Save train loss / etc. 
Using non avg meter values as loggers have their own smoothing - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "scale_audio": logit_scale_scalar_a, - "scale_text": logit_scale_scalar_t, - "lr": optimizer.param_groups[0]["lr"], - } - else: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {optimizer.param_groups[0]['lr']:5f} " - f"Logit Scale Audio: {logit_scale_scalar_a:.3f}" - ) - - # Save train loss / etc. Using non avg meter values as loggers have their own smoothing - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "scale_audio": logit_scale_scalar_a, - "lr": optimizer.param_groups[0]["lr"], - } - for name, val in log_data.items(): - name = "train/" + name - if tb_writer is not None: - tb_writer.add_scalar(name, val, step) - if args.wandb: - assert wandb is not None, "Please install wandb." - wandb.log({name: val, "step": step}) - - # resetting batch / data time meters per log window - batch_time_m.reset() - data_time_m.reset() - # end for - - -def evaluate(model, data, epoch, args, tb_writer=None): - metrics = {} - if not args.parallel_eval: - if not is_master(args): - return metrics - device = torch.device(args.device) - model.eval() - - # CHANGE - # zero_shot_metrics = zero_shot_eval(model, data, epoch, args) - # metrics.update(zero_shot_metrics) - if is_master(args): - print("Evaluating...") - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - if args.val_dataset_names == ["Clotho", "audiocaps"]: - # if only clotho and audiocaps are used, then we will use a different evaluation function. - # This is because in the Clotho and audiocaps valid and test set, there are 5 text for 1 audio. - if args.parallel_eval: - # (yusong): just a hack here. Don't use parallel eval when evaluating only clotho and audiocaps. - raise NotImplementedError( - "Parallel evaluation not supported for eval only Clotho and audiocaps." 
- ) - val_metrics_per_dataset = evaluate_clotho_audiocaps( - model, data, epoch, args, autocast, device, tb_writer - ) - for m in val_metrics_per_dataset.values(): - metrics.update(m) - if "epoch" not in metrics.keys(): - metrics.update({"epoch": epoch}) - metrics = select_top_metric_clotho_audiocaps( - metrics, val_metrics_per_dataset, args - ) - elif "val" in data and ( - args.val_frequency - and ((epoch % args.val_frequency) == 0 or epoch == args.epochs) - ): - dataloader = data["val"].dataloader - num_samples = 0 - samples_per_val = dataloader.num_samples - - # FIXME this does not scale past small eval datasets - # all_audio_features @ all_text_features will blow up memory and compute very quickly - eval_info = {} - if args.clap_mlploss: - eval_info["all"] = { - "cumulative_loss": 0.0, - "num_samples": 0, - "all_audio_features": [], - "all_text_features": [], - "all_audio_features_mlp": [], - "all_text_features_mlp": [], - } # cumulative_loss = 0.0 - else: - eval_info["all"] = { - "cumulative_loss": 0.0, - "num_samples": 0, - "all_audio_features": [], - "all_text_features": [], - } # cumu - # all_audio_features, all_text_features, all_audio_features_mlp, all_text_features_mlp = [], [], [], [] - with torch.no_grad(): - for i, batch in enumerate(dataloader): - audios = batch # contains mel_spec, wavform, and longer list - texts = batch["text"] - # audios = audios.to(device=device, non_blocking=True) - - all_names = list( - set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]]) - ) - for name in all_names: - if name not in eval_info.keys(): - if args.clap_mlploss: - eval_info[name] = { - "cumulative_loss": 0.0, - "num_samples": 0, - "all_audio_features": [], - "all_text_features": [], - "all_audio_features_mlp": [], - "all_text_features_mlp": [], - } - else: - eval_info[name] = { - "cumulative_loss": 0.0, - "num_samples": 0, - "all_audio_features": [], - "all_text_features": [], - } - with autocast(): - ( - audio_features, - text_features, - audio_features_mlp, - text_features_mlp, - logit_scale_a, - logit_scale_t, - ) = model(audios, texts, device) - - if args.parallel_eval: - # multi-GPU eval - if args.clap_mlploss: - ( - audio_features, - text_features, - audio_features_mlp, - text_features_mlp, - ) = gather_features( - audio_features=audio_features, - text_features=text_features, - audio_features_mlp=audio_features_mlp, - text_features_mlp=text_features_mlp, - local_loss=False, - gather_with_grad=False, - rank=args.rank, - world_size=args.world_size, - use_horovod=args.horovod, - mlp_loss=args.clap_mlploss, - ) - else: - (audio_features, text_features,) = gather_features( - audio_features=audio_features, - text_features=text_features, - local_loss=False, - gather_with_grad=False, - rank=args.rank, - world_size=args.world_size, - use_horovod=args.horovod, - mlp_loss=args.clap_mlploss, - ) - - if is_master(args): - num_samples += audio_features.shape[0] - for n in [*all_names, "all"]: - if n == "all": - eval_info[n]["all_audio_features"].append( - audio_features.cpu() - ) - eval_info[n]["all_text_features"].append( - text_features.cpu() - ) - if args.clap_mlploss: - eval_info[n]["all_audio_features_mlp"].append( - audio_features_mlp.cpu() - ) - eval_info[n]["all_text_features_mlp"].append( - text_features_mlp.cpu() - ) - else: - idx = np.where( - np.array( - [ - "-".join(b.split("/")[-3:-1]) - for b in batch["__url__"] - ] - ) - == n - )[0] - eval_info[n]["all_audio_features"].append( - audio_features.cpu().index_select( - 0, torch.tensor(idx).long() - ) - ) - 
eval_info[n]["all_text_features"].append( - text_features.cpu().index_select( - 0, torch.tensor(idx).long() - ) - ) - if args.clap_mlploss: - eval_info[n]["all_audio_features_mlp"].append( - audio_features_mlp.cpu().index_select( - 0, torch.tensor(idx).long() - ) - ) - eval_info[n]["all_text_features_mlp"].append( - text_features_mlp.cpu().index_select( - 0, torch.tensor(idx).long() - ) - ) - # print(f'eval step {i}') # (yusong): for debug - - # cumulative_loss += total_loss * batch_size - # num_samples += batch_size - if is_master(args) and (i % 100) == 0: # and i != 0: - logging.info( - f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]" - ) - if is_master(args): - val_metrics_per_dataset = {} - for n in eval_info.keys(): - if args.clap_mlploss: - metrics_single_dataset = get_metrics( - audio_features=torch.cat( - eval_info[n]["all_audio_features"] - ), - text_features=torch.cat(eval_info[n]["all_text_features"]), - logit_scale_a=logit_scale_a.cpu(), - audio_features_mlp=torch.cat( - eval_info[n]["all_audio_features_mlp"] - ), - text_features_mlp=torch.cat( - eval_info[n]["all_text_features_mlp"] - ), - logit_scale_t=logit_scale_t.cpu(), - mlp_loss=args.clap_mlploss, - ) - else: - metrics_single_dataset = get_metrics( - audio_features=torch.cat( - eval_info[n]["all_audio_features"] - ), - text_features=torch.cat(eval_info[n]["all_text_features"]), - logit_scale_a=logit_scale_a.cpu(), - mlp_loss=args.clap_mlploss, - ) - val_metrics_per_dataset[n] = { - n + "/" + k: v for k, v in metrics_single_dataset.items() - } - metrics.update(val_metrics_per_dataset[n]) - if "epoch" not in metrics.keys(): - metrics.update({"epoch": epoch}) - if is_master(args): - if not metrics: - return metrics - - logging.info( - f"Eval Epoch: {epoch} " - + "\n".join( - [ - "\t".join([f"{k}: {round(v, 4):.4f}" for k, v in m.items()]) - for m in val_metrics_per_dataset.values() - ] - ) - ) - - if args.save_logs: - for name, val in metrics.items(): - if tb_writer is not None: - tb_writer.add_scalar(f"val/{name}", val, epoch) - - with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f: - f.write(json.dumps(metrics)) - f.write("\n") - - if args.wandb: - assert wandb is not None, "Please install wandb." 
- for name, val in metrics.items(): - wandb.log({f"val/{name}": val, "epoch": epoch}) - - return metrics - else: - return metrics - - -def get_metrics( - audio_features, - text_features, - logit_scale_a, - audio_features_mlp=None, - text_features_mlp=None, - logit_scale_t=None, - mlp_loss=False, -): - metrics = {} - if mlp_loss: - # Set up audio to text & text to audio similary matrice - a_logits_per_audio = ( - (logit_scale_a * audio_features @ text_features_mlp.t()).detach().cpu() - ) - a_logits_per_text = a_logits_per_audio.t().detach().cpu() - t_logits_per_audio = ( - (logit_scale_t * audio_features_mlp @ text_features.t()).detach().cpu() - ) - t_logits_per_text = t_logits_per_audio.t().detach().cpu() - - labels = torch.arange(audio_features.shape[0]).long() - # Change the loss from two terms into four terms with 2x2 combined CE loss - total_loss = ( - F.cross_entropy(a_logits_per_audio, labels) - + F.cross_entropy(a_logits_per_text, labels) - + F.cross_entropy(t_logits_per_audio, labels) - + F.cross_entropy(t_logits_per_text, labels) - ) / 4 - - metrics[f"cumulative_loss"] = total_loss.item() - metrics[f"num_samples"] = audio_features.shape[0] - - logits = { - "audio_to_text": (a_logits_per_audio + t_logits_per_audio) / 2, - "text_to_audio": (a_logits_per_text + t_logits_per_text) / 2, - } - ground_truth = torch.arange(len(text_features)).view(-1, 1) - - else: - # print("text_features", text_features) - # print("text_features.shape", text_features.shape) - logits_per_audio = ( - (logit_scale_a * audio_features @ text_features.t()).detach().cpu() - ) - logits_per_text = logits_per_audio.t().detach().cpu() - - labels = torch.arange(audio_features.shape[0]).long() - # Change the loss from two terms into four terms with 2x2 combined CE loss - total_loss = ( - F.cross_entropy(logits_per_audio, labels) - + F.cross_entropy(logits_per_text, labels) - ) / 2 - - metrics[f"cumulative_loss"] = total_loss.item() - metrics[f"num_samples"] = audio_features.shape[0] - - logits = {"audio_to_text": logits_per_audio, "text_to_audio": logits_per_text} - - ground_truth = torch.arange(len(text_features)).view(-1, 1) - - for name, logit in logits.items(): - ranking = torch.argsort(logit, descending=True) - preds = torch.where(ranking == ground_truth)[ - 1 - ] # (yusong) this line is slow because it uses single thread - preds = preds.detach().cpu().numpy() - metrics[f"{name}_mean_rank"] = preds.mean() + 1 - metrics[f"{name}_median_rank"] = np.floor(np.median(preds)) + 1 - for k in [1, 5, 10]: - metrics[f"{name}_R@{k}"] = np.mean(preds < k) - # map@10 - metrics[f"{name}_mAP@10"] = np.mean(np.where(preds < 10, 1 / (preds + 1), 0.0)) - - return metrics - - -def evaluate_clotho_audiocaps( - model, data, epoch, args, autocast, device, tb_writer=None -): - """ - Adapted from https://github.com/XinhaoMei/audio-text_retrieval/blob/main/tools/utils.py. - 1. for text-to-audio retrieval, do 5 times and average the results - 2. for R@1, R@5, R@10 in audio-to-text retrieval, take the best rank among 5 text - 3. for map@10 in audio-to-text retrieval: - 3.1: sort the rank of 5 text - 3.2: exclude the rank >=10 (0-index) - 3.3: compute the map regarding the remaining ranks: np.mean(np.arange(1, len(ranks)+1) / ranks). - (3.3) That is, take the top ranks of 5 text that is < 10, and assign the descending number as ground truth. - (3.3) E.g.: the ground truth of first rank of the 5 text should be 1, the second rank should be 2, etc. 
- """ - # TODO: (yusong) only support single GPU evaluation and only support non-mlp case for now. - dataloader = data["val"].dataloader - with torch.no_grad(): - eval_info = {} - for i, batch in enumerate(dataloader): - audios = batch # contains mel_spec, wavform, and longer list - - # each item in the list has 5 texts - if args.tmodel == "transformer": - from open_clip import tokenize - - texts = [tokenize(t) for t in batch["full_text"]] - texts = torch.cat(texts) - else: - from .data import tokenizer - - texts = [ - tokenizer(t) for t in batch["full_text"] - ] # 5 texts for each audio - texts = { - k: torch.cat([t[k] for t in texts]) for k in texts[0].keys() - } # 5 x batch - - # audios = audios.to(device=device, non_blocking=True) - - all_names = list( - set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]]) - ) - for name in all_names: - if name not in eval_info.keys(): - # we will not use mlp outputs even if args.clap_mlploss=True - eval_info[name] = { - "cumulative_loss": 0.0, - "num_samples": 0, - "all_audio_features": [], - "all_text_features": [], - } - with autocast(): - audio_features = model(audios, None, device) - text_features = model(None, texts, device) - audio_features = F.normalize(audio_features, dim=-1) - text_features = F.normalize(text_features, dim=-1) - - all_names = list( - set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]]) - ) - for n in all_names: - idx = np.where( - np.array( - ["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]] - ) - == n - )[0] - eval_info[n]["all_audio_features"].append( - audio_features.cpu().index_select(0, torch.tensor(idx).long()) - ) - # (yusong) please double-check. This is for selecting 5 text features at once. - # because idx is a list of indices in size of num_samples, - # and text_features is a tensor of size (5*num_samples, dim) - # so we need to select 5 consecutive indices at once for a single index in idx. - eval_info[n]["all_text_features"].append( - text_features.cpu() - .reshape([-1, 5, text_features.shape[1]]) - .index_select(0, torch.tensor(idx).long()) - .reshape([-1, text_features.shape[1]]) - ) - - val_metrics_all = {} - - for n in eval_info.keys(): - logit_scale_a, logit_scale_t = model(None, None, device) - logit_scale_a = logit_scale_a.cpu() - - audio_features = torch.cat(eval_info[n]["all_audio_features"], dim=0) - text_features = torch.cat(eval_info[n]["all_text_features"], dim=0) - - logits_per_audio = ( - (logit_scale_a * audio_features @ text_features.t()).detach().cpu() - ) - logits_per_text = logits_per_audio.t().detach().cpu() - - # logits_per_audio shape: [num_samples, num_samples*5] - # logits_per_text shape: [num_samples*5, num_samples] - - logging.info( - f"dataset {n}, logits_per_audio shape: {logits_per_audio.shape}, " - f"logits_per_text shape: {logits_per_text.shape}" - ) - - metrics = {} - num_samples = audio_features.shape[0] - metrics[f"num_samples"] = num_samples - - # (yusong) the following code is very important, please double-check: - # logits_per_audio.reshape(num_samples, num_samples, 5)[:, :, d] - # logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :] - # Those two are retrieving one of the 5 text for each audio. 
- labels = torch.arange(audio_features.shape[0]).long() - audio_to_text_loss = [ - F.cross_entropy( - logits_per_audio.reshape(num_samples, num_samples, 5)[:, :, d], - labels, - ) - for d in range(5) - ] - text_to_audio_loss = [ - F.cross_entropy( - logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :], - labels, - ) - for d in range(5) - ] - total_loss = (np.mean(audio_to_text_loss) + np.mean(text_to_audio_loss)) / 2 - - metrics[f"cumulative_loss"] = total_loss.item() - - # text to audio: do 5 times - pred_text = [] - for d in range(5): - logit = logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :] - ground_truth = torch.arange(len(logit)).view(-1, 1) - ranking = torch.argsort( - logit, descending=True - ) # [num_samples, num_samples] - preds = torch.where(ranking == ground_truth)[1] - pred_text.append(preds.detach().cpu().numpy()) - pred_text_concat = np.concatenate(pred_text, axis=0) # [5*num_samples] - metrics[f"text_to_audio_mean_rank"] = pred_text_concat.mean() + 1 - metrics[f"text_to_audio_median_rank"] = ( - np.floor(np.median(pred_text_concat)) + 1 - ) - for k in [1, 5, 10]: - metrics[f"text_to_audio_R@{k}"] = np.mean(pred_text_concat < k) - # map@10 - metrics[f"text_to_audio_mAP@10"] = np.mean( - np.where(pred_text_concat < 10, 1 / (pred_text_concat + 1), 0.0) - ) - - # audio to text: take the best result - # for audio to text map 10, sort and assign descending ground truth. - # see https://github.com/XinhaoMei/audio-text_retrieval/blob/main/tools/utils.py#L103 - # map@10 - map_all = [] - pred_audio_all = [] - for d in range(num_samples): - # logits_per_audio: [num_samples, num_samples*5] - logit_single = logits_per_audio[d, :] # [5*num_samples] - # Ground-truth index: [d*5, d*5+1, d*5+2, d*5+3, d*5+4] - ranking = torch.argsort( - logit_single, descending=True - ) # [5*num_samples] - # ranking: the index of first match, second match, ... - ground_truth = torch.arange(d * 5, d * 5 + 5)[None] - all_pred = torch.where( - torch.stack([ranking] * 5) == ground_truth.view(-1, 1) - )[1] - min_pred = torch.min(all_pred) - pred_audio_all.append(min_pred.detach().cpu().numpy()) - all_pred_filter = all_pred[all_pred < 10].detach().cpu().numpy() - # /5 because we have 5 text, so it means for the text rank >=10 we count as 0. - map_single = ( - np.sum( - (np.arange(1, len(all_pred_filter) + 1) / (all_pred_filter + 1)) - ) - / 5 - ) - map_all.append(map_single) - metrics[f"audio_to_text_mAP@10"] = np.mean(map_all) - for k in [1, 5, 10]: - metrics[f"audio_to_text_R@{k}"] = np.mean(np.array(pred_audio_all) < k) - - val_metrics_all[n] = {n + "/" + k: v for k, v in metrics.items()} - return val_metrics_all - - -def calculate_selection_performance_clotho_audiocaps(val_metrics_per_dataset): - """ - Calculate performance for Clotho+AudioCaps for model selection. 
- """ - selection_performance_all = [] - for n in val_metrics_per_dataset.keys(): - selection_performance = ( - val_metrics_per_dataset[n][f"{n}/audio_to_text_mAP@10"] - + val_metrics_per_dataset[n][f"{n}/text_to_audio_mAP@10"] - ) / 2 - selection_performance_all.append(selection_performance) - return np.mean(selection_performance_all) - - -def select_top_metric_clotho_audiocaps(metrics, val_metrics_per_dataset, args): - # val_metrics_per_dataset: dict, key: dataset name, value: dict, key: metric name, value: metric value - # metrics: dict, key: metric name, value: metric value - # Hack: use args to save the top performance - if not hasattr(args, "top_selection_performance"): - selection_performance = calculate_selection_performance_clotho_audiocaps( - val_metrics_per_dataset - ) - # TODO: write the if and else together - metric_update = {} - for n in val_metrics_per_dataset.keys(): - for k in val_metrics_per_dataset[n].keys(): - metric_update[ - k.split("/")[0] + "-top" + "/" + k.split("/")[1] - ] = val_metrics_per_dataset[n][k] - metric_update["top_selection_performance"] = selection_performance - metric_update["top-selection-epoch"] = metrics["epoch"] - metrics.update(metric_update) - args.top_metric = metric_update - args.top_selection_performance = selection_performance - else: - selection_performance_new = calculate_selection_performance_clotho_audiocaps( - val_metrics_per_dataset - ) - selection_performance_old = args.top_selection_performance - if selection_performance_new > selection_performance_old: - metric_update = {} - for n in val_metrics_per_dataset.keys(): - for k in val_metrics_per_dataset[n].keys(): - metric_update[ - k.split("/")[0] + "-top" + "/" + k.split("/")[1] - ] = val_metrics_per_dataset[n][k] - metric_update["top_selection_performance"] = selection_performance_new - metric_update["top-selection-epoch"] = metrics["epoch"] - metrics.update(metric_update) - args.top_metric = metric_update - args.top_selection_performance = selection_performance_new - else: - metrics.update(args.top_metric) - return metrics diff --git a/spaces/Benson/text-generation/Examples/Cricket Carrera 2016 Mod Apk Android 1.md b/spaces/Benson/text-generation/Examples/Cricket Carrera 2016 Mod Apk Android 1.md deleted file mode 100644 index ceb62ae162cdb693a4d040172ae710d9f6ce9cc1..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cricket Carrera 2016 Mod Apk Android 1.md +++ /dev/null @@ -1,60 +0,0 @@ -
    -

    Cricket Career 2016 Mod Apk Android 1: A Review

    -

    If you are a cricket fan and want to experience the thrill of playing the game on your Android device, you should try Cricket Career 2016 Mod Apk Android 1. This is a modified version of the original game that gives you unlimited money and access to all features. In this article, we review the game and explain how to download and install it on your device.

    -

    What is Cricket Career 2016 Mod Apk Android 1?

    -

    Cricket Career 2016 Mod Apk Android 1 is a 3D cricket game designed specifically for cricket fans and players. In this game, you build a career for your character by having him play cricket in different tournaments and work his way to the top. The game features 10 cricket-playing nations and around 300 different domestic teams. You can also customize your character's appearance, skills, and equipment. The game has realistic graphics and gameplay that make you feel as if you were on the field.

    -

    cricket career 2016 mod apk android 1


    DOWNLOAD →→→ https://bltlly.com/2v6Mid



    -

    Features of Cricket Career 2016 Mod Apk Android 1

    -

    The game has many features that make it fun and exciting to play. Here are some of them:

    -

    Unlimited money

    -

    One of the best features of the mod apk is that it gives you unlimited money. You can use this money to buy whatever you want in the game, such as new equipment, skills, or outfits. You can also upgrade your character's attributes and abilities. This way, you can make your character stronger and more competitive.

    -

    Realistic graphics and gameplay

    -

    The game has impressive graphics and animations that make you feel as if you were watching a real cricket match. It also has realistic sound effects and commentary that enhance the experience. There are different modes, such as career mode, quick match, tournament mode, and challenge mode, and you can choose between difficulty levels such as easy, medium, hard, or expert.

    Customizable character and career

    -

    The game lets you create your own character and choose his name, nationality, age, and appearance. You can also customize his skills, equipment, and style. You start your career as a rookie and play in different tournaments and leagues. You can earn fame and popularity by performing well in matches, and you can interact with fans, sponsors, coaches, and the media.

    -

    How to download and install Cricket Career 2016 Mod Apk Android 1?

    -

    If you want to download and install the game on your Android device, follow these steps:

    -

    Step 1: Download the Apk file

    -

    You can download the apk file from [this link]( i ). The file is about 80 MB in size and is safe and virus-free.
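
    For readers who would rather fetch the file from a computer first, the download can also be scripted. This is only a minimal sketch: it assumes the `requests` package is installed, and both the URL and the output file name below are placeholders, not the real download link.

```python
import requests

# Placeholder values -- substitute the actual download URL and a file name of your choice.
APK_URL = "https://example.com/cricket-career-2016-mod.apk"
OUT_FILE = "cricket-career-2016-mod.apk"

# Stream the response so an ~80 MB file is written in chunks instead of loaded into memory.
with requests.get(APK_URL, stream=True, timeout=60) as response:
    response.raise_for_status()
    with open(OUT_FILE, "wb") as fh:
        for chunk in response.iter_content(chunk_size=1 << 20):  # 1 MB chunks
            fh.write(chunk)

print(f"Saved {OUT_FILE}")
```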

    -

    -

    Step 2: Enable unknown sources

    -

    Before installing the apk file, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown sources and turn it on.

    -

    Step 3: Install the Apk file

    -

    After enabling unknown sources, open your file manager and locate the downloaded apk file. Tap it and follow the instructions to install it on your device.
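
    If you would rather sideload the file from a computer than use a file manager on the phone, the same install can be done over ADB. A small sketch, assuming `adb` is installed and on your PATH, USB debugging is enabled on the phone, and the file name below is a placeholder:

```python
import subprocess

APK_PATH = "cricket-career-2016-mod.apk"  # placeholder file name

# "adb install -r" installs the package, replacing any existing version of the app.
result = subprocess.run(
    ["adb", "install", "-r", APK_PATH],
    capture_output=True,
    text=True,
    check=False,
)

# adb reports success or failure on stdout/stderr.
print(result.stdout or result.stderr)
```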

    -

    Step 4: Enjoy the game

    -

    Once the game is installed, you can launch it from your app drawer and enjoy playing cricket on your device. You can also sign in with your Google Play account to save your progress and achievements.

    -

    Pros and cons of Cricket Career 2016 Mod Apk Android 1

    -

    Like any other game, Cricket Career 2016 Mod Apk Android 1 has its pros and cons. Here are some of them:

    -

    Pros

    -
      -
    • The game is free to download and play.
    • -
    • The game gives you unlimited money and access to all features.
    • -
    • The game has realistic graphics and gameplay.
    • -
    • The game has a customizable character and career.
    • -
    • The game has different modes and difficulty levels.
    • -
    -

    Cons

    -
      - -
    • The game may have some bugs or glitches.
    • -
    • The game may require a stable internet connection for some features.
    • -
    • The game may consume a lot of battery and storage space.
    • -
    -

    Conclusion

    -

    Cricket Career 2016 Mod Apk Android 1 is a great game for cricket fans and players. It gives you the chance to create your own character and career in the world of cricket, and you can enjoy unlimited money and access to all features. The game has realistic graphics and gameplay that make you feel as if you were on the field, and it is easy to download and install on your device. However, it also has some drawbacks, such as compatibility issues, bugs, or internet requirements. You should weigh the pros and cons before deciding to play it.

    -

    Frequently asked questions

    -

    Here are some frequently asked questions about Cricket Career 2016 Mod Apk Android 1:

    -
      -
    1. Is it safe to download and install Cricket Career 2016 Mod Apk Android 1?
    2. -

      Yes, the apk file is safe and virus-free. However, you should always download it from a trusted source and enable unknown sources on your device before installing it.

      -
    3. Can I play Cricket Career 2016 Mod Apk Android 1 offline?
    4. -

      You can play the game offline, but you may need an internet connection for some features, such as saving your progress or joining online tournaments.

      -
    5. Can I play Cricket Career 2016 Mod Apk Android 1 with my friends?
    6. -

      Yes, you can play the game with your friends by connecting with them through Google Play or Facebook. You can also challenge them in different modes or tournaments.

      -
    7. How can I update Cricket Career 2016 Mod Apk Android 1?
    8. -

      You can update the game by downloading the latest version of the apk file from [this link]( i ) and installing it on your device. You can also check for updates in the game settings.

      - -

      You can contact the game's developers by emailing them at support@zealcity.com or visiting their website at www.zealcity.com. You can also follow them on Facebook or Twitter for more updates and news.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cubo Solver Descarga Apk.md b/spaces/Benson/text-generation/Examples/Cubo Solver Descarga Apk.md deleted file mode 100644 index e35e544b70ba84a8fe02b8ef0cf7d47a5cb297a6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cubo Solver Descarga Apk.md +++ /dev/null @@ -1,55 +0,0 @@ - -

    Cube Solver Download APK: How to solve any cube puzzle with your Android device

    -

    Do you love playing with cube puzzles but get frustrated when you cannot solve them? Do you want to learn how to solve any cube puzzle in minutes or even seconds? Do you want to have fun and challenge yourself with different types of cube puzzles? If you answered yes to any of these questions, you may want to download a cube solver app for your Android device.

    -

    cube solver download apk


    Downloadhttps://bltlly.com/2v6LSF



    -

    A cube solver app is a piece of software that can scan your cube puzzle and give you the optimal solution in a few steps. You can use it to learn how to solve a cube puzzle, or to check your own solution. You can also use it to have fun and challenge yourself with different types of cube puzzles, such as the pocket cube, mirror cube, tower cube, Rubik's Cube, Rubik's Revenge, Skewb and more.

    -

    In this article, we review some of the best cube solver apps you can download for free on your Android device. We also show you how to use them, how to install them and where to find them. Let's get started!

    -

    Cube Solver for Android: A simple and fast app for the Pocket Cube, Mirror Cube and Tower Cube

    -

    If you are looking for a simple, fast app that can solve some of the easier cube puzzles, you may want to try Cube Solver for Android. This app can solve three types of cube puzzle (a quick check of the combination counts quoted below follows this list):

    -
      -
    • Pocket Cube: a 2x2x2 version of the Rubik's Cube. It has 8 pieces and about 3.7 million possible combinations.
    • -
    • Mirror Cube: a 3x3x3 version of the Rubik's Cube that uses different shapes instead of colours. It has 26 pieces and 43 quintillion possible combinations.
    • -
    • Tower Cube: a 3x3x2 version of the Rubik's Cube with two layers instead of three. It has 18 pieces and 3.7 billion possible combinations.
    • -
    - -
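
    The combination counts quoted in this list are easy to sanity-check. The short Python sketch below reproduces the standard counting formulas for a 2x2x2 pocket cube and for a 3x3x3 cube (which the mirror cube shares, since it is a shape modification of the 3x3x3). It is only an illustration of where the "3.7 million" and "43 quintillion" figures come from; it is not part of the app.

    ```python
    from math import factorial

    # Pocket cube (2x2x2): fix one corner as a reference; the remaining
    # 7 corners can be permuted (7!) and 6 of them twisted freely (3^6),
    # the last twist being forced.
    pocket = factorial(7) * 3**6            # 3,674,160  (~3.7 million)

    # 3x3x3 cube: 8 corners (8! * 3^7) and 12 edges (12! * 2^11),
    # halved for the permutation-parity constraint.
    rubiks = factorial(8) * 3**7 * factorial(12) * 2**11 // 2   # ~4.3 * 10^19

    print(f"Pocket cube positions: {pocket:,}")
    print(f"3x3x3 cube positions:  {rubiks:,}")
    ```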

    To download this app, visit the Google Play Store and search for Cube Solver for Android. The app is free and has a rating of 4.4 out of 5. You can also scan this QR code to download the app directly:

    -

    QR code for Cube Solver for Android

    -

    -

    To install the app, simply open the downloaded file and follow the instructions. You will need to allow the app to access your camera and storage. The app is compatible with Android 4.1 and later.

    -

    AZ Rubik's Cube Solver for Android: A fun game that teaches you how to solve a Rubik's Cube

    -

    If you are looking for a fun game that can teach you how to solve a Rubik's Cube, you may want to try AZ Rubik's Cube Solver for Android. This app is not just a solver but also a trainer and a simulator for the classic 3x3x3 cube puzzle.

    -

    The Rubik's Cube is one of the most popular and challenging puzzles in the world. It has 6 faces, each with 9 stickers in one of 6 colours. It has 54 stickers across 26 pieces and 43 quintillion possible combinations. The goal is to make every face of the cube a single colour.

    -

    To use the app, you can scan your real cube with your camera or use the virtual cube on screen. The app shows the solution in four stages: white cross, white corners, middle layer and yellow face. You can also learn the basic steps and algorithms for solving a Rubik's Cube with the app's tutorial mode.

    -

    To download this app, visit the Google Play Store and search for AZ Rubik's Cube Solver. The app is free and has a rating of 4.5 out of 5. You can also scan this QR code to download the app directly:

    -

    QR code for AZ Rubik's Cube Solver

    - -

    If you are looking for a powerful app that can solve some of the more advanced cube puzzles, you may want to try Cube Solver APK. This app can solve four types of cube puzzle:

    -
      -
    • Rubik's Revenge: a 4x4x4 version of the Rubik's Cube. It has 56 pieces and 7.4 quattuordecillion possible combinations.
    • -
    • Skewb: a 3x3x3-shaped variant of the Rubik's Cube with 8 corners and 6 centres. It has 14 pieces and 43.2 million possible combinations.
    • -
    • Pyraminx: a tetrahedron-shaped puzzle with 4 faces, each with 9 stickers in one of 4 colours. It has 10 pieces and 933,120 possible combinations.
    • -
    • Megaminx: a dodecahedron-shaped puzzle with 12 faces, each with 11 stickers in one of 12 colours. It has 50 pieces and 100 novemdecillion possible combinations.
    • -
    -

    To use the app, simply scan your cube puzzle with your camera and tap the Solve button. The app shows the solution in 63 moves or fewer for Rubik's Revenge, and 11 moves or fewer for the Skewb, Pyraminx and Megaminx. You can also follow the solution step by step with the app's animation and voice guide.

    -

    To download this app, visit the APKPure website and search for Cube Solver APK. The app is free and has a rating of 4.2 out of 5. You can also scan this QR code to download the app directly:

    -

    QR code for Cube Solver APK

    -

    To install the app, you will need to enable installation of apps from unknown sources in your device settings. Then simply open the downloaded file and follow the instructions. The app is compatible with Android 4.0.3 and later.

    -

    Conclusion

    - -

    There are many cube solver apps you can download for free on your Android device. We have reviewed some of the best in this article, but you can also explore other options on the Google Play Store or other websites. Just make sure an app is safe and reliable before downloading it.

    -

    So what are you waiting for? Download your favourite cube solver app today and start solving cubes with ease and fun!

    -

    Frequently asked questions

    -
      -
    • Q: Is using a cube solver app cheating?
      -A: No, cube solver apps are not cheating. They are just tools that can help you learn how to solve a cube puzzle or check your own solution. You still need to use your own logic and skills to apply the solution to your cube.
    • -
    • Q: How can I improve my cube-solving skills?
      -A: You can improve your cube-solving skills by practising regularly, learning new algorithms and methods, timing yourself, and challenging yourself with different types of cube puzzles.
    • -
    • Q: What other cube puzzles can I try?
      -A: There are many other cube puzzles you can try, such as the Square-1, Mastermorphix, Ghost Cube, Gear Cube, Mirror Blocks, Axis Cube, Fisher Cube, Windmill Cube and more.
    • -
    • Q: How can I create my own cube puzzles?
      -A: You can create your own cube puzzles by modifying existing ones, or by using online tools such as CubeTwist or CubeDesigner.
    • -
    • Q: Where can I find more information and resources about cube puzzles?
      -A: You can find more information and resources about cube puzzles on websites such as Speedsolving.com, Ruwix.com and TwistyPuzzles.com, or on YouTube channels such as J Perm, CrazyBadCuber, RedKB and CubeHead.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar El Mixtape Ms Caliente.md b/spaces/Benson/text-generation/Examples/Descargar El Mixtape Ms Caliente.md deleted file mode 100644 index 637a6dc2c61baa77019c0ffd856bfccf02522e81..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar El Mixtape Ms Caliente.md +++ /dev/null @@ -1,59 +0,0 @@ - -

    How to download the hottest mixtape of 2023

    -

    If you are looking for a new and exciting way to enjoy music, you may want to try downloading a mixtape. A mixtape is a compilation of songs from various sources, usually selected by a single person or artist, that can give you a diverse and personalised listening experience. Mixtapes can also help you discover new artists and genres you may not have heard before. In this article, we show you how to find and download the best mixtapes online, how to use YouTube Music's offline mixtape feature, and how listening to mixtapes can benefit your brain and wellbeing.

    -

    What is a mixtape and why you should listen to one

    -

    A mixtape is a compilation of music, typically from multiple sources, recorded onto a single medium. With origins in the 1980s, the term usually describes a homemade compilation of music on a cassette tape, CD or digital playlist. The songs are either ordered sequentially or turned into a continuous programme by beatmatching the tracks and creating seamless transitions at their beginnings and endings with fades or abrupt edits.
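
    To get a feel for what those seamless transitions involve, here is a minimal sketch of joining a few tracks with crossfades using the pydub library (which requires ffmpeg to be installed). The track file names are placeholders, and a real DJ-style mixtape would also involve beatmatching, which this sketch does not attempt.

    ```python
    from pydub import AudioSegment

    tracks = ["track1.mp3", "track2.mp3", "track3.mp3"]  # placeholder file names

    mix = AudioSegment.from_file(tracks[0])
    for path in tracks[1:]:
        nxt = AudioSegment.from_file(path)
        # Overlap the end of the running mix with the start of the next song
        # for a 3-second crossfade instead of an abrupt cut.
        mix = mix.append(nxt, crossfade=3000)  # crossfade length in milliseconds

    mix.export("my_mixtape.mp3", format="mp3")
    ```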

    -

    download the hottest mixtape


    Download Zip ✒ ✒ ✒ https://bltlly.com/2v6IE2



    -

    Mixtapes can give you a diverse and personalised musical experience, since they reflect the taste, mood and personality of the person who made them. You can listen to mixtapes made by your favourite artists, your friends or even yourself. You can also explore different genres, styles and themes through mixtapes, as they often include songs that are not available on mainstream platforms or radio stations.

    -

    Where to find and download the best mixtapes online

    -

    There are many websites and apps that let you find and download mixtapes for free. Here are some of the best:

    -

    DatPiff

    -

    DatPiff is one of the most popular and authoritative mixtape websites, with more than 20 million downloads a month. DatPiff hosts thousands of mixtapes from both established and up-and-coming artists, as well as exclusive releases from celebrities such as Drake, Lil Wayne, Kendrick Lamar and more. You can browse mixtapes by genre, popularity, rating or date, and download them for free or stream them online. You can also create your own account and upload your own mixtapes, as well as rate and comment on other users' mixtapes. DatPiff is available on the web and on iOS and Android devices.

    -

    Spinrilla

    -

    Spinrilla is another great option for finding and downloading mixtapes, especially if you are a fan of hip-hop and rap. Spinrilla offers a huge collection of mixtapes from both mainstream and underground artists, as well as exclusive premieres and original content. You can discover new music by browsing the charts, the trending section or the featured section, or by searching by artist, album or song. You can also follow your favourite artists and get notified when they release new mixtapes. Spinrilla lets you download mixtapes for offline listening, as well as create your own playlists and share them with your friends. Spinrilla is available on the web and on iOS and Android devices.

    -

    DaMixhub

    - -

    How to use YouTube Music's offline mixtape feature

    -

    If you are looking for a more personalised and convenient way to download mixtapes, you may want to try YouTube Music's offline mixtape feature. YouTube Music is a music streaming service that also lets you download music for offline listening, so you can enjoy your favourite songs without using data or Wi-Fi.

    -

    The offline mixtape feature automatically downloads a playlist of songs based on your preferences, such as your listening history, your likes and your dislikes. The offline mixtape can hold up to 100 songs, depending on how much storage space you have on your device. You can also customise the number of songs, the quality and the refresh frequency of your offline mixtape in the settings.
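
    How much space those 100 songs take depends mostly on the audio quality you select. The bitrates below are illustrative assumptions rather than YouTube Music's documented values, but they give a rough back-of-the-envelope estimate:

    ```python
    # Rough size estimate for an offline playlist of 4-minute songs.
    def playlist_size_mb(num_songs: int, minutes_per_song: float, kbps: int) -> float:
        bits = kbps * 1000 * minutes_per_song * 60 * num_songs
        return bits / 8 / 1_000_000  # megabytes

    for kbps in (48, 128, 256):  # assumed low / normal / high quality
        size = playlist_size_mb(100, 4, kbps)
        print(f"{kbps} kbps: ~{size:.0f} MB for 100 songs")
    ```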

    -

    To use the offline mixtape feature, you need a YouTube Music Premium subscription, which costs $9.99 per month for an individual plan or $14.99 per month for a family plan. You also need the YouTube Music app installed on your iOS or Android device. Here are the steps to use the offline mixtape feature:

    -

    -
      -
    1. Open the YouTube Music app and tap your profile picture in the top-right corner.
    2. -
    3. Tap Downloads, then tap Offline Mixtape.
    4. -
    5. Tap Settings and adjust the number of songs, the quality and the refresh frequency of your offline mixtape.
    6. -
    7. Tap Done and wait for your offline mixtape to download.
    8. -
    9. Enjoy listening to your offline mixtape anytime, anywhere.
    10. -
    -

    The benefits of listening to mixtapes for your brain and wellbeing

    -

    Listening to mixtapes can not only give you a fun and varied musical experience but also benefit your brain and wellbeing in several ways. Here are some of the benefits of listening to mixtapes:

    -

    Music can stimulate your brain and improve your cognitive functions

    - -

    Music can also reduce your stress and anxiety levels and boost your mood

    -

    Music can have a positive effect on your mood and emotions by releasing dopamine, serotonin, oxytocin and endorphins in your brain. These are neurotransmitters responsible for feelings of happiness, pleasure, relaxation, love and pain relief. Music can also lower your levels of cortisol, a hormone associated with stress, anxiety and inflammation. Music can help you cope with negative emotions such as anger, sadness, fear and loneliness, and it can lift your mood and make you feel more optimistic, confident and motivated.

    -

    Music can boost your creativity and learning skills

    -

    Music can spark your imagination and inspire you to think outside the box. It can also help you learn new things by improving your memory, attention and comprehension. Listening to music while you study or work can improve your retention of information as well as your productivity and efficiency. Music can also help you learn new languages by exposing you to different sounds, rhythms and vocabularies.

    -

    Conclusion

    -

    Downloading a mixtape can be a great way to enjoy music in a new and exciting way. You can find and download thousands of mixtapes online from various websites and apps, such as DatPiff, Spinrilla and DaMixhub. You can also use YouTube Music's offline mixtape feature to automatically download a personalised playlist of songs based on your preferences. Listening to mixtapes can benefit your brain and wellbeing by stimulating your cognitive functions, reducing your stress and anxiety levels, lifting your mood, and boosting your creativity and learning skills. So what are you waiting for? Download the hottest mixtape of 2023 today and enjoy the music!

    -

    Frequently asked questions

    -

    What are some of the best mixtape websites?

    - -

    How can I download mixtapes for free?

    -

    You can download mixtapes for free from various websites and apps that host mixtapes, such as DatPiff, Spinrilla, DaMixhub, LiveMixtapes, My Mixtapez, Audiomack, Mixtape Monkey and Mixtape Factory. You can also use YouTube Music's offline mixtape feature to automatically download a personalised playlist of songs based on your preferences.

    -

    How can I make my own mixtape?

    -

    You can make your own mixtape by selecting songs from various sources that you like or that fit a certain theme or mood. You can use software or apps that let you edit audio files, such as Audacity, GarageBand or Soundtrap. You can also use online tools that let you create playlists or mixtapes, such as 8tracks, Playlist.com or Tape.ly. You can then upload your mixtape to a website or app that hosts mixtapes, such as DatPiff, Spinrilla, DaMixhub, LiveMixtapes, My Mixtapez, Audiomack, Mixtape Monkey or Mixtape Factory. You can also share your mixtape with your friends or the public on social media or other platforms.

    -

    What are some of the hottest mixtapes of 2023?

    -

    Some of the hottest mixtapes of 2023 are:

    | Title | Artist | Genre |
    | --- | --- | --- |
    | Life After Death | Pop Smoke | Hip-hop/Rap |
    | Planet Her | Doja Cat | R&B/Pop |
    | Certified Lover Boy | Drake | Hip-hop/Rap |
    | Sour | Olivia Rodrigo | Pop/Rock |
    | Happier Than Ever | Billie Eilish | Pop/Alternative |

    These mixtapes have received critical acclaim and commercial success, as well as millions of streams and downloads. They feature some of the most popular and talented artists in the music industry, as well as some of the catchiest and most innovative songs of the year.

    -

    How can I share my mixtape with others?

    -
    -
    \ No newline at end of file diff --git a/spaces/BilalSardar/Black-N-White-To-Color/app.py b/spaces/BilalSardar/Black-N-White-To-Color/app.py deleted file mode 100644 index fe0e618f5f1ffc95673edb92c7a7a254d6d6267c..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/Black-N-White-To-Color/app.py +++ /dev/null @@ -1,54 +0,0 @@ - -# Import statements -import numpy as np -import cv2 -import gradio as gr - - -PROTOTXT = "colorization_deploy_v2.prototxt" -POINTS = "pts_in_hull.npy" -MODEL = "colorization_release_v2.caffemodel" - -# Load the Model -print("Load model") -net = cv2.dnn.readNetFromCaffe(PROTOTXT, MODEL) -pts = np.load(POINTS) - -# Load centers for ab channel quantization used for rebalancing. -class8 = net.getLayerId("class8_ab") -conv8 = net.getLayerId("conv8_313_rh") -pts = pts.transpose().reshape(2, 313, 1, 1) -net.getLayer(class8).blobs = [pts.astype("float32")] -net.getLayer(conv8).blobs = [np.full([1, 313], 2.606, dtype="float32")] - -# Load the input image -def colorizedTheImage(image): - scaled = image.astype("float32") / 255.0 - lab = cv2.cvtColor(scaled, cv2.COLOR_BGR2LAB) - - resized = cv2.resize(lab, (224, 224)) - L = cv2.split(resized)[0] - L -= 50 - - print("Colorizing the image") - net.setInput(cv2.dnn.blobFromImage(L)) - ab = net.forward()[0, :, :, :].transpose((1, 2, 0)) - - ab = cv2.resize(ab, (image.shape[1], image.shape[0])) - - L = cv2.split(lab)[0] - colorized = np.concatenate((L[:, :, np.newaxis], ab), axis=2) - - colorized = cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR) - colorized = np.clip(colorized, 0, 1) - - colorized = (255 * colorized).astype("uint8") - colorized = cv2.cvtColor(colorized, cv2.COLOR_RGB2BGR) - return colorized - -demo=gr.Interface(fn=colorizedTheImage, - inputs=["image"], - outputs=["image"], - examples=[["einstein.jpg"],["tiger.jpg"],["building.jpg"],["nature.jpg"]], - title="Black&White To Color Image") -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/common.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/common.py deleted file mode 100644 index a076bb248c4a0d9b3260dba4549eed37400c9393..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/common.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. - -import os -import torch - -from detectron2.config import get_cfg -from detectron2.engine import default_setup -from detectron2.modeling import build_model - -from densepose import add_densepose_config - -_BASE_CONFIG_DIR = "configs" -_QUICK_SCHEDULES_CONFIG_SUB_DIR = "quick_schedules" -_CONFIG_FILE_PREFIX = "densepose_" -_CONFIG_FILE_EXT = ".yaml" - - -def _get_base_config_dir(): - """ - Return the base directory for configurations - """ - return os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", _BASE_CONFIG_DIR) - - -def _get_quick_schedules_config_dir(): - """ - Return the base directory for quick schedules configurations - """ - return os.path.join(_get_base_config_dir(), _QUICK_SCHEDULES_CONFIG_SUB_DIR) - - -def _collect_config_files(config_dir): - """ - Collect all configuration files (i.e. 
densepose_*.yaml) directly in the specified directory - """ - start = _get_base_config_dir() - results = [] - for entry in os.listdir(config_dir): - _, ext = os.path.splitext(entry) - if ext != _CONFIG_FILE_EXT: - continue - if not entry.startswith(_CONFIG_FILE_PREFIX): - continue - path = os.path.join(config_dir, entry) - config_file = os.path.relpath(path, start) - results.append(config_file) - return results - - -def get_config_files(): - """ - Get all the configuration files (relative to the base configuration directory) - """ - return _collect_config_files(_get_base_config_dir()) - - -def get_quick_schedules_config_files(): - """ - Get all the quick schedules configuration files (relative to the base configuration directory) - """ - return _collect_config_files(_get_quick_schedules_config_dir()) - - -def _get_model_config(config_file): - """ - Load and return the configuration from the specified file (relative to the base configuration - directory) - """ - cfg = get_cfg() - add_densepose_config(cfg) - path = os.path.join(_get_base_config_dir(), config_file) - cfg.merge_from_file(path) - if not torch.cuda.is_available(): - cfg.MODEL_DEVICE = "cpu" - return cfg - - -def get_model(config_file): - """ - Get the model from the specified file (relative to the base configuration directory) - """ - cfg = _get_model_config(config_file) - return build_model(cfg) - - -def setup(config_file): - """ - Setup the configuration from the specified file (relative to the base configuration directory) - """ - cfg = _get_model_config(config_file) - cfg.freeze() - default_setup(cfg, {}) diff --git a/spaces/CVPR/LIVE/thrust/thrust/random/detail/xor_combine_engine_max.h b/spaces/CVPR/LIVE/thrust/thrust/random/detail/xor_combine_engine_max.h deleted file mode 100644 index cfb5bdc831a601765c93aea82cb9cc9cd6bb8c91..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/random/detail/xor_combine_engine_max.h +++ /dev/null @@ -1,324 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ - -namespace random -{ - -namespace detail -{ - - -namespace math = thrust::detail::mpl::math; - - -namespace detail -{ - -// two cases for this function avoids compile-time warnings of overflow -template - struct lshift_w -{ - static const UIntType value = 0; -}; - - -template - struct lshift_w -{ - static const UIntType value = lhs << rhs; -}; - -} // end detail - - -template - struct lshift_w -{ - static const bool shift_will_overflow = rhs >= w; - - static const UIntType value = detail::lshift_w::value; -}; - - -template - struct lshift - : lshift_w::digits, lhs, rhs> -{}; - - -template - struct two_to_the_power - : lshift -{}; - - -template - class xor_combine_engine_max_aux_constants -{ - public: - static const result_type two_to_the_d = two_to_the_power::value; - static const result_type c = lshift::value; - - static const result_type t = - math::max< - result_type, - c, - b - >::value; - - static const result_type u = - math::min< - result_type, - c, - b - >::value; - - static const result_type p = math::log2::value; - static const result_type two_to_the_p = two_to_the_power::value; - - static const result_type k = math::div::value; -}; - - -template struct xor_combine_engine_max_aux; - - -template - struct xor_combine_engine_max_aux_case4 -{ - typedef xor_combine_engine_max_aux_constants constants; - - static const result_type k_plus_1_times_two_to_the_p = - lshift< - result_type, - math::plus::value, - constants::p - >::value; - - static const result_type M = - xor_combine_engine_max_aux< - result_type, - math::div< - result_type, - math::mod< - result_type, - constants::u, - constants::two_to_the_p - >::value, - constants::two_to_the_p - >::value, - math::mod< - result_type, - constants::t, - constants::two_to_the_p - >::value, - d - >::value; - - static const result_type value = math::plus::value; -}; - - -template - struct xor_combine_engine_max_aux_case3 -{ - typedef xor_combine_engine_max_aux_constants constants; - - static const result_type k_plus_1_times_two_to_the_p = - lshift< - result_type, - math::plus::value, - constants::p - >::value; - - static const result_type M = - xor_combine_engine_max_aux< - result_type, - math::div< - result_type, - math::mod< - result_type, - constants::t, - constants::two_to_the_p - >::value, - constants::two_to_the_p - >::value, - math::mod< - result_type, - constants::u, - constants::two_to_the_p - >::value, - d - >::value; - - static const result_type value = math::plus::value; -}; - - - -template - struct xor_combine_engine_max_aux_case2 -{ - typedef xor_combine_engine_max_aux_constants constants; - - static const result_type k_plus_1_times_two_to_the_p = - lshift< - result_type, - math::plus::value, - constants::p - >::value; - - static const result_type value = - math::minus< - result_type, - k_plus_1_times_two_to_the_p, - 1 - >::value; -}; - - -template - struct xor_combine_engine_max_aux_case1 -{ - static const result_type c = lshift::value; - - static const result_type value = math::plus::value; -}; - - -template - struct xor_combine_engine_max_aux_2 -{ - typedef xor_combine_engine_max_aux_constants constants; - - static const result_type value = - thrust::detail::eval_if< - // if k is odd... 
- math::is_odd::value, - thrust::detail::identity_< - thrust::detail::integral_constant< - result_type, - xor_combine_engine_max_aux_case2::value - > - >, - thrust::detail::eval_if< - // otherwise if a * 2^3 >= b, then case 3 - a * constants::two_to_the_d >= b, - thrust::detail::identity_< - thrust::detail::integral_constant< - result_type, - xor_combine_engine_max_aux_case3::value - > - >, - // otherwise, case 4 - thrust::detail::identity_< - thrust::detail::integral_constant< - result_type, - xor_combine_engine_max_aux_case4::value - > - > - > - >::type::value; -}; - - -template::value)> - struct xor_combine_engine_max_aux_1 - : xor_combine_engine_max_aux_case1 -{}; - - -template - struct xor_combine_engine_max_aux_1 - : xor_combine_engine_max_aux_2 -{}; - - -template - struct xor_combine_engine_max_aux - : xor_combine_engine_max_aux_1 -{}; - - -template - struct xor_combine_engine_max -{ - static const size_t w = std::numeric_limits::digits; - - static const result_type m1 = - math::min< - result_type, - result_type(Engine1::max - Engine1::min), - two_to_the_power::value - 1 - >::value; - - static const result_type m2 = - math::min< - result_type, - result_type(Engine2::max - Engine2::min), - two_to_the_power::value - 1 - >::value; - - static const result_type s = s1 - s2; - - static const result_type M = - xor_combine_engine_max_aux< - result_type, - m1, - m2, - s - >::value; - - // the value is M(m1,m2,s) lshift_w s2 - static const result_type value = - lshift_w< - result_type, - w, - M, - s2 - >::value; -}; // end xor_combine_engine_max - -} // end detail - -} // end random - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform_scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform_scan.h deleted file mode 100644 index 3f81434fc4f49afeb616d1b18678807909acebe3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform_scan.h +++ /dev/null @@ -1,68 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ - OutputIterator transform_inclusive_scan(thrust::execution_policy &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - UnaryFunction unary_op, - BinaryFunction binary_op); - -template -__host__ __device__ - OutputIterator transform_exclusive_scan(thrust::execution_policy &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - UnaryFunction unary_op, - T init, - AssociativeOperator binary_op); - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/extrema.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/extrema.h deleted file mode 100644 index 7bfa5a17d996990e38fdc3fe43ccfae90609a681..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/extrema.h +++ /dev/null @@ -1,139 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file extrema.h - * \brief Sequential implementations of extrema functions. 
- */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -__thrust_exec_check_disable__ -template -__host__ __device__ -ForwardIterator min_element(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate comp) -{ - // wrap comp - thrust::detail::wrapped_function< - BinaryPredicate, - bool - > wrapped_comp(comp); - - ForwardIterator imin = first; - - for(; first != last; ++first) - { - if(wrapped_comp(*first, *imin)) - { - imin = first; - } - } - - return imin; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ -ForwardIterator max_element(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate comp) -{ - // wrap comp - thrust::detail::wrapped_function< - BinaryPredicate, - bool - > wrapped_comp(comp); - - ForwardIterator imax = first; - - for(; first != last; ++first) - { - if(wrapped_comp(*imax, *first)) - { - imax = first; - } - } - - return imax; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ -thrust::pair minmax_element(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - BinaryPredicate comp) -{ - // wrap comp - thrust::detail::wrapped_function< - BinaryPredicate, - bool - > wrapped_comp(comp); - - ForwardIterator imin = first; - ForwardIterator imax = first; - - for(; first != last; ++first) - { - if(wrapped_comp(*first, *imin)) - { - imin = first; - } - - if(wrapped_comp(*imax, *first)) - { - imax = first; - } - } - - return thrust::make_pair(imin, imax); -} - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/scatter.h deleted file mode 100644 index d6817b4cb26b5cd1c1df763cba26bfed74ad47f1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/scatter.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special scatter functions - diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/trident_roi_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/trident_roi_head.py deleted file mode 100644 index 245569e50b45cc8e21ba8e7210edf4bd0c7f27c5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/trident_roi_head.py +++ /dev/null @@ -1,119 +0,0 @@ -import torch -from mmcv.ops import batched_nms - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - multiclass_nms) -from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead -from ..builder import HEADS - - -@HEADS.register_module() -class TridentRoIHead(StandardRoIHead): - """Trident roi head. 
- - Args: - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - """ - - def __init__(self, num_branch, test_branch_idx, **kwargs): - self.num_branch = num_branch - self.test_branch_idx = test_branch_idx - super(TridentRoIHead, self).__init__(**kwargs) - - def merge_trident_bboxes(self, trident_det_bboxes, trident_det_labels): - """Merge bbox predictions of each branch.""" - if trident_det_bboxes.numel() == 0: - det_bboxes = trident_det_bboxes.new_zeros((0, 5)) - det_labels = trident_det_bboxes.new_zeros((0, ), dtype=torch.long) - else: - nms_bboxes = trident_det_bboxes[:, :4] - nms_scores = trident_det_bboxes[:, 4].contiguous() - nms_inds = trident_det_labels - nms_cfg = self.test_cfg['nms'] - det_bboxes, keep = batched_nms(nms_bboxes, nms_scores, nms_inds, - nms_cfg) - det_labels = trident_det_labels[keep] - if self.test_cfg['max_per_img'] > 0: - det_labels = det_labels[:self.test_cfg['max_per_img']] - det_bboxes = det_bboxes[:self.test_cfg['max_per_img']] - - return det_bboxes, det_labels - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation as follows: - - 1. Compute prediction bbox and label per branch. - 2. Merge predictions of each branch according to scores of - bboxes, i.e., bboxes with higher score are kept to give - top-k prediction. - """ - assert self.with_bbox, 'Bbox head must be implemented.' - det_bboxes_list, det_labels_list = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - num_branch = self.num_branch if self.test_branch_idx == -1 else 1 - for _ in range(len(det_bboxes_list)): - if det_bboxes_list[_].shape[0] == 0: - det_bboxes_list[_] = det_bboxes_list[_].new_empty((0, 5)) - det_bboxes, det_labels = [], [] - for i in range(len(img_metas) // num_branch): - det_result = self.merge_trident_bboxes( - torch.cat(det_bboxes_list[i * num_branch:(i + 1) * - num_branch]), - torch.cat(det_labels_list[i * num_branch:(i + 1) * - num_branch])) - det_bboxes.append(det_result[0]) - det_labels.append(det_result[1]) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - return bbox_results - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - trident_bboxes, trident_scores = [], [] - for branch_idx in range(len(proposal_list)): - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - trident_bboxes.append(bboxes) - trident_scores.append(scores) - - aug_bboxes.append(torch.cat(trident_bboxes, 0)) - aug_scores.append(torch.cat(trident_scores, 0)) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, 
rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/CanIpleas/gpt2/README.md b/spaces/CanIpleas/gpt2/README.md deleted file mode 100644 index 5324e895bd1eb37aa3dc84ea4811b5fba7b3daf0..0000000000000000000000000000000000000000 --- a/spaces/CanIpleas/gpt2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpt2 -emoji: 📚 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/json_utils/__init__.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/json_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CofAI/viewq/README.md b/spaces/CofAI/viewq/README.md deleted file mode 100644 index 2030da86c7bc80d1d83f631ec17e30b98a81757d..0000000000000000000000000000000000000000 --- a/spaces/CofAI/viewq/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: ViewQ - search engine -emoji: 🔎🔍 -colorFrom: green -colorTo: red -sdk: static -pinned: false -duplicated_from: TNR-5/test_dev_s ---- - -🔎 ViewQ - new search system and engine for you 🔍 \ No newline at end of file diff --git a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/misc.py b/spaces/CorvaeOboro/gen_ability_icon/torch_utils/misc.py deleted file mode 100644 index 7829f4d9f168557ce8a9a6dec289aa964234cb8c..0000000000000000000000000000000000000000 --- a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/misc.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import re -import contextlib -import numpy as np -import torch -import warnings -import dnnlib - -#---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. - -_constant_cache = dict() - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -#---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. 
- -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -#---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -#---------------------------------------------------------------------------- -# Context manager to suppress known warnings in torch.jit.trace(). - -class suppress_tracer_warnings(warnings.catch_warnings): - def __enter__(self): - super().__enter__() - warnings.simplefilter('ignore', category=torch.jit.TracerWarning) - return self - -#---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -#---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). - -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -#---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. 
- -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -#---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. - -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)} - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad) - -#---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -#---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - if ignore_regex is not None and re.fullmatch(ignore_regex, fullname): - continue - tensor = tensor.detach() - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname - -#---------------------------------------------------------------------------- -# Print summary table of module hierarchy. - -def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True): - assert isinstance(module, torch.nn.Module) - assert not isinstance(module, torch.jit.ScriptModule) - assert isinstance(inputs, (tuple, list)) - - # Register hooks. 
- entries = [] - nesting = [0] - def pre_hook(_mod, _inputs): - nesting[0] += 1 - def post_hook(mod, _inputs, outputs): - nesting[0] -= 1 - if nesting[0] <= max_nesting: - outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs] - outputs = [t for t in outputs if isinstance(t, torch.Tensor)] - entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs)) - hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()] - hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()] - - # Run module. - outputs = module(*inputs) - for hook in hooks: - hook.remove() - - # Identify unique outputs, parameters, and buffers. - tensors_seen = set() - for e in entries: - e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen] - e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen] - e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen] - tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs} - - # Filter out redundant entries. - if skip_redundant: - entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)] - - # Construct table. - rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']] - rows += [['---'] * len(rows[0])] - param_total = 0 - buffer_total = 0 - submodule_names = {mod: name for name, mod in module.named_modules()} - for e in entries: - name = '' if e.mod is module else submodule_names[e.mod] - param_size = sum(t.numel() for t in e.unique_params) - buffer_size = sum(t.numel() for t in e.unique_buffers) - output_shapes = [str(list(e.outputs[0].shape)) for t in e.outputs] - output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs] - rows += [[ - name + (':0' if len(e.outputs) >= 2 else ''), - str(param_size) if param_size else '-', - str(buffer_size) if buffer_size else '-', - (output_shapes + ['-'])[0], - (output_dtypes + ['-'])[0], - ]] - for idx in range(1, len(e.outputs)): - rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]] - param_total += param_size - buffer_total += buffer_size - rows += [['---'] * len(rows[0])] - rows += [['Total', str(param_total), str(buffer_total), '-', '-']] - - # Print table. - widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths))) - print() - return outputs - -#---------------------------------------------------------------------------- diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/py23.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/py23.py deleted file mode 100644 index 29f634d624b7df125722c3bae594c1d39a835aec..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/py23.py +++ /dev/null @@ -1,96 +0,0 @@ -"""Python 2/3 compat layer leftovers.""" - -import decimal as _decimal -import math as _math -import warnings -from contextlib import redirect_stderr, redirect_stdout -from io import BytesIO -from io import StringIO as UnicodeIO -from types import SimpleNamespace - -from .textTools import Tag, bytechr, byteord, bytesjoin, strjoin, tobytes, tostr - -warnings.warn( - "The py23 module has been deprecated and will be removed in a future release. 
" - "Please update your code.", - DeprecationWarning, -) - -__all__ = [ - "basestring", - "bytechr", - "byteord", - "BytesIO", - "bytesjoin", - "open", - "Py23Error", - "range", - "RecursionError", - "round", - "SimpleNamespace", - "StringIO", - "strjoin", - "Tag", - "tobytes", - "tostr", - "tounicode", - "unichr", - "unicode", - "UnicodeIO", - "xrange", - "zip", -] - - -class Py23Error(NotImplementedError): - pass - - -RecursionError = RecursionError -StringIO = UnicodeIO - -basestring = str -isclose = _math.isclose -isfinite = _math.isfinite -open = open -range = range -round = round3 = round -unichr = chr -unicode = str -zip = zip - -tounicode = tostr - - -def xrange(*args, **kwargs): - raise Py23Error("'xrange' is not defined. Use 'range' instead.") - - -def round2(number, ndigits=None): - """ - Implementation of Python 2 built-in round() function. - Rounds a number to a given precision in decimal digits (default - 0 digits). The result is a floating point number. Values are rounded - to the closest multiple of 10 to the power minus ndigits; if two - multiples are equally close, rounding is done away from 0. - ndigits may be negative. - See Python 2 documentation: - https://docs.python.org/2/library/functions.html?highlight=round#round - """ - if ndigits is None: - ndigits = 0 - - if ndigits < 0: - exponent = 10 ** (-ndigits) - quotient, remainder = divmod(number, exponent) - if remainder >= exponent // 2 and number >= 0: - quotient += 1 - return float(quotient * exponent) - else: - exponent = _decimal.Decimal("10") ** (-ndigits) - - d = _decimal.Decimal.from_float(number).quantize( - exponent, rounding=_decimal.ROUND_HALF_UP - ) - - return float(d) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-fe39713d.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-fe39713d.css deleted file mode 100644 index 4ba4a8f2be52bdaf48d39476847610bc899d05d9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-fe39713d.css +++ /dev/null @@ -1 +0,0 @@ -span.svelte-1voqrms.svelte-1voqrms{text-shadow:0 0 8px rgba(0,0,0,.5)}progress.svelte-1voqrms.svelte-1voqrms{margin-right:var(--size-3);border-radius:var(--radius-sm);width:var(--size-full);height:var(--size-2)}progress.svelte-1voqrms.svelte-1voqrms::-webkit-progress-bar{border-radius:2px;background-color:#fff3;overflow:hidden}progress.svelte-1voqrms.svelte-1voqrms::-webkit-progress-value{background-color:#ffffffe6}video.svelte-1voqrms.svelte-1voqrms{position:inherit;background-color:#000;width:var(--size-full);height:var(--size-full);object-fit:contain}.mirror.svelte-1voqrms.svelte-1voqrms{transform:scaleX(-1)}.controls.svelte-1voqrms.svelte-1voqrms{position:absolute;bottom:0;opacity:0;transition:.5s;margin:var(--size-2);border-radius:var(--radius-md);background:var(--color-grey-800);padding:var(--size-2) var(--size-1);width:calc(100% - .75rem);width:calc(100% - var(--size-2) * 2)}.wrap.svelte-1voqrms:hover 
.controls.svelte-1voqrms{opacity:1}.inner.svelte-1voqrms.svelte-1voqrms{display:flex;justify-content:space-between;align-items:center;padding-right:var(--size-2);padding-left:var(--size-2);width:var(--size-full);height:var(--size-full)}.icon.svelte-1voqrms.svelte-1voqrms{display:flex;justify-content:center;cursor:pointer;width:var(--size-6);color:#fff}.time.svelte-1voqrms.svelte-1voqrms{flex-shrink:0;margin-right:var(--size-3);margin-left:var(--size-3);color:#fff;font-size:var(--text-sm);font-family:var(--font-mono)}.wrap.svelte-1voqrms.svelte-1voqrms{position:relative;background-color:var(--background-fill-secondary)}.file-name.svelte-a6ruol{padding:var(--size-6);font-size:var(--text-xxl);word-break:break-all}.file-size.svelte-a6ruol{padding:var(--size-2);font-size:var(--text-xl)}.icon-buttons.svelte-rvdo70{display:flex;position:absolute;top:6px;right:6px;gap:var(--size-1)} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_api.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_api.py deleted file mode 100644 index 854235f5f6035031f0960d4a4b8834081d5df389..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_api.py +++ /dev/null @@ -1,92 +0,0 @@ -from contextlib import contextmanager -from typing import Iterator, Optional, Union - -from ._models import URL, Extensions, HeaderTypes, Response -from ._sync.connection_pool import ConnectionPool - - -def request( - method: Union[bytes, str], - url: Union[URL, bytes, str], - *, - headers: HeaderTypes = None, - content: Union[bytes, Iterator[bytes], None] = None, - extensions: Optional[Extensions] = None, -) -> Response: - """ - Sends an HTTP request, returning the response. - - ``` - response = httpcore.request("GET", "https://www.example.com/") - ``` - - Arguments: - method: The HTTP method for the request. Typically one of `"GET"`, - `"OPTIONS"`, `"HEAD"`, `"POST"`, `"PUT"`, `"PATCH"`, or `"DELETE"`. - url: The URL of the HTTP request. Either as an instance of `httpcore.URL`, - or as str/bytes. - headers: The HTTP request headers. Either as a dictionary of str/bytes, - or as a list of two-tuples of str/bytes. - content: The content of the request body. Either as bytes, - or as a bytes iterator. - extensions: A dictionary of optional extra information included on the request. - Possible keys include `"timeout"`. - - Returns: - An instance of `httpcore.Response`. - """ - with ConnectionPool() as pool: - return pool.request( - method=method, - url=url, - headers=headers, - content=content, - extensions=extensions, - ) - - -@contextmanager -def stream( - method: Union[bytes, str], - url: Union[URL, bytes, str], - *, - headers: HeaderTypes = None, - content: Union[bytes, Iterator[bytes], None] = None, - extensions: Optional[Extensions] = None, -) -> Iterator[Response]: - """ - Sends an HTTP request, returning the response within a content manager. - - ``` - with httpcore.stream("GET", "https://www.example.com/") as response: - ... - ``` - - When using the `stream()` function, the body of the response will not be - automatically read. If you want to access the response body you should - either use `content = response.read()`, or `for chunk in response.iter_content()`. - - Arguments: - method: The HTTP method for the request. Typically one of `"GET"`, - `"OPTIONS"`, `"HEAD"`, `"POST"`, `"PUT"`, `"PATCH"`, or `"DELETE"`. - url: The URL of the HTTP request. Either as an instance of `httpcore.URL`, - or as str/bytes. 
- headers: The HTTP request headers. Either as a dictionary of str/bytes, - or as a list of two-tuples of str/bytes. - content: The content of the request body. Either as bytes, - or as a bytes iterator. - extensions: A dictionary of optional extra information included on the request. - Possible keys include `"timeout"`. - - Returns: - An instance of `httpcore.Response`. - """ - with ConnectionPool() as pool: - with pool.stream( - method=method, - url=url, - headers=headers, - content=content, - extensions=extensions, - ) as response: - yield response diff --git a/spaces/DaleChen/AutoGPT/tests/test_prompt_generator.py b/spaces/DaleChen/AutoGPT/tests/test_prompt_generator.py deleted file mode 100644 index 6a0bfd6c7bbdbfaa3750e9dee621bd25e17a448b..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/tests/test_prompt_generator.py +++ /dev/null @@ -1,114 +0,0 @@ -from unittest import TestCase - -from autogpt.promptgenerator import PromptGenerator - - -class TestPromptGenerator(TestCase): - """ - Test cases for the PromptGenerator class, which is responsible for generating - prompts for the AI with constraints, commands, resources, and performance evaluations. - """ - - @classmethod - def setUpClass(cls): - """ - Set up the initial state for each test method by creating an instance of PromptGenerator. - """ - cls.generator = PromptGenerator() - - # Test whether the add_constraint() method adds a constraint to the generator's constraints list - def test_add_constraint(self): - """ - Test if the add_constraint() method adds a constraint to the generator's constraints list. - """ - constraint = "Constraint1" - self.generator.add_constraint(constraint) - self.assertIn(constraint, self.generator.constraints) - - # Test whether the add_command() method adds a command to the generator's commands list - def test_add_command(self): - """ - Test if the add_command() method adds a command to the generator's commands list. - """ - command_label = "Command Label" - command_name = "command_name" - args = {"arg1": "value1", "arg2": "value2"} - self.generator.add_command(command_label, command_name, args) - command = { - "label": command_label, - "name": command_name, - "args": args, - } - self.assertIn(command, self.generator.commands) - - def test_add_resource(self): - """ - Test if the add_resource() method adds a resource to the generator's resources list. - """ - resource = "Resource1" - self.generator.add_resource(resource) - self.assertIn(resource, self.generator.resources) - - def test_add_performance_evaluation(self): - """ - Test if the add_performance_evaluation() method adds an evaluation to the generator's - performance_evaluation list. - """ - evaluation = "Evaluation1" - self.generator.add_performance_evaluation(evaluation) - self.assertIn(evaluation, self.generator.performance_evaluation) - - def test_generate_prompt_string(self): - """ - Test if the generate_prompt_string() method generates a prompt string with all the added - constraints, commands, resources, and evaluations. 
- """ - # Define the test data - constraints = ["Constraint1", "Constraint2"] - commands = [ - { - "label": "Command1", - "name": "command_name1", - "args": {"arg1": "value1"}, - }, - { - "label": "Command2", - "name": "command_name2", - "args": {}, - }, - ] - resources = ["Resource1", "Resource2"] - evaluations = ["Evaluation1", "Evaluation2"] - - # Add test data to the generator - for constraint in constraints: - self.generator.add_constraint(constraint) - for command in commands: - self.generator.add_command( - command["label"], command["name"], command["args"] - ) - for resource in resources: - self.generator.add_resource(resource) - for evaluation in evaluations: - self.generator.add_performance_evaluation(evaluation) - - # Generate the prompt string and verify its correctness - prompt_string = self.generator.generate_prompt_string() - self.assertIsNotNone(prompt_string) - - # Check if all constraints, commands, resources, and evaluations are present in the prompt string - for constraint in constraints: - self.assertIn(constraint, prompt_string) - for command in commands: - self.assertIn(command["name"], prompt_string) - for key, value in command["args"].items(): - self.assertIn(f'"{key}": "{value}"', prompt_string) - for resource in resources: - self.assertIn(resource, prompt_string) - for evaluation in evaluations: - self.assertIn(evaluation, prompt_string) - - self.assertIn("constraints", prompt_string.lower()) - self.assertIn("commands", prompt_string.lower()) - self.assertIn("resources", prompt_string.lower()) - self.assertIn("performance evaluation", prompt_string.lower()) diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/horizon_net_feature_extractor.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/horizon_net_feature_extractor.py deleted file mode 100644 index 328e7942ef7a1441e124681fe3c7868e5b60f6be..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/horizon_net_feature_extractor.py +++ /dev/null @@ -1,267 +0,0 @@ -""" -@author: -@Date: 2021/07/17 -@description: Use the feature extractor proposed by HorizonNet -""" - -import numpy as np -import math -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -import functools -from models.base_model import BaseModule - -ENCODER_RESNET = [ - 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', - 'resnext50_32x4d', 'resnext101_32x8d' -] -ENCODER_DENSENET = [ - 'densenet121', 'densenet169', 'densenet161', 'densenet201' -] - - -def lr_pad(x, padding=1): - ''' Pad left/right-most to each other instead of zero padding ''' - return torch.cat([x[..., -padding:], x, x[..., :padding]], dim=3) - - -class LR_PAD(nn.Module): - ''' Pad left/right-most to each other instead of zero padding ''' - - def __init__(self, padding=1): - super(LR_PAD, self).__init__() - self.padding = padding - - def forward(self, x): - return lr_pad(x, self.padding) - - -def wrap_lr_pad(net): - for name, m in net.named_modules(): - if not isinstance(m, nn.Conv2d): - continue - if m.padding[1] == 0: - continue - w_pad = int(m.padding[1]) - m.padding = (m.padding[0], 0) # weight padding is 0, LR_PAD then use valid padding will keep dim of weight - names = name.split('.') - - root = functools.reduce(lambda o, i: getattr(o, i), [net] + names[:-1]) - setattr( - root, names[-1], - nn.Sequential(LR_PAD(w_pad), m) - ) - - -''' -Encoder -''' - - -class Resnet(nn.Module): - def __init__(self, 
backbone='resnet50', pretrained=True): - super(Resnet, self).__init__() - assert backbone in ENCODER_RESNET - self.encoder = getattr(models, backbone)(pretrained=pretrained) - del self.encoder.fc, self.encoder.avgpool - - def forward(self, x): - features = [] - x = self.encoder.conv1(x) - x = self.encoder.bn1(x) - x = self.encoder.relu(x) - x = self.encoder.maxpool(x) - - x = self.encoder.layer1(x) - features.append(x) # 1/4 - x = self.encoder.layer2(x) - features.append(x) # 1/8 - x = self.encoder.layer3(x) - features.append(x) # 1/16 - x = self.encoder.layer4(x) - features.append(x) # 1/32 - return features - - def list_blocks(self): - lst = [m for m in self.encoder.children()] - block0 = lst[:4] - block1 = lst[4:5] - block2 = lst[5:6] - block3 = lst[6:7] - block4 = lst[7:8] - return block0, block1, block2, block3, block4 - - -class Densenet(nn.Module): - def __init__(self, backbone='densenet169', pretrained=True): - super(Densenet, self).__init__() - assert backbone in ENCODER_DENSENET - self.encoder = getattr(models, backbone)(pretrained=pretrained) - self.final_relu = nn.ReLU(inplace=True) - del self.encoder.classifier - - def forward(self, x): - lst = [] - for m in self.encoder.features.children(): - x = m(x) - lst.append(x) - features = [lst[4], lst[6], lst[8], self.final_relu(lst[11])] - return features - - def list_blocks(self): - lst = [m for m in self.encoder.features.children()] - block0 = lst[:4] - block1 = lst[4:6] - block2 = lst[6:8] - block3 = lst[8:10] - block4 = lst[10:] - return block0, block1, block2, block3, block4 - - -''' -Decoder -''' - - -class ConvCompressH(nn.Module): - ''' Reduce feature height by factor of two ''' - - def __init__(self, in_c, out_c, ks=3): - super(ConvCompressH, self).__init__() - assert ks % 2 == 1 - self.layers = nn.Sequential( - nn.Conv2d(in_c, out_c, kernel_size=ks, stride=(2, 1), padding=ks // 2), - nn.BatchNorm2d(out_c), - nn.ReLU(inplace=True), - ) - - def forward(self, x): - return self.layers(x) - - -class GlobalHeightConv(nn.Module): - def __init__(self, in_c, out_c): - super(GlobalHeightConv, self).__init__() - self.layer = nn.Sequential( - ConvCompressH(in_c, in_c // 2), - ConvCompressH(in_c // 2, in_c // 2), - ConvCompressH(in_c // 2, in_c // 4), - ConvCompressH(in_c // 4, out_c), - ) - - def forward(self, x, out_w): - x = self.layer(x) - - factor = out_w // x.shape[3] - x = torch.cat([x[..., -1:], x, x[..., :1]], 3) # 先补左右,相当于warp模式,然后进行插值 - d_type = x.dtype - x = F.interpolate(x, size=(x.shape[2], out_w + 2 * factor), mode='bilinear', align_corners=False) - # if x.dtype != d_type: - # x = x.type(d_type) - x = x[..., factor:-factor] - return x - - -class GlobalHeightStage(nn.Module): - def __init__(self, c1, c2, c3, c4, out_scale=8): - ''' Process 4 blocks from encoder to single multiscale features ''' - super(GlobalHeightStage, self).__init__() - self.cs = c1, c2, c3, c4 - self.out_scale = out_scale - self.ghc_lst = nn.ModuleList([ - GlobalHeightConv(c1, c1 // out_scale), - GlobalHeightConv(c2, c2 // out_scale), - GlobalHeightConv(c3, c3 // out_scale), - GlobalHeightConv(c4, c4 // out_scale), - ]) - - def forward(self, conv_list, out_w): - assert len(conv_list) == 4 - bs = conv_list[0].shape[0] - feature = torch.cat([ - f(x, out_w).reshape(bs, -1, out_w) - for f, x, out_c in zip(self.ghc_lst, conv_list, self.cs) - ], dim=1) - # conv_list: - # 0 [b, 256(d), 128(h), 256(w)] ->(4*{conv3*3 step2*1} : d/8 h/16)-> [b 32(d) 8(h) 256(w)] - # 1 [b, 512(d), 64(h), 128(w)] ->(4*{conv3*3 step2*1} : d/8 h/16)-> [b 64(d) 4(h) 128(w)] - # 2 [b, 
1024(d), 32(h), 64(w)] ->(4*{conv3*3 step2*1} : d/8 h/16)-> [b 128(d) 2(h) 64(w)] - # 3 [b, 2048(d), 16(h), 32(w)] ->(4*{conv3*3 step2*1} : d/8 h/16)-> [b 256(d) 1(h) 32(w)] - # 0 ->(unsampledW256} : w=256)-> [b 32(d) 8(h) 256(w)] ->(reshapeH1} : h=1)-> [b 256(d) 1(h) 256(w)] - # 1 ->(unsampledW256} : w=256)-> [b 64(d) 4(h) 256(w)] ->(reshapeH1} : h=1)-> [b 256(d) 1(h) 256(w)] - # 2 ->(unsampledW256} : w=256)-> [b 128(d) 2(h) 256(w)] ->(reshapeH1} : h=1)-> [b 256(d) 1(h) 256(w)] - # 3 ->(unsampledW256} : w=256)-> [b 256(d) 1(h) 256(w)] ->(reshapeH1} : h=1)-> [b 256(d) 1(h) 256(w)] - # 0 --\ - # 1 -- \ - # ---- cat [b 1024(d) 1(h) 256(w)] - # 2 -- / - # 3 --/ - return feature # [b 1024(d) 256(w)] - - -class HorizonNetFeatureExtractor(nn.Module): - x_mean = torch.FloatTensor(np.array([0.485, 0.456, 0.406])[None, :, None, None]) - x_std = torch.FloatTensor(np.array([0.229, 0.224, 0.225])[None, :, None, None]) - - def __init__(self, backbone='resnet50'): - super(HorizonNetFeatureExtractor, self).__init__() - self.out_scale = 8 - self.step_cols = 4 - - # Encoder - if backbone.startswith('res'): - self.feature_extractor = Resnet(backbone, pretrained=True) - elif backbone.startswith('dense'): - self.feature_extractor = Densenet(backbone, pretrained=True) - else: - raise NotImplementedError() - - # Inference channels number from each block of the encoder - with torch.no_grad(): - dummy = torch.zeros(1, 3, 512, 1024) - c1, c2, c3, c4 = [b.shape[1] for b in self.feature_extractor(dummy)] - self.c_last = (c1 * 8 + c2 * 4 + c3 * 2 + c4 * 1) // self.out_scale - - # Convert features from 4 blocks of the encoder into B x C x 1 x W' - self.reduce_height_module = GlobalHeightStage(c1, c2, c3, c4, self.out_scale) - self.x_mean.requires_grad = False - self.x_std.requires_grad = False - wrap_lr_pad(self) - - def _prepare_x(self, x): - x = x.clone() - if self.x_mean.device != x.device: - self.x_mean = self.x_mean.to(x.device) - self.x_std = self.x_std.to(x.device) - x[:, :3] = (x[:, :3] - self.x_mean) / self.x_std - - return x - - def forward(self, x): - # x [b 3 512 1024] - x = self._prepare_x(x) # [b 3 512 1024] - conv_list = self.feature_extractor(x) - # conv_list: - # 0 [b, 256(d), 128(h), 256(w)] - # 1 [b, 512(d), 64(h), 128(w)] - # 2 [b, 1024(d), 32(h), 64(w)] - # 3 [b, 2048(d), 16(h), 32(w)] - x = self.reduce_height_module(conv_list, x.shape[3] // self.step_cols) # [b 1024(d) 1(h) 256(w)] - # After reduce_Height_module, h becomes 1, the information is compressed to d, - # and w contains different resolutions - # 0 [b, 256(d), 128(h), 256(w)] -> [b, 256/8(d) * 128/16(h') = 256(d), 1(h) 256(w)] - # 1 [b, 512(d), 64(h), 128(w)] -> [b, 512/8(d) * 64/16(h') = 256(d), 1(h) 256(w)] - # 2 [b, 1024(d), 32(h), 64(w)] -> [b, 1024/8(d) * 32/16(h') = 256(d), 1(h) 256(w)] - # 3 [b, 2048(d), 16(h), 32(w)] -> [b, 2048/8(d) * 16/16(h') = 256(d), 1(h) 256(w)] - return x # [b 1024(d) 1(h) 256(w)] - - -if __name__ == '__main__': - from PIL import Image - extractor = HorizonNetFeatureExtractor() - img = np.array(Image.open("../../src/demo.png")).transpose((2, 0, 1)) - input = torch.Tensor([img]) # 1 3 512 1024 - feature = extractor(input) - print(feature.shape) # 1, 1024, 256 | 1024 = (out_c_0*h_0 +... 
+ out_c_3*h_3) = 256 * 4 diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/other/scheduler.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/other/scheduler.py deleted file mode 100644 index 27d93bc4a6f72059d5e00e6589bc1715f5452aab..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/other/scheduler.py +++ /dev/null @@ -1,51 +0,0 @@ -""" -@Date: 2021/09/14 -@description: -""" - - -class WarmupScheduler: - def __init__(self, optimizer, lr_pow, init_lr, warmup_lr, warmup_step, max_step, **kwargs): - self.lr_pow = lr_pow - self.init_lr = init_lr - self.running_lr = init_lr - self.warmup_lr = warmup_lr - self.warmup_step = warmup_step - self.max_step = max_step - self.optimizer = optimizer - - def step_update(self, cur_step): - if cur_step < self.warmup_step: - frac = cur_step / self.warmup_step - step = self.warmup_lr - self.init_lr - self.running_lr = self.init_lr + step * frac - else: - frac = (float(cur_step) - self.warmup_step) / (self.max_step - self.warmup_step) - scale_running_lr = max((1. - frac), 0.) ** self.lr_pow - self.running_lr = self.warmup_lr * scale_running_lr - - if self.optimizer is not None: - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.running_lr - - -if __name__ == '__main__': - import matplotlib.pyplot as plt - - scheduler = WarmupScheduler(optimizer=None, - lr_pow=4, - init_lr=0.0000003, - warmup_lr=0.00003, - warmup_step=10000, - max_step=100000) - - x = [] - y = [] - for i in range(100000): - if i == 10000-1: - print() - scheduler.step_update(i) - x.append(i) - y.append(scheduler.running_lr) - plt.plot(x, y, linewidth=1) - plt.show() diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/vctree/panoptic_fpn_r101_fpn_1x_predcls_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/vctree/panoptic_fpn_r101_fpn_1x_predcls_psg.py deleted file mode 100644 index faabe0d659a7e1b24b2f58dda644a9a0fe8faf08..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/vctree/panoptic_fpn_r101_fpn_1x_predcls_psg.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = './panoptic_fpn_r50_fpn_1x_predcls_psg.py' - -model = dict(backbone=dict( - depth=101, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101'))) - -# Log config -project_name = 'openpsg' -expt_name = 'vctree_panoptic_fpn_r101_fpn_1x_predcls_psg' -work_dir = f'./work_dirs/{expt_name}' - -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - dict( - type='WandbLoggerHook', - init_kwargs=dict( - project=project_name, - name=expt_name, - # config=work_dir + "/cfg.yaml" - ), - ), - ], -) - -load_from = 'work_dirs/checkpoints/panoptic_fpn_r101_fpn_1x_coco_20210820_193950-ab9157a2.pth' diff --git a/spaces/Emmy101/Emer/README.md b/spaces/Emmy101/Emer/README.md deleted file mode 100644 index 5495aed47ad3645d6345ff847f51e18bd1af6c71..0000000000000000000000000000000000000000 --- a/spaces/Emmy101/Emer/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Emer -emoji: 📈 -colorFrom: yellow -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EuroPython2022/clickbaitonator/fudge/eval_topic_metrics.py b/spaces/EuroPython2022/clickbaitonator/fudge/eval_topic_metrics.py deleted file mode 100644 index aec7c42f2797cadf8b91e16d991de7408de8764c..0000000000000000000000000000000000000000 --- 
a/spaces/EuroPython2022/clickbaitonator/fudge/eval_topic_metrics.py +++ /dev/null @@ -1,134 +0,0 @@ -import os -import random -import time -import pickle -import math -from argparse import ArgumentParser -from collections import defaultdict -import string -import csv - -from tqdm import tqdm -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import AutoTokenizer, AutoModelWithLMHead, AutoModelForSequenceClassification - -from data import Dataset -from model import Model -from util import save_checkpoint, ProgressMeter, AverageMeter, num_params, pad_mask -from predict import predict -from constants import * - -def tw_topic_eval(sentences, category, tw_dir, cap=None): - # num matches of distinct words - words = [] - with open(os.path.join(tw_dir, category + '.txt'), 'r') as rf: - for line in rf: - words.append(line.strip().lower()) - num_match = 0 - for sent in sentences: - sent_match = 0 - sent = sent.strip().lower().split() - sent = [tok.strip(string.punctuation) for tok in sent] - for word in words: - if word in sent: - sent_match += 1 - if cap is None: - num_match += sent_match - else: - num_match += min(cap, sent_match) - return num_match - - -def perplexity(sentences, tokenizer, model, device='cuda'): - # calculate perplexity - with torch.no_grad(): - ppl = [] - sos_token = tokenizer.decode([0]) - for sentence in tqdm(sentences, total=len(sentences)): - full_tensor_input = tokenizer.encode(sos_token + sentence.replace(EOT_TOKEN, ' ').strip(), return_tensors='pt').to(device) - full_loss = model(full_tensor_input, labels=full_tensor_input)[0].mean() - ppl.append(torch.exp(full_loss).flatten().cpu().item()) - return np.mean(ppl), np.std(ppl) - - -def grammaticality(sentences, tokenizer, model, device='cuda'): - with torch.no_grad(): - total_good = 0 - for sent in tqdm(sentences, total=len(sentences)): - good_prob = F.softmax(model(tokenizer.encode(sent, return_tensors='pt').to(device))[0].flatten(), dim=0)[1] - total_good += good_prob - return total_good / len(sentences) # avg probability of grammaticality according to model - - -def distinctness(results): - d1, d2, d3 = defaultdict(lambda: set()), defaultdict(lambda: set()), defaultdict(lambda: set()) - total_words = defaultdict(lambda: 0) - for cw, outputs in results.items(): - for o in outputs: - o = o.replace(EOT_TOKEN, ' ').strip().split(' ') - o = [str(x) for x in o] - total_words[cw] += len(o) - d1[cw].update(o) - for i in range(len(o) - 1): - d2[cw].add(o[i] + ' ' + o[i+1]) - for i in range(len(o) - 2): - d3[cw].add(o[i] + ' ' + o[i+1] + ' ' + o[i+2]) - return_info = [] - avg_d1, avg_d2, avg_d3 = 0, 0, 0 - for cw in total_words.keys(): - return_info.append((cw, 'DISTINCTNESS', len(d1[cw]) / total_words[cw], len(d2[cw]) / total_words[cw], len(d3[cw]) / total_words[cw])) - avg_d1 += len(d1[cw]) / total_words[cw] - avg_d2 += len(d2[cw]) / total_words[cw] - avg_d3 += len(d3[cw]) / total_words[cw] - avg_d1, avg_d2, avg_d3 = avg_d1 / len(total_words.keys()), avg_d2 / len(total_words.keys()), avg_d3 / len(total_words.keys()) - return return_info, (avg_d1, avg_d2, avg_d3) - - -if __name__=='__main__': - parser = ArgumentParser() - parser.add_argument('--log_file', type=str, required=True, help='where to load results from') - parser.add_argument('--tw_dir', type=str, default='test_wordlists', help='test wordlists') - parser.add_argument('--batch_size', type=int, default=8, help='max samples at a time') - parser.add_argument('--cap_per_example', type=int, default=None, help='max 
matches to count per sentence') - parser.add_argument('--device', type=str, default='cuda', choices=['cpu', 'cuda']) - args = parser.parse_args() - - tw_topic_match_c_total = 0 - category_totals_c = defaultdict(lambda:0) - results = defaultdict(lambda: []) - with open(args.log_file, 'r') as rf: - data = list(csv.DictReader(rf)) - for line in data: - results[line['category']].append(line['generation']) - - all_c_sents = [] - for category, condition_results in results.items(): - tw_topic_match_c = tw_topic_eval(condition_results, category, args.tw_dir, cap=args.cap_per_example) - tw_topic_match_c_total += tw_topic_match_c - category_totals_c[category] += tw_topic_match_c - all_c_sents += condition_results - - print('Test wordlist matches (divide by num outputs to get the Success metric):', tw_topic_match_c_total) - print('per category:', category_totals_c) - - dist_info_by_category, dist_overall = distinctness(results) - print('Overall avg distinctness:', dist_overall) - print('per category:', dist_info_by_category) - - grammar_tokenizer = AutoTokenizer.from_pretrained('textattack/roberta-base-CoLA') - grammar_model = AutoModelForSequenceClassification.from_pretrained('textattack/roberta-base-CoLA').to(args.device) - grammar_model.eval() - print('grammaticality:', grammaticality(all_c_sents, grammar_tokenizer, grammar_model, device=args.device)) - - eval_tokenizer = AutoTokenizer.from_pretrained('openai-gpt') - eval_model = AutoModelWithLMHead.from_pretrained('openai-gpt').to(args.device) - eval_model.eval() - print('GPT perplexity:', perplexity(all_c_sents, eval_tokenizer, eval_model)) - - eval_tokenizer = AutoTokenizer.from_pretrained('transfo-xl-wt103') - eval_model = AutoModelWithLMHead.from_pretrained('transfo-xl-wt103').to(args.device) - eval_model.eval() - print('TFXL perplexity:', perplexity(all_c_sents, eval_tokenizer, eval_model)) diff --git a/spaces/FER-Universe/FER-Benchmarking/README.md b/spaces/FER-Universe/FER-Benchmarking/README.md deleted file mode 100644 index 542d83f49baaf6fb62cbe68c503daa0ff948722b..0000000000000000000000000000000000000000 --- a/spaces/FER-Universe/FER-Benchmarking/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FER Benchmarking -emoji: 🐠 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Falah/object_detection/README.md b/spaces/Falah/object_detection/README.md deleted file mode 100644 index 1031a4e5859ac629c74450a938cc02a36e52f705..0000000000000000000000000000000000000000 --- a/spaces/Falah/object_detection/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Object Detection -emoji: 📈 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/models.py b/spaces/FrankZxShen/so-vits-svc-models-ba/models.py deleted file mode 100644 index 4cfc5c4c9920cbd1a082f83e861faf86cdd41e74..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/models.py +++ /dev/null @@ -1,420 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, 
AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, noice_scale=1): - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs) * noice_scale) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), 
(stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - 
mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels , 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if (spk_emb is not None): - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = 
ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - - def forward(self, c, f0, uv, spec, g=None, c_lengths=None, spec_lengths=None): - g = self.emb_g(g).transpose(1,2) - # ssl prenet - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - # f0 predict - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - - # encoder - z_ptemp, m_p, logs_p, _ = self.enc_p(x, x_mask, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - # flow - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # nsf decoder - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 - - def infer(self, c, f0, uv, g=None, noice_scale=0.35, predict_f0=False): - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - if predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o,f0 diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/GXSA/bingo/src/components/toaster.tsx b/spaces/GXSA/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/prng_test.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/prng_test.py deleted file mode 100644 index f77276ffe31709b11ff3c0a347c6ee0414b29b5c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/prng_test.py +++ /dev/null @@ 
-1,46 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for prng.""" - -from absl.testing import absltest -from alphafold.model import prng -import jax - - -class PrngTest(absltest.TestCase): - - def test_key_reuse(self): - - init_key = jax.random.PRNGKey(42) - safe_key = prng.SafeKey(init_key) - _, safe_key = safe_key.split() - - raw_key = safe_key.get() - - self.assertNotEqual(raw_key[0], init_key[0]) - self.assertNotEqual(raw_key[1], init_key[1]) - - with self.assertRaises(RuntimeError): - safe_key.get() - - with self.assertRaises(RuntimeError): - safe_key.split() - - with self.assertRaises(RuntimeError): - safe_key.duplicate() - - -if __name__ == '__main__': - absltest.main() diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/mask_rcnn_r50_fpn.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/mask_rcnn_r50_fpn.py deleted file mode 100644 index 6fc7908249e013376b343c5fc136cbbe5ff29390..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,120 +0,0 @@ -# model settings -model = dict( - type='MaskRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training 
and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/sabl_retina_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/sabl_retina_head.py deleted file mode 100644 index 4211622cb8b4fe807230a89bcaab8f4f1681bfc0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/sabl_retina_head.py +++ /dev/null @@ -1,621 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (build_anchor_generator, build_assigner, - build_bbox_coder, build_sampler, images_to_levels, - multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .guided_anchor_head import GuidedAnchorHead - - -@HEADS.register_module() -class SABLRetinaHead(BaseDenseHead): - """Side-Aware Boundary Localization (SABL) for RetinaNet. - - The anchor generation, assigning and sampling in SABLRetinaHead - are the same as GuidedAnchorHead for guided anchoring. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of Convs for classification \ - and regression branches. Defaults to 4. - feat_channels (int): Number of hidden channels. \ - Defaults to 256. - approx_anchor_generator (dict): Config dict for approx generator. - square_anchor_generator (dict): Config dict for square generator. - conv_cfg (dict): Config dict for ConvModule. Defaults to None. - norm_cfg (dict): Config dict for Norm Layer. Defaults to None. - bbox_coder (dict): Config dict for bbox coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of SABLRetinaHead. - test_cfg (dict): Testing config of SABLRetinaHead. - loss_cls (dict): Config of classification loss. - loss_bbox_cls (dict): Config of classification loss for bbox branch. - loss_bbox_reg (dict): Config of regression loss for bbox branch. 
- """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - conv_cfg=None, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', - num_buckets=14, - scale_factor=3.0), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)): - super(SABLRetinaHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.num_buckets = bbox_coder['num_buckets'] - self.side_num = int(np.ceil(self.num_buckets / 2)) - - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - - self.approx_anchor_generator = build_anchor_generator( - approx_anchor_generator) - self.square_anchor_generator = build_anchor_generator( - square_anchor_generator) - self.approxs_per_octave = ( - self.approx_anchor_generator.num_base_anchors[0]) - - # one anchor per location - self.num_anchors = 1 - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.reg_decoded_bbox = reg_decoded_bbox - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.sampling = loss_cls['type'] not in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ] - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox_cls = build_loss(loss_bbox_cls) - self.loss_bbox_reg = build_loss(loss_bbox_reg) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.fp16_enabled = False - self._init_layers() - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.retina_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.retina_bbox_reg = nn.Conv2d( - self.feat_channels, self.side_num * 4, 3, padding=1) - self.retina_bbox_cls = nn.Conv2d( - self.feat_channels, self.side_num * 4, 3, padding=1) - - def init_weights(self): - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, 
std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_bbox_reg, std=0.01) - normal_init(self.retina_bbox_cls, std=0.01) - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.retina_cls(cls_feat) - bbox_cls_pred = self.retina_bbox_cls(reg_feat) - bbox_reg_pred = self.retina_bbox_reg(reg_feat) - bbox_pred = (bbox_cls_pred, bbox_reg_pred) - return cls_score, bbox_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_anchors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - return squares_list - - def get_target(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=None, - sampling=True, - unmap_outputs=True): - """Compute bucketing targets. - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - label_channels (int): Channel of label. - sampling (bool): Sample Anchors or not. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple: Returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each \ - level. - - bbox_cls_targets_list (list[Tensor]): BBox cls targets of \ - each level. - - bbox_cls_weights_list (list[Tensor]): BBox cls weights of \ - each level. - - bbox_reg_targets_list (list[Tensor]): BBox reg targets of \ - each level. - - bbox_reg_weights_list (list[Tensor]): BBox reg weights of \ - each level. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. 
- """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_cls_targets, - all_bbox_cls_weights, all_bbox_reg_targets, all_bbox_reg_weights, - pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - sampling=sampling, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_squares) - label_weights_list = images_to_levels(all_label_weights, - num_level_squares) - bbox_cls_targets_list = images_to_levels(all_bbox_cls_targets, - num_level_squares) - bbox_cls_weights_list = images_to_levels(all_bbox_cls_weights, - num_level_squares) - bbox_reg_targets_list = images_to_levels(all_bbox_reg_targets, - num_level_squares) - bbox_reg_weights_list = images_to_levels(all_bbox_reg_weights, - num_level_squares) - return (labels_list, label_weights_list, bbox_cls_targets_list, - bbox_cls_weights_list, bbox_reg_targets_list, - bbox_reg_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=None, - sampling=True, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image, \ - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - sampling (bool): Sample Anchors or not. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple: - - - labels_list (Tensor): Labels in a single image - - label_weights (Tensor): Label weights in a single image - - bbox_cls_targets (Tensor): BBox cls targets in a single image - - bbox_cls_weights (Tensor): BBox cls weights in a single image - - bbox_reg_targets (Tensor): BBox reg targets in a single image - - bbox_reg_weights (Tensor): BBox reg weights in a single image - - num_total_pos (int): Number of positive samples \ - in a single image - - num_total_neg (int): Number of negative samples \ - in a single image - """ - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, squares, - gt_bboxes) - - num_valid_squares = squares.shape[0] - bbox_cls_targets = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_cls_weights = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_reg_targets = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - bbox_reg_weights = squares.new_zeros( - (num_valid_squares, self.side_num * 4)) - labels = squares.new_full((num_valid_squares, ), - self.num_classes, - dtype=torch.long) - label_weights = squares.new_zeros(num_valid_squares, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - (pos_bbox_reg_targets, pos_bbox_reg_weights, pos_bbox_cls_targets, - pos_bbox_cls_weights) = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - - bbox_cls_targets[pos_inds, :] = pos_bbox_cls_targets - bbox_reg_targets[pos_inds, :] = pos_bbox_reg_targets - bbox_cls_weights[pos_inds, :] = pos_bbox_cls_weights - bbox_reg_weights[pos_inds, :] = pos_bbox_reg_weights - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_cls_targets = unmap(bbox_cls_targets, num_total_anchors, - inside_flags) - bbox_cls_weights = unmap(bbox_cls_weights, num_total_anchors, - inside_flags) - bbox_reg_targets = unmap(bbox_reg_targets, num_total_anchors, - inside_flags) - bbox_reg_weights = unmap(bbox_reg_weights, num_total_anchors, - inside_flags) - return (labels, label_weights, bbox_cls_targets, bbox_cls_weights, - bbox_reg_targets, bbox_reg_weights, pos_inds, neg_inds) - - def loss_single(self, cls_score, bbox_pred, labels, label_weights, - bbox_cls_targets, bbox_cls_weights, bbox_reg_targets, - bbox_reg_weights, num_total_samples): - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, 
label_weights, avg_factor=num_total_samples) - # regression loss - bbox_cls_targets = bbox_cls_targets.reshape(-1, self.side_num * 4) - bbox_cls_weights = bbox_cls_weights.reshape(-1, self.side_num * 4) - bbox_reg_targets = bbox_reg_targets.reshape(-1, self.side_num * 4) - bbox_reg_weights = bbox_reg_weights.reshape(-1, self.side_num * 4) - (bbox_cls_pred, bbox_reg_pred) = bbox_pred - bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape( - -1, self.side_num * 4) - bbox_reg_pred = bbox_reg_pred.permute(0, 2, 3, 1).reshape( - -1, self.side_num * 4) - loss_bbox_cls = self.loss_bbox_cls( - bbox_cls_pred, - bbox_cls_targets.long(), - bbox_cls_weights, - avg_factor=num_total_samples * 4 * self.side_num) - loss_bbox_reg = self.loss_bbox_reg( - bbox_reg_pred, - bbox_reg_targets, - bbox_reg_weights, - avg_factor=num_total_samples * 4 * self.bbox_coder.offset_topk) - return loss_cls, loss_bbox_cls, loss_bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get sampled approxes - approxs_list, inside_flag_list = GuidedAnchorHead.get_sampled_approxs( - self, featmap_sizes, img_metas, device=device) - - square_list = self.get_anchors(featmap_sizes, img_metas, device=device) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_target( - approxs_list, - inside_flag_list, - square_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - sampling=self.sampling) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_cls_targets_list, - bbox_cls_weights_list, bbox_reg_targets_list, bbox_reg_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - losses_cls, losses_bbox_cls, losses_bbox_reg = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_cls_targets_list, - bbox_cls_weights_list, - bbox_reg_targets_list, - bbox_reg_weights_list, - num_total_samples=num_total_samples) - return dict( - loss_cls=losses_cls, - loss_bbox_cls=losses_bbox_cls, - loss_bbox_reg=losses_bbox_reg) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - - device = cls_scores[0].device - mlvl_anchors = self.get_anchors( - featmap_sizes, img_metas, device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_cls_pred_list = [ - bbox_preds[i][0][img_id].detach() for i in range(num_levels) - ] - bbox_reg_pred_list = [ - bbox_preds[i][1][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self.get_bboxes_single(cls_score_list, - bbox_cls_pred_list, - bbox_reg_pred_list, - mlvl_anchors[img_id], img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - 
def get_bboxes_single(self, - cls_scores, - bbox_cls_preds, - bbox_reg_preds, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_confids = [] - assert len(cls_scores) == len(bbox_cls_preds) == len( - bbox_reg_preds) == len(mlvl_anchors) - for cls_score, bbox_cls_pred, bbox_reg_pred, anchors in zip( - cls_scores, bbox_cls_preds, bbox_reg_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_cls_pred.size( - )[-2:] == bbox_reg_pred.size()[-2::] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_cls_pred = bbox_cls_pred.permute(1, 2, 0).reshape( - -1, self.side_num * 4) - bbox_reg_pred = bbox_reg_pred.permute(1, 2, 0).reshape( - -1, self.side_num * 4) - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_cls_pred = bbox_cls_pred[topk_inds, :] - bbox_reg_pred = bbox_reg_pred[topk_inds, :] - scores = scores[topk_inds, :] - bbox_preds = [ - bbox_cls_pred.contiguous(), - bbox_reg_pred.contiguous() - ] - bboxes, confids = self.bbox_coder.decode( - anchors.contiguous(), bbox_preds, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_confids.append(confids) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_confids = torch.cat(mlvl_confids) - if self.use_sigmoid_cls: - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - det_bboxes, det_labels = multiclass_nms( - mlvl_bboxes, - mlvl_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=mlvl_confids) - return det_bboxes, det_labels diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/music_dataset.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/music_dataset.py deleted file mode 100644 index 4e28796939f9cde2b23a2c4bf43fd7ba5fa26b2d..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/data/music_dataset.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Dataset of music tracks with rich metadata. -""" -from dataclasses import dataclass, field, fields, replace -import gzip -import json -import logging -from pathlib import Path -import random -import typing as tp - -import torch - -from .info_audio_dataset import ( - InfoAudioDataset, - AudioInfo, - get_keyword_list, - get_keyword, - get_string -) -from ..modules.conditioners import ( - ConditioningAttributes, - JointEmbedCondition, - WavCondition, -) -from ..utils.utils import warn_once - - -logger = logging.getLogger(__name__) - - -@dataclass -class MusicInfo(AudioInfo): - """Segment info augmented with music metadata. 
- """ - # music-specific metadata - title: tp.Optional[str] = None - artist: tp.Optional[str] = None # anonymized artist id, used to ensure no overlap between splits - key: tp.Optional[str] = None - bpm: tp.Optional[float] = None - genre: tp.Optional[str] = None - moods: tp.Optional[list] = None - keywords: tp.Optional[list] = None - description: tp.Optional[str] = None - name: tp.Optional[str] = None - instrument: tp.Optional[str] = None - # original wav accompanying the metadata - self_wav: tp.Optional[WavCondition] = None - # dict mapping attributes names to tuple of wav, text and metadata - joint_embed: tp.Dict[str, JointEmbedCondition] = field(default_factory=dict) - - @property - def has_music_meta(self) -> bool: - return self.name is not None - - def to_condition_attributes(self) -> ConditioningAttributes: - out = ConditioningAttributes() - for _field in fields(self): - key, value = _field.name, getattr(self, _field.name) - if key == 'self_wav': - out.wav[key] = value - elif key == 'joint_embed': - for embed_attribute, embed_cond in value.items(): - out.joint_embed[embed_attribute] = embed_cond - else: - if isinstance(value, list): - value = ' '.join(value) - out.text[key] = value - return out - - @staticmethod - def attribute_getter(attribute): - if attribute == 'bpm': - preprocess_func = get_bpm - elif attribute == 'key': - preprocess_func = get_musical_key - elif attribute in ['moods', 'keywords']: - preprocess_func = get_keyword_list - elif attribute in ['genre', 'name', 'instrument']: - preprocess_func = get_keyword - elif attribute in ['title', 'artist', 'description']: - preprocess_func = get_string - else: - preprocess_func = None - return preprocess_func - - @classmethod - def from_dict(cls, dictionary: dict, fields_required: bool = False): - _dictionary: tp.Dict[str, tp.Any] = {} - - # allow a subset of attributes to not be loaded from the dictionary - # these attributes may be populated later - post_init_attributes = ['self_wav', 'joint_embed'] - optional_fields = ['keywords'] - - for _field in fields(cls): - if _field.name in post_init_attributes: - continue - elif _field.name not in dictionary: - if fields_required and _field.name not in optional_fields: - raise KeyError(f"Unexpected missing key: {_field.name}") - else: - preprocess_func: tp.Optional[tp.Callable] = cls.attribute_getter(_field.name) - value = dictionary[_field.name] - if preprocess_func: - value = preprocess_func(value) - _dictionary[_field.name] = value - return cls(**_dictionary) - - -def augment_music_info_description(music_info: MusicInfo, merge_text_p: float = 0., - drop_desc_p: float = 0., drop_other_p: float = 0.) -> MusicInfo: - """Augment MusicInfo description with additional metadata fields and potential dropout. - Additional textual attributes are added given probability 'merge_text_conditions_p' and - the original textual description is dropped from the augmented description given probability drop_desc_p. - - Args: - music_info (MusicInfo): The music metadata to augment. - merge_text_p (float): Probability of merging additional metadata to the description. - If provided value is 0, then no merging is performed. - drop_desc_p (float): Probability of dropping the original description on text merge. - if provided value is 0, then no drop out is performed. - drop_other_p (float): Probability of dropping the other fields used for text augmentation. - Returns: - MusicInfo: The MusicInfo with augmented textual description. 
- """ - def is_valid_field(field_name: str, field_value: tp.Any) -> bool: - valid_field_name = field_name in ['key', 'bpm', 'genre', 'moods', 'instrument', 'keywords'] - valid_field_value = field_value is not None and isinstance(field_value, (int, float, str, list)) - keep_field = random.uniform(0, 1) < drop_other_p - return valid_field_name and valid_field_value and keep_field - - def process_value(v: tp.Any) -> str: - if isinstance(v, (int, float, str)): - return str(v) - if isinstance(v, list): - return ", ".join(v) - else: - raise ValueError(f"Unknown type for text value! ({type(v), v})") - - description = music_info.description - - metadata_text = "" - if random.uniform(0, 1) < merge_text_p: - meta_pairs = [f'{_field.name}: {process_value(getattr(music_info, _field.name))}' - for _field in fields(music_info) if is_valid_field(_field.name, getattr(music_info, _field.name))] - random.shuffle(meta_pairs) - metadata_text = ". ".join(meta_pairs) - description = description if not random.uniform(0, 1) < drop_desc_p else None - logger.debug(f"Applying text augmentation on MMI info. description: {description}, metadata: {metadata_text}") - - if description is None: - description = metadata_text if len(metadata_text) > 1 else None - else: - description = ". ".join([description.rstrip('.'), metadata_text]) - description = description.strip() if description else None - - music_info = replace(music_info) - music_info.description = description - return music_info - - -class Paraphraser: - def __init__(self, paraphrase_source: tp.Union[str, Path], paraphrase_p: float = 0.): - self.paraphrase_p = paraphrase_p - open_fn = gzip.open if str(paraphrase_source).lower().endswith('.gz') else open - with open_fn(paraphrase_source, 'rb') as f: # type: ignore - self.paraphrase_source = json.loads(f.read()) - logger.info(f"loaded paraphrasing source from: {paraphrase_source}") - - def sample_paraphrase(self, audio_path: str, description: str): - if random.random() >= self.paraphrase_p: - return description - info_path = Path(audio_path).with_suffix('.json') - if info_path not in self.paraphrase_source: - warn_once(logger, f"{info_path} not in paraphrase source!") - return description - new_desc = random.choice(self.paraphrase_source[info_path]) - logger.debug(f"{description} -> {new_desc}") - return new_desc - - -class MusicDataset(InfoAudioDataset): - """Music dataset is an AudioDataset with music-related metadata. - - Args: - info_fields_required (bool): Whether to enforce having required fields. - merge_text_p (float): Probability of merging additional metadata to the description. - drop_desc_p (float): Probability of dropping the original description on text merge. - drop_other_p (float): Probability of dropping the other fields used for text augmentation. - joint_embed_attributes (list[str]): A list of attributes for which joint embedding metadata is returned. - paraphrase_source (str, optional): Path to the .json or .json.gz file containing the - paraphrases for the description. The json should be a dict with keys are the - original info path (e.g. track_path.json) and each value is a list of possible - paraphrased. - paraphrase_p (float): probability of taking a paraphrase. - - See `audiocraft.data.info_audio_dataset.InfoAudioDataset` for full initialization arguments. 
- """ - def __init__(self, *args, info_fields_required: bool = True, - merge_text_p: float = 0., drop_desc_p: float = 0., drop_other_p: float = 0., - joint_embed_attributes: tp.List[str] = [], - paraphrase_source: tp.Optional[str] = None, paraphrase_p: float = 0, - **kwargs): - kwargs['return_info'] = True # We require the info for each song of the dataset. - super().__init__(*args, **kwargs) - self.info_fields_required = info_fields_required - self.merge_text_p = merge_text_p - self.drop_desc_p = drop_desc_p - self.drop_other_p = drop_other_p - self.joint_embed_attributes = joint_embed_attributes - self.paraphraser = None - if paraphrase_source is not None: - self.paraphraser = Paraphraser(paraphrase_source, paraphrase_p) - - def __getitem__(self, index): - wav, info = super().__getitem__(index) - info_data = info.to_dict() - music_info_path = Path(info.meta.path).with_suffix('.json') - - if Path(music_info_path).exists(): - with open(music_info_path, 'r') as json_file: - music_data = json.load(json_file) - music_data.update(info_data) - music_info = MusicInfo.from_dict(music_data, fields_required=self.info_fields_required) - if self.paraphraser is not None: - music_info.description = self.paraphraser.sample(music_info.meta.path, music_info.description) - if self.merge_text_p: - music_info = augment_music_info_description( - music_info, self.merge_text_p, self.drop_desc_p, self.drop_other_p) - else: - music_info = MusicInfo.from_dict(info_data, fields_required=False) - - music_info.self_wav = WavCondition( - wav=wav[None], length=torch.tensor([info.n_frames]), - sample_rate=[info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time]) - - for att in self.joint_embed_attributes: - att_value = getattr(music_info, att) - joint_embed_cond = JointEmbedCondition( - wav[None], [att_value], torch.tensor([info.n_frames]), - sample_rate=[info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time]) - music_info.joint_embed[att] = joint_embed_cond - - return wav, music_info - - -def get_musical_key(value: tp.Optional[str]) -> tp.Optional[str]: - """Preprocess key keywords, discarding them if there are multiple key defined.""" - if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None': - return None - elif ',' in value: - # For now, we discard when multiple keys are defined separated with comas - return None - else: - return value.strip().lower() - - -def get_bpm(value: tp.Optional[str]) -> tp.Optional[float]: - """Preprocess to a float.""" - if value is None: - return None - try: - return float(value) - except ValueError: - return None diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/base.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/base.py deleted file mode 100644 index a77fefb98e62a5bbc6385910261ffdde2ffa5a25..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/base.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Base class for all quantizers. -""" - -from dataclasses import dataclass, field -import typing as tp - -import torch -from torch import nn - - -@dataclass -class QuantizedResult: - x: torch.Tensor - codes: torch.Tensor - bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item. 
- penalty: tp.Optional[torch.Tensor] = None - metrics: dict = field(default_factory=dict) - - -class BaseQuantizer(nn.Module): - """Base class for quantizers. - """ - - def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult: - """ - Given input tensor x, returns first the quantized (or approximately quantized) - representation along with quantized codes, bandwidth, and any penalty term for the loss. - Finally, this returns a dict of metrics to update logging etc. - Frame rate must be passed so that the bandwidth is properly computed. - """ - raise NotImplementedError() - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth.""" - raise NotImplementedError() - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation.""" - raise NotImplementedError() - - @property - def total_codebooks(self): - """Total number of codebooks.""" - raise NotImplementedError() - - @property - def num_codebooks(self): - """Number of active codebooks.""" - raise NotImplementedError() - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks.""" - raise NotImplementedError() - - -class DummyQuantizer(BaseQuantizer): - """Fake quantizer that actually does not perform any quantization. - """ - def __init__(self): - super().__init__() - - def forward(self, x: torch.Tensor, frame_rate: int): - q = x.unsqueeze(1) - return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return x.unsqueeze(1) - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. 
- """ - return codes.squeeze(1) - - @property - def total_codebooks(self): - """Total number of codebooks.""" - return 1 - - @property - def num_codebooks(self): - """Total number of codebooks.""" - return self.total_codebooks - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks.""" - raise AttributeError("Cannot override the number of codebooks for the dummy quantizer") diff --git a/spaces/Guilhh-kell0/Jennifer-Home/README.md b/spaces/Guilhh-kell0/Jennifer-Home/README.md deleted file mode 100644 index a3c9011775d4683f26e6cdd3ede2e382acd47869..0000000000000000000000000000000000000000 --- a/spaces/Guilhh-kell0/Jennifer-Home/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Jennifer Home -emoji: 👀 -colorFrom: green -colorTo: green -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gyjkkih/WizardLM-WizardCoder-15B-V1.0/README.md b/spaces/Gyjkkih/WizardLM-WizardCoder-15B-V1.0/README.md deleted file mode 100644 index 54113b766351e5f62fe47e35de915bf19a205aaa..0000000000000000000000000000000000000000 --- a/spaces/Gyjkkih/WizardLM-WizardCoder-15B-V1.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: WizardLM WizardCoder 15B V1.0 -emoji: 💻 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/vit.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/vit.py deleted file mode 100644 index 9a60d56f15ad7def53d9b391b5fccd9935e386ce..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/vit.py +++ /dev/null @@ -1,576 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -attention = {} - - -def get_attention(name): - def hook(module, input, output): - x = input[0] - B, N, C = x.shape - qkv = ( - module.qkv(x) - .reshape(B, N, 3, module.num_heads, C // module.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * module.scale - - attn = attn.softmax(dim=-1) # [:,:,1,1:] - attention[name] = attn - - return hook - - -def get_mean_attention_map(attn, token, shape): - attn = attn[:, :, token, 1:] - attn = attn.unflatten(2, torch.Size([shape[2] // 16, shape[3] // 16])).float() - attn = torch.nn.functional.interpolate( - attn, size=shape[2:], mode="bicubic", align_corners=False - ).squeeze(0) - - all_attn = torch.mean(attn, 0) - - return all_attn - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, 
start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] 
* len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, - enable_attention_hooks=False, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - if enable_attention_hooks: - pretrained.model.blocks[hooks[0]].attn.register_forward_hook( - get_attention("attn_1") - ) - pretrained.model.blocks[hooks[1]].attn.register_forward_hook( - get_attention("attn_2") - ) - pretrained.model.blocks[hooks[2]].attn.register_forward_hook( - get_attention("attn_3") - ) - pretrained.model.blocks[hooks[3]].attn.register_forward_hook( - get_attention("attn_4") - ) - pretrained.attention = attention - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, - enable_attention_hooks=False, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - if enable_attention_hooks: - pretrained.model.blocks[2].attn.register_forward_hook(get_attention("attn_1")) - pretrained.model.blocks[5].attn.register_forward_hook(get_attention("attn_2")) - pretrained.model.blocks[8].attn.register_forward_hook(get_attention("attn_3")) - pretrained.model.blocks[11].attn.register_forward_hook(get_attention("attn_4")) - pretrained.attention = attention - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without 
modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, - use_readout="ignore", - hooks=None, - use_vit_only=False, - enable_attention_hooks=False, -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - - -def _make_pretrained_vitl16_384( - pretrained, use_readout="ignore", hooks=None, enable_attention_hooks=False -): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - - -def _make_pretrained_vitb16_384( - pretrained, use_readout="ignore", hooks=None, enable_attention_hooks=False -): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - - -def _make_pretrained_deitb16_384( - pretrained, use_readout="ignore", hooks=None, enable_attention_hooks=False -): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - enable_attention_hooks=enable_attention_hooks, - ) - - -def _make_pretrained_deitb16_distil_384( - pretrained, use_readout="ignore", hooks=None, enable_attention_hooks=False -): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - enable_attention_hooks=enable_attention_hooks, - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/__init__.py deleted file mode 100644 index 681fca3d4553f6832a65f61fc186793bc4ee0679..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/__init__.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-"""isort:skip_file""" - -from .transformer_config import ( - TransformerConfig, - DEFAULT_MAX_SOURCE_POSITIONS, - DEFAULT_MAX_TARGET_POSITIONS, - DEFAULT_MIN_PARAMS_TO_WRAP, -) -from .transformer_decoder import TransformerDecoder, TransformerDecoderBase, Linear -from .transformer_encoder import TransformerEncoder, TransformerEncoderBase -from .transformer_legacy import ( - TransformerModel, - base_architecture, - tiny_architecture, - transformer_iwslt_de_en, - transformer_wmt_en_de, - transformer_vaswani_wmt_en_de_big, - transformer_vaswani_wmt_en_fr_big, - transformer_wmt_en_de_big, - transformer_wmt_en_de_big_t2t, -) -from .transformer_base import TransformerModelBase, Embedding - - -__all__ = [ - "TransformerModelBase", - "TransformerConfig", - "TransformerDecoder", - "TransformerDecoderBase", - "TransformerEncoder", - "TransformerEncoderBase", - "TransformerModel", - "Embedding", - "Linear", - "base_architecture", - "tiny_architecture", - "transformer_iwslt_de_en", - "transformer_wmt_en_de", - "transformer_vaswani_wmt_en_de_big", - "transformer_vaswani_wmt_en_fr_big", - "transformer_wmt_en_de_big", - "transformer_wmt_en_de_big_t2t", - "DEFAULT_MAX_SOURCE_POSITIONS", - "DEFAULT_MAX_TARGET_POSITIONS", - "DEFAULT_MIN_PARAMS_TO_WRAP", -] diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/modules/qact.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/modules/qact.py deleted file mode 100644 index c5dd1d63362423ab0cfc381dddabb547a3b44c72..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/modules/qact.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from ..ops import emulate_int - - -class ActivationQuantizer: - """ - Fake scalar quantization of the activations using a forward hook. - - Args: - - module. 
a nn.Module for which we quantize the *post-activations* - - p: proportion of activations to quantize, set by default to 1 - - update_step: to recompute quantization parameters - - bits: number of bits for quantization - - method: choose among {"tensor", "histogram", "channel"} - - clamp_threshold: to prevent gradients overflow - - Remarks: - - Parameters scale and zero_point are recomputed every update_step - forward pass to reduce the overhead - - For the list of quantization methods and number of bits, see ops.py - - To remove the hook from the module, simply call self.handle.remove() - - At test time, the activations are fully quantized - - We use the straight-through estimator so that the gradients - back-propagate nicely in the network, this is implemented with - the detach() trick - - The activations are hard-clamped in [-clamp_threshold, clamp_threshold] - to prevent overflow during the backward pass - """ - - def __init__( - self, - module, - p=1, - update_step=1000, - bits=8, - method="histogram", - clamp_threshold=5, - ): - self.module = module - self.p = p - self.update_step = update_step - self.counter = 0 - self.bits = bits - self.method = method - self.clamp_threshold = clamp_threshold - self.handle = None - self.register_hook() - - def register_hook(self): - # forward hook - def quantize_hook(module, x, y): - - # update parameters every 1000 iterations - if self.counter % self.update_step == 0: - self.scale = None - self.zero_point = None - self.counter += 1 - - # train with QuantNoise and evaluate the fully quantized network - p = self.p if self.module.training else 1 - - # quantize activations - y_q, self.scale, self.zero_point = emulate_int( - y.detach(), - bits=self.bits, - method=self.method, - scale=self.scale, - zero_point=self.zero_point, - ) - - # mask to apply noise - mask = torch.zeros_like(y) - mask.bernoulli_(1 - p) - noise = (y_q - y).masked_fill(mask.bool(), 0) - - # using straight-through estimator (STE) - clamp_low = -self.scale * self.zero_point - clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point) - return torch.clamp(y, clamp_low.item(), clamp_high.item()) + noise.detach() - - # register hook - self.handle = self.module.register_forward_hook(quantize_hook) diff --git a/spaces/HarshulNanda/HARM_ML_App_ludwig/categoryPredictor.py b/spaces/HarshulNanda/HARM_ML_App_ludwig/categoryPredictor.py deleted file mode 100644 index bb012601bb41b5f487edd499761268ada26ff5d1..0000000000000000000000000000000000000000 --- a/spaces/HarshulNanda/HARM_ML_App_ludwig/categoryPredictor.py +++ /dev/null @@ -1,55 +0,0 @@ -from youtubesearchpython import Video, ResultMode -from colors import colorOf, dataset - -import numpy as np -import pandas as pd -import matplotlib.pyplot as plt -import requests -import pickle -import warnings -warnings.filterwarnings("ignore") - -def predictCategoryFor(url): - try: - - video = Video.getInfo(url, mode = ResultMode.json) - - text = [video["title"] + " " + video["description"]] - - categories = sorted(list(dataset.keys())) - - # education_model = pickle.load(open("./models/educated_model.pkl", "rb")) - # education_prediction = education_model.predict(text)[0] - - education_classifier = pickle.load(open("./models/ludwig_edu.pkl", "rb")) - text_to_predict = pd.DataFrame({ - "text": [ - text, - ] - }) - edu_pred, _ = education_classifier.predict(text_to_predict) - edu_pred = list(edu_pred.category_predictions)[0] - education_prediction = edu_pred - - if education_prediction == "Education": - - category_classifier = 
pickle.load(open("./models/ludwig_cat_final.pkl", "rb")) - pred, _ = category_classifier.predict(text_to_predict) - pred = list(pred.category_predictions)[0] - category_prediction = pred - - sub_cat_clf = pickle.load(open(f"./models/{category_prediction.lower().replace(' ', '_')}_model.pkl", "rb")) - sub_cat_pred = sub_cat_clf.predict_proba(text)[0] - sub_cat_pred *= 100 - subs = sorted(dataset[category_prediction]) - - return ("Educational", category_prediction, subs, sub_cat_pred) - - else: - - return ("Non Educational", "", [], []) - - except: - return ("There must be an error in getting the title and description of the video.", "", [], []) - - diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/hifi/models.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/hifi/models.py deleted file mode 100644 index aaf911836119d69129abe22aa4fc875f2ba3d53c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/hifi/models.py +++ /dev/null @@ -1,403 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class 
Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm( - Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3) - ) - resblock = ResBlock1 if h.resblock == "1" else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2 ** i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes) - ): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorP(2), - DiscriminatorP(3), - DiscriminatorP(5), - DiscriminatorP(7), - DiscriminatorP(11), - ] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - 
y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ] - ) - self.meanpools = nn.ModuleList( - [AvgPool1d(4, 2, padding=2), AvgPool1d(4, 2, padding=2)] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/Harveenchadha/en_to_indic_translation/inference/custom_interactive.py b/spaces/Harveenchadha/en_to_indic_translation/inference/custom_interactive.py deleted file mode 100644 index 1e167a450c10991fa30f885721f99f233c35416e..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/inference/custom_interactive.py +++ /dev/null @@ -1,298 +0,0 @@ -# python wrapper for fairseq-interactive command line tool - -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Translate raw text with a trained model. Batches data on-the-fly. 
-""" - -import ast -from collections import namedtuple - -import torch -from fairseq import checkpoint_utils, options, tasks, utils -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.token_generation_constraints import pack_constraints, unpack_constraints -from fairseq_cli.generate import get_symbols_to_strip_from_output - -import codecs - - -Batch = namedtuple("Batch", "ids src_tokens src_lengths constraints") -Translation = namedtuple("Translation", "src_str hypos pos_scores alignments") - - -def make_batches( - lines, cfg, task, max_positions, encode_fn, constrainted_decoding=False -): - def encode_fn_target(x): - return encode_fn(x) - - if constrainted_decoding: - # Strip (tab-delimited) contraints, if present, from input lines, - # store them in batch_constraints - batch_constraints = [list() for _ in lines] - for i, line in enumerate(lines): - if "\t" in line: - lines[i], *batch_constraints[i] = line.split("\t") - - # Convert each List[str] to List[Tensor] - for i, constraint_list in enumerate(batch_constraints): - batch_constraints[i] = [ - task.target_dictionary.encode_line( - encode_fn_target(constraint), - append_eos=False, - add_if_not_exist=False, - ) - for constraint in constraint_list - ] - - if constrainted_decoding: - constraints_tensor = pack_constraints(batch_constraints) - else: - constraints_tensor = None - - tokens, lengths = task.get_interactive_tokens_and_lengths(lines, encode_fn) - - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference( - tokens, lengths, constraints=constraints_tensor - ), - max_tokens=cfg.dataset.max_tokens, - max_sentences=cfg.dataset.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - for batch in itr: - ids = batch["id"] - src_tokens = batch["net_input"]["src_tokens"] - src_lengths = batch["net_input"]["src_lengths"] - constraints = batch.get("constraints", None) - - yield Batch( - ids=ids, - src_tokens=src_tokens, - src_lengths=src_lengths, - constraints=constraints, - ) - - -class Translator: - def __init__( - self, data_dir, checkpoint_path, batch_size=25, constrained_decoding=False - ): - - self.constrained_decoding = constrained_decoding - self.parser = options.get_generation_parser(interactive=True) - # buffer_size is currently not used but we just initialize it to batch - # size + 1 to avoid any assertion errors. - if self.constrained_decoding: - self.parser.set_defaults( - path=checkpoint_path, - remove_bpe="subword_nmt", - num_workers=-1, - constraints="ordered", - batch_size=batch_size, - buffer_size=batch_size + 1, - ) - else: - self.parser.set_defaults( - path=checkpoint_path, - remove_bpe="subword_nmt", - num_workers=-1, - batch_size=batch_size, - buffer_size=batch_size + 1, - ) - args = options.parse_args_and_arch(self.parser, input_args=[data_dir]) - # we are explictly setting src_lang and tgt_lang here - # generally the data_dir we pass contains {split}-{src_lang}-{tgt_lang}.*.idx files from - # which fairseq infers the src and tgt langs(if these are not passed). In deployment we dont - # use any idx files and only store the SRC and TGT dictionaries. 
- args.source_lang = "SRC" - args.target_lang = "TGT" - # since we are truncating sentences to max_seq_len in engine, we can set it to False here - args.skip_invalid_size_inputs_valid_test = False - - # we have custom architechtures in this folder and we will let fairseq - # import this - args.user_dir = "model_configs" - self.cfg = convert_namespace_to_omegaconf(args) - - utils.import_user_module(self.cfg.common) - - if self.cfg.interactive.buffer_size < 1: - self.cfg.interactive.buffer_size = 1 - if self.cfg.dataset.max_tokens is None and self.cfg.dataset.batch_size is None: - self.cfg.dataset.batch_size = 1 - - assert ( - not self.cfg.generation.sampling - or self.cfg.generation.nbest == self.cfg.generation.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - not self.cfg.dataset.batch_size - or self.cfg.dataset.batch_size <= self.cfg.interactive.buffer_size - ), "--batch-size cannot be larger than --buffer-size" - - # Fix seed for stochastic decoding - # if self.cfg.common.seed is not None and not self.cfg.generation.no_seed_provided: - # np.random.seed(self.cfg.common.seed) - # utils.set_torch_seed(self.cfg.common.seed) - - # if not self.constrained_decoding: - # self.use_cuda = torch.cuda.is_available() and not self.cfg.common.cpu - # else: - # self.use_cuda = False - - self.use_cuda = torch.cuda.is_available() and not self.cfg.common.cpu - - # Setup task, e.g., translation - self.task = tasks.setup_task(self.cfg.task) - - # Load ensemble - overrides = ast.literal_eval(self.cfg.common_eval.model_overrides) - self.models, self._model_args = checkpoint_utils.load_model_ensemble( - utils.split_paths(self.cfg.common_eval.path), - arg_overrides=overrides, - task=self.task, - suffix=self.cfg.checkpoint.checkpoint_suffix, - strict=(self.cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=self.cfg.checkpoint.checkpoint_shard_count, - ) - - # Set dictionaries - self.src_dict = self.task.source_dictionary - self.tgt_dict = self.task.target_dictionary - - # Optimize ensemble for generation - for model in self.models: - if model is None: - continue - if self.cfg.common.fp16: - model.half() - if ( - self.use_cuda - and not self.cfg.distributed_training.pipeline_model_parallel - ): - model.cuda() - model.prepare_for_inference_(self.cfg) - - # Initialize generator - self.generator = self.task.build_generator(self.models, self.cfg.generation) - - # Handle tokenization and BPE - self.tokenizer = self.task.build_tokenizer(self.cfg.tokenizer) - self.bpe = self.task.build_bpe(self.cfg.bpe) - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - self.align_dict = utils.load_align_dict(self.cfg.generation.replace_unk) - - self.max_positions = utils.resolve_max_positions( - self.task.max_positions(), *[model.max_positions() for model in self.models] - ) - - def encode_fn(self, x): - if self.tokenizer is not None: - x = self.tokenizer.encode(x) - if self.bpe is not None: - x = self.bpe.encode(x) - return x - - def decode_fn(self, x): - if self.bpe is not None: - x = self.bpe.decode(x) - if self.tokenizer is not None: - x = self.tokenizer.decode(x) - return x - - def translate(self, inputs, constraints=None): - if self.constrained_decoding and constraints is None: - raise ValueError("Constraints cant be None in constrained decoding mode") - if not self.constrained_decoding and constraints is not None: - raise ValueError("Cannot pass constraints during normal translation") - if constraints: - 
constrained_decoding = True - modified_inputs = [] - for _input, constraint in zip(inputs, constraints): - modified_inputs.append(_input + f"\t{constraint}") - inputs = modified_inputs - else: - constrained_decoding = False - - start_id = 0 - results = [] - final_translations = [] - for batch in make_batches( - inputs, - self.cfg, - self.task, - self.max_positions, - self.encode_fn, - constrained_decoding, - ): - bsz = batch.src_tokens.size(0) - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - constraints = batch.constraints - if self.use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - if constraints is not None: - constraints = constraints.cuda() - - sample = { - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - }, - } - - translations = self.task.inference_step( - self.generator, self.models, sample, constraints=constraints - ) - - list_constraints = [[] for _ in range(bsz)] - if constrained_decoding: - list_constraints = [unpack_constraints(c) for c in constraints] - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], self.tgt_dict.pad()) - constraints = list_constraints[i] - results.append( - ( - start_id + id, - src_tokens_i, - hypos, - { - "constraints": constraints, - }, - ) - ) - - # sort output to match input order - for id_, src_tokens, hypos, _ in sorted(results, key=lambda x: x[0]): - src_str = "" - if self.src_dict is not None: - src_str = self.src_dict.string( - src_tokens, self.cfg.common_eval.post_process - ) - - # Process top predictions - for hypo in hypos[: min(len(hypos), self.cfg.generation.nbest)]: - hypo_tokens, hypo_str, alignment = utils.post_process_prediction( - hypo_tokens=hypo["tokens"].int().cpu(), - src_str=src_str, - alignment=hypo["alignment"], - align_dict=self.align_dict, - tgt_dict=self.tgt_dict, - remove_bpe="subword_nmt", - extra_symbols_to_ignore=get_symbols_to_strip_from_output( - self.generator - ), - ) - detok_hypo_str = self.decode_fn(hypo_str) - final_translations.append(detok_hypo_str) - return final_translations diff --git a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/apply_bpe.py b/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/apply_bpe.py deleted file mode 100644 index b4fdd92107e53724f7118ecf5672555ed529dda9..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/apply_bpe.py +++ /dev/null @@ -1 +0,0 @@ -subword_nmt/apply_bpe.py \ No newline at end of file diff --git a/spaces/HemanthSai7/IntelligentQuestionGenerator/README.md b/spaces/HemanthSai7/IntelligentQuestionGenerator/README.md deleted file mode 100644 index 5f35ae54f9fc3e828d83b1a238f4f8c18641b9a3..0000000000000000000000000000000000000000 --- a/spaces/HemanthSai7/IntelligentQuestionGenerator/README.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: Question Generator -emoji: 🔑 -colorFrom: yellow -colorTo: yellow -sdk: streamlit -sdk_version: "1.10.0" -app_file: app.py -pinned: false ---- - -# Internship-IVIS-labs - -- The *Intelligent Question Generator* app is an easy-to-use interface built in Streamlit which uses [KeyBERT](https://github.com/MaartenGr/KeyBERT), [Sense2vec](https://github.com/explosion/sense2vec), [T5](https://huggingface.co/ramsrigouthamg/t5_paraphraser) -- It uses a minimal keyword extraction technique that leverages multiple NLP embeddings and relies on [Transformers](https://huggingface.co/transformers/) 🤗 to create keywords/keyphrases that are most similar 
to a document. -- [sense2vec](https://github.com/explosion/sense2vec) (Trask et. al, 2015) is a nice twist on word2vec that lets you learn more interesting and detailed word vectors. - -## Repository Breakdown -### src Directory ---- -- `src/Pipeline/QAhaystack.py`: This file contains the code of question answering using [haystack](https://haystack.deepset.ai/overview/intro). -- `src/Pipeline/QuestGen.py`: This file contains the code of question generation. -- `src/Pipeline/Reader.py`: This file contains the code of reading the document. -- `src/Pipeline/TextSummariztion.py`: This file contains the code of text summarization. -- `src/PreviousVersionCode/context.py`: This file contains the finding the context of the paragraph. -- `src/PreviousVersionCode/QuestionGenerator.py`: This file contains the code of first attempt of question generation. - -## Installation -```shell -$ git clone https://github.com/HemanthSai7/Internship-IVIS-labs.git -``` -```shell -$ cd Internship-IVIS-labs -``` -```python -pip install -r requirements.txt -``` -- For the running the app for the first time locally, you need to uncomment the the lines in `src/Pipeline/QuestGen.py` to download the models to the models directory. - -```python -streamlit run app.py -``` -- Once the app is running, you can access it at http://localhost:8501 -```shell - You can now view your Streamlit app in your browser. - - Local URL: http://localhost:8501 - Network URL: http://192.168.0.103:8501 -``` - -## Tech Stack Used -![image](https://img.shields.io/badge/Sense2vec-EF546D?style=for-the-badge&logo=Explosion.ai&logoColor=white) -![image](https://img.shields.io/badge/Spacy-09A3D5?style=for-the-badge&logo=spaCy&logoColor=white) -![image](https://img.shields.io/badge/Haystack-03AF9D?style=for-the-badge&logo=Haystackh&logoColor=white) -![image](https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white) -![image](https://img.shields.io/badge/PyTorch-D04139?style=for-the-badge&logo=pytorch&logoColor=white) -![image](https://img.shields.io/badge/Numpy-013243?style=for-the-badge&logo=numpy&logoColor=white) -![image](https://img.shields.io/badge/Pandas-130654?style=for-the-badge&logo=pandas&logoColor=white) -![image](https://img.shields.io/badge/matplotlib-b2feb0?style=for-the-badge&logo=matplotlib&logoColor=white) -![image](https://img.shields.io/badge/scikit_learn-F7931E?style=for-the-badge&logo=scikit-learn&logoColor=white) -![image](https://img.shields.io/badge/Streamlit-EA6566?style=for-the-badge&logo=streamlit&logoColor=white) - -## Timeline -### Week 1-2: -#### Tasks -- [x] Understanding and brushing up the concepts of NLP. -- [x] Extracting images and text from a pdf file and storing it in a texty file. -- [x] Exploring various open source tools for generating questions from a given text. -- [x] Read papers related to the project (Bert,T5,RoBERTa etc). -- [x] Summarizing the extracted text using T5 base pre-trained model from the pdf file. - -### Week 3-4: -#### Tasks -- [x] Understanding the concept of QA systems. -- [x] Created a basic script for generating questions from the text. -- [x] Created a basic script for finding the context of the paragraph. - -### Week 5-6: -#### Tasks - -- [x] Understanding how Transformers models work for NLP tasks Question answering and generation -- [x] Understanding how to use the Haystack library for QA systems. -- [x] Understanding how to use the Haystack library for Question generation. -- [x] PreProcessed the document for Haystack QA for better results . 
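(Illustrative sketch, not code from this repository: the KeyBERT + sense2vec keyword step described in the introduction, assuming the published APIs of both libraries; the example sentence and the sense2vec vector path are placeholders.)

```python
from keybert import KeyBERT
from sense2vec import Sense2Vec

doc = "Transformers use self-attention to model long-range dependencies in text."

# Embedding-based keyphrase extraction with KeyBERT.
kw_model = KeyBERT()
keywords = kw_model.extract_keywords(
    doc, keyphrase_ngram_range=(1, 2), stop_words="english", top_n=5
)

# Expand each keyphrase with related senses from sense2vec
# (the path below is a placeholder for a locally downloaded vector package).
s2v = Sense2Vec().from_disk("models/s2v_old")
for phrase, score in keywords:
    key = phrase.replace(" ", "_") + "|NOUN"  # sense2vec keys look like "token|SENSE"
    if key in s2v:
        print(phrase, score, s2v.most_similar(key, n=3))
```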
- -### Week 7-8: -#### Tasks -- [x] Understanding how to generate questions intelligently. -- [x] Explored wordnet to find synonyms -- [x] Used BertWSD for disambiguating the sentence provided. -- [x] Used KeyBERT for finding the keywords in the document. -- [x] Used sense2vec for finding better words with high relatedness for the keywords generated. - -### Week 9-10: -#### Tasks -- [x] Create a streamlit app to demonstrate the project. diff --git a/spaces/ICML2022/OFA/fairseq/examples/truncated_bptt/README.md b/spaces/ICML2022/OFA/fairseq/examples/truncated_bptt/README.md deleted file mode 100644 index 86518c9d5ef09fbd4fed1512a52e9431b74f08fa..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/truncated_bptt/README.md +++ /dev/null @@ -1,70 +0,0 @@ -# Truncated Backpropagation Through Time (BPTT) - -Truncated BPTT is a useful technique for training language models on very long -sequences. Typically a long sequences is split into chunks and a language model -is trained over the chunks sequentially. The LM may condition on previous -chunks, but gradients only flow through the current chunk. This technique was -the basis for the paper: [Transformer-XL: Attentive Language Models Beyond a -Fixed-Length Context](https://arxiv.org/abs/1901.02860), which achieved -state-of-the-art language modeling results at the time of publication. - -It is slightly tricky to implement Truncated BPTT efficiently in fairseq, since -we need to iterate over the data sequentially and disable any batch shuffling -logic. The code provided in this example illustrates how to implement Truncated -BPTT in fairseq by overriding ``FairseqTask::get_batch_iterator`` to iterate -over the data sequentially. Crucially, this example supports batching and -multi-GPU (data parallel) training. - -##### 0. Setup - -First, see the general [language modeling README](README.md) for instructions on -preprocessing the WikiText-103 data. - -##### 1. Train a Transformer-XL model on WikiText-103 - -We will train a 16-layer Transformer-XL model following the [hyperparameters -used in the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). - -The following command assumes 4 GPUs, so that the total batch size is 60 -sequences (15 x 4). Training should take ~24 hours on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - data-bin/wikitext-103/ \ - --task truncated_bptt_lm --tokens-per-sample 150 \ - --batch-size 15 --max-update 200000 \ - --arch transformer_xl --n-layer 16 --d-model 410 --n-head 10 \ - --d-head 41 --d-inner 2100 --dropout 0.1 --dropatt 0.0 --mem-len 150 \ - --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 --min-lr 0.0 --lr 0.00025 \ - --log-format json --log-interval 25 \ - --fp16 -``` - -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. - -##### 2. Evaluate - -```bash -fairseq-eval-lm data-bin/wikitext-103/ \ - --path checkpoints/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ \ - --task truncated_bptt_lm \ - --batch-size 1 --required-batch-size-multiple 1 \ - --model-overrides '{"mem_len":640,"clamp_len":400,"same_length":True}' \ - --tokens-per-sample 64 -# ... | INFO | fairseq_cli.eval_lm | num. model params: 151123537 -# ... | INFO | fairseq_cli.eval_lm | Evaluated 245569 tokens in 83.1s (2956.82 tokens/s) -# ... 
| INFO | fairseq_cli.eval_lm | Loss (base 2): 4.5668, Perplexity: 23.70 -# Compare to 24.0 test perplexity from the paper -``` - -*Note:* During training the model saw 150 tokens of context -(``--tokens-per-sample=150``) and 150 extra memory tokens (``--mem-len=150``). -During evaluation we measure perplexity on sequences of 64 tokens -(``--tokens-per-sample=64``) and increase the memory length -(``--model-overrides='{"mem_len":640}'``). These settings match the evaluation -settings from [the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/multi_modality_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/multi_modality_dataset.py deleted file mode 100644 index 69d23d31c1eb66803fa5062b5991a7c34ab07dc7..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/multi_modality_dataset.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) 2021-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import math -from typing import List, Optional, NamedTuple - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - LanguagePairDataset, - FileAudioDataset, - data_utils, -) -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class ModalityDatasetItem(NamedTuple): - datasetname: str - dataset: any - max_positions: List[int] - max_tokens: Optional[int] = None - max_sentences: Optional[int] = None - -# MultiModalityDataset: it concate multiple datasets with different modalities. -# Compared with ConcatDataset it can 1) sample data given the ratios for different datasets -# 2) it adds mode to indicate what type of the data samples come from. 
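# Illustrative usage (hypothetical example; `speech_ds` / `text_ds` and the numeric limits
# are placeholders for FairseqDataset instances and budgets defined elsewhere):
#
#     items = [
#         ModalityDatasetItem("speech_to_text", speech_ds,
#                             max_positions=[600000, 1024], max_tokens=800000, max_sentences=8),
#         ModalityDatasetItem("text_to_text", text_ds,
#                             max_positions=[1024, 1024], max_tokens=8192, max_sentences=64),
#     ]
#     mm_dataset = MultiModalityDataset(items)
#     samplers = mm_dataset.get_batch_samplers(
#         mult_ratios=[1.0, 0.5], required_batch_size_multiple=8, seed=1)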
-# It will be used with GroupedEpochBatchIterator together to generate mini-batch with samples -# from the same type of dataset -# If only one dataset is used, it will perform like the original dataset with mode added -class MultiModalityDataset(ConcatDataset): - def __init__(self, datasets: List[ModalityDatasetItem]): - id_to_mode = [] - dsets = [] - max_tokens = [] - max_sentences = [] - max_positions = [] - for dset in datasets: - id_to_mode.append(dset.datasetname) - dsets.append(dset.dataset) - max_tokens.append(dset.max_tokens) - max_positions.append(dset.max_positions) - max_sentences.append(dset.max_sentences) - weights = [1.0 for s in dsets] - super().__init__(dsets, weights) - self.max_tokens = max_tokens - self.max_positions = max_positions - self.max_sentences = max_sentences - self.id_to_mode = id_to_mode - self.raw_sub_batch_samplers = [] - self._cur_epoch = 0 - - def set_epoch(self, epoch): - super().set_epoch(epoch) - self._cur_epoch = epoch - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - sample = self.datasets[dataset_idx][sample_idx] - return (dataset_idx, sample) - - def collater(self, samples): - if len(samples) == 0: - return {} - dataset_idx = samples[0][0] - # make sure all samples in samples are from same dataset - assert sum([0 if dataset_idx == s[0] else 1 for s in samples]) == 0 - samples = self.datasets[dataset_idx].collater([x[1] for x in samples]) - # add mode - samples["net_input"]["mode"] = self.id_to_mode[dataset_idx] - - return samples - - def size(self, index: int): - if len(self.datasets) == 1: - return self.datasets[0].size(index) - return super().size(index) - - @property - def sizes(self): - if len(self.datasets) == 1: - return self.datasets[0].sizes - super().sizes - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. - """ - if len(self.datasets) == 1: - return self.datasets[0].ordered_indices() - indices_group = [] - for d_idx, ds in enumerate(self.datasets): - sample_num = self.cumulative_sizes[d_idx] - if d_idx > 0: - sample_num = sample_num - self.cumulative_sizes[d_idx - 1] - assert sample_num == len(ds) - indices_group.append(ds.ordered_indices()) - return indices_group - - def get_raw_batch_samplers(self, required_batch_size_multiple, seed): - if len(self.raw_sub_batch_samplers) > 0: - logger.info(" raw_sub_batch_samplers exists. 
No action is taken") - return - with data_utils.numpy_seed(seed): - indices = self.ordered_indices() - for i, ds in enumerate(self.datasets): - indices[i] = ds.filter_indices_by_size( - indices[i], - self.max_positions[i], - )[0] - sub_batch_sampler = ds.batch_by_size( - indices[i], - max_tokens=self.max_tokens[i], - max_sentences=self.max_sentences[i], - required_batch_size_multiple=required_batch_size_multiple, - ) - self.raw_sub_batch_samplers.append(sub_batch_sampler) - - def get_batch_samplers(self, mult_ratios, required_batch_size_multiple, seed): - self.get_raw_batch_samplers(required_batch_size_multiple, seed) - batch_samplers = [] - for i, _ in enumerate(self.datasets): - if i > 0: - sub_batch_sampler = [ - [y + self.cumulative_sizes[i - 1] for y in x] - for x in self.raw_sub_batch_samplers[i] - ] - else: - sub_batch_sampler = list(self.raw_sub_batch_samplers[i]) - smp_r = mult_ratios[i] - if smp_r != 1: - is_increase = "increased" if smp_r > 1 else "decreased" - logger.info( - "number of batch for the dataset {} is {} from {} to {}".format( - self.id_to_mode[i], - is_increase, - len(sub_batch_sampler), - int(len(sub_batch_sampler) * smp_r), - ) - ) - mul_samplers = [] - for _ in range(math.floor(smp_r)): - mul_samplers = mul_samplers + sub_batch_sampler - if math.floor(smp_r) != smp_r: - with data_utils.numpy_seed(seed + self._cur_epoch): - np.random.shuffle(sub_batch_sampler) - smp_num = int( - (smp_r - math.floor(smp_r)) * len(sub_batch_sampler) - ) - mul_samplers = mul_samplers + sub_batch_sampler[:smp_num] - sub_batch_sampler = mul_samplers - else: - logger.info( - "dataset {} batch number is {} ".format( - self.id_to_mode[i], len(sub_batch_sampler) - ) - ) - batch_samplers.append(sub_batch_sampler) - - return batch_samplers - - -class LangPairMaskDataset(FairseqDataset): - def __init__( - self, - dataset: LanguagePairDataset, - src_eos: int, - src_bos: Optional[int] = None, - noise_id: Optional[int] = -1, - mask_ratio: Optional[float] = 0, - mask_type: Optional[str] = "random", - ): - self.dataset = dataset - self.src_eos = src_eos - self.src_bos = src_bos - self.noise_id = noise_id - self.mask_ratio = mask_ratio - self.mask_type = mask_type - assert mask_type in ("random", "tail") - - @property - def src_sizes(self): - return self.dataset.src_sizes - - @property - def tgt_sizes(self): - return self.dataset.tgt_sizes - - @property - def sizes(self): - # dataset.sizes can be a dynamically computed sizes: - return self.dataset.sizes - - def get_batch_shapes(self): - return self.dataset.buckets - - def num_tokens_vec(self, indices): - return self.dataset.num_tokens_vec(indices) - - def __len__(self): - return len(self.dataset) - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) - - def mask_src_tokens(self, sample): - src_item = sample["source"] - mask = None - if self.mask_type == "random": - mask = torch.rand(len(src_item)).le(self.mask_ratio) - else: - mask = torch.ones(len(src_item)) - mask[: int(len(src_item) * (1 - self.mask_ratio))] = 0 - mask = mask.eq(1) - if src_item[0] == self.src_bos: - mask[0] = False - if src_item[-1] == self.src_eos: - mask[-1] = False - mask_src_item = src_item.masked_fill(mask, self.noise_id) - smp = {"id": sample["id"], 
"source": mask_src_item, "target": sample["target"]} - return smp - - def __getitem__(self, index): - sample = self.dataset[index] - if self.mask_ratio > 0: - sample = self.mask_src_tokens(sample) - return sample - - def collater(self, samples, pad_to_length=None): - return self.dataset.collater(samples, pad_to_length) - - -class FileAudioDatasetWrapper(FileAudioDataset): - def collater(self, samples): - samples = super().collater(samples) - if len(samples) == 0: - return {} - samples["net_input"]["src_tokens"] = samples["net_input"]["source"] - samples["net_input"]["prev_output_tokens"] = None - del samples["net_input"]["source"] - samples["net_input"]["src_lengths"] = None - samples["net_input"]["alignment"] = None - return samples diff --git a/spaces/IHaBiS/wd-v1-4-tags/app.py b/spaces/IHaBiS/wd-v1-4-tags/app.py deleted file mode 100644 index 9fd637e1c0dc9a22ebf568c95af9f88dbba98b4b..0000000000000000000000000000000000000000 --- a/spaces/IHaBiS/wd-v1-4-tags/app.py +++ /dev/null @@ -1,268 +0,0 @@ -from __future__ import annotations - -import argparse -import functools -import html -import os - -import gradio as gr -import huggingface_hub -import numpy as np -import onnxruntime as rt -import pandas as pd -import piexif -import piexif.helper -import PIL.Image - -from Utils import dbimutils - -TITLE = "WaifuDiffusion v1.4 Tags" -DESCRIPTION = """ -Demo for: -- [SmilingWolf/wd-v1-4-swinv2-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2) -- [SmilingWolf/wd-v1-4-convnext-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2) -- [SmilingWolf/wd-v1-4-vit-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2) - -Includes "ready to copy" prompt and a prompt analyzer. - -Modified from [NoCrypt/DeepDanbooru_string](https://huggingface.co/spaces/NoCrypt/DeepDanbooru_string) -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -Huggingface Space forked from [SmilingWolf/wd-v1-4-tags](https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags) -Example image by [이하비스](https://arca.live/b/aiart/65215657) -""" - -HF_TOKEN = os.environ["HF_TOKEN"] -SWIN_MODEL_REPO = "SmilingWolf/wd-v1-4-swinv2-tagger-v2" -CONV_MODEL_REPO = "SmilingWolf/wd-v1-4-convnext-tagger-v2" -VIT_MODEL_REPO = "SmilingWolf/wd-v1-4-vit-tagger-v2" -MODEL_FILENAME = "model.onnx" -LABEL_FILENAME = "selected_tags.csv" - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument("--score-slider-step", type=float, default=0.05) - parser.add_argument("--score-general-threshold", type=float, default=0.35) - parser.add_argument("--score-character-threshold", type=float, default=0.85) - parser.add_argument("--share", action="store_true") - return parser.parse_args() - - -def load_model(model_repo: str, model_filename: str) -> rt.InferenceSession: - path = huggingface_hub.hf_hub_download( - model_repo, model_filename, use_auth_token=HF_TOKEN - ) - model = rt.InferenceSession(path) - return model - - -def change_model(model_name): - global loaded_models - - if model_name == "SwinV2": - model = load_model(SWIN_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ConvNext": - model = load_model(CONV_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ViT": - model = load_model(VIT_MODEL_REPO, MODEL_FILENAME) - - loaded_models[model_name] = model - return loaded_models[model_name] - - -def load_labels() 
-> list[str]: - path = huggingface_hub.hf_hub_download( - SWIN_MODEL_REPO, LABEL_FILENAME, use_auth_token=HF_TOKEN - ) - df = pd.read_csv(path) - - tag_names = df["name"].tolist() - rating_indexes = list(np.where(df["category"] == 9)[0]) - general_indexes = list(np.where(df["category"] == 0)[0]) - character_indexes = list(np.where(df["category"] == 4)[0]) - return tag_names, rating_indexes, general_indexes, character_indexes - - -def plaintext_to_html(text): - text = ( - "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "</p>
    " - ) - return text - - -def predict( - image: PIL.Image.Image, - model_name: str, - general_threshold: float, - character_threshold: float, - tag_names: list[str], - rating_indexes: list[np.int64], - general_indexes: list[np.int64], - character_indexes: list[np.int64], -): - global loaded_models - - rawimage = image - - model = loaded_models[model_name] - if model is None: - model = change_model(model_name) - - _, height, width, _ = model.get_inputs()[0].shape - - # Alpha to white - image = image.convert("RGBA") - new_image = PIL.Image.new("RGBA", image.size, "WHITE") - new_image.paste(image, mask=image) - image = new_image.convert("RGB") - image = np.asarray(image) - - # PIL RGB to OpenCV BGR - image = image[:, :, ::-1] - - image = dbimutils.make_square(image, height) - image = dbimutils.smart_resize(image, height) - image = image.astype(np.float32) - image = np.expand_dims(image, 0) - - input_name = model.get_inputs()[0].name - label_name = model.get_outputs()[0].name - probs = model.run([label_name], {input_name: image})[0] - - labels = list(zip(tag_names, probs[0].astype(float))) - - # First 4 labels are actually ratings: pick one with argmax - ratings_names = [labels[i] for i in rating_indexes] - rating = dict(ratings_names) - - # Then we have general tags: pick any where prediction confidence > threshold - general_names = [labels[i] for i in general_indexes] - general_res = [x for x in general_names if x[1] > general_threshold] - general_res = dict(general_res) - - # Everything else is characters: pick any where prediction confidence > threshold - character_names = [labels[i] for i in character_indexes] - character_res = [x for x in character_names if x[1] > character_threshold] - character_res = dict(character_res) - - b = dict(sorted(general_res.items(), key=lambda item: item[1], reverse=True)) - a = ( - ", ".join(list(b.keys())) - .replace("_", " ") - .replace("(", "\(") - .replace(")", "\)") - ) - c = ", ".join(list(b.keys())) - - items = rawimage.info - geninfo = "" - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b"") - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode("utf8", errors="ignore") - - items["exif comment"] = exif_comment - geninfo = exif_comment - - for field in [ - "jfif", - "jfif_version", - "jfif_unit", - "jfif_density", - "dpi", - "exif", - "loop", - "background", - "timestamp", - "duration", - ]: - items.pop(field, None) - - geninfo = items.get("parameters", geninfo) - - info = f""" -

<p><h4>PNG Info</h4></p> -""" - for key, text in items.items(): - info += ( - f""" -<div> -<p><b>{plaintext_to_html(str(key))}</b></p> -<p>{plaintext_to_html(str(text))}</p> -</div>
    -""".strip() - + "\n" - ) - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

<div><p>{message}</p></div>
    " - - return (a, c, rating, character_res, general_res, info) - - -def main(): - global loaded_models - loaded_models = {"SwinV2": None, "ConvNext": None, "ViT": None} - - args = parse_args() - - change_model("SwinV2") - - tag_names, rating_indexes, general_indexes, character_indexes = load_labels() - - func = functools.partial( - predict, - tag_names=tag_names, - rating_indexes=rating_indexes, - general_indexes=general_indexes, - character_indexes=character_indexes, - ) - - gr.Interface( - fn=func, - inputs=[ - gr.Image(type="pil", label="Input"), - gr.Radio(["SwinV2", "ConvNext", "ViT"], value="SwinV2", label="Model"), - gr.Slider( - 0, - 1, - step=args.score_slider_step, - value=args.score_general_threshold, - label="General Tags Threshold", - ), - gr.Slider( - 0, - 1, - step=args.score_slider_step, - value=args.score_character_threshold, - label="Character Tags Threshold", - ), - ], - outputs=[ - gr.Textbox(label="Output (string)"), - gr.Textbox(label="Output (raw string)"), - gr.Label(label="Rating"), - gr.Label(label="Output (characters)"), - gr.Label(label="Output (tags)"), - gr.HTML(), - ], - examples=[["c9b4889b469576ad521d8f1414ce9b25f2142bac089d8e88483c86551d54989e.webp", "SwinV2", 0.35, 0.85]], - title=TITLE, - description=DESCRIPTION, - allow_flagging="never", - ).launch( - enable_queue=True, - share=args.share, - ) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/examples/server/public/index.js b/spaces/Illumotion/Koboldcpp/examples/server/public/index.js deleted file mode 100644 index 9db5a9d9faf299fe8f19467b6a3bd068e4f56677..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/public/index.js +++ /dev/null @@ -1 +0,0 @@ -function t(){throw new Error("Cycle detected")}function n(){if(u>1){u--;return}let t,n=!1;while(void 0!==_){let i=_;_=void 0;f++;while(void 0!==i){const _=i.o;i.o=void 0;i.f&=-3;if(!(8&i.f)&&a(i))try{i.c()}catch(e){if(!n){t=e;n=!0}}i=_}}f=0;u--;if(n)throw t}function e(t){if(u>0)return t();u++;try{return t()}finally{n()}}let i,_,o=0;function r(t){if(o>0)return t();const n=i;i=void 0;o++;try{return t()}finally{o--;i=n}}let u=0,f=0,l=0;function s(t){if(void 0===i)return;let n=t.n;if(void 0===n||n.t!==i){n={i:0,S:t,p:i.s,n:void 0,t:i,e:void 0,x:void 0,r:n};if(void 0!==i.s)i.s.n=n;i.s=n;t.n=n;if(32&i.f)t.S(n);return n}else if(-1===n.i){n.i=0;if(void 0!==n.n){n.n.p=n.p;if(void 0!==n.p)n.p.n=n.n;n.p=i.s;n.n=void 0;i.s.n=n;i.s=n}return n}}function c(t){this.v=t;this.i=0;this.n=void 0;this.t=void 0}c.prototype.h=function(){return!0};c.prototype.S=function(t){if(this.t!==t&&void 0===t.e){t.x=this.t;if(void 0!==this.t)this.t.e=t;this.t=t}};c.prototype.U=function(t){if(void 0!==this.t){const n=t.e,e=t.x;if(void 0!==n){n.x=e;t.e=void 0}if(void 0!==e){e.e=n;t.x=void 0}if(t===this.t)this.t=e}};c.prototype.subscribe=function(t){const n=this;return S((function(){const e=n.value,i=32&this.f;this.f&=-33;try{t(e)}finally{this.f|=i}}))};c.prototype.valueOf=function(){return this.value};c.prototype.toString=function(){return this.value+""};c.prototype.toJSON=function(){return this.value};c.prototype.peek=function(){return this.v};Object.defineProperty(c.prototype,"value",{get(){const t=s(this);if(void 0!==t)t.i=this.i;return this.v},set(e){if(i instanceof v)!function(){throw new Error("Computed cannot have side-effects")}();if(e!==this.v){if(f>100)t();this.v=e;this.i++;l++;u++;try{for(let t=this.t;void 0!==t;t=t.x)t.t.N()}finally{n()}}}});function h(t){return new c(t)}function 
a(t){for(let n=t.s;void 0!==n;n=n.n)if(n.S.i!==n.i||!n.S.h()||n.S.i!==n.i)return!0;return!1}function p(t){for(let n=t.s;void 0!==n;n=n.n){const e=n.S.n;if(void 0!==e)n.r=e;n.S.n=n;n.i=-1;if(void 0===n.n){t.s=n;break}}}function d(t){let n,e=t.s;while(void 0!==e){const t=e.p;if(-1===e.i){e.S.U(e);if(void 0!==t)t.n=e.n;if(void 0!==e.n)e.n.p=t}else n=e;e.S.n=e.r;if(void 0!==e.r)e.r=void 0;e=t}t.s=n}function v(t){c.call(this,void 0);this.x=t;this.s=void 0;this.g=l-1;this.f=4}(v.prototype=new c).h=function(){this.f&=-3;if(1&this.f)return!1;if(32==(36&this.f))return!0;this.f&=-5;if(this.g===l)return!0;this.g=l;this.f|=1;if(this.i>0&&!a(this)){this.f&=-2;return!0}const t=i;try{p(this);i=this;const t=this.x();if(16&this.f||this.v!==t||0===this.i){this.v=t;this.f&=-17;this.i++}}catch(t){this.v=t;this.f|=16;this.i++}i=t;d(this);this.f&=-2;return!0};v.prototype.S=function(t){if(void 0===this.t){this.f|=36;for(let t=this.s;void 0!==t;t=t.n)t.S.S(t)}c.prototype.S.call(this,t)};v.prototype.U=function(t){if(void 0!==this.t){c.prototype.U.call(this,t);if(void 0===this.t){this.f&=-33;for(let t=this.s;void 0!==t;t=t.n)t.S.U(t)}}};v.prototype.N=function(){if(!(2&this.f)){this.f|=6;for(let t=this.t;void 0!==t;t=t.x)t.t.N()}};v.prototype.peek=function(){if(!this.h())t();if(16&this.f)throw this.v;return this.v};Object.defineProperty(v.prototype,"value",{get(){if(1&this.f)t();const n=s(this);this.h();if(void 0!==n)n.i=this.i;if(16&this.f)throw this.v;return this.v}});function y(t){return new v(t)}function m(t){const e=t.u;t.u=void 0;if("function"==typeof e){u++;const _=i;i=void 0;try{e()}catch(n){t.f&=-2;t.f|=8;g(t);throw n}finally{i=_;n()}}}function g(t){for(let n=t.s;void 0!==n;n=n.n)n.S.U(n);t.x=void 0;t.s=void 0;m(t)}function b(t){if(i!==this)throw new Error("Out-of-order effect");d(this);i=t;this.f&=-2;if(8&this.f)g(this);n()}function k(t){this.x=t;this.u=void 0;this.s=void 0;this.o=void 0;this.f=32}k.prototype.c=function(){const t=this.S();try{if(8&this.f)return;if(void 0===this.x)return;const n=this.x();if("function"==typeof n)this.u=n}finally{t()}};k.prototype.S=function(){if(1&this.f)t();this.f|=1;this.f&=-9;m(this);p(this);u++;const n=i;i=this;return b.bind(this,n)};k.prototype.N=function(){if(!(2&this.f)){this.f|=2;this.o=_;_=this}};k.prototype.d=function(){this.f|=8;if(!(1&this.f))g(this)};function S(t){const n=new k(t);try{n.c()}catch(t){n.d();throw t}return n.d.bind(n)}var x,w,C,E,U,H,N,P,$,D={},T=[],V=/acit|ex(?:s|g|n|p|$)|rph|grid|ows|mnc|ntw|ine[ch]|zoo|^ord|itera/i,A=Array.isArray;function F(t,n){for(var e in n)t[e]=n[e];return t}function M(t){var n=t.parentNode;n&&n.removeChild(t)}function W(t,n,e){var i,_,o,r={};for(o in n)"key"==o?i=n[o]:"ref"==o?_=n[o]:r[o]=n[o];if(arguments.length>2&&(r.children=arguments.length>3?x.call(arguments,2):e),"function"==typeof t&&null!=t.defaultProps)for(o in t.defaultProps)void 0===r[o]&&(r[o]=t.defaultProps[o]);return O(t,r,i,_,null)}function O(t,n,e,i,_){var o={type:t,props:n,key:e,ref:i,__k:null,__:null,__b:0,__e:null,__d:void 0,__c:null,__h:null,constructor:void 0,__v:null==_?++C:_};return null==_&&null!=w.vnode&&w.vnode(o),o}function L(){return{current:null}}function R(t){return t.children}function I(t,n){this.props=t,this.context=n}function j(t,n){if(null==n)return t.__?j(t.__,t.__.__k.indexOf(t)+1):null;for(var e;nn&&U.sort(P));G.__r=0}function z(t,n,e,i,_,o,r,u,f,l,s){var 
c,h,a,p,d,v,y,m,g,b,k=0,S=i&&i.__k||T,x=S.length,w=x,C=n.length;for(e.__k=[],c=0;c0?O(p.type,p.props,p.key,p.ref?p.ref:null,p.__v):p)&&(p.__=e,p.__b=e.__b+1,-1===(m=X(p,S,y=c+k,w))?a=D:(a=S[m]||D,S[m]=void 0,w--),it(t,p,a,_,o,r,u,f,l,s),d=p.__e,(h=p.ref)&&a.ref!=h&&(a.ref&&rt(a.ref,null,p),s.push(h,p.__c||d,p)),null!=d&&(null==v&&(v=d),b=!(g=a===D||null===a.__v)&&m===y,g?-1==m&&k--:m!==y&&(m===y+1?(k++,b=!0):m>y?w>C-y?(k+=m-y,b=!0):k--:k=m(null!=f?1:0))for(;r>=0||u=0){if((f=n[r])&&_==f.key&&o===f.type)return r;r--}if(u2&&(u.children=arguments.length>3?x.call(arguments,2):e),O(t.type,u,i||t.key,_||t.ref,null)}function ht(t,n){var e={__c:n="__cC"+$++,__:t,Consumer:function(t,n){return t.children(n)},Provider:function(t){var e,i;return this.getChildContext||(e=[],(i={})[n]=this,this.getChildContext=function(){return i},this.shouldComponentUpdate=function(t){this.props.value!==t.value&&e.some((function(t){t.__e=!0,q(t)}))},this.sub=function(t){e.push(t);var n=t.componentWillUnmount;t.componentWillUnmount=function(){e.splice(e.indexOf(t),1),n&&n.call(t)}}),t.children}};return e.Provider.__=e.Consumer.contextType=e}x=T.slice,w={__e:function(t,n,e,i){for(var _,o,r;n=n.__;)if((_=n.__c)&&!_.__)try{if((o=_.constructor)&&null!=o.getDerivedStateFromError&&(_.setState(o.getDerivedStateFromError(t)),r=_.__d),null!=_.componentDidCatch&&(_.componentDidCatch(t,i||{}),r=_.__d),r)return _.__E=_}catch(n){t=n}throw t}},C=0,E=function(t){return null!=t&&void 0===t.constructor},I.prototype.setState=function(t,n){var e;e=null!=this.__s&&this.__s!==this.state?this.__s:this.__s=F({},this.state),"function"==typeof t&&(t=t(F({},e),this.props)),t&&F(e,t),null!=t&&this.__v&&(n&&this._sb.push(n),q(this))},I.prototype.forceUpdate=function(t){this.__v&&(this.__e=!0,t&&this.__h.push(t),q(this))},I.prototype.render=R,U=[],N="function"==typeof Promise?Promise.prototype.then.bind(Promise.resolve()):setTimeout,P=function(t,n){return t.__v.__b-n.__v.__b},G.__r=0,$=0;var at,pt,dt,vt,yt=0,mt=[],gt=[],bt=w.__b,kt=w.__r,St=w.diffed,xt=w.__c,wt=w.unmount;function Ct(t,n){w.__h&&w.__h(pt,t,yt||n),yt=0;var e=pt.__H||(pt.__H={__:[],__h:[]});return t>=e.__.length&&e.__.push({__V:gt}),e.__[t]}function Et(t){return yt=1,Ut(Bt,t)}function Ut(t,n,e){var i=Ct(at++,2);if(i.t=t,!i.__c&&(i.__=[e?e(n):Bt(void 0,n),function(t){var n=i.__N?i.__N[0]:i.__[0],e=i.t(n,t);n!==e&&(i.__N=[e,i.__[1]],i.__c.setState({}))}],i.__c=pt,!pt.u)){var _=function(t,n,e){if(!i.__c.__H)return!0;var _=i.__c.__H.__.filter((function(t){return t.__c}));if(_.every((function(t){return!t.__N})))return!o||o.call(this,t,n,e);var r=!1;return _.forEach((function(t){if(t.__N){var n=t.__[0];t.__=t.__N,t.__N=void 0,n!==t.__[0]&&(r=!0)}})),!(!r&&i.__c.props===t)&&(!o||o.call(this,t,n,e))};pt.u=!0;var o=pt.shouldComponentUpdate,r=pt.componentWillUpdate;pt.componentWillUpdate=function(t,n,e){if(this.__e){var i=o;o=void 0,_(t,n,e),o=i}r&&r.call(this,t,n,e)},pt.shouldComponentUpdate=_}return i.__N||i.__}function Ht(t,n){var e=Ct(at++,3);!w.__s&&jt(e.__H,n)&&(e.__=t,e.i=n,pt.__H.__h.push(e))}function Nt(t,n){var e=Ct(at++,4);!w.__s&&jt(e.__H,n)&&(e.__=t,e.i=n,pt.__h.push(e))}function Pt(t){return yt=5,Dt((function(){return{current:t}}),[])}function $t(t,n,e){yt=6,Nt((function(){return"function"==typeof t?(t(n()),function(){return t(null)}):t?(t.current=n(),function(){return t.current=null}):void 0}),null==e?e:e.concat(t))}function Dt(t,n){var e=Ct(at++,7);return jt(e.__H,n)?(e.__V=t(),e.i=n,e.__h=t,e.__V):e.__}function Tt(t,n){return yt=8,Dt((function(){return t}),n)}function Vt(t){var 
n=pt.context[t.__c],e=Ct(at++,9);return e.c=t,n?(null==e.__&&(e.__=!0,n.sub(pt)),n.props.value):t.__}function At(t,n){w.useDebugValue&&w.useDebugValue(n?n(t):t)}function Ft(t){var n=Ct(at++,10),e=Et();return n.__=t,pt.componentDidCatch||(pt.componentDidCatch=function(t,i){n.__&&n.__(t,i),e[1](t)}),[e[0],function(){e[1](void 0)}]}function Mt(){var t=Ct(at++,11);if(!t.__){for(var n=pt.__v;null!==n&&!n.__m&&null!==n.__;)n=n.__;var e=n.__m||(n.__m=[0,0]);t.__="P"+e[0]+"-"+e[1]++}return t.__}function Wt(){for(var t;t=mt.shift();)if(t.__P&&t.__H)try{t.__H.__h.forEach(Rt),t.__H.__h.forEach(It),t.__H.__h=[]}catch(u){t.__H.__h=[],w.__e(u,t.__v)}}w.__b=function(t){pt=null,bt&&bt(t)},w.__r=function(t){kt&&kt(t),at=0;var n=(pt=t.__c).__H;n&&(dt===pt?(n.__h=[],pt.__h=[],n.__.forEach((function(t){t.__N&&(t.__=t.__N),t.__V=gt,t.__N=t.i=void 0}))):(n.__h.forEach(Rt),n.__h.forEach(It),n.__h=[],at=0)),dt=pt},w.diffed=function(t){St&&St(t);var n=t.__c;n&&n.__H&&(n.__H.__h.length&&(1!==mt.push(n)&&vt===w.requestAnimationFrame||((vt=w.requestAnimationFrame)||Lt)(Wt)),n.__H.__.forEach((function(t){t.i&&(t.__H=t.i),t.__V!==gt&&(t.__=t.__V),t.i=void 0,t.__V=gt}))),dt=pt=null},w.__c=function(t,n){n.some((function(t){try{t.__h.forEach(Rt),t.__h=t.__h.filter((function(t){return!t.__||It(t)}))}catch(s){n.some((function(t){t.__h&&(t.__h=[])})),n=[],w.__e(s,t.__v)}})),xt&&xt(t,n)},w.unmount=function(t){wt&&wt(t);var n,e=t.__c;e&&e.__H&&(e.__H.__.forEach((function(t){try{Rt(t)}catch(t){n=t}})),e.__H=void 0,n&&w.__e(n,e.__v))};var Ot="function"==typeof requestAnimationFrame;function Lt(t){var n,e=function(){clearTimeout(i),Ot&&cancelAnimationFrame(n),setTimeout(t)},i=setTimeout(e,100);Ot&&(n=requestAnimationFrame(e))}function Rt(t){var n=pt,e=t.__c;"function"==typeof e&&(t.__c=void 0,e()),pt=n}function It(t){var n=pt;t.__c=t.__(),pt=n}function jt(t,n){return!t||t.length!==n.length||n.some((function(n,e){return n!==t[e]}))}function Bt(t,n){return"function"==typeof n?n(t):n}function qt(t,n){w[t]=n.bind(null,w[t]||(()=>{}))}let Gt,zt;function Jt(t){if(zt)zt();zt=t&&t.S()}function Kt({data:t}){const n=Xt(t);n.value=t;const e=Dt(()=>{let t=this.__v;while(t=t.__)if(t.__c){t.__c.__$f|=4;break}this.__$u.c=()=>{var t;if(!E(e.peek())&&3===(null==(t=this.base)?void 0:t.nodeType))this.base.data=e.peek();else{this.__$f|=1;this.setState({})}};return y(()=>{let t=n.value.value;return 0===t?0:!0===t?"":t||""})},[]);return e.value}Kt.displayName="_st";Object.defineProperties(c.prototype,{constructor:{configurable:!0,value:void 0},type:{configurable:!0,value:Kt},props:{configurable:!0,get(){return{data:this}}},__b:{configurable:!0,value:1}});qt("__b",(t,n)=>{if("string"==typeof n.type){let t,e=n.props;for(let i in e){if("children"===i)continue;let _=e[i];if(_ instanceof c){if(!t)n.__np=t={};t[i]=_;e[i]=_.peek()}}}t(n)});qt("__r",(t,n)=>{Jt();let e,i=n.__c;if(i){i.__$f&=-2;e=i.__$u;if(void 0===e)i.__$u=e=function(t){let n;S((function(){n=this}));n.c=()=>{i.__$f|=1;i.setState({})};return n}()}Gt=i;Jt(e);t(n)});qt("__e",(t,n,e,i)=>{Jt();Gt=void 0;t(n,e,i)});qt("diffed",(t,n)=>{Jt();Gt=void 0;let e;if("string"==typeof n.type&&(e=n.__e)){let t=n.__np,i=n.props;if(t){let n=e.U;if(n)for(let e in n){let i=n[e];if(void 0!==i&&!(e in t)){i.d();n[e]=void 0}}else{n={};e.U=n}for(let _ in t){let o=n[_],r=t[_];if(void 0===o){o=Qt(e,_,r,i);n[_]=o}else o.o(r,i)}}}t(n)});function Qt(t,n,e,i){const _=n in t&&void 0===t.ownerSVGElement,o=h(e);return{o:(t,n)=>{o.value=t;i=n},d:S(()=>{const e=o.value.value;if(i[n]!==e){i[n]=e;if(_)t[n]=e;else 
if(e)t.setAttribute(n,e);else t.removeAttribute(n)}})}}qt("unmount",(t,n)=>{if("string"==typeof n.type){let t=n.__e;if(t){const n=t.U;if(n){t.U=void 0;for(let t in n){let e=n[t];if(e)e.d()}}}}else{let t=n.__c;if(t){const n=t.__$u;if(n){t.__$u=void 0;n.d()}}}t(n)});qt("__h",(t,n,e,i)=>{if(i<3||9===i)n.__$f|=2;t(n,e,i)});I.prototype.shouldComponentUpdate=function(t,n){const e=this.__$u;if(!(e&&void 0!==e.s||4&this.__$f))return!0;if(3&this.__$f)return!0;for(let i in n)return!0;for(let i in t)if("__source"!==i&&t[i]!==this.props[i])return!0;for(let i in this.props)if(!(i in t))return!0;return!1};function Xt(t){return Dt(()=>h(t),[])}function Yt(t){const n=Pt(t);n.current=t;Gt.__$f|=4;return Dt(()=>y(()=>n.current()),[])}function Zt(t){const n=Pt(t);n.current=t;Ht(()=>S(()=>n.current()),[])}var tn=function(t,n,e,i){var _;n[0]=0;for(var o=1;o=5&&((_||!t&&5===i)&&(r.push(i,0,_,e),i=6),t&&(r.push(i,t,0,e),i=6)),_=""},f=0;f"===n?(i=1,_=""):_=n+_[0]:o?n===o?o="":_+=n:'"'===n||"'"===n?o=n:">"===n?(u(),i=1):i&&("="===n?(i=5,e=_,_=""):"/"===n&&(i<5||">"===t[f][l+1])?(u(),3===i&&(r=r[0]),i=r,(r=r[0]).push(2,0,i),i=0):" "===n||"\t"===n||"\n"===n||"\r"===n?(u(),i=2):_+=n),3===i&&"!--"===_&&(i=4,r=r[0])}return u(),r}(t)),n),arguments,[])).length>1?n:n[0]}var _n=en.bind(W);export{I as Component,R as Fragment,c as Signal,e as batch,ct as cloneElement,y as computed,ht as createContext,W as createElement,L as createRef,S as effect,W as h,_n as html,st as hydrate,E as isValidElement,w as options,lt as render,h as signal,K as toChildArray,r as untracked,Tt as useCallback,Yt as useComputed,Vt as useContext,At as useDebugValue,Ht as useEffect,Ft as useErrorBoundary,Mt as useId,$t as useImperativeHandle,Nt as useLayoutEffect,Dt as useMemo,Ut as useReducer,Pt as useRef,Xt as useSignal,Zt as useSignalEffect,Et as useState}; diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/squeeze_excitation.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/squeeze_excitation.py deleted file mode 100644 index d1d902bb30c071acbc0fa919a134c80fed86bd6c..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/squeeze_excitation.py +++ /dev/null @@ -1,20 +0,0 @@ -import torch.nn as nn - - -class SELayer(nn.Module): - def __init__(self, channel, reduction=16): - super(SELayer, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction, bias=False), - nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel, bias=False), - nn.Sigmoid() - ) - - def forward(self, x): - b, c, _, _ = x.size() - y = self.avg_pool(x).view(b, c) - y = self.fc(y).view(b, c, 1, 1) - res = x * y.expand_as(x) - return res diff --git a/spaces/JUNGU/VToonify/vtoonify/model/raft/demo.py b/spaces/JUNGU/VToonify/vtoonify/model/raft/demo.py deleted file mode 100644 index 5abc1da863f1231af1247209739402b05fa8bf85..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/model/raft/demo.py +++ /dev/null @@ -1,75 +0,0 @@ -import sys -sys.path.append('core') - -import argparse -import os -import cv2 -import glob -import numpy as np -import torch -from PIL import Image - -from raft import RAFT -from utils import flow_viz -from utils.utils import InputPadder - - - -DEVICE = 'cuda' - -def load_image(imfile): - img = np.array(Image.open(imfile)).astype(np.uint8) - img = torch.from_numpy(img).permute(2, 0, 1).float() - return 
img[None].to(DEVICE) - - -def viz(img, flo): - img = img[0].permute(1,2,0).cpu().numpy() - flo = flo[0].permute(1,2,0).cpu().numpy() - - # map flow to rgb image - flo = flow_viz.flow_to_image(flo) - img_flo = np.concatenate([img, flo], axis=0) - - # import matplotlib.pyplot as plt - # plt.imshow(img_flo / 255.0) - # plt.show() - - cv2.imshow('image', img_flo[:, :, [2,1,0]]/255.0) - cv2.waitKey() - - -def demo(args): - model = torch.nn.DataParallel(RAFT(args)) - model.load_state_dict(torch.load(args.model)) - - model = model.module - model.to(DEVICE) - model.eval() - - with torch.no_grad(): - images = glob.glob(os.path.join(args.path, '*.png')) + \ - glob.glob(os.path.join(args.path, '*.jpg')) - - images = sorted(images) - for imfile1, imfile2 in zip(images[:-1], images[1:]): - image1 = load_image(imfile1) - image2 = load_image(imfile2) - - padder = InputPadder(image1.shape) - image1, image2 = padder.pad(image1, image2) - - flow_low, flow_up = model(image1, image2, iters=20, test_mode=True) - viz(image1, flow_up) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--model', help="restore checkpoint") - parser.add_argument('--path', help="dataset for evaluation") - parser.add_argument('--small', action='store_true', help='use small model') - parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision') - parser.add_argument('--alternate_corr', action='store_true', help='use efficent correlation implementation') - args = parser.parse_args() - - demo(args) diff --git a/spaces/JUNGU/Whisper-Auto-Subtitled-Video-Generator/README.md b/spaces/JUNGU/Whisper-Auto-Subtitled-Video-Generator/README.md deleted file mode 100644 index ebd0d30e4816d1b180024fadc24cb7d5f271072d..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/Whisper-Auto-Subtitled-Video-Generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper-Auto-Subtitled-Video-Generator -emoji: 🎥 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: 01_🎥_Input_YouTube_Link.py -pinned: false -duplicated_from: BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Jamkonams/AutoGPT/ui/app.py b/spaces/Jamkonams/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. 
Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
    {utils.format_directory(OUTPUT_DIR)}
    - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/losses/__init__.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/losses/__init__.py deleted file mode 100644 index 2b184e74c861e6fca0c548692a9a949a6100b0aa..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/losses/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -from copy import deepcopy - -from basicsr.utils import get_root_logger -from basicsr.utils.registry import LOSS_REGISTRY -from .losses import (CharbonnierLoss, GANLoss, L1Loss, MSELoss, PerceptualLoss, WeightedTVLoss, g_path_regularize, - gradient_penalty_loss, r1_penalty) - -__all__ = [ - 'L1Loss', 'MSELoss', 'CharbonnierLoss', 'WeightedTVLoss', 'PerceptualLoss', 'GANLoss', 'gradient_penalty_loss', - 'r1_penalty', 'g_path_regularize' -] - - -def build_loss(opt): - """Build loss from options. - - Args: - opt (dict): Configuration. It must constain: - type (str): Model type. 
- """ - opt = deepcopy(opt) - loss_type = opt.pop('type') - loss = LOSS_REGISTRY.get(loss_type)(**opt) - logger = get_root_logger() - logger.info(f'Loss [{loss.__class__.__name__}] is created.') - return loss diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py deleted file mode 100644 index a4eba3e94888709be7d2a7c7499fbcc1808b4a88..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py +++ /dev/null @@ -1,12 +0,0 @@ -# Auto-anchor utils - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print("Reversing anchor order") - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) diff --git a/spaces/Jimmie/Urban8K-mini/app.py b/spaces/Jimmie/Urban8K-mini/app.py deleted file mode 100644 index a55eaf5ec6d10cf589be594b20393343537f5aa1..0000000000000000000000000000000000000000 --- a/spaces/Jimmie/Urban8K-mini/app.py +++ /dev/null @@ -1,168 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb. - -# %% auto 0 -__all__ = ['data', 'audios', 'metadata', 'to_consider', 'processed_metadata', 'repo_id', 'learner', 'categories', 'title', - 'description', 'mic', 'label', 'examples', 'intf', 'process_audio_exists', 'load_x', 'load_label_tfm', - 'classify_audio'] - -# %% app.ipynb 1 -import torch -import gradio as gr -from gradio import CSVLogger -from fastai.vision.all import * -import torchaudio -import torchaudio.transforms as T -import warnings -from huggingface_hub import from_pretrained_fastai - -# %% app.ipynb 2 -warnings.filterwarnings("ignore") - -# %% app.ipynb 3 -def process_audio_exists(audio): - slice_name = audio.name - - # check if slice name exists in new metadata file - row = processed_metadata.loc[processed_metadata['slice_file_name'] == slice_name].index.any() - - return row - -# %% app.ipynb 4 -data = Path('examples') -audios = get_files(data, extensions='.wav') - -metadata = pd.read_csv('UrbanSound8K.csv') -to_consider = ['siren', 'street_music', 'children_playing', 'dog_bark', 'car_horn'] -processed_metadata = metadata.loc[metadata['class'].isin(to_consider)] -processed_metadata.loc[processed_metadata['class'] == 'siren', 'classID'] = 4 -processed_metadata.loc[processed_metadata['class'] == 'street_music', 'classID'] = 0 - -# %% app.ipynb 5 -class load_x(Transform): - def __init__(self): - self.sr = 44100 - self.max_ms = 4000 - self.channels = 2 - # self.transform = transform - def rechannel(self, waveform, sr): - if (waveform.shape[0] == self.channels): - # no rechanneling needed - return waveform, sr - - if (self.channels==1): - # converting stereo to mono - # by selecting the first channel - new_waveform = waveform[:1,:] - - elif (self.channels==2): - # converting mono to stereo - # by duplicating the first channel - new_waveform = torch.cat([waveform, waveform]) - return new_waveform, sr - - def resample(self, waveform, sr): - if (sr==self.sr): - # no resampling needed - return waveform, sr - - num_channels = waveform.shape[0] - - # resample first channel - new_waveform = torchaudio.transforms.Resample(sr, self.sr)(waveform[:1,:]) - if (num_channels) > 1: - # resample second channel and merge 
the two - re_two = torchaudio.transforms.Resample(sr, self.sr)(waveform[1:,:]) - new_waveform = torch.cat([new_waveform, re_two]) - - return (new_waveform, self.sr) - - def pad_trunc(self, waveform, sr): - num_channels, num_frames = waveform.shape - max_len = sr//1000 * self.max_ms - - if (num_frames>max_len): - # truncate signal to given length - waveform = waveform[:,:max_len] - - else: - # get padding lengths for beginning and end - begin_ln = random.randint(0, max_len-num_frames) - end_ln = max_len - num_frames - begin_ln - - # pad the audio with zeros - pad_begin = torch.zeros((num_channels, begin_ln)) - pad_end = torch.zeros((num_channels, end_ln)) - - waveform = torch.cat((pad_begin, waveform, pad_end), 1) - - return (waveform, sr) - - def mel_specgram(self, waveform, sr): - mel_tfm = T.MelSpectrogram( - sample_rate=sr, - n_fft=1024, - win_length=None, - hop_length=512, - center=True, - pad_mode="reflect", - power=2.0, - norm="slaney", - onesided=True, - n_mels=128, - mel_scale="htk") - - spec = mel_tfm(waveform) - - waveform = torchaudio.transforms.AmplitudeToDB(top_db=80)(spec) - - return waveform, sr - - - def encodes(self, x): - waveform, sr = torchaudio.load(x) - waveform, sr = self.resample(waveform, sr) - waveform, sr = self.pad_trunc(waveform, sr) - waveform, sr = self.rechannel(waveform, sr) - waveform, sr = self.mel_specgram(waveform, sr) - return waveform - - -class load_label_tfm(Transform): - def __init__(self, metadata=processed_metadata): self.metadata = metadata - def encodes(self, x): - return self.metadata.loc[self.metadata['slice_file_name'] == x.name]['class'].item() - -# %% app.ipynb 6 -repo_id = "Jimmie/urban8k" - -learner = from_pretrained_fastai(repo_id) - -# %% app.ipynb 14 -categories = tuple(learner.dls.vocab) - -def classify_audio(audio): - # use Path to open audio - audio_path = Path(audio) - pred,idx,probs = learner.predict(audio_path) - return dict(zip(categories, map(float, probs))) - -# %% app.ipynb 16 -title = "Environmental Sound Classification" - -description = """ -This demo showcases how AI can be used to recognize environmental sounds. It focuses specifically on 5 classes: car_horn, children_playing, dog_bark, siren and street music - - -When uploading audio, make sure it is in .wav format and is less than 4 seconds long. - -Enjoy! 
-""" -mic = gr.Audio(source='upload', type="filepath", label='Upload Audio File here') -label = gr.outputs.Label() -examples = list(data.ls()) - -intf = gr.Interface(fn=classify_audio, inputs=mic, outputs=label, examples=examples, - title=title, description=description, cache_examples=False, - auto_submit_duration=5) - -intf.launch(inline=False) diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/stylesheet/custom-components.css b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/stylesheet/custom-components.css deleted file mode 100644 index 633c4cd958b8f45d6f185aa81adcf26f07043ea8..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/stylesheet/custom-components.css +++ /dev/null @@ -1,240 +0,0 @@ - -/* user-info */ -#user-info.block { - white-space: nowrap; - position: absolute; left: 8em; top: .8em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none!important; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; max-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user-info.block .wrap { - opacity: 0; -} -#user-info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user-info.info-transparent { - opacity: 0; - transition: opacity 1s ease-in-out; -} - - -/* updater */ -#toast-update { - position: absolute; - display: flex; - top: -500px; - width: 100%; - justify-content: center; - z-index: var(--layer-top); - transition: top 0.3s ease-out; -} -#check-chuanhu-update { - position: absolute; - align-items: center; - display: flex; - flex-direction: column; - justify-content: center; - margin: var(--size-6) var(--size-4); - box-shadow: var(--shadow-drop-lg); - border: 1px solid var(--block-label-border-color); - border-radius: var(--container-radius); - background: var(--background-fill-primary); - padding: var(--size-4) var(--size-6); - min-width: 360px; - max-width: 480px; - overflow: hidden; - pointer-events: auto; -} -#version-info-title { - font-size: 1.2em; - font-weight: bold; - text-align: start; - width: 100%; -} -#release-note-wrap { - width: 100%; - max-width: 400px; - height: 120px; - border: solid 1px var(--border-color-primary); - overflow: auto; - padding: 0 8px; -} -#release-note-wrap.hideK { - display: none; -} -.btn-update-group { - display: flex; - justify-content: space-evenly; - align-items: center; - width: 100%; - padding-top: 10px; -} -.btn-update-group.hideK { - display: none; -} -#updating-info { - margin: 16px 0px 24px; - text-align: start; - width: 100%; -} - - -#usage-display p, #usage-display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - - -/* 亮暗色模式切换 */ -#apSwitch input[type="checkbox"] { - margin: 0 !important; -} -#apSwitch label.apSwitch { - display: flex; - align-items: center; - cursor: pointer; - color: var(--body-text-color); - 
font-weight: var(--checkbox-label-text-weight); - font-size: var(--checkbox-label-text-size); - line-height: var(--line-md); - margin: 2px 0 !important; -} -input[type="checkbox"]#apSwitch-checkbox::before { - background: none !important; - content: '🌞'; - border: none !important; - box-shadow: none !important; - font-size: 22px; - top: -4.4px; - left: -1px; -} -input:checked[type="checkbox"]#apSwitch-checkbox::before { - content: '🌚'; - left: 16px; -} - -/* .apSwitch { - top: 2px; - display: inline-block; - height: 22px; - position: relative; - width: 40px; - border-radius: 11px; - box-shadow: inset 0 0 1px 0 rgba(0,0,0,0.05), inset 0 0 2px 0 rgba(0,0,0,0.08) !important; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 22px; - border-radius: 11px; -} -.apSlider::before { - transform: scale(0.9); - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(18px); - content:"🌚"; -} */ - -/* switch-checkbox */ -.switch-checkbox label { - flex-direction: row-reverse; - justify-content: space-between; -} -.switch-checkbox input[type="checkbox"] + span { - margin-left: 0 !important; -} - -.switch-checkbox input[type="checkbox"] { - -moz-appearance: none; - appearance: none; - -webkit-appearance: none; - outline: none; -} - -.switch-checkbox input[type="checkbox"] { - display: inline-block !important; - position: relative !important; - border: none !important; - outline: none; - width: 40px !important; - height: 22px !important; - border-radius: 11px !important; - background-image: none !important; - box-shadow: inset 0 0 1px 0 rgba(0,0,0,0.05), inset 0 0 2px 0 rgba(0,0,0,0.08) !important; - background-image: none !important; - background-color: var(--switch-checkbox-color-light) !important; - transition: .2s ease background-color; -} -.dark .switch-checkbox input[type="checkbox"] { - background-color: var(--switch-checkbox-color-dark) !important; -} -.switch-checkbox input[type="checkbox"]::before { - content: ""; - position: absolute; - width: 22px; - height: 22px; - top: 0; - left: 0; - background: #FFFFFF; - border: 0.5px solid rgba(0,0,0,0.02); - box-shadow: 0 0 0 0 rgba(0,0,0,0.15), 0 1px 0 0 rgba(0,0,0,0.05); - transform: scale(0.9); - border-radius: 11px !important; - transition: .4s ease all; - box-shadow: var(--input-shadow); -} -.switch-checkbox input:checked[type="checkbox"] { - background-color: var(--primary-600) !important; -} -.switch-checkbox input:checked[type="checkbox"]::before { - background-color: #fff; - left: 18px; -} - diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/spsa/spsa_func.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/spsa/spsa_func.py deleted file mode 100644 index a657789554fe62e8b53afc573f02f20c593cf3c5..0000000000000000000000000000000000000000 --- a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/spsa/spsa_func.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch - - -def spsa_func(input, params, process, i, sample_rate=24000): - return process(input.cpu(), params.cpu(), i, sample_rate).type_as(input) - - -class SPSAFunction(torch.autograd.Function): - @staticmethod - def forward( - ctx, - input, - params, - process, - epsilon, - thread_context, - sample_rate=24000, - ): - """Apply processor to a batch of tensors using 
given parameters. - - Args: - input (Tensor): Audio with shape: batch x 2 x samples - params (Tensor): Processor parameters with shape: batch x params - process (function): Function that will apply processing. - epsilon (float): Perturbation strength for SPSA computation. - - Returns: - output (Tensor): Processed audio with same shape as input. - """ - ctx.save_for_backward(input, params) - ctx.epsilon = epsilon - ctx.process = process - ctx.thread_context = thread_context - - if thread_context.parallel: - - for i in range(input.shape[0]): - msg = ( - "forward", - ( - i, - input[i].view(-1).detach().cpu().numpy(), - params[i].view(-1).detach().cpu().numpy(), - sample_rate, - ), - ) - thread_context.procs[i][1].send(msg) - - z = torch.empty_like(input) - for i in range(input.shape[0]): - z[i] = torch.from_numpy(thread_context.procs[i][1].recv()) - else: - z = torch.empty_like(input) - for i in range(input.shape[0]): - value = ( - i, - input[i].view(-1).detach().cpu().numpy(), - params[i].view(-1).detach().cpu().numpy(), - sample_rate, - ) - z[i] = torch.from_numpy( - thread_context.static_forward(thread_context.dsp, value) - ) - - return z - - @staticmethod - def backward(ctx, grad_output): - """Estimate gradients using SPSA.""" - - input, params = ctx.saved_tensors - epsilon = ctx.epsilon - needs_input_grad = ctx.needs_input_grad[0] - needs_param_grad = ctx.needs_input_grad[1] - thread_context = ctx.thread_context - - grads_input = None - grads_params = None - - # Receive grads - if needs_input_grad: - grads_input = torch.empty_like(input) - if needs_param_grad: - grads_params = torch.empty_like(params) - - if thread_context.parallel: - - for i in range(input.shape[0]): - msg = ( - "backward", - ( - i, - input[i].view(-1).detach().cpu().numpy(), - params[i].view(-1).detach().cpu().numpy(), - needs_input_grad, - needs_param_grad, - grad_output[i].view(-1).detach().cpu().numpy(), - epsilon, - ), - ) - thread_context.procs[i][1].send(msg) - - # Wait for output - for i in range(input.shape[0]): - temp1, temp2 = thread_context.procs[i][1].recv() - - if temp1 is not None: - grads_input[i] = torch.from_numpy(temp1) - - if temp2 is not None: - grads_params[i] = torch.from_numpy(temp2) - - return grads_input, grads_params, None, None, None, None - else: - for i in range(input.shape[0]): - value = ( - i, - input[i].view(-1).detach().cpu().numpy(), - params[i].view(-1).detach().cpu().numpy(), - needs_input_grad, - needs_param_grad, - grad_output[i].view(-1).detach().cpu().numpy(), - epsilon, - ) - temp1, temp2 = thread_context.static_backward(thread_context.dsp, value) - if temp1 is not None: - grads_input[i] = torch.from_numpy(temp1) - - if temp2 is not None: - grads_params[i] = torch.from_numpy(temp2) - return grads_input, grads_params, None, None, None, None diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/samseg_mask2former_nwpu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/samseg_mask2former_nwpu_config.py deleted file mode 100644 index 8f855100b8a188a827bdd99d1914e407ed607dc0..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/configs/rsprompter/samseg_mask2former_nwpu_config.py +++ /dev/null @@ -1,350 +0,0 @@ -custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models'], allow_failed_imports=False) - -sub_model_train = [ - 'panoptic_head', - 'sam_neck', - 'data_preprocessor' -] - -sub_model_optim = { - 'sam_neck': {'lr_mult': 1}, - 'panoptic_head': {'lr_mult': 1}, -} - -max_epochs = 500 - -optimizer = dict( - type='AdamW', - 
sub_model=sub_model_optim, - lr=0.0001, - weight_decay=1e-3 -) - -param_scheduler = [ - # warm up learning rate scheduler - dict( - type='LinearLR', - start_factor=1e-4, - by_epoch=True, - begin=0, - end=1, - # update by iter - convert_to_iter_based=True), - # main learning rate scheduler - dict( - type='CosineAnnealingLR', - T_max=max_epochs, - by_epoch=True, - begin=1, - end=max_epochs, - ), -] - -param_scheduler_callback = dict( - type='ParamSchedulerHook' -) - -evaluator_ = dict( - type='CocoPLMetric', - metric=['bbox', 'segm'], - proposal_nums=[1, 10, 100] -) - -evaluator = dict( - val_evaluator=evaluator_, -) - - -image_size = (1024, 1024) - -data_preprocessor = dict( - type='mmdet.DetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_size_divisor=32, - pad_mask=True, - mask_pad_value=0, -) - -num_things_classes = 10 -num_stuff_classes = 0 -num_classes = num_things_classes + num_stuff_classes -num_queries = 90 - -model_cfg = dict( - type='SegSAMPLer', - hyperparameters=dict( - optimizer=optimizer, - param_scheduler=param_scheduler, - evaluator=evaluator, - ), - need_train_names=sub_model_train, - data_preprocessor=data_preprocessor, - backbone=dict( - type='vit_h', - checkpoint='pretrain/sam/sam_vit_h_4b8939.pth', - # type='vit_b', - # checkpoint='pretrain/sam/sam_vit_b_01ec64.pth', - ), - sam_neck=dict( - type='SAMAggregatorNeck', - in_channels=[1280] * 32, - # in_channels=[768] * 12, - inner_channels=32, - selected_channels=range(8, 32, 3), - # selected_channels=range(4, 12, 2), - out_channels=256, - up_sample_scale=4, - ), - panoptic_head=dict( - type='mmdet.Mask2FormerHead', - in_channels=[256, 256, 256], # pass to pixel_decoder inside - strides=[8, 16, 32], - feat_channels=256, - out_channels=256, - num_things_classes=num_things_classes, - num_stuff_classes=num_stuff_classes, - num_queries=num_queries, - num_transformer_feat_level=3, - pixel_decoder=dict( - type='mmdet.MSDeformAttnPixelDecoder', - num_outs=3, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - encoder=dict( # DeformableDetrTransformerEncoder - # num_layers=6, - num_layers=2, - layer_cfg=dict( # DeformableDetrTransformerEncoderLayer - self_attn_cfg=dict( # MultiScaleDeformableAttention - embed_dims=256, - num_heads=8, - num_levels=3, - num_points=4, - dropout=0.1, - batch_first=True), - ffn_cfg=dict( - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0.1, - act_cfg=dict(type='ReLU', inplace=True)))), - positional_encoding=dict(num_feats=128, normalize=True)), - enforce_decoder_input_project=False, - positional_encoding=dict(num_feats=128, normalize=True), - transformer_decoder=dict( # Mask2FormerTransformerDecoder - return_intermediate=True, - # num_layers=9, - num_layers=3, - layer_cfg=dict( # Mask2FormerTransformerDecoderLayer - self_attn_cfg=dict( # MultiheadAttention - embed_dims=256, - num_heads=8, - dropout=0.1, - batch_first=True), - cross_attn_cfg=dict( # MultiheadAttention - embed_dims=256, - num_heads=8, - dropout=0.1, - batch_first=True), - ffn_cfg=dict( - embed_dims=256, - feedforward_channels=2048, - num_fcs=2, - ffn_drop=0.1, - act_cfg=dict(type='ReLU', inplace=True))), - init_cfg=None), - loss_cls=dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=False, - loss_weight=2.0, - reduction='mean', - class_weight=[1.0] * num_classes + [0.1]), - loss_mask=dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=True, - reduction='mean', - loss_weight=5.0), - loss_dice=dict( - type='mmdet.DiceLoss', - 
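            # Illustrative note (added comment, not in the original config): with
            # naive_dice=True below, this term is roughly
            #     1 - (2*sum(p*g) + eps) / (sum(p) + sum(g) + eps)
            # per predicted mask, scaled by loss_weight=5.0; the exact details live
            # in mmdet.DiceLoss and are an assumption here, not shown in this file.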
use_sigmoid=True, - activate=True, - reduction='mean', - naive_dice=True, - eps=1.0, - loss_weight=5.0)), - panoptic_fusion_head=dict( - type='mmdet.MaskFormerFusionHead', - num_things_classes=num_things_classes, - num_stuff_classes=num_stuff_classes, - loss_panoptic=None, - init_cfg=None), - train_cfg=dict( - num_points=12544, - oversample_ratio=3.0, - importance_sample_ratio=0.75, - assigner=dict( - type='mmdet.HungarianAssigner', - match_costs=[ - dict(type='mmdet.ClassificationCost', weight=2.0), - dict( - type='mmdet.CrossEntropyLossCost', weight=5.0, use_sigmoid=True), - dict(type='mmdet.DiceCost', weight=5.0, pred_act=True, eps=1.0) - ]), - sampler=dict(type='mmdet.MaskPseudoSampler')), - test_cfg=dict( - panoptic_on=False, - # For now, the dataset does not support - # evaluating semantic segmentation metric. - semantic_on=False, - instance_on=True, - # max_per_image is for instance segmentation. - max_per_image=num_queries, - iou_thr=0.8, - # In Mask2Former's panoptic postprocessing, - # it will filter mask area where score is less than 0.5 . - filter_low_score=True), - init_cfg=None) - - -task_name = 'nwpu_ins' -exp_name = 'E20230604_5' -logger = dict( - type='WandbLogger', - project=task_name, - group='samseg-mask2former', - name=exp_name -) -# logger = None - -callbacks = [ - param_scheduler_callback, - dict( - type='ModelCheckpoint', - dirpath=f'results/{task_name}/{exp_name}/checkpoints', - save_last=True, - mode='max', - monitor='valsegm_map_0', - save_top_k=2, - filename='epoch_{epoch}-map_{valsegm_map_0:.4f}' - ), - dict( - type='LearningRateMonitor', - logging_interval='step' - ) -] - - -trainer_cfg = dict( - compiled_model=False, - accelerator="auto", - strategy="auto", - # strategy="ddp", - # strategy='ddp_find_unused_parameters_true', - # precision='32', - # precision='16-mixed', - devices=8, - default_root_dir=f'results/{task_name}/{exp_name}', - # default_root_dir='results/tmp', - max_epochs=max_epochs, - logger=logger, - callbacks=callbacks, - log_every_n_steps=5, - check_val_every_n_epoch=5, - benchmark=True, - # sync_batchnorm=True, - # fast_dev_run=True, - - # limit_train_batches=1, - # limit_val_batches=0, - # limit_test_batches=None, - # limit_predict_batches=None, - # overfit_batches=0.0, - - # val_check_interval=None, - # num_sanity_val_steps=0, - # enable_checkpointing=None, - # enable_progress_bar=None, - # enable_model_summary=None, - # accumulate_grad_batches=32, - # gradient_clip_val=15, - # gradient_clip_algorithm='norm', - # deterministic=None, - # inference_mode: bool=True, - use_distributed_sampler=True, - # profiler="simple", - # detect_anomaly=False, - # barebones=False, - # plugins=None, - # reload_dataloaders_every_n_epochs=0, -) - - -backend_args = None -train_pipeline = [ - dict(type='mmdet.LoadImageFromFile'), - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='mmdet.Resize', scale=image_size), - dict(type='mmdet.RandomFlip', prob=0.5), - dict(type='mmdet.PackDetInputs') -] - -test_pipeline = [ - dict(type='mmdet.LoadImageFromFile', backend_args=backend_args), - dict(type='mmdet.Resize', scale=image_size), - # If you don't have a gt annotation, delete the pipeline - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor')) -] - - -train_batch_size_per_gpu = 4 -train_num_workers = 4 -test_batch_size_per_gpu = 4 -test_num_workers = 4 -persistent_workers = True - -data_parent = 
'/mnt/search01/dataset/cky_data/NWPU10' -train_data_prefix = '' -val_data_prefix = '' - -dataset_type = 'NWPUInsSegDataset' - -val_loader = dict( - batch_size=test_batch_size_per_gpu, - num_workers=test_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='NWPU_instances_val.json', - data_prefix=dict(img_path='positive image set'), - test_mode=True, - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=test_pipeline, - backend_args=backend_args)) - -datamodule_cfg = dict( - type='PLDataModule', - train_loader=dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='NWPU_instances_train.json', - data_prefix=dict(img_path='positive image set'), - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=train_pipeline, - backend_args=backend_args) - ), - val_loader=val_loader, - # test_loader=val_loader - predict_loader=val_loader -) \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/datasets/transforms/colorspace.py b/spaces/KyanChen/RSPrompter/mmdet/datasets/transforms/colorspace.py deleted file mode 100644 index e0ba2e97c7eedf65df5ab8942ee461f48a785f39..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/datasets/transforms/colorspace.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -from typing import Optional - -import mmcv -import numpy as np -from mmcv.transforms import BaseTransform -from mmcv.transforms.utils import cache_randomness - -from mmdet.registry import TRANSFORMS -from .augment_wrappers import _MAX_LEVEL, level_to_mag - - -@TRANSFORMS.register_module() -class ColorTransform(BaseTransform): - """Base class for color transformations. All color transformations need to - inherit from this base class. ``ColorTransform`` unifies the class - attributes and class functions of color transformations (Color, Brightness, - Contrast, Sharpness, Solarize, SolarizeAdd, Equalize, AutoContrast, Invert, - and Posterize), and only distort color channels, without impacting the - locations of the instances. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing the geometric - transformation and should be in range [0, 1]. Defaults to 1.0. - level (int, optional): The level should be in range [0, _MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for color transformation. - Defaults to 0.1. - max_mag (float): The maximum magnitude for color transformation. - Defaults to 1.9. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.1, - max_mag: float = 1.9) -> None: - assert 0 <= prob <= 1.0, f'The probability of the transformation ' \ - f'should be in range [0,1], got {prob}.' - assert level is None or isinstance(level, int), \ - f'The level should be None or type int, got {type(level)}.' - assert level is None or 0 <= level <= _MAX_LEVEL, \ - f'The level should be in range [0,{_MAX_LEVEL}], got {level}.' - assert isinstance(min_mag, float), \ - f'min_mag should be type float, got {type(min_mag)}.' - assert isinstance(max_mag, float), \ - f'max_mag should be type float, got {type(max_mag)}.' 
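        # Illustrative note (added comment, not in the original file): `_get_mag`
        # below delegates to `level_to_mag(level, min_mag, max_mag)` imported from
        # .augment_wrappers. Conceptually it maps an integer level in
        # [0, _MAX_LEVEL] linearly into [min_mag, max_mag], roughly
        #     mag ~= min_mag + (max_mag - min_mag) * level / _MAX_LEVEL,
        # and falls back to a random magnitude in that range when `level` is None.
        # The exact rounding and sampling behaviour is an assumption, not shown here.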
- assert min_mag <= max_mag, \ - f'min_mag should smaller than max_mag, ' \ - f'got min_mag={min_mag} and max_mag={max_mag}' - self.prob = prob - self.level = level - self.min_mag = min_mag - self.max_mag = max_mag - - def _transform_img(self, results: dict, mag: float) -> None: - """Transform the image.""" - pass - - @cache_randomness - def _random_disable(self): - """Randomly disable the transform.""" - return np.random.rand() > self.prob - - @cache_randomness - def _get_mag(self): - """Get the magnitude of the transform.""" - return level_to_mag(self.level, self.min_mag, self.max_mag) - - def transform(self, results: dict) -> dict: - """Transform function for images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Transformed results. - """ - - if self._random_disable(): - return results - mag = self._get_mag() - self._transform_img(results, mag) - return results - - def __repr__(self) -> str: - repr_str = self.__class__.__name__ - repr_str += f'(prob={self.prob}, ' - repr_str += f'level={self.level}, ' - repr_str += f'min_mag={self.min_mag}, ' - repr_str += f'max_mag={self.max_mag})' - return repr_str - - -@TRANSFORMS.register_module() -class Color(ColorTransform): - """Adjust the color balance of the image, in a manner similar to the - controls on a colour TV set. A magnitude=0 gives a black & white image, - whereas magnitude=1 gives the original image. The bboxes, masks and - segmentations are not modified. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Color transformation. - Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for Color transformation. - Defaults to 0.1. - max_mag (float): The maximum magnitude for Color transformation. - Defaults to 1.9. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.1, - max_mag: float = 1.9) -> None: - assert 0. <= min_mag <= 2.0, \ - f'min_mag for Color should be in range [0,2], got {min_mag}.' - assert 0. <= max_mag <= 2.0, \ - f'max_mag for Color should be in range [0,2], got {max_mag}.' - super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """Apply Color transformation to image.""" - # NOTE defaultly the image should be BGR format - img = results['img'] - results['img'] = mmcv.adjust_color(img, mag).astype(img.dtype) - - -@TRANSFORMS.register_module() -class Brightness(ColorTransform): - """Adjust the brightness of the image. A magnitude=0 gives a black image, - whereas magnitude=1 gives the original image. The bboxes, masks and - segmentations are not modified. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Brightness transformation. - Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for Brightness transformation. - Defaults to 0.1. - max_mag (float): The maximum magnitude for Brightness transformation. - Defaults to 1.9. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.1, - max_mag: float = 1.9) -> None: - assert 0. 
<= min_mag <= 2.0, \ - f'min_mag for Brightness should be in range [0,2], got {min_mag}.' - assert 0. <= max_mag <= 2.0, \ - f'max_mag for Brightness should be in range [0,2], got {max_mag}.' - super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """Adjust the brightness of image.""" - img = results['img'] - results['img'] = mmcv.adjust_brightness(img, mag).astype(img.dtype) - - -@TRANSFORMS.register_module() -class Contrast(ColorTransform): - """Control the contrast of the image. A magnitude=0 gives a gray image, - whereas magnitude=1 gives the original imageThe bboxes, masks and - segmentations are not modified. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Contrast transformation. - Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for Contrast transformation. - Defaults to 0.1. - max_mag (float): The maximum magnitude for Contrast transformation. - Defaults to 1.9. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.1, - max_mag: float = 1.9) -> None: - assert 0. <= min_mag <= 2.0, \ - f'min_mag for Contrast should be in range [0,2], got {min_mag}.' - assert 0. <= max_mag <= 2.0, \ - f'max_mag for Contrast should be in range [0,2], got {max_mag}.' - super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """Adjust the image contrast.""" - img = results['img'] - results['img'] = mmcv.adjust_contrast(img, mag).astype(img.dtype) - - -@TRANSFORMS.register_module() -class Sharpness(ColorTransform): - """Adjust images sharpness. A positive magnitude would enhance the - sharpness and a negative magnitude would make the image blurry. A - magnitude=0 gives the origin img. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Sharpness transformation. - Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for Sharpness transformation. - Defaults to 0.1. - max_mag (float): The maximum magnitude for Sharpness transformation. - Defaults to 1.9. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.1, - max_mag: float = 1.9) -> None: - assert 0. <= min_mag <= 2.0, \ - f'min_mag for Sharpness should be in range [0,2], got {min_mag}.' - assert 0. <= max_mag <= 2.0, \ - f'max_mag for Sharpness should be in range [0,2], got {max_mag}.' - super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """Adjust the image sharpness.""" - img = results['img'] - results['img'] = mmcv.adjust_sharpness(img, mag).astype(img.dtype) - - -@TRANSFORMS.register_module() -class Solarize(ColorTransform): - """Solarize images (Invert all pixels above a threshold value of - magnitude.). - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Solarize transformation. - Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. 
- If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for Solarize transformation. - Defaults to 0.0. - max_mag (float): The maximum magnitude for Solarize transformation. - Defaults to 256.0. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.0, - max_mag: float = 256.0) -> None: - assert 0. <= min_mag <= 256.0, f'min_mag for Solarize should be ' \ - f'in range [0, 256], got {min_mag}.' - assert 0. <= max_mag <= 256.0, f'max_mag for Solarize should be ' \ - f'in range [0, 256], got {max_mag}.' - super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """Invert all pixel values above magnitude.""" - img = results['img'] - results['img'] = mmcv.solarize(img, mag).astype(img.dtype) - - -@TRANSFORMS.register_module() -class SolarizeAdd(ColorTransform): - """SolarizeAdd images. For each pixel in the image that is less than 128, - add an additional amount to it decided by the magnitude. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing SolarizeAdd - transformation. Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for SolarizeAdd transformation. - Defaults to 0.0. - max_mag (float): The maximum magnitude for SolarizeAdd transformation. - Defaults to 110.0. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.0, - max_mag: float = 110.0) -> None: - assert 0. <= min_mag <= 110.0, f'min_mag for SolarizeAdd should be ' \ - f'in range [0, 110], got {min_mag}.' - assert 0. <= max_mag <= 110.0, f'max_mag for SolarizeAdd should be ' \ - f'in range [0, 110], got {max_mag}.' - super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """SolarizeAdd the image.""" - img = results['img'] - img_solarized = np.where(img < 128, np.minimum(img + mag, 255), img) - results['img'] = img_solarized.astype(img.dtype) - - -@TRANSFORMS.register_module() -class Posterize(ColorTransform): - """Posterize images (reduce the number of bits for each color channel). - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Posterize - transformation. Defaults to 1.0. - level (int, optional): Should be in range [0,_MAX_LEVEL]. - If level is None, it will generate from [0, _MAX_LEVEL] randomly. - Defaults to None. - min_mag (float): The minimum magnitude for Posterize transformation. - Defaults to 0.0. - max_mag (float): The maximum magnitude for Posterize transformation. - Defaults to 4.0. - """ - - def __init__(self, - prob: float = 1.0, - level: Optional[int] = None, - min_mag: float = 0.0, - max_mag: float = 4.0) -> None: - assert 0. <= min_mag <= 8.0, f'min_mag for Posterize should be ' \ - f'in range [0, 8], got {min_mag}.' - assert 0. <= max_mag <= 8.0, f'max_mag for Posterize should be ' \ - f'in range [0, 8], got {max_mag}.' 
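        # Illustrative note (added comment, not in the original file): posterizing
        # to `bits` bits keeps only the most significant bits of each 8-bit channel,
        # e.g. with bits=4 a value of 173 (0b10101101) becomes 160 (0b10100000).
        # `_transform_img` below passes math.ceil(mag) as the bit count to
        # mmcv.posterize.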
- super().__init__( - prob=prob, level=level, min_mag=min_mag, max_mag=max_mag) - - def _transform_img(self, results: dict, mag: float) -> None: - """Posterize the image.""" - img = results['img'] - results['img'] = mmcv.posterize(img, math.ceil(mag)).astype(img.dtype) - - -@TRANSFORMS.register_module() -class Equalize(ColorTransform): - """Equalize the image histogram. The bboxes, masks and segmentations are - not modified. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing Equalize transformation. - Defaults to 1.0. - level (int, optional): No use for Equalize transformation. - Defaults to None. - min_mag (float): No use for Equalize transformation. Defaults to 0.1. - max_mag (float): No use for Equalize transformation. Defaults to 1.9. - """ - - def _transform_img(self, results: dict, mag: float) -> None: - """Equalizes the histogram of one image.""" - img = results['img'] - results['img'] = mmcv.imequalize(img).astype(img.dtype) - - -@TRANSFORMS.register_module() -class AutoContrast(ColorTransform): - """Auto adjust image contrast. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing AutoContrast should - be in range [0, 1]. Defaults to 1.0. - level (int, optional): No use for AutoContrast transformation. - Defaults to None. - min_mag (float): No use for AutoContrast transformation. - Defaults to 0.1. - max_mag (float): No use for AutoContrast transformation. - Defaults to 1.9. - """ - - def _transform_img(self, results: dict, mag: float) -> None: - """Auto adjust image contrast.""" - img = results['img'] - results['img'] = mmcv.auto_contrast(img).astype(img.dtype) - - -@TRANSFORMS.register_module() -class Invert(ColorTransform): - """Invert images. - - Required Keys: - - - img - - Modified Keys: - - - img - - Args: - prob (float): The probability for performing invert therefore should - be in range [0, 1]. Defaults to 1.0. - level (int, optional): No use for Invert transformation. - Defaults to None. - min_mag (float): No use for Invert transformation. Defaults to 0.1. - max_mag (float): No use for Invert transformation. Defaults to 1.9. - """ - - def _transform_img(self, results: dict, mag: float) -> None: - """Invert the image.""" - img = results['img'] - results['img'] = mmcv.iminvert(img).astype(img.dtype) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centernet_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centernet_head.py deleted file mode 100644 index 09f3e599eb176965e53f270014cbd326858b7c17..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/centernet_head.py +++ /dev/null @@ -1,447 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Optional, Tuple - -import torch -import torch.nn as nn -from mmcv.ops import batched_nms -from mmengine.config import ConfigDict -from mmengine.model import bias_init_with_prob, normal_init -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import (ConfigType, InstanceList, OptConfigType, - OptInstanceList, OptMultiConfig) -from ..utils import (gaussian_radius, gen_gaussian_target, get_local_maximum, - get_topk_from_heatmap, multi_apply, - transpose_and_gather_feat) -from .base_dense_head import BaseDenseHead - - -@MODELS.register_module() -class CenterNetHead(BaseDenseHead): - """Objects as Points Head. 
CenterHead use center_point to indicate object's - position. Paper link - - Args: - in_channels (int): Number of channel in the input feature map. - feat_channels (int): Number of channel in the intermediate feature map. - num_classes (int): Number of categories excluding the background - category. - loss_center_heatmap (:obj:`ConfigDict` or dict): Config of center - heatmap loss. Defaults to - dict(type='GaussianFocalLoss', loss_weight=1.0) - loss_wh (:obj:`ConfigDict` or dict): Config of wh loss. Defaults to - dict(type='L1Loss', loss_weight=0.1). - loss_offset (:obj:`ConfigDict` or dict): Config of offset loss. - Defaults to dict(type='L1Loss', loss_weight=1.0). - train_cfg (:obj:`ConfigDict` or dict, optional): Training config. - Useless in CenterNet, but we keep this variable for - SingleStageDetector. - test_cfg (:obj:`ConfigDict` or dict, optional): Testing config - of CenterNet. - init_cfg (:obj:`ConfigDict` or dict or list[dict] or - list[:obj:`ConfigDict`], optional): Initialization - config dict. - """ - - def __init__(self, - in_channels: int, - feat_channels: int, - num_classes: int, - loss_center_heatmap: ConfigType = dict( - type='GaussianFocalLoss', loss_weight=1.0), - loss_wh: ConfigType = dict(type='L1Loss', loss_weight=0.1), - loss_offset: ConfigType = dict( - type='L1Loss', loss_weight=1.0), - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.heatmap_head = self._build_head(in_channels, feat_channels, - num_classes) - self.wh_head = self._build_head(in_channels, feat_channels, 2) - self.offset_head = self._build_head(in_channels, feat_channels, 2) - - self.loss_center_heatmap = MODELS.build(loss_center_heatmap) - self.loss_wh = MODELS.build(loss_wh) - self.loss_offset = MODELS.build(loss_offset) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.fp16_enabled = False - - def _build_head(self, in_channels: int, feat_channels: int, - out_channels: int) -> nn.Sequential: - """Build head for each branch.""" - layer = nn.Sequential( - nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(feat_channels, out_channels, kernel_size=1)) - return layer - - def init_weights(self) -> None: - """Initialize weights of the head.""" - bias_init = bias_init_with_prob(0.1) - self.heatmap_head[-1].bias.data.fill_(bias_init) - for head in [self.wh_head, self.offset_head]: - for m in head.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, std=0.001) - - def forward(self, x: Tuple[Tensor, ...]) -> Tuple[List[Tensor]]: - """Forward features. Notice CenterNet head does not use FPN. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - center_heatmap_preds (list[Tensor]): center predict heatmaps for - all levels, the channels number is num_classes. - wh_preds (list[Tensor]): wh predicts for all levels, the channels - number is 2. - offset_preds (list[Tensor]): offset predicts for all levels, the - channels number is 2. - """ - return multi_apply(self.forward_single, x) - - def forward_single(self, x: Tensor) -> Tuple[Tensor, ...]: - """Forward feature of a single level. - - Args: - x (Tensor): Feature of a single level. - - Returns: - center_heatmap_pred (Tensor): center predict heatmaps, the - channels number is num_classes. - wh_pred (Tensor): wh predicts, the channels number is 2. 
- offset_pred (Tensor): offset predicts, the channels number is 2. - """ - center_heatmap_pred = self.heatmap_head(x).sigmoid() - wh_pred = self.wh_head(x) - offset_pred = self.offset_head(x) - return center_heatmap_pred, wh_pred, offset_pred - - def loss_by_feat( - self, - center_heatmap_preds: List[Tensor], - wh_preds: List[Tensor], - offset_preds: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None) -> dict: - """Compute losses of the head. - - Args: - center_heatmap_preds (list[Tensor]): center predict heatmaps for - all levels with shape (B, num_classes, H, W). - wh_preds (list[Tensor]): wh predicts for all levels with - shape (B, 2, H, W). - offset_preds (list[Tensor]): offset predicts for all levels - with shape (B, 2, H, W). - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Tensor]: which has components below: - - loss_center_heatmap (Tensor): loss of center heatmap. - - loss_wh (Tensor): loss of hw heatmap - - loss_offset (Tensor): loss of offset heatmap. - """ - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - center_heatmap_pred = center_heatmap_preds[0] - wh_pred = wh_preds[0] - offset_pred = offset_preds[0] - - gt_bboxes = [ - gt_instances.bboxes for gt_instances in batch_gt_instances - ] - gt_labels = [ - gt_instances.labels for gt_instances in batch_gt_instances - ] - img_shape = batch_img_metas[0]['batch_input_shape'] - target_result, avg_factor = self.get_targets(gt_bboxes, gt_labels, - center_heatmap_pred.shape, - img_shape) - - center_heatmap_target = target_result['center_heatmap_target'] - wh_target = target_result['wh_target'] - offset_target = target_result['offset_target'] - wh_offset_target_weight = target_result['wh_offset_target_weight'] - - # Since the channel of wh_target and offset_target is 2, the avg_factor - # of loss_center_heatmap is always 1/2 of loss_wh and loss_offset. - loss_center_heatmap = self.loss_center_heatmap( - center_heatmap_pred, center_heatmap_target, avg_factor=avg_factor) - loss_wh = self.loss_wh( - wh_pred, - wh_target, - wh_offset_target_weight, - avg_factor=avg_factor * 2) - loss_offset = self.loss_offset( - offset_pred, - offset_target, - wh_offset_target_weight, - avg_factor=avg_factor * 2) - return dict( - loss_center_heatmap=loss_center_heatmap, - loss_wh=loss_wh, - loss_offset=loss_offset) - - def get_targets(self, gt_bboxes: List[Tensor], gt_labels: List[Tensor], - feat_shape: tuple, img_shape: tuple) -> Tuple[dict, int]: - """Compute regression and classification targets in multiple images. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box. - feat_shape (tuple): feature map shape with value [B, _, H, W] - img_shape (tuple): image shape. - - Returns: - tuple[dict, float]: The float value is mean avg_factor, the dict - has components below: - - center_heatmap_target (Tensor): targets of center heatmap, \ - shape (B, num_classes, H, W). 
- - wh_target (Tensor): targets of wh predict, shape \ - (B, 2, H, W). - - offset_target (Tensor): targets of offset predict, shape \ - (B, 2, H, W). - - wh_offset_target_weight (Tensor): weights of wh and offset \ - predict, shape (B, 2, H, W). - """ - img_h, img_w = img_shape[:2] - bs, _, feat_h, feat_w = feat_shape - - width_ratio = float(feat_w / img_w) - height_ratio = float(feat_h / img_h) - - center_heatmap_target = gt_bboxes[-1].new_zeros( - [bs, self.num_classes, feat_h, feat_w]) - wh_target = gt_bboxes[-1].new_zeros([bs, 2, feat_h, feat_w]) - offset_target = gt_bboxes[-1].new_zeros([bs, 2, feat_h, feat_w]) - wh_offset_target_weight = gt_bboxes[-1].new_zeros( - [bs, 2, feat_h, feat_w]) - - for batch_id in range(bs): - gt_bbox = gt_bboxes[batch_id] - gt_label = gt_labels[batch_id] - center_x = (gt_bbox[:, [0]] + gt_bbox[:, [2]]) * width_ratio / 2 - center_y = (gt_bbox[:, [1]] + gt_bbox[:, [3]]) * height_ratio / 2 - gt_centers = torch.cat((center_x, center_y), dim=1) - - for j, ct in enumerate(gt_centers): - ctx_int, cty_int = ct.int() - ctx, cty = ct - scale_box_h = (gt_bbox[j][3] - gt_bbox[j][1]) * height_ratio - scale_box_w = (gt_bbox[j][2] - gt_bbox[j][0]) * width_ratio - radius = gaussian_radius([scale_box_h, scale_box_w], - min_overlap=0.3) - radius = max(0, int(radius)) - ind = gt_label[j] - gen_gaussian_target(center_heatmap_target[batch_id, ind], - [ctx_int, cty_int], radius) - - wh_target[batch_id, 0, cty_int, ctx_int] = scale_box_w - wh_target[batch_id, 1, cty_int, ctx_int] = scale_box_h - - offset_target[batch_id, 0, cty_int, ctx_int] = ctx - ctx_int - offset_target[batch_id, 1, cty_int, ctx_int] = cty - cty_int - - wh_offset_target_weight[batch_id, :, cty_int, ctx_int] = 1 - - avg_factor = max(1, center_heatmap_target.eq(1).sum()) - target_result = dict( - center_heatmap_target=center_heatmap_target, - wh_target=wh_target, - offset_target=offset_target, - wh_offset_target_weight=wh_offset_target_weight) - return target_result, avg_factor - - def predict_by_feat(self, - center_heatmap_preds: List[Tensor], - wh_preds: List[Tensor], - offset_preds: List[Tensor], - batch_img_metas: Optional[List[dict]] = None, - rescale: bool = True, - with_nms: bool = False) -> InstanceList: - """Transform network output for a batch into bbox predictions. - - Args: - center_heatmap_preds (list[Tensor]): Center predict heatmaps for - all levels with shape (B, num_classes, H, W). - wh_preds (list[Tensor]): WH predicts for all levels with - shape (B, 2, H, W). - offset_preds (list[Tensor]): Offset predicts for all levels - with shape (B, 2, H, W). - batch_img_metas (list[dict], optional): Batch image meta info. - Defaults to None. - rescale (bool): If True, return boxes in original image space. - Defaults to True. - with_nms (bool): If True, do nms before return boxes. - Defaults to False. - - Returns: - list[:obj:`InstanceData`]: Instance segmentation - results of each image after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). 
- """ - assert len(center_heatmap_preds) == len(wh_preds) == len( - offset_preds) == 1 - result_list = [] - for img_id in range(len(batch_img_metas)): - result_list.append( - self._predict_by_feat_single( - center_heatmap_preds[0][img_id:img_id + 1, ...], - wh_preds[0][img_id:img_id + 1, ...], - offset_preds[0][img_id:img_id + 1, ...], - batch_img_metas[img_id], - rescale=rescale, - with_nms=with_nms)) - return result_list - - def _predict_by_feat_single(self, - center_heatmap_pred: Tensor, - wh_pred: Tensor, - offset_pred: Tensor, - img_meta: dict, - rescale: bool = True, - with_nms: bool = False) -> InstanceData: - """Transform outputs of a single image into bbox results. - - Args: - center_heatmap_pred (Tensor): Center heatmap for current level with - shape (1, num_classes, H, W). - wh_pred (Tensor): WH heatmap for current level with shape - (1, num_classes, H, W). - offset_pred (Tensor): Offset for current level with shape - (1, corner_offset_channels, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Defaults to True. - with_nms (bool): If True, do nms before return boxes. - Defaults to False. - - Returns: - :obj:`InstanceData`: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - batch_det_bboxes, batch_labels = self._decode_heatmap( - center_heatmap_pred, - wh_pred, - offset_pred, - img_meta['batch_input_shape'], - k=self.test_cfg.topk, - kernel=self.test_cfg.local_maximum_kernel) - - det_bboxes = batch_det_bboxes.view([-1, 5]) - det_labels = batch_labels.view(-1) - - batch_border = det_bboxes.new_tensor(img_meta['border'])[..., - [2, 0, 2, 0]] - det_bboxes[..., :4] -= batch_border - - if rescale and 'scale_factor' in img_meta: - det_bboxes[..., :4] /= det_bboxes.new_tensor( - img_meta['scale_factor']).repeat((1, 2)) - - if with_nms: - det_bboxes, det_labels = self._bboxes_nms(det_bboxes, det_labels, - self.test_cfg) - results = InstanceData() - results.bboxes = det_bboxes[..., :4] - results.scores = det_bboxes[..., 4] - results.labels = det_labels - return results - - def _decode_heatmap(self, - center_heatmap_pred: Tensor, - wh_pred: Tensor, - offset_pred: Tensor, - img_shape: tuple, - k: int = 100, - kernel: int = 3) -> Tuple[Tensor, Tensor]: - """Transform outputs into detections raw bbox prediction. - - Args: - center_heatmap_pred (Tensor): center predict heatmap, - shape (B, num_classes, H, W). - wh_pred (Tensor): wh predict, shape (B, 2, H, W). - offset_pred (Tensor): offset predict, shape (B, 2, H, W). - img_shape (tuple): image shape in hw format. - k (int): Get top k center keypoints from heatmap. Defaults to 100. - kernel (int): Max pooling kernel for extract local maximum pixels. - Defaults to 3. 
- - Returns: - tuple[Tensor]: Decoded output of CenterNetHead, containing - the following Tensors: - - - batch_bboxes (Tensor): Coords of each box with shape (B, k, 5) - - batch_topk_labels (Tensor): Categories of each box with \ - shape (B, k) - """ - height, width = center_heatmap_pred.shape[2:] - inp_h, inp_w = img_shape - - center_heatmap_pred = get_local_maximum( - center_heatmap_pred, kernel=kernel) - - *batch_dets, topk_ys, topk_xs = get_topk_from_heatmap( - center_heatmap_pred, k=k) - batch_scores, batch_index, batch_topk_labels = batch_dets - - wh = transpose_and_gather_feat(wh_pred, batch_index) - offset = transpose_and_gather_feat(offset_pred, batch_index) - topk_xs = topk_xs + offset[..., 0] - topk_ys = topk_ys + offset[..., 1] - tl_x = (topk_xs - wh[..., 0] / 2) * (inp_w / width) - tl_y = (topk_ys - wh[..., 1] / 2) * (inp_h / height) - br_x = (topk_xs + wh[..., 0] / 2) * (inp_w / width) - br_y = (topk_ys + wh[..., 1] / 2) * (inp_h / height) - - batch_bboxes = torch.stack([tl_x, tl_y, br_x, br_y], dim=2) - batch_bboxes = torch.cat((batch_bboxes, batch_scores[..., None]), - dim=-1) - return batch_bboxes, batch_topk_labels - - def _bboxes_nms(self, bboxes: Tensor, labels: Tensor, - cfg: ConfigDict) -> Tuple[Tensor, Tensor]: - """bboxes nms.""" - if labels.numel() > 0: - max_num = cfg.max_per_img - bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, - -1].contiguous(), - labels, cfg.nms) - if max_num > 0: - bboxes = bboxes[:max_num] - labels = labels[keep][:max_num] - - return bboxes, labels diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/contrib/__init__.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/contrib/__init__.py deleted file mode 100644 index 886abc4fad63bdaeb9f432e53bcbcbc4ed0c12a9..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/contrib/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import backtrader as bt - -from .import vortex as vortex -for name in vortex.__all__: - setattr(bt.indicators, name, getattr(vortex, name)) diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/train_mapping.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/train_mapping.py deleted file mode 100644 index ffff4a5de7622e831989e8cb0daa694325a345b5..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/train_mapping.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
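# Illustrative sketch (editorial addition, not part of the original repository):
# the training loop later in this file alternates a generator/mapping update on
# loss_G with a discriminator update on loss_D, each driven by its own optimizer.
# Below is a minimal self-contained version of that alternating pattern, with
# tiny hypothetical stand-in modules instead of Pix2PixHDModel_Mapping.
import torch
from torch import nn
import torch.nn.functional as F

G, D = nn.Linear(8, 8), nn.Linear(8, 1)            # stand-ins for generator / discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
for real in (torch.randn(4, 8) for _ in range(3)):  # stand-in for the dataloader
    fake = G(torch.randn(4, 8))
    # generator step: push D's score on the fake towards "real"
    loss_G = F.binary_cross_entropy_with_logits(D(fake), torch.ones(4, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    # discriminator step: separate real from the detached fake
    loss_D = 0.5 * (F.binary_cross_entropy_with_logits(D(real), torch.ones(4, 1))
                    + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(4, 1)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()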
- -import time -from collections import OrderedDict -from options.train_options import TrainOptions -from data.data_loader import CreateDataLoader -from models.mapping_model import Pix2PixHDModel_Mapping -import util.util as util -from util.visualizer import Visualizer -import os -import numpy as np -import torch -import torchvision.utils as vutils -from torch.autograd import Variable -import datetime -import random - - - -opt = TrainOptions().parse() -visualizer = Visualizer(opt) -iter_path = os.path.join(opt.checkpoints_dir, opt.name, 'iter.txt') -if opt.continue_train: - try: - start_epoch, epoch_iter = np.loadtxt(iter_path , delimiter=',', dtype=int) - except: - start_epoch, epoch_iter = 1, 0 - visualizer.print_save('Resuming from epoch %d at iteration %d' % (start_epoch-1, epoch_iter)) -else: - start_epoch, epoch_iter = 1, 0 - -if opt.which_epoch != "latest": - start_epoch=int(opt.which_epoch) - visualizer.print_save('Notice : Resuming from epoch %d at iteration %d' % (start_epoch - 1, epoch_iter)) - -opt.start_epoch=start_epoch -### temp for continue train unfixed decoder - -data_loader = CreateDataLoader(opt) -dataset = data_loader.load_data() -dataset_size = len(dataset) * opt.batchSize -print('#training images = %d' % dataset_size) - - -model = Pix2PixHDModel_Mapping() -model.initialize(opt) - -path = os.path.join(opt.checkpoints_dir, opt.name, 'model.txt') -fd = open(path, 'w') - -if opt.use_skip_model: - fd.write(str(model.mapping_net)) - fd.close() -else: - fd.write(str(model.netG_A)) - fd.write(str(model.mapping_net)) - fd.close() - -if opt.isTrain and len(opt.gpu_ids) > 1: - model = torch.nn.DataParallel(model, device_ids=opt.gpu_ids) - - - -total_steps = (start_epoch-1) * dataset_size + epoch_iter - -display_delta = total_steps % opt.display_freq -print_delta = total_steps % opt.print_freq -save_delta = total_steps % opt.save_latest_freq -### used for recovering training - -for epoch in range(start_epoch, opt.niter + opt.niter_decay + 1): - epoch_s_t=datetime.datetime.now() - epoch_start_time = time.time() - if epoch != start_epoch: - epoch_iter = epoch_iter % dataset_size - for i, data in enumerate(dataset, start=epoch_iter): - iter_start_time = time.time() - total_steps += opt.batchSize - epoch_iter += opt.batchSize - - # whether to collect output images - save_fake = total_steps % opt.display_freq == display_delta - - ############## Forward Pass ###################### - #print(pair) - losses, generated = model(Variable(data['label']), Variable(data['inst']), - Variable(data['image']), Variable(data['feat']), infer=save_fake) - - # sum per device losses - losses = [ torch.mean(x) if not isinstance(x, int) else x for x in losses ] - loss_dict = dict(zip(model.module.loss_names, losses)) - - # calculate final loss scalar - loss_D = (loss_dict['D_fake'] + loss_dict['D_real']) * 0.5 - loss_G = loss_dict['G_GAN'] + loss_dict.get('G_GAN_Feat',0) + loss_dict.get('G_VGG',0) + loss_dict.get('G_Feat_L2', 0) +loss_dict.get('Smooth_L1', 0)+loss_dict.get('G_Feat_L2_Stage_1',0) - #loss_G = loss_dict['G_Feat_L2'] - - ############### Backward Pass #################### - # update generator weights - model.module.optimizer_mapping.zero_grad() - loss_G.backward() - model.module.optimizer_mapping.step() - - # update discriminator weights - model.module.optimizer_D.zero_grad() - loss_D.backward() - model.module.optimizer_D.step() - - ############## Display results and errors ########## - ### print out errors - if i == 0 or total_steps % opt.print_freq == print_delta: - errors = {k: v.data if 
not isinstance(v, int) else v for k, v in loss_dict.items()} - t = (time.time() - iter_start_time) / opt.batchSize - visualizer.print_current_errors(epoch, epoch_iter, errors, t,model.module.old_lr) - visualizer.plot_current_errors(errors, total_steps) - - ### display output images - if save_fake: - - if not os.path.exists(opt.outputs_dir + opt.name): - os.makedirs(opt.outputs_dir + opt.name) - - imgs_num = 5 - if opt.NL_use_mask: - mask=data['inst'][:imgs_num] - mask=mask.repeat(1,3,1,1) - imgs = torch.cat((data['label'][:imgs_num], mask,generated.data.cpu()[:imgs_num], data['image'][:imgs_num]), 0) - else: - imgs = torch.cat((data['label'][:imgs_num], generated.data.cpu()[:imgs_num], data['image'][:imgs_num]), 0) - - imgs=(imgs+1.)/2.0 ## de-normalize - - try: - image_grid = vutils.save_image(imgs, opt.outputs_dir + opt.name + '/' + str(epoch) + '_' + str(total_steps) + '.png', - nrow=imgs_num, padding=0, normalize=True) - except OSError as err: - print(err) - - if epoch_iter >= dataset_size: - break - - # end of epoch - epoch_e_t=datetime.datetime.now() - iter_end_time = time.time() - print('End of epoch %d / %d \t Time Taken: %s' % - (epoch, opt.niter + opt.niter_decay, str(epoch_e_t-epoch_s_t))) - - ### save model for this epoch - if epoch % opt.save_epoch_freq == 0: - print('saving the model at the end of epoch %d, iters %d' % (epoch, total_steps)) - model.module.save('latest') - model.module.save(epoch) - np.savetxt(iter_path, (epoch+1, 0), delimiter=',', fmt='%d') - - ### instead of only training the local enhancer, train the entire network after certain iterations - if (opt.niter_fix_global != 0) and (epoch == opt.niter_fix_global): - model.module.update_fixed_params() - - ### linearly decay learning rate after certain iterations - if epoch > opt.niter: - model.module.update_learning_rate() \ No newline at end of file diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/models/hotr_matcher.py b/spaces/MLVKU/Human_Object_Interaction/hotr/models/hotr_matcher.py deleted file mode 100644 index 3435e2ae49da7864578700fe0014cb2e9962c987..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/models/hotr_matcher.py +++ /dev/null @@ -1,216 +0,0 @@ -# ------------------------------------------------------------------------ -# HOTR official code : hotr/models/hotr_matcher.py -# Copyright (c) Kakao Brain, Inc. and its affiliates. 
All Rights Reserved -# ------------------------------------------------------------------------ -import torch -from scipy.optimize import linear_sum_assignment -from torch import nn - -from hotr.util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou - -import hotr.util.misc as utils -import wandb - -class HungarianPairMatcher(nn.Module): - def __init__(self, args): - """Creates the matcher - Params: - cost_action: This is the relative weight of the multi-label action classification error in the matching cost - cost_hbox: This is the relative weight of the classification error for human idx in the matching cost - cost_obox: This is the relative weight of the classification error for object idx in the matching cost - """ - super().__init__() - self.cost_action = args.set_cost_act - self.cost_hbox = self.cost_obox = args.set_cost_idx - self.cost_target = args.set_cost_tgt - self.log_printer = args.wandb - self.is_vcoco = (args.dataset_file == 'vcoco') - self.is_hico = (args.dataset_file == 'hico-det') - if self.is_vcoco: - self.valid_ids = args.valid_ids - self.invalid_ids = args.invalid_ids - assert self.cost_action != 0 or self.cost_hbox != 0 or self.cost_obox != 0, "all costs cant be 0" - - def reduce_redundant_gt_box(self, tgt_bbox, indices): - """Filters redundant Ground-Truth Bounding Boxes - Due to random crop augmentation, there exists cases where there exists - multiple redundant labels for the exact same bounding box and object class. - This function deals with the redundant labels for smoother HOTR training. - """ - tgt_bbox_unique, map_idx, idx_cnt = torch.unique(tgt_bbox, dim=0, return_inverse=True, return_counts=True) - - k_idx, bbox_idx = indices - - triggered = False - if (len(tgt_bbox) != len(tgt_bbox_unique)): - map_dict = {k: v for k, v in enumerate(map_idx)} - map_bbox2kidx = {int(bbox_id): k_id for bbox_id, k_id in zip(bbox_idx, k_idx)} - - bbox_lst, k_lst = [], [] - for bbox_id in bbox_idx: - if map_dict[int(bbox_id)] not in bbox_lst: - bbox_lst.append(map_dict[int(bbox_id)]) - k_lst.append(map_bbox2kidx[int(bbox_id)]) - bbox_idx = torch.tensor(bbox_lst) - k_idx = torch.tensor(k_lst) - tgt_bbox_res = tgt_bbox_unique - else: - tgt_bbox_res = tgt_bbox - - bbox_idx = bbox_idx.to(tgt_bbox.device) - - return tgt_bbox_res, k_idx, bbox_idx - - @torch.no_grad() - def forward(self, outputs, targets, indices, log=False): - assert "pred_actions" in outputs, "There is no action output for pair matching" - num_obj_queries = outputs["pred_boxes"].shape[1] - bs,num_path, num_queries = outputs["pred_actions"].shape[:3] - detr_query_num = outputs["pred_logits"].shape[1] \ - if (outputs["pred_oidx"].shape[-1] == (outputs["pred_logits"].shape[1] + 1)) else -1 - - return_list = [] - if self.log_printer and log: - log_dict = {'h_cost': [], 'o_cost': [], 'act_cost': []} - if self.is_hico: log_dict['tgt_cost'] = [] - - for batch_idx in range(bs): - tgt_bbox = targets[batch_idx]["boxes"] # (num_boxes, 4) - tgt_cls = targets[batch_idx]["labels"] # (num_boxes) - - if self.is_vcoco: - targets[batch_idx]["pair_actions"][:, self.invalid_ids] = 0 - keep_idx = (targets[batch_idx]["pair_actions"].sum(dim=-1) != 0) - targets[batch_idx]["pair_boxes"] = targets[batch_idx]["pair_boxes"][keep_idx] - targets[batch_idx]["pair_actions"] = targets[batch_idx]["pair_actions"][keep_idx] - targets[batch_idx]["pair_targets"] = targets[batch_idx]["pair_targets"][keep_idx] - - tgt_pbox = targets[batch_idx]["pair_boxes"] # (num_pair_boxes, 8) - tgt_act = targets[batch_idx]["pair_actions"] # (num_pair_boxes, 29) - 
tgt_tgt = targets[batch_idx]["pair_targets"] # (num_pair_boxes) - - tgt_hbox = tgt_pbox[:, :4] # (num_pair_boxes, 4) - tgt_obox = tgt_pbox[:, 4:] # (num_pair_boxes, 4) - elif self.is_hico: - tgt_act = targets[batch_idx]["pair_actions"] # (num_pair_boxes, 117) - tgt_tgt = targets[batch_idx]["pair_targets"] # (num_pair_boxes) - - tgt_hbox = targets[batch_idx]["sub_boxes"] # (num_pair_boxes, 4) - tgt_obox = targets[batch_idx]["obj_boxes"] # (num_pair_boxes, 4) - - # find which gt boxes match the h, o boxes in the pair - if self.is_vcoco: - hbox_with_cls = torch.cat([tgt_hbox, torch.ones((tgt_hbox.shape[0], 1)).to(tgt_hbox.device)], dim=1) - elif self.is_hico: - hbox_with_cls = torch.cat([tgt_hbox, torch.zeros((tgt_hbox.shape[0], 1)).to(tgt_hbox.device)], dim=1) - obox_with_cls = torch.cat([tgt_obox, tgt_tgt.unsqueeze(-1)], dim=1) - obox_with_cls[obox_with_cls[:, :4].sum(dim=1) == -4, -1] = -1 # turn the class of occluded objects to -1 - - bbox_with_cls = torch.cat([tgt_bbox, tgt_cls.unsqueeze(-1)], dim=1) - bbox_with_cls, k_idx, bbox_idx = self.reduce_redundant_gt_box(bbox_with_cls, indices[batch_idx]) - bbox_with_cls = torch.cat((bbox_with_cls, torch.as_tensor([-1.]*5).unsqueeze(0).to(tgt_cls.device)), dim=0) - - cost_hbox = torch.cdist(hbox_with_cls, bbox_with_cls, p=1) - cost_obox = torch.cdist(obox_with_cls, bbox_with_cls, p=1) - - # find which gt boxes matches which prediction in K - h_match_indices = torch.nonzero(cost_hbox == 0, as_tuple=False) # (num_hbox, num_boxes) - o_match_indices = torch.nonzero(cost_obox == 0, as_tuple=False) # (num_obox, num_boxes) - - tgt_hids, tgt_oids = [], [] - - # obtain ground truth indices for h - if len(h_match_indices) != len(o_match_indices): - import pdb; pdb.set_trace() - - for h_match_idx, o_match_idx in zip(h_match_indices, o_match_indices): - hbox_idx, H_bbox_idx = h_match_idx - obox_idx, O_bbox_idx = o_match_idx - if O_bbox_idx == (len(bbox_with_cls)-1): # if the object class is -1 - O_bbox_idx = H_bbox_idx # happens in V-COCO, the target object may not appear - - GT_idx_for_H = (bbox_idx == H_bbox_idx).nonzero(as_tuple=False).squeeze(-1) - query_idx_for_H = k_idx[GT_idx_for_H] - tgt_hids.append(query_idx_for_H) - - GT_idx_for_O = (bbox_idx == O_bbox_idx).nonzero(as_tuple=False).squeeze(-1) - query_idx_for_O = k_idx[GT_idx_for_O] - tgt_oids.append(query_idx_for_O) - - # check if empty - if len(tgt_hids) == 0: tgt_hids.append(torch.as_tensor([-1])) # we later ignore the label -1 - if len(tgt_oids) == 0: tgt_oids.append(torch.as_tensor([-1])) # we later ignore the label -1 - - tgt_sum = (tgt_act.sum(dim=-1)).unsqueeze(0) - flag = False - if tgt_act.shape[0] == 0: - tgt_act = torch.zeros((1, tgt_act.shape[1])).to(targets[batch_idx]["pair_actions"].device) - targets[batch_idx]["pair_actions"] = torch.zeros((1, targets[batch_idx]["pair_actions"].shape[1])).to(targets[batch_idx]["pair_actions"].device) - if self.is_hico: - pad_tgt = -1 # outputs["pred_obj_logits"].shape[-1]-1 - tgt_tgt = torch.tensor([pad_tgt]).to(targets[batch_idx]["pair_targets"]) - targets[batch_idx]["pair_targets"] = torch.tensor([pad_tgt]).to(targets[batch_idx]["pair_targets"].device) - tgt_sum = (tgt_act.sum(dim=-1) + 1).unsqueeze(0) - - # Concat target label - tgt_hids = torch.cat(tgt_hids).repeat(num_path) - tgt_oids = torch.cat(tgt_oids).repeat(num_path) - # import pdb;pdb.set_trace() - outputs_hidx=outputs["pred_hidx"].view(num_path,bs,num_queries,-1).transpose(0,1).flatten(1,2) - outputs_oidx=outputs["pred_oidx"].view(num_path,bs,num_queries,-1).transpose(0,1).flatten(1,2) 
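# --- Editor's note: annotation, not part of the deleted file -----------------
# The per-batch costs assembled below (h_cost + o_cost + act_cost, plus the
# target cost for HICO-DET) are solved with scipy.optimize.linear_sum_assignment,
# which returns the minimum-cost one-to-one matching between predictions (rows)
# and ground-truth pairs (columns). A minimal, self-contained illustration with
# a toy 3x2 cost matrix:
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.9, 0.1],
                 [0.2, 0.8],
                 [0.5, 0.5]])                      # 3 predictions x 2 targets
rows, cols = linear_sum_assignment(cost)
print(rows, cols)                                  # [0 1] [1 0]: pred 0 -> tgt 1, pred 1 -> tgt 0
# ------------------------------------------------------------------------------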
- - outputs_action=outputs["pred_actions"].view(bs,num_path*num_queries,-1) - out_hprob = outputs_hidx[batch_idx].softmax(-1) - out_oprob = outputs_oidx[batch_idx].softmax(-1) - out_act = outputs_action[batch_idx].clone() - if self.is_vcoco: out_act[..., self.invalid_ids] = 0 - if self.is_hico: - outputs_obj_logits=outputs["pred_obj_logits"].view(bs,num_path,num_queries,-1).view(bs,num_path*num_queries,-1) - out_tgt = outputs_obj_logits[batch_idx].softmax(-1) - out_tgt[..., -1] = 0 # don't get cost for no-object - - tgt_act = torch.cat([tgt_act, torch.zeros(tgt_act.shape[0]).unsqueeze(-1).to(tgt_act.device)], dim=-1).repeat(num_path,1) - - cost_hclass = -out_hprob[:, tgt_hids] # [batch_size * num_queries, detr.num_queries+1] - cost_oclass = -out_oprob[:, tgt_oids] # [batch_size * num_queries, detr.num_queries+1] - # import pdb;pdb.set_trace() - cost_pos_act = (-torch.matmul(out_act, tgt_act.t().float())) / tgt_sum.repeat(1,num_path) - cost_neg_act = (torch.matmul(out_act, (~tgt_act.bool()).type(torch.int64).t().float())) / (~tgt_act.bool()).type(torch.int64).sum(dim=-1).unsqueeze(0) - cost_action = cost_pos_act + cost_neg_act - - h_cost = self.cost_hbox * cost_hclass - o_cost = self.cost_obox * cost_oclass - - act_cost = self.cost_action * cost_action - - C = h_cost + o_cost + act_cost - - if self.is_hico: - cost_target = -out_tgt[:, tgt_tgt.repeat(num_path)] - tgt_cost = self.cost_target * cost_target - C += tgt_cost - C = C.view(num_path,num_queries, -1).cpu() - - sizes = [len(tgt_hids)//num_path]*num_path - hoi_indices = [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))] - return_list.append([(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in hoi_indices]) - # import pdb;pdb.set_trace() - targets[batch_idx]["h_labels"] = tgt_hids.to(tgt_hbox.device) - targets[batch_idx]["o_labels"] = tgt_oids.to(tgt_obox.device) - log_act_cost = torch.zeros([1]).to(tgt_act.device) if tgt_act.shape[0] == 0 else act_cost.min(dim=0)[0].mean() - - if self.log_printer and log: - log_dict['h_cost'].append(h_cost[:num_queries].min(dim=0)[0].mean()) - log_dict['o_cost'].append(o_cost[:num_queries].min(dim=0)[0].mean()) - log_dict['act_cost'].append(act_cost[:num_queries].min(dim=0)[0].mean()) - if self.is_hico: log_dict['tgt_cost'].append(tgt_cost[:num_queries].min(dim=0)[0].mean()) - if self.log_printer and log: - log_dict['h_cost'] = torch.stack(log_dict['h_cost']).mean() - log_dict['o_cost'] = torch.stack(log_dict['o_cost']).mean() - log_dict['act_cost'] = torch.stack(log_dict['act_cost']).mean() - if self.is_hico: log_dict['tgt_cost'] = torch.stack(log_dict['tgt_cost']).mean() - if utils.get_rank() == 0: wandb.log(log_dict) - return return_list, targets - -def build_hoi_matcher(args): - return HungarianPairMatcher(args) diff --git a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/inference.py b/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/inference.py deleted file mode 100644 index ec4141a56a6a1d648f9161ad4c482f3a20320f03..0000000000000000000000000000000000000000 --- a/spaces/Ma5onic/MVSEP-MDX23-music-separation-model/inference.py +++ /dev/null @@ -1,920 +0,0 @@ -# coding: utf-8 -__author__ = 'https://github.com/ZFTurbo/' - -if __name__ == '__main__': - import os - - gpu_use = "0" - print('GPU use: {}'.format(gpu_use)) - os.environ["CUDA_VISIBLE_DEVICES"] = "{}".format(gpu_use) - - -import numpy as np -import torch -import torch.nn as nn -import os -import argparse -import soundfile as sf - -from demucs.states import load_model -from demucs 
import pretrained -from demucs.apply import apply_model -import onnxruntime as ort -from time import time -import librosa -import hashlib - - -__VERSION__ = '1.0.1' - - -class Conv_TDF_net_trim_model(nn.Module): - def __init__(self, device, target_name, L, n_fft, hop=1024): - - super(Conv_TDF_net_trim_model, self).__init__() - - self.dim_c = 4 - self.dim_f, self.dim_t = 3072, 256 - self.n_fft = n_fft - self.hop = hop - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device) - self.target_name = target_name - - out_c = self.dim_c * 4 if target_name == '*' else self.dim_c - self.freq_pad = torch.zeros([1, out_c, self.n_bins - self.dim_f, self.dim_t]).to(device) - - self.n = L // 2 - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, self.dim_c, self.n_bins, self.dim_t]) - return x[:, :, :self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = self.freq_pad.repeat([x.shape[0], 1, 1, 1]) if freq_pad is None else freq_pad - x = torch.cat([x, freq_pad], -2) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 2, self.n_bins, self.dim_t]) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True) - return x.reshape([-1, 2, self.chunk_size]) - - def forward(self, x): - x = self.first_conv(x) - x = x.transpose(-1, -2) - - ds_outputs = [] - for i in range(self.n): - x = self.ds_dense[i](x) - ds_outputs.append(x) - x = self.ds[i](x) - - x = self.mid_dense(x) - for i in range(self.n): - x = self.us[i](x) - x *= ds_outputs[-i - 1] - x = self.us_dense[i](x) - - x = x.transpose(-1, -2) - x = self.final_conv(x) - return x - - -def get_models(name, device, load=True, vocals_model_type=0): - if vocals_model_type == 2: - model_vocals = Conv_TDF_net_trim_model( - device=device, - target_name='vocals', - L=11, - n_fft=7680 - ) - elif vocals_model_type == 3: - model_vocals = Conv_TDF_net_trim_model( - device=device, - target_name='vocals', - L=11, - n_fft=6144 - ) - - return [model_vocals] - - -def demix_base(mix, device, models, infer_session): - start_time = time() - sources = [] - n_sample = mix.shape[1] - for model in models: - trim = model.n_fft // 2 - gen_size = model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - mix_p = np.concatenate( - ( - np.zeros((2, trim)), - mix, - np.zeros((2, pad)), - np.zeros((2, trim)) - ), 1 - ) - - mix_waves = [] - i = 0 - while i < n_sample + pad: - waves = np.array(mix_p[:, i:i + model.chunk_size]) - mix_waves.append(waves) - i += gen_size - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(device) - - with torch.no_grad(): - _ort = infer_session - stft_res = model.stft(mix_waves) - res = _ort.run(None, {'input': stft_res.cpu().numpy()})[0] - ten = torch.tensor(res) - tar_waves = model.istft(ten.to(device)) - tar_waves = tar_waves.cpu() - tar_signal = tar_waves[:, :, trim:-trim].transpose(0, 1).reshape(2, -1).numpy()[:, :-pad] - - sources.append(tar_signal) - # print('Time demix base: {:.2f} sec'.format(time() - start_time)) - return np.array(sources) - - -def demix_full(mix, device, chunk_size, models, infer_session, overlap=0.75): - start_time = time() - - step = 
int(chunk_size * (1 - overlap)) - # print('Initial shape: {} Chunk size: {} Step: {} Device: {}'.format(mix.shape, chunk_size, step, device)) - result = np.zeros((1, 2, mix.shape[-1]), dtype=np.float32) - divider = np.zeros((1, 2, mix.shape[-1]), dtype=np.float32) - - total = 0 - for i in range(0, mix.shape[-1], step): - total += 1 - - start = i - end = min(i + chunk_size, mix.shape[-1]) - # print('Chunk: {} Start: {} End: {}'.format(total, start, end)) - mix_part = mix[:, start:end] - sources = demix_base(mix_part, device, models, infer_session) - # print(sources.shape) - result[..., start:end] += sources - divider[..., start:end] += 1 - sources = result / divider - # print('Final shape: {} Overall time: {:.2f}'.format(sources.shape, time() - start_time)) - return sources - - -class EnsembleDemucsMDXMusicSeparationModel: - """ - Doesn't do any separation just passes the input back as output - """ - def __init__(self, options): - """ - options - user options - """ - # print(options) - - if torch.cuda.is_available(): - device = 'cuda:0' - else: - device = 'cpu' - if 'cpu' in options: - if options['cpu']: - device = 'cpu' - print('Use device: {}'.format(device)) - self.single_onnx = False - if 'single_onnx' in options: - if options['single_onnx']: - self.single_onnx = True - print('Use single vocal ONNX') - - self.kim_model_1 = False - if 'use_kim_model_1' in options: - if options['use_kim_model_1']: - self.kim_model_1 = True - if self.kim_model_1: - print('Use Kim model 1') - else: - print('Use Kim model 2') - - self.overlap_large = float(options['overlap_large']) - self.overlap_small = float(options['overlap_small']) - if self.overlap_large > 0.99: - self.overlap_large = 0.99 - if self.overlap_large < 0.0: - self.overlap_large = 0.0 - if self.overlap_small > 0.99: - self.overlap_small = 0.99 - if self.overlap_small < 0.0: - self.overlap_small = 0.0 - - model_folder = os.path.dirname(os.path.realpath(__file__)) + '/models/' - remote_url = 'https://dl.fbaipublicfiles.com/demucs/hybrid_transformer/04573f0d-f3cf25b2.th' - model_path = model_folder + '04573f0d-f3cf25b2.th' - if not os.path.isfile(model_path): - torch.hub.download_url_to_file(remote_url, model_folder + '04573f0d-f3cf25b2.th') - model_vocals = load_model(model_path) - model_vocals.to(device) - self.model_vocals_only = model_vocals - - self.models = [] - self.weights_vocals = np.array([10, 1, 8, 9]) - self.weights_bass = np.array([19, 4, 5, 8]) - self.weights_drums = np.array([18, 2, 4, 9]) - self.weights_other = np.array([14, 2, 5, 10]) - - model1 = pretrained.get_model('htdemucs_ft') - model1.to(device) - self.models.append(model1) - - model2 = pretrained.get_model('htdemucs') - model2.to(device) - self.models.append(model2) - - model3 = pretrained.get_model('htdemucs_6s') - model3.to(device) - self.models.append(model3) - - model4 = pretrained.get_model('hdemucs_mmi') - model4.to(device) - self.models.append(model4) - - if 0: - for model in self.models: - print(model.sources) - ''' - ['drums', 'bass', 'other', 'vocals'] - ['drums', 'bass', 'other', 'vocals'] - ['drums', 'bass', 'other', 'vocals', 'guitar', 'piano'] - ['drums', 'bass', 'other', 'vocals'] - ''' - - if device == 'cpu': - chunk_size = 200000000 - providers = ["CPUExecutionProvider"] - else: - chunk_size = 1000000 - providers = ["CUDAExecutionProvider"] - if 'chunk_size' in options: - chunk_size = int(options['chunk_size']) - - # MDX-B model 1 initialization - self.chunk_size = chunk_size - self.mdx_models1 = get_models('tdf_extra', load=False, device=device, 
vocals_model_type=2) - if self.kim_model_1: - model_path_onnx1 = model_folder + 'Kim_Vocal_1.onnx' - remote_url_onnx1 = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/Kim_Vocal_1.onnx' - else: - model_path_onnx1 = model_folder + 'Kim_Vocal_2.onnx' - remote_url_onnx1 = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/Kim_Vocal_2.onnx' - if not os.path.isfile(model_path_onnx1): - torch.hub.download_url_to_file(remote_url_onnx1, model_path_onnx1) - print('Model path: {}'.format(model_path_onnx1)) - print('Device: {} Chunk size: {}'.format(device, chunk_size)) - self.infer_session1 = ort.InferenceSession( - model_path_onnx1, - providers=providers, - provider_options=[{"device_id": 0}], - ) - - if self.single_onnx is False: - # MDX-B model 2 initialization - self.chunk_size = chunk_size - self.mdx_models2 = get_models('tdf_extra', load=False, device=device, vocals_model_type=2) - root_path = os.path.dirname(os.path.realpath(__file__)) + '/' - model_path_onnx2 = model_folder + 'Kim_Inst.onnx' - remote_url_onnx2 = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/Kim_Inst.onnx' - if not os.path.isfile(model_path_onnx2): - torch.hub.download_url_to_file(remote_url_onnx2, model_path_onnx2) - print('Model path: {}'.format(model_path_onnx2)) - print('Device: {} Chunk size: {}'.format(device, chunk_size)) - self.infer_session2 = ort.InferenceSession( - model_path_onnx2, - providers=providers, - provider_options=[{"device_id": 0}], - ) - - self.device = device - pass - - @property - def instruments(self): - """ DO NOT CHANGE """ - return ['bass', 'drums', 'other', 'vocals'] - - def raise_aicrowd_error(self, msg): - """ Will be used by the evaluator to provide logs, DO NOT CHANGE """ - raise NameError(msg) - - def separate_music_file( - self, - mixed_sound_array, - sample_rate, - update_percent_func=None, - current_file_number=0, - total_files=0, - only_vocals=False, - ): - """ - Implements the sound separation for a single sound file - Inputs: Outputs from soundfile.read('mixture.wav') - mixed_sound_array - sample_rate - - Outputs: - separated_music_arrays: Dictionary numpy array of each separated instrument - output_sample_rates: Dictionary of sample rates separated sequence - """ - - # print('Update percent func: {}'.format(update_percent_func)) - - separated_music_arrays = {} - output_sample_rates = {} - - audio = np.expand_dims(mixed_sound_array.T, axis=0) - audio = torch.from_numpy(audio).type('torch.FloatTensor').to(self.device) - - overlap_large = self.overlap_large - overlap_small = self.overlap_small - - # Get Demics vocal only - model = self.model_vocals_only - shifts = 1 - overlap = overlap_large - vocals_demucs = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0][3].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.10) / total_files - update_percent_func(int(val)) - - vocals_demucs += 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0][3].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.20) / total_files - update_percent_func(int(val)) - - overlap = overlap_large - sources1 = demix_full( - mixed_sound_array.T, - self.device, - self.chunk_size, - self.mdx_models1, - self.infer_session1, - overlap=overlap - )[0] - - vocals_mdxb1 = sources1 - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.30) / total_files - update_percent_func(int(val)) - - if 
self.single_onnx is False: - sources2 = -demix_full( - -mixed_sound_array.T, - self.device, - self.chunk_size, - self.mdx_models2, - self.infer_session2, - overlap=overlap - )[0] - - # it's instrumental so need to invert - instrum_mdxb2 = sources2 - vocals_mdxb2 = mixed_sound_array.T - instrum_mdxb2 - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.40) / total_files - update_percent_func(int(val)) - - # Ensemble vocals for MDX and Demucs - if self.single_onnx is False: - weights = np.array([12, 8, 3]) - vocals = (weights[0] * vocals_mdxb1.T + weights[1] * vocals_mdxb2.T + weights[2] * vocals_demucs.T) / weights.sum() - else: - weights = np.array([6, 1]) - vocals = (weights[0] * vocals_mdxb1.T + weights[1] * vocals_demucs.T) / weights.sum() - - # vocals - separated_music_arrays['vocals'] = vocals - output_sample_rates['vocals'] = sample_rate - - if not only_vocals: - # Generate instrumental - instrum = mixed_sound_array - vocals - - audio = np.expand_dims(instrum.T, axis=0) - audio = torch.from_numpy(audio).type('torch.FloatTensor').to(self.device) - - all_outs = [] - for i, model in enumerate(self.models): - if i == 0: - overlap = overlap_small - elif i > 0: - overlap = overlap_large - out = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() \ - + 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.50 + i * 0.10) / total_files - update_percent_func(int(val)) - - if i == 2: - # ['drums', 'bass', 'other', 'vocals', 'guitar', 'piano'] - out[2] = out[2] + out[4] + out[5] - out = out[:4] - - out[0] = self.weights_drums[i] * out[0] - out[1] = self.weights_bass[i] * out[1] - out[2] = self.weights_other[i] * out[2] - out[3] = self.weights_vocals[i] * out[3] - - all_outs.append(out) - out = np.array(all_outs).sum(axis=0) - out[0] = out[0] / self.weights_drums.sum() - out[1] = out[1] / self.weights_bass.sum() - out[2] = out[2] / self.weights_other.sum() - out[3] = out[3] / self.weights_vocals.sum() - - # other - res = mixed_sound_array - vocals - out[0].T - out[1].T - res = np.clip(res, -1, 1) - separated_music_arrays['other'] = (2 * res + out[2].T) / 3.0 - output_sample_rates['other'] = sample_rate - - # drums - res = mixed_sound_array - vocals - out[1].T - out[2].T - res = np.clip(res, -1, 1) - separated_music_arrays['drums'] = (res + 2 * out[0].T.copy()) / 3.0 - output_sample_rates['drums'] = sample_rate - - # bass - res = mixed_sound_array - vocals - out[0].T - out[2].T - res = np.clip(res, -1, 1) - separated_music_arrays['bass'] = (res + 2 * out[1].T) / 3.0 - output_sample_rates['bass'] = sample_rate - - bass = separated_music_arrays['bass'] - drums = separated_music_arrays['drums'] - other = separated_music_arrays['other'] - - separated_music_arrays['other'] = mixed_sound_array - vocals - bass - drums - separated_music_arrays['drums'] = mixed_sound_array - vocals - bass - other - separated_music_arrays['bass'] = mixed_sound_array - vocals - drums - other - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.95) / total_files - update_percent_func(int(val)) - - return separated_music_arrays, output_sample_rates - - -class EnsembleDemucsMDXMusicSeparationModelLowGPU: - """ - Doesn't do any separation just passes the input back as output - """ - - def __init__(self, options): - """ - options - user options - """ - # print(options) - - if torch.cuda.is_available(): - device = 'cuda:0' - else: - device = 
'cpu' - if 'cpu' in options: - if options['cpu']: - device = 'cpu' - print('Use device: {}'.format(device)) - self.single_onnx = False - if 'single_onnx' in options: - if options['single_onnx']: - self.single_onnx = True - print('Use single vocal ONNX') - - self.kim_model_1 = False - if 'use_kim_model_1' in options: - if options['use_kim_model_1']: - self.kim_model_1 = True - if self.kim_model_1: - print('Use Kim model 1') - else: - print('Use Kim model 2') - - self.overlap_large = float(options['overlap_large']) - self.overlap_small = float(options['overlap_small']) - if self.overlap_large > 0.99: - self.overlap_large = 0.99 - if self.overlap_large < 0.0: - self.overlap_large = 0.0 - if self.overlap_small > 0.99: - self.overlap_small = 0.99 - if self.overlap_small < 0.0: - self.overlap_small = 0.0 - - self.weights_vocals = np.array([10, 1, 8, 9]) - self.weights_bass = np.array([19, 4, 5, 8]) - self.weights_drums = np.array([18, 2, 4, 9]) - self.weights_other = np.array([14, 2, 5, 10]) - - if device == 'cpu': - chunk_size = 200000000 - self.providers = ["CPUExecutionProvider"] - else: - chunk_size = 1000000 - self.providers = ["CUDAExecutionProvider"] - if 'chunk_size' in options: - chunk_size = int(options['chunk_size']) - self.chunk_size = chunk_size - self.device = device - pass - - @property - def instruments(self): - """ DO NOT CHANGE """ - return ['bass', 'drums', 'other', 'vocals'] - - def raise_aicrowd_error(self, msg): - """ Will be used by the evaluator to provide logs, DO NOT CHANGE """ - raise NameError(msg) - - def separate_music_file( - self, - mixed_sound_array, - sample_rate, - update_percent_func=None, - current_file_number=0, - total_files=0, - only_vocals=False - ): - """ - Implements the sound separation for a single sound file - Inputs: Outputs from soundfile.read('mixture.wav') - mixed_sound_array - sample_rate - - Outputs: - separated_music_arrays: Dictionary numpy array of each separated instrument - output_sample_rates: Dictionary of sample rates separated sequence - """ - - # print('Update percent func: {}'.format(update_percent_func)) - - separated_music_arrays = {} - output_sample_rates = {} - - audio = np.expand_dims(mixed_sound_array.T, axis=0) - audio = torch.from_numpy(audio).type('torch.FloatTensor').to(self.device) - - overlap_large = self.overlap_large - overlap_small = self.overlap_small - - # Get Demucs vocal only - model_folder = os.path.dirname(os.path.realpath(__file__)) + '/models/' - remote_url = 'https://dl.fbaipublicfiles.com/demucs/hybrid_transformer/04573f0d-f3cf25b2.th' - model_path = model_folder + '04573f0d-f3cf25b2.th' - if not os.path.isfile(model_path): - torch.hub.download_url_to_file(remote_url, model_folder + '04573f0d-f3cf25b2.th') - model_vocals = load_model(model_path) - model_vocals.to(self.device) - shifts = 1 - overlap = overlap_large - vocals_demucs = 0.5 * apply_model(model_vocals, audio, shifts=shifts, overlap=overlap)[0][3].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.10) / total_files - update_percent_func(int(val)) - - vocals_demucs += 0.5 * -apply_model(model_vocals, -audio, shifts=shifts, overlap=overlap)[0][3].cpu().numpy() - model_vocals = model_vocals.cpu() - del model_vocals - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.20) / total_files - update_percent_func(int(val)) - - # MDX-B model 1 initialization - mdx_models1 = get_models('tdf_extra', load=False, device=self.device, vocals_model_type=2) - if self.kim_model_1: - model_path_onnx1 = 
model_folder + 'Kim_Vocal_1.onnx' - remote_url_onnx1 = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/Kim_Vocal_1.onnx' - else: - model_path_onnx1 = model_folder + 'Kim_Vocal_2.onnx' - remote_url_onnx1 = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/Kim_Vocal_2.onnx' - if not os.path.isfile(model_path_onnx1): - torch.hub.download_url_to_file(remote_url_onnx1, model_path_onnx1) - print('Model path: {}'.format(model_path_onnx1)) - print('Device: {} Chunk size: {}'.format(self.device, self.chunk_size)) - infer_session1 = ort.InferenceSession( - model_path_onnx1, - providers=self.providers, - provider_options=[{"device_id": 0}], - ) - overlap = overlap_large - sources1 = demix_full( - mixed_sound_array.T, - self.device, - self.chunk_size, - mdx_models1, - infer_session1, - overlap=overlap - )[0] - vocals_mdxb1 = sources1 - del infer_session1 - del mdx_models1 - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.30) / total_files - update_percent_func(int(val)) - - if self.single_onnx is False: - # MDX-B model 2 initialization - mdx_models2 = get_models('tdf_extra', load=False, device=self.device, vocals_model_type=2) - root_path = os.path.dirname(os.path.realpath(__file__)) + '/' - model_path_onnx2 = model_folder + 'Kim_Inst.onnx' - remote_url_onnx2 = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/Kim_Inst.onnx' - if not os.path.isfile(model_path_onnx2): - torch.hub.download_url_to_file(remote_url_onnx2, model_path_onnx2) - print('Model path: {}'.format(model_path_onnx2)) - print('Device: {} Chunk size: {}'.format(self.device, self.chunk_size)) - infer_session2 = ort.InferenceSession( - model_path_onnx2, - providers=self.providers, - provider_options=[{"device_id": 0}], - ) - - overlap = overlap_large - sources2 = -demix_full( - -mixed_sound_array.T, - self.device, - self.chunk_size, - mdx_models2, - infer_session2, - overlap=overlap - )[0] - - # it's instrumental so need to invert - instrum_mdxb2 = sources2 - vocals_mdxb2 = mixed_sound_array.T - instrum_mdxb2 - del infer_session2 - del mdx_models2 - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.40) / total_files - update_percent_func(int(val)) - - # Ensemble vocals for MDX and Demucs - if self.single_onnx is False: - weights = np.array([12, 8, 3]) - vocals = (weights[0] * vocals_mdxb1.T + weights[1] * vocals_mdxb2.T + weights[2] * vocals_demucs.T) / weights.sum() - else: - weights = np.array([6, 1]) - vocals = (weights[0] * vocals_mdxb1.T + weights[1] * vocals_demucs.T) / weights.sum() - - # Generate instrumental - instrum = mixed_sound_array - vocals - - audio = np.expand_dims(instrum.T, axis=0) - audio = torch.from_numpy(audio).type('torch.FloatTensor').to(self.device) - - all_outs = [] - - i = 0 - overlap = overlap_small - model = pretrained.get_model('htdemucs_ft') - model.to(self.device) - out = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() \ - + 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.50 + i * 0.10) / total_files - update_percent_func(int(val)) - - out[0] = self.weights_drums[i] * out[0] - out[1] = self.weights_bass[i] * out[1] - out[2] = self.weights_other[i] * out[2] - out[3] = self.weights_vocals[i] * out[3] - all_outs.append(out) - model = model.cpu() - del model - - i = 1 - overlap = overlap_large - model = 
pretrained.get_model('htdemucs') - model.to(self.device) - out = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() \ - + 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.50 + i * 0.10) / total_files - update_percent_func(int(val)) - - out[0] = self.weights_drums[i] * out[0] - out[1] = self.weights_bass[i] * out[1] - out[2] = self.weights_other[i] * out[2] - out[3] = self.weights_vocals[i] * out[3] - all_outs.append(out) - model = model.cpu() - del model - - i = 2 - overlap = overlap_large - model = pretrained.get_model('htdemucs_6s') - model.to(self.device) - out = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() \ - + 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.50 + i * 0.10) / total_files - update_percent_func(int(val)) - - # More stems need to add - out[2] = out[2] + out[4] + out[5] - out = out[:4] - out[0] = self.weights_drums[i] * out[0] - out[1] = self.weights_bass[i] * out[1] - out[2] = self.weights_other[i] * out[2] - out[3] = self.weights_vocals[i] * out[3] - all_outs.append(out) - model = model.cpu() - del model - - i = 3 - model = pretrained.get_model('hdemucs_mmi') - model.to(self.device) - out = 0.5 * apply_model(model, audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() \ - + 0.5 * -apply_model(model, -audio, shifts=shifts, overlap=overlap)[0].cpu().numpy() - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.50 + i * 0.10) / total_files - update_percent_func(int(val)) - - out[0] = self.weights_drums[i] * out[0] - out[1] = self.weights_bass[i] * out[1] - out[2] = self.weights_other[i] * out[2] - out[3] = self.weights_vocals[i] * out[3] - all_outs.append(out) - model = model.cpu() - del model - - out = np.array(all_outs).sum(axis=0) - out[0] = out[0] / self.weights_drums.sum() - out[1] = out[1] / self.weights_bass.sum() - out[2] = out[2] / self.weights_other.sum() - out[3] = out[3] / self.weights_vocals.sum() - - # vocals - separated_music_arrays['vocals'] = vocals - output_sample_rates['vocals'] = sample_rate - - # other - res = mixed_sound_array - vocals - out[0].T - out[1].T - res = np.clip(res, -1, 1) - separated_music_arrays['other'] = (2 * res + out[2].T) / 3.0 - output_sample_rates['other'] = sample_rate - - # drums - res = mixed_sound_array - vocals - out[1].T - out[2].T - res = np.clip(res, -1, 1) - separated_music_arrays['drums'] = (res + 2 * out[0].T.copy()) / 3.0 - output_sample_rates['drums'] = sample_rate - - # bass - res = mixed_sound_array - vocals - out[0].T - out[2].T - res = np.clip(res, -1, 1) - separated_music_arrays['bass'] = (res + 2 * out[1].T) / 3.0 - output_sample_rates['bass'] = sample_rate - - bass = separated_music_arrays['bass'] - drums = separated_music_arrays['drums'] - other = separated_music_arrays['other'] - - separated_music_arrays['other'] = mixed_sound_array - vocals - bass - drums - separated_music_arrays['drums'] = mixed_sound_array - vocals - bass - other - separated_music_arrays['bass'] = mixed_sound_array - vocals - drums - other - - if update_percent_func is not None: - val = 100 * (current_file_number + 0.95) / total_files - update_percent_func(int(val)) - - return separated_music_arrays, output_sample_rates - - -def predict_with_model(options): - for input_audio in options['input_audio']: - if not 
os.path.isfile(input_audio): - print('Error. No such file: {}. Please check path!'.format(input_audio)) - return - output_folder = options['output_folder'] - if not os.path.isdir(output_folder): - os.mkdir(output_folder) - - only_vocals = False - if 'only_vocals' in options: - if options['only_vocals'] is True: - print('Generate only vocals and instrumental') - only_vocals = True - - model = None - if 'large_gpu' in options: - if options['large_gpu'] is True: - print('Use fast large GPU memory version of code') - model = EnsembleDemucsMDXMusicSeparationModel(options) - if model is None: - print('Use low GPU memory version of code') - model = EnsembleDemucsMDXMusicSeparationModelLowGPU(options) - - update_percent_func = None - if 'update_percent_func' in options: - update_percent_func = options['update_percent_func'] - - for i, input_audio in enumerate(options['input_audio']): - print('Go for: {}'.format(input_audio)) - audio, sr = librosa.load(input_audio, mono=False, sr=44100) - if len(audio.shape) == 1: - audio = np.stack([audio, audio], axis=0) - print("Input audio: {} Sample rate: {}".format(audio.shape, sr)) - result, sample_rates = model.separate_music_file( - audio.T, - sr, - update_percent_func, - i, - len(options['input_audio']), - only_vocals, - ) - all_instrum = model.instruments - if only_vocals: - all_instrum = ['vocals'] - for instrum in all_instrum: - output_name = os.path.splitext(os.path.basename(input_audio))[0] + '_{}.wav'.format(instrum) - sf.write(output_folder + '/' + output_name, result[instrum], sample_rates[instrum], subtype='FLOAT') - print('File created: {}'.format(output_folder + '/' + output_name)) - - # instrumental part 1 - inst = audio.T - result['vocals'] - output_name = os.path.splitext(os.path.basename(input_audio))[0] + '_{}.wav'.format('instrum') - sf.write(output_folder + '/' + output_name, inst, sr, subtype='FLOAT') - print('File created: {}'.format(output_folder + '/' + output_name)) - - if not only_vocals: - # instrumental part 2 - inst2 = result['bass'] + result['drums'] + result['other'] - output_name = os.path.splitext(os.path.basename(input_audio))[0] + '_{}.wav'.format('instrum2') - sf.write(output_folder + '/' + output_name, inst2, sr, subtype='FLOAT') - print('File created: {}'.format(output_folder + '/' + output_name)) - - if update_percent_func is not None: - val = 100 - update_percent_func(int(val)) - - -def md5(fname): - hash_md5 = hashlib.md5() - with open(fname, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - - -if __name__ == '__main__': - start_time = time() - - print("Version: {}".format(__VERSION__)) - m = argparse.ArgumentParser() - m.add_argument("--input_audio", "-i", nargs='+', type=str, help="Input audio location. You can provide multiple files at once", required=True) - m.add_argument("--output_folder", "-r", type=str, help="Output audio folder", required=True) - m.add_argument("--cpu", action='store_true', help="Choose CPU instead of GPU for processing. Can be very slow.") - m.add_argument("--overlap_large", "-ol", type=float, help="Overlap of splited audio for light models. Closer to 1.0 - slower", required=False, default=0.6) - m.add_argument("--overlap_small", "-os", type=float, help="Overlap of splited audio for heavy models. Closer to 1.0 - slower", required=False, default=0.5) - m.add_argument("--single_onnx", action='store_true', help="Only use single ONNX model for vocals. 
Can be useful if you have not enough GPU memory.") - m.add_argument("--chunk_size", "-cz", type=int, help="Chunk size for ONNX models. Set lower to reduce GPU memory consumption. Default: 1000000", required=False, default=1000000) - m.add_argument("--large_gpu", action='store_true', help="It will store all models on GPU for faster processing of multiple audio files. Requires 11 and more GB of free GPU memory.") - m.add_argument("--use_kim_model_1", action='store_true', help="Use first version of Kim model (as it was on contest).") - m.add_argument("--only_vocals", action='store_true', help="Only create vocals and instrumental. Skip bass, drums, other") - - options = m.parse_args().__dict__ - print("Options: ".format(options)) - for el in options: - print('{}: {}'.format(el, options[el])) - predict_with_model(options) - print('Time: {:.0f} sec'.format(time() - start_time)) - print('Presented by https://mvsep.com') - - -""" -Example: - python inference.py - --input_audio mixture.wav mixture1.wav - --output_folder ./results/ - --cpu - --overlap_large 0.25 - --overlap_small 0.25 - --chunk_size 500000 -""" diff --git a/spaces/Manjushri/MusicGen/audiocraft/modules/seanet.py b/spaces/Manjushri/MusicGen/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. 
- """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. 
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. 
- residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
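# --- Editor's note: annotation, not part of the deleted file -----------------
# As the docstring above describes, the decoder upsamples by the product of
# `ratios`; self.hop_length, computed a few lines above, is exactly that
# product, so one latent frame maps to np.prod(ratios) output samples. A
# minimal sketch of that bookkeeping, independent of the SEANet classes
# themselves (latent_steps is an arbitrary example value):
import numpy as np

ratios = [8, 5, 4, 2]
hop_length = int(np.prod(ratios))                  # 320 samples per latent step
latent_steps = 75                                  # e.g. 75 frames of latent code
print(hop_length, latent_steps * hop_length)       # -> 320 24000 output samples
# ------------------------------------------------------------------------------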
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/Marshalls/testmtd/analysis/find_low_roots_sandbox.py b/spaces/Marshalls/testmtd/analysis/find_low_roots_sandbox.py deleted file mode 100644 index 0272dd64f325980b9d865de9e22452a3715035c3..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/find_low_roots_sandbox.py +++ /dev/null @@ -1,45 +0,0 @@ -from analysis.pymo.parsers import BVHParser -from analysis.pymo.data import Joint, MocapData -from analysis.pymo.preprocessing import * -from analysis.pymo.viz_tools import * -from analysis.pymo.writers import * -from sklearn.pipeline import Pipeline -from pathlib import Path -import sys -path = sys.argv[1] -#cat to_check* | parallel -L 1 -I % python3 analysis/shift_bvh.py % -34 - -from feature_extraction.utils import distribute_tasks -from mpi4py import MPI -comm = MPI.COMM_WORLD -rank = comm.Get_rank() -size = comm.Get_size() - -path = Path(path) -candidate_audio_files = sorted(path.glob('**/*.bvh'), key=lambda path: path.parent.__str__()) -tasks = distribute_tasks(candidate_audio_files,rank,size) - -p = BVHParser() -datas = [] -filenames = [] -for i in tasks: - f = candidate_audio_files[i] - print(f) - filenames.append(f) - datas.append(p.parse(f)) -data_pipe = Pipeline([ - # ('dwnsampl', DownSampler(tgt_fps=fps, keep_all=False)), - ('jtsel', JointSelector(['Spine', 'Spine1', 'Neck', 'Head', 'RightShoulder', 'RightArm', 'RightForeArm', 'RightHand', 'LeftShoulder', 'LeftArm', 'LeftForeArm', 'LeftHand', 'RightUpLeg', 'RightLeg', 'RightFoot', 'RightToeBase', 'LeftUpLeg', 'LeftLeg', 'LeftFoot', 'LeftToeBase'], include_root=True)), - ('pos', MocapParameterizer('position')), -]) - -out_data = data_pipe.fit_transform(datas) - -yposs = list(filter(lambda x: 
x.split("_")[1]=="Yposition", out_data[0].values.columns)) - -with open("to_check"+str(rank),"w") as f: - for i,d in enumerate(out_data): - min_y = d.values[yposs].iloc[100:].mean().min() - if min_y < -10: - print(min_y, filenames[i].__str__()) - f.writelines(filenames[i].__str__()+"\n") diff --git a/spaces/MathysL/AutoGPT4/autogpt/speech/base.py b/spaces/MathysL/AutoGPT4/autogpt/speech/base.py deleted file mode 100644 index d74fa51be75b5078134c510b393a06deb0267b2a..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/speech/base.py +++ /dev/null @@ -1,50 +0,0 @@ -"""Base class for all voice classes.""" -import abc -from threading import Lock - -from autogpt.config import AbstractSingleton - - -class VoiceBase(AbstractSingleton): - """ - Base class for all voice classes. - """ - - def __init__(self): - """ - Initialize the voice class. - """ - self._url = None - self._headers = None - self._api_key = None - self._voices = [] - self._mutex = Lock() - self._setup() - - def say(self, text: str, voice_index: int = 0) -> bool: - """ - Say the given text. - - Args: - text (str): The text to say. - voice_index (int): The index of the voice to use. - """ - with self._mutex: - return self._speech(text, voice_index) - - @abc.abstractmethod - def _setup(self) -> None: - """ - Setup the voices, API key, etc. - """ - pass - - @abc.abstractmethod - def _speech(self, text: str, voice_index: int = 0) -> bool: - """ - Play the given text. - - Args: - text (str): The text to play. - """ - pass diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/README.md b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/README.md deleted file mode 100644 index 0ccfca2435ec461c22d557792c6f4da50bc66a08..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Detic+LangChain -emoji: 🦜️🔗 -colorFrom: blue -colorTo: red -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: taesiri/ChatGPT-ImageCaptioner ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/demo.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/demo.py deleted file mode 100644 index 183f6c44c4ca8cdf32d1c4b57bd75a11a07dcbde..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/demo.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
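# --- Editor's note: annotation, not part of any of the deleted files ---------
# The autogpt/speech/base.py deleted a couple of files above defines the
# contract for voice backends: subclasses implement _setup() and _speech(),
# while say() serializes calls behind a lock. A minimal, self-contained sketch
# of that pattern with a hypothetical print-based backend; it does not import
# the real autogpt package.
import abc
from threading import Lock


class _VoiceBaseSketch(abc.ABC):
    def __init__(self):
        self._mutex = Lock()
        self._setup()

    def say(self, text: str, voice_index: int = 0) -> bool:
        with self._mutex:                          # serialize concurrent calls
            return self._speech(text, voice_index)

    @abc.abstractmethod
    def _setup(self) -> None: ...

    @abc.abstractmethod
    def _speech(self, text: str, voice_index: int = 0) -> bool: ...


class PrintVoice(_VoiceBaseSketch):
    def _setup(self) -> None:
        self._voices = ["default"]

    def _speech(self, text: str, voice_index: int = 0) -> bool:
        print(f"[{self._voices[voice_index]}] {text}")
        return True


PrintVoice().say("hello")                          # -> [default] hello
# ------------------------------------------------------------------------------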
-import argparse -import glob -import multiprocessing as mp -import numpy as np -import os -import tempfile -import time -import warnings -import cv2 -import tqdm -import sys - -from detectron2.config import get_cfg -from detectron2.data.detection_utils import read_image -from detectron2.utils.logger import setup_logger - -sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/') -sys.path.insert(0, 'third_party/CenterNet2/') - -from centernet.config import add_centernet_config -from detic.config import add_detic_config - -from detic.predictor import VisualizationDemo - - -# constants -WINDOW_NAME = "Detic" - -def setup_cfg(args): - cfg = get_cfg() - add_centernet_config(cfg) - add_detic_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - # Set score_threshold for builtin models - cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold - cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold - cfg.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_PATH = 'rand' # load later - if not args.pred_all_class: - cfg.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL = True - cfg.freeze() - return cfg - - -def get_parser(): - parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs") - parser.add_argument( - "--config-file", - default="configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml", - metavar="FILE", - help="path to config file", - ) - parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.") - parser.add_argument("--video-input", help="Path to video file.") - parser.add_argument( - "--input", - nargs="+", - help="A list of space separated input images; " - "or a single glob pattern such as 'directory/*.jpg'", - ) - parser.add_argument( - "--output", - help="A file or directory to save output visualizations. 
" - "If not given, will show output in an OpenCV window.", - ) - parser.add_argument( - "--vocabulary", - default="lvis", - choices=['lvis', 'openimages', 'objects365', 'coco', 'custom'], - help="", - ) - parser.add_argument( - "--custom_vocabulary", - default="", - help="", - ) - parser.add_argument("--pred_all_class", action='store_true') - parser.add_argument( - "--confidence-threshold", - type=float, - default=0.5, - help="Minimum score for instance predictions to be shown", - ) - parser.add_argument( - "--opts", - help="Modify config options using the command-line 'KEY VALUE' pairs", - default=[], - nargs=argparse.REMAINDER, - ) - return parser - - -def test_opencv_video_format(codec, file_ext): - with tempfile.TemporaryDirectory(prefix="video_format_test") as dir: - filename = os.path.join(dir, "test_file" + file_ext) - writer = cv2.VideoWriter( - filename=filename, - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(30), - frameSize=(10, 10), - isColor=True, - ) - [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)] - writer.release() - if os.path.isfile(filename): - return True - return False - - -if __name__ == "__main__": - mp.set_start_method("spawn", force=True) - args = get_parser().parse_args() - setup_logger(name="fvcore") - logger = setup_logger() - logger.info("Arguments: " + str(args)) - - cfg = setup_cfg(args) - - demo = VisualizationDemo(cfg, args) - - if args.input: - if len(args.input) == 1: - args.input = glob.glob(os.path.expanduser(args.input[0])) - assert args.input, "The input path(s) was not found" - for path in tqdm.tqdm(args.input, disable=not args.output): - img = read_image(path, format="BGR") - start_time = time.time() - predictions, visualized_output = demo.run_on_image(img) - logger.info( - "{}: {} in {:.2f}s".format( - path, - "detected {} instances".format(len(predictions["instances"])) - if "instances" in predictions - else "finished", - time.time() - start_time, - ) - ) - - if args.output: - if os.path.isdir(args.output): - assert os.path.isdir(args.output), args.output - out_filename = os.path.join(args.output, os.path.basename(path)) - else: - assert len(args.input) == 1, "Please specify a directory with args.output" - out_filename = args.output - visualized_output.save(out_filename) - else: - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1]) - if cv2.waitKey(0) == 27: - break # esc to quit - elif args.webcam: - assert args.input is None, "Cannot have both --input and --webcam!" - assert args.output is None, "output not yet supported with --webcam!" 
- cam = cv2.VideoCapture(0) - for vis in tqdm.tqdm(demo.run_on_video(cam)): - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, vis) - if cv2.waitKey(1) == 27: - break # esc to quit - cam.release() - cv2.destroyAllWindows() - elif args.video_input: - video = cv2.VideoCapture(args.video_input) - width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - frames_per_second = video.get(cv2.CAP_PROP_FPS) - num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) - basename = os.path.basename(args.video_input) - codec, file_ext = ( - ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4") - ) - if codec == ".mp4v": - warnings.warn("x264 codec not available, switching to mp4v") - if args.output: - if os.path.isdir(args.output): - output_fname = os.path.join(args.output, basename) - output_fname = os.path.splitext(output_fname)[0] + file_ext - else: - output_fname = args.output - assert not os.path.isfile(output_fname), output_fname - output_file = cv2.VideoWriter( - filename=output_fname, - # some installation of opencv may not support x264 (due to its license), - # you can try other format (e.g. MPEG) - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(frames_per_second), - frameSize=(width, height), - isColor=True, - ) - assert os.path.isfile(args.video_input) - for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames): - if args.output: - output_file.write(vis_frame) - else: - cv2.namedWindow(basename, cv2.WINDOW_NORMAL) - cv2.imshow(basename, vis_frame) - if cv2.waitKey(1) == 27: - break # esc to quit - video.release() - if args.output: - output_file.release() - else: - cv2.destroyAllWindows() diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/__init__.py deleted file mode 100644 index 378a0068432a371af364de9d73785901c0f83383..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/__init__.py +++ /dev/null @@ -1,69 +0,0 @@ -# flake8: noqa -# Copyright (c) OpenMMLab. All rights reserved. 
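-# Note: this package exports two flavours of its API. The torch-free helpers
-# (config, misc, path, progressbar, testing, timer, version_utils) are always
-# importable; when `import torch` succeeds, the try/except block below extends
-# __all__ with the env, logging, parrots, registry and trace utilities.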
-from .config import Config, ConfigDict, DictAction -from .misc import (check_prerequisites, concat_list, deprecated_api_warning, - has_method, import_modules_from_strings, is_list_of, - is_method_overridden, is_seq_of, is_str, is_tuple_of, - iter_cast, list_cast, requires_executable, requires_package, - slice_list, to_1tuple, to_2tuple, to_3tuple, to_4tuple, - to_ntuple, tuple_cast) -from .path import (check_file_exist, fopen, is_filepath, mkdir_or_exist, - scandir, symlink) -from .progressbar import (ProgressBar, track_iter_progress, - track_parallel_progress, track_progress) -from .testing import (assert_attrs_equal, assert_dict_contains_subset, - assert_dict_has_keys, assert_is_norm_layer, - assert_keys_equal, assert_params_all_zeros, - check_python_script) -from .timer import Timer, TimerError, check_time -from .version_utils import digit_version, get_git_hash - -try: - import torch -except ImportError: - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'is_str', 'iter_cast', - 'list_cast', 'tuple_cast', 'is_seq_of', 'is_list_of', 'is_tuple_of', - 'slice_list', 'concat_list', 'check_prerequisites', 'requires_package', - 'requires_executable', 'is_filepath', 'fopen', 'check_file_exist', - 'mkdir_or_exist', 'symlink', 'scandir', 'ProgressBar', - 'track_progress', 'track_iter_progress', 'track_parallel_progress', - 'Timer', 'TimerError', 'check_time', 'deprecated_api_warning', - 'digit_version', 'get_git_hash', 'import_modules_from_strings', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'check_python_script', - 'to_1tuple', 'to_2tuple', 'to_3tuple', 'to_4tuple', 'to_ntuple', - 'is_method_overridden', 'has_method' - ] -else: - from .env import collect_env - from .logging import get_logger, print_log - from .parrots_jit import jit, skip_no_elena - from .parrots_wrapper import ( - TORCH_VERSION, BuildExtension, CppExtension, CUDAExtension, DataLoader, - PoolDataLoader, SyncBatchNorm, _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, - _AvgPoolNd, _BatchNorm, _ConvNd, _ConvTransposeMixin, _InstanceNorm, - _MaxPoolNd, get_build_config, is_rocm_pytorch, _get_cuda_home) - from .registry import Registry, build_from_cfg - from .trace import is_jit_tracing - __all__ = [ - 'Config', 'ConfigDict', 'DictAction', 'collect_env', 'get_logger', - 'print_log', 'is_str', 'iter_cast', 'list_cast', 'tuple_cast', - 'is_seq_of', 'is_list_of', 'is_tuple_of', 'slice_list', 'concat_list', - 'check_prerequisites', 'requires_package', 'requires_executable', - 'is_filepath', 'fopen', 'check_file_exist', 'mkdir_or_exist', - 'symlink', 'scandir', 'ProgressBar', 'track_progress', - 'track_iter_progress', 'track_parallel_progress', 'Registry', - 'build_from_cfg', 'Timer', 'TimerError', 'check_time', 'SyncBatchNorm', - '_AdaptiveAvgPoolNd', '_AdaptiveMaxPoolNd', '_AvgPoolNd', '_BatchNorm', - '_ConvNd', '_ConvTransposeMixin', '_InstanceNorm', '_MaxPoolNd', - 'get_build_config', 'BuildExtension', 'CppExtension', 'CUDAExtension', - 'DataLoader', 'PoolDataLoader', 'TORCH_VERSION', - 'deprecated_api_warning', 'digit_version', 'get_git_hash', - 'import_modules_from_strings', 'jit', 'skip_no_elena', - 'assert_dict_contains_subset', 'assert_attrs_equal', - 'assert_dict_has_keys', 'assert_keys_equal', 'assert_is_norm_layer', - 'assert_params_all_zeros', 'check_python_script', - 'is_method_overridden', 'is_jit_tracing', 'is_rocm_pytorch', - '_get_cuda_home', 'has_method' - ] diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/attention.py 
b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/attention.py deleted file mode 100644 index 509cd873768f0dd75a75ab3fcdd652822b12b59f..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/attention.py +++ /dev/null @@ -1,341 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat -from typing import Optional, Any - -from ldm.modules.diffusionmodules.util import checkpoint - - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - -# CrossAttn precision handling -import os -_ATTN_PRECISION = os.environ.get("ATTN_PRECISION", "fp32") - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - # force cast to fp32 to avoid overflowing - if _ATTN_PRECISION =="fp32": - with torch.autocast(enabled=False, device_type = 'cuda'): - q, k = q.float(), k.float() - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - else: - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - del q, k - - if exists(mask): - mask = rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - sim = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', sim, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class MemoryEfficientCrossAttention(nn.Module): - # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0): - super().__init__() - print(f"Setting up {self.__class__.__name__}. 
Query dim is {query_dim}, context_dim is {context_dim} and using " - f"{heads} heads.") - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.heads = heads - self.dim_head = dim_head - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - self.attention_op: Optional[Any] = None - - def forward(self, x, context=None, mask=None): - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - b, _, _ = q.shape - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (q, k, v), - ) - - # actually compute the attention, what we cannot get enough of - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - if exists(mask): - raise NotImplementedError - out = ( - out.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - ATTENTION_MODES = { - "softmax": CrossAttention, # vanilla attention - "softmax-xformers": MemoryEfficientCrossAttention - } - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False): - super().__init__() - attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax" - assert attn_mode in self.ATTENTION_MODES - attn_cls = self.ATTENTION_MODES[attn_mode] - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. 
- Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None, - disable_self_attn=False, use_linear=False, - use_checkpoint=True): - super().__init__() - if exists(context_dim) and not isinstance(context_dim, list): - context_dim = [context_dim] - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d], - disable_self_attn=disable_self_attn, checkpoint=use_checkpoint) - for d in range(depth)] - ) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - else: - self.proj_out = zero_module(nn.Linear(in_channels, inner_dim)) - self.use_linear = use_linear - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - if not isinstance(context, list): - context = [context] - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context[i]) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in - diff --git a/spaces/Miuzarte/SUI-svc-3.0/app.py b/spaces/Miuzarte/SUI-svc-3.0/app.py deleted file mode 100644 index e6150923cc31a08a2f6f50836f23523265531448..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/app.py +++ /dev/null @@ -1,252 +0,0 @@ -import io - -import gradio as gr -import librosa -import numpy as np -import soundfile -import torch -from inference.infer_tool import Svc -import logging -logging.getLogger('numba').setLevel(logging.WARNING) - -model_name = "logs/48k/G_1M111000_sing.pth" -config_name = "configs/config.json" - -svc_model = Svc(model_name, config_name) -def vc_fn(input_audio, vc_transform): - if input_audio is None: - return None - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - duration = audio.shape[0] / sampling_rate - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - print(audio.shape) - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, audio, 16000, format="wav") - out_wav_path.seek(0) - out_audio, out_sr = svc_model.infer("suiji", vc_transform, out_wav_path) - _audio = out_audio.cpu().numpy() - return (48000, _audio) - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("SUI-svc-3.0"): - gr.Markdown(value=""" - # 这是AI岁己歌声变声器的在线demo(第二代移步)[Miuzarte/SUI-svc-4.0](https://huggingface.co/spaces/Miuzarte/SUI-svc-4.0) - - ### 项目:[sovits 3.0 48kHz](https://github.com/innnky/so-vits-svc/tree/main) | 目前模型训练状态:1000000steps底模 + 111000steps - - #### 查看模型介绍、获取模型移步[Miuzarte/SUImodels](https://huggingface.co/Miuzarte/SUImodels) - - || - |-| - || - - ## 一些注意事项❗❕❗❕: - - #### 
输入的音频一定要是纯净的干音,不要把歌曲直接扔进来 - - #### 和声和混响也不能有,UVR分离出人声之后需要注意一下 - - #### 对陈述语气没多大作用,实在没干音库的话,你可以自己唱然后升十几个调慢慢试效果 - - #### 推理出来有概率会给吸气音上电,需要后期小修一下,大概可能也许是因为炼太久糊了 - """) - vc_input3 = gr.Audio(label="输入音频(长度请控制在30s左右,过长可能会爆内存)") - vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - vc_submit = gr.Button("转换", variant="primary") - vc_output2 = gr.Audio(label="输出音频(最右侧三个点可以下载)") - vc_submit.click(vc_fn, [vc_input3, vc_transform], [vc_output2]) - with gr.TabItem("在本地使用MoeSS高速推理的教程"): - gr.Markdown(value=""" - # 在本地使用 [MoeSS](https://github.com/NaruseMioShirakana/MoeSS) 推理: - - #### 因为该程序每次更新都会有较大的变化,下面的下载链接都将指向[[MoeSS 4.2.3]](https://github.com/NaruseMioShirakana/MoeSS/releases/tag/4.2.3) - - ### 0. 下载[[MoeSS本体]](https://github.com/NaruseMioShirakana/MoeSS/releases/download/4.2.3/MoeSS-CPU.7z)、[[hubert]](https://huggingface.co/NaruseMioShirakana/MoeSS-SUBModel/resolve/main/hubert.7z),并解压成以下的文件结构 - - Windows 7用户需要另一个编译版本的本体[[MoeSS-Win7.7z]](https://github.com/NaruseMioShirakana/MoeSS/releases/download/4.2.3/MoeSS-Win7.7z) - - ``` - MoeSS - ├── cleaners - ├── emotion - ├── hifigan - ├── hubert - │ └── hubert.onnx - ├── Mods - ├── OutPuts - ├── temp - ├── avcodec-58.dll - ├── avformat-58.dll - ├── avutil-56.dll - ├── MoeSS.exe - ├── onnxruntime.dll - ├── onnxruntime_providers_shared.dll - ├── ParamsRegex.json - ├── ShirakanaUI.dmres - ├── swresample-3.dll - └── swscale-5.dll - ``` - - ### 1. 下载[[转换好的onnx模型]](https://huggingface.co/Miuzarte/SUImodels/blob/main/sovits3_48k/v1/Singing/suijiSUI_v1_1M111000_SoVits.onnx),放在 MoeSS\\\Mods\\suijiSUI_v1_1M111000 里面 - - ### 2. 在 MoeSS\\Mods 新建一个 岁己SUI_v1_1M111k.json (文件名不影响程序读取)并写入以下文本,保存时请确保编码为UTF-8,保存时请确保编码为UTF-8,保存时请确保编码为UTF-8 - - ```json - { - "Folder" : "suijiSUI_v1_1M111000", - "Name" : "岁己SUI_v1_1M111k", - "Type" : "SoVits", - "Rate" : 48000, - "Hop" : 320, - "Hubert": "hubert", - "SoVits3": true, - "Characters" : ["岁己SUI"] - } - ``` - - #### 以上步骤完成之后的文件结构应该长这样 - - ``` - MoeSS - ├── cleaners - ├── emotion - ├── hifigan - ├── hubert - │ └── hubert.onnx - ├── Mods - │ ├── 岁己SUI_v1_1M111k.json - │ └── suijiSUI_v1_1M111000 - │ └── suijiSUI_v1_1M111000_SoVits.onnx - ├── OutPuts - ├── temp - ├── avcodec-58.dll - ├── avformat-58.dll - ├── avutil-56.dll - ├── MoeSS.exe - ├── onnxruntime.dll - ├── onnxruntime_providers_shared.dll - ├── ParamsRegex.json - ├── ShirakanaUI.dmres - ├── swresample-3.dll - └── swscale-5.dll - ``` - - ### (A卡不用看)如果要使用GPU推理的话,下载[[MoeSS-GPU.7z]](https://github.com/NaruseMioShirakana/MoeSS/releases/download/3.2.0/MoeSS-GPU.7z)并解压"MoeSS - CUDA.exe"、"onnxruntime_providers_cuda.dll"至 MoeSS 目录(全覆盖一遍也行)。注意:需要CUDA版本 ≥ 11.6 < 12 、 CUdnn < 83.0 ,目前30系显卡最新驱动是cuda12,需要降级,建议直接选CPU版本 - - ### 3. 运行 MoeSS.exe / Moess - CUDA.exe - - 1. 在左上角选择模型 “SoVits:岁己SUI_v1_1M111k” 并等待加载,完成后右边会显示 “当前模型: 岁己SUI_v1_1M111k” - - 2. 将音频文件拖入程序窗口 或 直接点击开始转换后选择文件 或 在左下角输入框中写入音频文件路径再点击开始转换,支持批量,如: - - 从 3.0.0 到 4.0.1 MoeSS 终于支持了文件拖放 - - ``` - A:\\SUI\\so-vits-svc\\raw\\wavs\\2043.wav - A:\\SUI\\so-vits-svc\\raw\\wavs\\2044.flac - "B:\\引号\\加不加\\都行.mp3" - "D:\\应该吧\\路径有空格\\最好还是加.aac" - "Z:\\作者说\\只能用\\这五种格式.ogg" - ``` - - 3. 开始转换前可在弹出的参数框中调整对输入音频的升降调,确定后等待最下方进度条走完然后点右上角保存音频文件,批量推理会直接输出至 MoeSS\\OutPuts\\ 无需再保存 - - |下面的弃用|下面的弃用|下面的弃用| - |:-|:-:|-:| - |下面的弃用|下面的弃用|下面的弃用| - - ### 本地推理可调用GPU(NVIDIA),3060Ti 8G可推理一条20(建议) - 30s的音频,过长音频可分割后批量处理,就算用CPU推理也比 Hugging Face 快不少 - - # 在本地部署并使用 inference_main.py 处理的保姆级教程: - - #### 我都写成这样了再小白应该都能搞定(不怕麻烦的话) - - ### 0. 创建一个存放文件的目录,例如 D:\\SUI\\ - - ### 1. 安装所需的软件 - - 1. 
[miniconda-Python3.8](https://docs.conda.io/en/latest/miniconda.html#windows-installers)(未测试其他Python版本)[点这里可以直接下载](https://repo.anaconda.com/miniconda/Miniconda3-py38_22.11.1-1-Windows-x86_64.exe),Just Me 与 All Users 都行,其余可无脑下一步 - - 2. [git](https://git-scm.com/download/win)(建议使用便携版)[点这里可以直接下载(便携版v2.39.0.2)](https://github.com/git-for-windows/git/releases/download/v2.39.0.windows.2/PortableGit-2.39.0.2-64-bit.7z.exe),路径填 D:\\SUI\\git\\ - - 3. [Visual Studio 生成工具](https://visualstudio.microsoft.com/zh-hans/)(用于编译pyworld,流程走完后可卸载)[点这里可以直接下载](https://c2rsetup.officeapps.live.com/c2r/downloadVS.aspx?sku=community&channel=Release&version=VS2022),左边勾选“使用 C++ 的桌面开发”,右边只需以下四个,"MSVC v143 - VS 2022 C++......"、"适用于最新 v143 生成工具的 C++ ATL......"、"Windows 11 SDK......"、"用于 Windows 的 C++ CMake......" - - ### 2. 在开始菜单中运行 Anaconda Powershell Prompt 并配置环境(除了工作目录,复制粘贴回车即可) - - ``` - # 切换工作目录 - cd D:\\SUI\\ - # 拉取仓库 - .\\git\\bin\\git lfs clone https://huggingface.co/spaces/Miuzarte/SUI-svc-3.0 - # 切换工作目录至仓库内 - cd D:\\SUI\\SUI-svc-3.0\\ - # 创建并激活环境 - # 如果conda报SSL相关错误请关闭科学上网 - conda create -n sovits python=3.8 -y - conda activate sovits - - # 更换国内清华源 - conda config --set show_channel_urls yes - conda config --remove-key channels - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/ - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/menpo/ - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/ - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/ - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/ - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ - conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ - pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple - ``` - 推理所使用的设备取决于安装的torch是否支持cuda,请仔细阅读以下汉字 - ``` - # GPU(NVIDIA,CUDA版本不低于11.3) - # 似乎10系及以前都不支持cuda11? - # 如果pip报SSL相关错误请关闭科学上网 - pip install -r requirements_gpu.txt - pip install https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp38-cp38-win_amd64.whl - pip install https://download.pytorch.org/whl/cu113/torchvision-0.13.1%2Bcu113-cp38-cp38-win_amd64.whl - pip install https://download.pytorch.org/whl/cu113/torchaudio-0.12.1%2Bcu113-cp38-cp38-win_amd64.whl - ``` - ``` - # CPU(x86,内存建议不小于8G) - # 如果pip报SSL相关错误请关闭科学上网 - pip install -r requirements_cpu.txt - ``` - 至此环境配置完成,关闭该终端窗口(方便我写下一步) - - ### 3. 歌声音色转换 - - 1. 运行 Anaconda Powershell Prompt 切换工作目录并激活环境 - - ``` - cd D:\\SUI\\SUI-svc-3.0\\ - conda activate sovits - ``` - - 2. 如果想要像这个demo一样用网页的GUI处理,这条之后的可以跳过了 - - ``` - python app.py - # 运行完成后日志会输出应用所在的端口,默认7860,则浏览器访问 127.0.0.1:7860 - # 不排除该端口被占用后程序选择了其他端口 - ``` - - 3. 在 SUI-svc-3.0\\raw\\ 文件夹中放入需要转换的音频(wav格式),8G显存的情况下建议每条音频的长度控制在20(建议) - 30s(不包括无声部分),过长会爆显存导致处理时间超级加倍甚至直接报错 - - 4. 编辑 SUI-svc-3.0\\inference_main.py 的第23行(可参考第24行注释的格式),以及26行的变调,修改完保存时注意编码应为 UTF-8 - - 5. 
在终端中运行 inference_main.py 开始推理 - - ``` - python inference_main.py - # 音频将输出至 SUI-svc-3.0\\results\\ 文件夹 - ``` - """) - app.launch() \ No newline at end of file diff --git a/spaces/MrBodean/VoiceClone/utils/profiler.py b/spaces/MrBodean/VoiceClone/utils/profiler.py deleted file mode 100644 index 17175b9e1b0eb17fdc015199e5194a5c1afb8a28..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/utils/profiler.py +++ /dev/null @@ -1,45 +0,0 @@ -from time import perf_counter as timer -from collections import OrderedDict -import numpy as np - - -class Profiler: - def __init__(self, summarize_every=5, disabled=False): - self.last_tick = timer() - self.logs = OrderedDict() - self.summarize_every = summarize_every - self.disabled = disabled - - def tick(self, name): - if self.disabled: - return - - # Log the time needed to execute that function - if not name in self.logs: - self.logs[name] = [] - if len(self.logs[name]) >= self.summarize_every: - self.summarize() - self.purge_logs() - self.logs[name].append(timer() - self.last_tick) - - self.reset_timer() - - def purge_logs(self): - for name in self.logs: - self.logs[name].clear() - - def reset_timer(self): - self.last_tick = timer() - - def summarize(self): - n = max(map(len, self.logs.values())) - assert n == self.summarize_every - print("\nAverage execution time over %d steps:" % n) - - name_msgs = ["%s (%d/%d):" % (name, len(deltas), n) for name, deltas in self.logs.items()] - pad = max(map(len, name_msgs)) - for name_msg, deltas in zip(name_msgs, self.logs.values()): - print(" %s mean: %4.0fms std: %4.0fms" % - (name_msg.ljust(pad), np.mean(deltas) * 1000, np.std(deltas) * 1000)) - print("", flush=True) - \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/research/adv_imagenet_models/imagenet.py b/spaces/NCTCMumbai/NCTC/models/research/adv_imagenet_models/imagenet.py deleted file mode 100644 index 26c4c7a388a234f647e446951a0765d1c53184cb..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/adv_imagenet_models/imagenet.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Provides data for the ImageNet ILSVRC 2012 Dataset plus some bounding boxes. - -Some images have one or more bounding boxes associated with the label of the -image. See details here: http://image-net.org/download-bboxes - -WARNING: Don't use for object detection, in this case all the bounding boxes -of the image belong to just one class. 
-""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -import tensorflow as tf - -slim = tf.contrib.slim - -_FILE_PATTERN = '%s-*' - -_SPLITS_TO_SIZES = { - 'train': 1281167, - 'validation': 50000, -} - -_ITEMS_TO_DESCRIPTIONS = { - 'image': 'A color image of varying height and width.', - 'label': 'The label id of the image, integer between 0 and 999', - 'label_text': 'The text of the label.', - 'object/bbox': 'A list of bounding boxes.', - 'object/label': 'A list of labels, one per each object.', -} - -_NUM_CLASSES = 1001 - - -def get_split(split_name, dataset_dir, file_pattern=None, reader=None): - """Gets a dataset tuple with instructions for reading ImageNet. - - Args: - split_name: A train/test split name. - dataset_dir: The base directory of the dataset sources. - file_pattern: The file pattern to use when matching the dataset sources. - It is assumed that the pattern contains a '%s' string so that the split - name can be inserted. - reader: The TensorFlow reader type. - - Returns: - A `Dataset` namedtuple. - - Raises: - ValueError: if `split_name` is not a valid train/test split. - """ - if split_name not in _SPLITS_TO_SIZES: - raise ValueError('split name %s was not recognized.' % split_name) - - if not file_pattern: - file_pattern = _FILE_PATTERN - file_pattern = os.path.join(dataset_dir, file_pattern % split_name) - - # Allowing None in the signature so that dataset_factory can use the default. - if reader is None: - reader = tf.TFRecordReader - - keys_to_features = { - 'image/encoded': tf.FixedLenFeature( - (), tf.string, default_value=''), - 'image/format': tf.FixedLenFeature( - (), tf.string, default_value='jpeg'), - 'image/class/label': tf.FixedLenFeature( - [], dtype=tf.int64, default_value=-1), - 'image/class/text': tf.FixedLenFeature( - [], dtype=tf.string, default_value=''), - 'image/object/bbox/xmin': tf.VarLenFeature( - dtype=tf.float32), - 'image/object/bbox/ymin': tf.VarLenFeature( - dtype=tf.float32), - 'image/object/bbox/xmax': tf.VarLenFeature( - dtype=tf.float32), - 'image/object/bbox/ymax': tf.VarLenFeature( - dtype=tf.float32), - 'image/object/class/label': tf.VarLenFeature( - dtype=tf.int64), - } - - items_to_handlers = { - 'image': slim.tfexample_decoder.Image('image/encoded', 'image/format'), - 'label': slim.tfexample_decoder.Tensor('image/class/label'), - 'label_text': slim.tfexample_decoder.Tensor('image/class/text'), - 'object/bbox': slim.tfexample_decoder.BoundingBox( - ['ymin', 'xmin', 'ymax', 'xmax'], 'image/object/bbox/'), - 'object/label': slim.tfexample_decoder.Tensor('image/object/class/label'), - } - - decoder = slim.tfexample_decoder.TFExampleDecoder( - keys_to_features, items_to_handlers) - - return slim.dataset.Dataset( - data_sources=file_pattern, - reader=reader, - decoder=decoder, - num_samples=_SPLITS_TO_SIZES[split_name], - items_to_descriptions=_ITEMS_TO_DESCRIPTIONS, - num_classes=_NUM_CLASSES) diff --git a/spaces/NN520/AI/src/components/tone-selector.tsx b/spaces/NN520/AI/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: 
BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
-    <div className="tone-selector">
-      <div className="tone-label">选择对话样式</div>
-      <ul className="tone-list">
-        {
-          ToneList.map(tone => (
-            <li
-              key={tone.type}
-              className={cn('tone-item', { active: type === tone.type })}
-              onClick={() => onChange?.(tone.type)}
-            >
-              {tone.name}
-            </li>
-          ))
-        }
-      </ul>
-    </div>
    - ) -} diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/sentence_prediction.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/sentence_prediction.py deleted file mode 100644 index d5f9302c10b3410e7650433d54f70aad4fd1cfc4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/sentence_prediction.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import contextlib -from dataclasses import dataclass, field -from typing import Optional -from omegaconf import MISSING, II, open_dict, OmegaConf - -import numpy as np -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - OffsetTokensDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - RollDataset, - SortDataset, - StripTokenDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import FairseqDataclass, FairseqTask, register_task -from fairseq.dataclass import ChoiceEnum - - -logger = logging.getLogger(__name__) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) - - -@dataclass -class SentencePredictionConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - num_classes: int = field( - default=-1, - metadata={"help": "number of classes or regression targets"}, - ) - init_token: Optional[int] = field( - default=None, - metadata={"help": "add token at the beginning of each batch item"}, - ) - separator_token: Optional[int] = field( - default=None, - metadata={"help": "add separator token between inputs"}, - ) - no_shuffle: bool = field( - default=False, - ) - shorten_method: SHORTEN_METHOD_CHOICES = field( - default="none", - metadata={ - "help": "if not none, shorten sequences that exceed tokens_per_sample" - }, - ) - shorten_data_split_list: str = field( - default="", - metadata={ - "help": "comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)' - }, - ) - add_prev_output_tokens: bool = field( - default=False, - metadata={ - "help": "add prev_output_tokens to sample, used for encoder-decoder arch" - }, - ) - max_positions: int = field( - default=512, - metadata={"help": "max tokens per example"}, - ) - - regression_target: bool = II("criterion.regression_target") - classification_head_name: str = II("criterion.classification_head_name") - seed: int = II("common.seed") - - -@register_task("sentence_prediction", dataclass=SentencePredictionConfig) -class SentencePredictionTask(FairseqTask): - """ - Sentence (or sentence pair) prediction (classification or regression) task. 
- - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - def __init__(self, cfg, data_dictionary, label_dictionary): - super().__init__(cfg) - self.dictionary = data_dictionary - self._label_dictionary = label_dictionary - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") - return dictionary - - @classmethod - def setup_task(cls, cfg, **kwargs): - assert cfg.num_classes > 0, "Must set task.num_classes" - - # load data dictionary - data_dict = cls.load_dictionary( - os.path.join(cfg.data, "input0", "dict.txt"), - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - - # load label dictionary - if not cfg.regression_target: - label_dict = cls.load_dictionary( - os.path.join(cfg.data, "label", "dict.txt"), - ) - logger.info("[label] dictionary: {} types".format(len(label_dict))) - else: - label_dict = data_dict - return cls(cfg, data_dict, label_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(key, split): - return os.path.join(self.cfg.data, key, split) - - def make_dataset(key, dictionary): - split_path = get_path(key, split) - - try: - dataset = data_utils.load_indexed_dataset( - split_path, - dictionary, - combine=combine, - ) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"dataset {e} not found") - dataset = None - else: - raise e - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - assert input0 is not None, "could not find dataset: {}".format( - get_path("input0", split) - ) - input1 = make_dataset("input1", self.source_dictionary) - - if self.cfg.init_token is not None: - input0 = PrependTokenDataset(input0, self.cfg.init_token) - - if input1 is None: - src_tokens = input0 - else: - if self.cfg.separator_token is not None: - input1 = PrependTokenDataset(input1, self.cfg.separator_token) - - src_tokens = ConcatSentencesDataset(input0, input1) - - with data_utils.numpy_seed(self.cfg.seed): - shuffle = np.random.permutation(len(src_tokens)) - - src_tokens = maybe_shorten_dataset( - src_tokens, - split, - self.cfg.shorten_data_split_list, - self.cfg.shorten_method, - self.max_positions(), - self.cfg.seed, - ) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset(src_tokens, reduce=False), - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - } - - if self.cfg.add_prev_output_tokens: - prev_tokens_dataset = RightPadDataset( - RollDataset(src_tokens, 1), - pad_idx=self.dictionary.pad(), - ) - dataset["net_input"].update( - prev_output_tokens=prev_tokens_dataset, - ) - - if not self.cfg.regression_target: - label_dataset = make_dataset("label", self.label_dictionary) - if label_dataset is not None: - dataset.update( - target=OffsetTokensDataset( - StripTokenDataset( - label_dataset, - id_to_strip=self.label_dictionary.eos(), - ), - offset=-self.label_dictionary.nspecial, - ) - ) - else: - label_path = "{0}.label".format(get_path("label", split)) - if os.path.exists(label_path): - - def parse_regression_target(i, line): - values = line.split() - assert ( - len(values) == self.cfg.num_classes - ), f'expected num_classes={self.cfg.num_classes} 
regression target values on line {i}, found: "{line}"' - return [float(x) for x in values] - - with open(label_path) as h: - dataset.update( - target=RawLabelDataset( - [ - parse_regression_target(i, line.strip()) - for i, line in enumerate(h.readlines()) - ] - ) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[src_tokens.sizes], - ) - - if self.cfg.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, cfg): - from fairseq import models - - with open_dict(cfg) if OmegaConf.is_config(cfg) else contextlib.ExitStack(): - cfg.max_positions = self.cfg.max_positions - - model = models.build_model(cfg, self) - - model.register_classification_head( - self.cfg.classification_head_name, - num_classes=self.cfg.num_classes, - ) - - return model - - def max_positions(self): - return self.cfg.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - @property - def label_dictionary(self): - return self._label_dictionary diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py deleted file mode 100644 index 7aced08d38301b98b19e2df7d19f1c61150107bc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
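-# Helpers for unit-to-speech synthesis. load_quantized_audio_from_file() parses
-# "utterance_name|space-separated unit ids" lines; synthesize_audio() decodes a
-# single (1, T) unit tensor with Tacotron2 and vocodes the mel output with
-# WaveGlow plus a denoiser; the load_* helpers restore half-precision CUDA
-# checkpoints for inference. Rough usage sketch (checkpoint paths are
-# placeholders):
-#   model, sr, hparams = load_tacotron("tacotron2.pt", max_decoder_steps=500)
-#   waveglow, denoiser = load_waveglow("waveglow.pt")
-#   mel, aud, aud_dn, has_eos = synthesize_audio(model, waveglow, denoiser, units)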
- - -import torch -from examples.textless_nlp.gslm.unit2speech.tacotron2.model import Tacotron2 -from examples.textless_nlp.gslm.unit2speech.tacotron2.waveglow_denoiser import ( - Denoiser, -) - - -def load_quantized_audio_from_file(file_path): - base_fname_batch, quantized_units_batch = [], [] - with open(file_path) as f: - for line in f: - base_fname, quantized_units_str = line.rstrip().split("|") - quantized_units = [int(q) for q in quantized_units_str.split(" ")] - base_fname_batch.append(base_fname) - quantized_units_batch.append(quantized_units) - return base_fname_batch, quantized_units_batch - - -def synthesize_audio(model, waveglow, denoiser, inp, lab=None, strength=0.0): - assert inp.size(0) == 1 - inp = inp.cuda() - if lab is not None: - lab = torch.LongTensor(1).cuda().fill_(lab) - - with torch.no_grad(): - _, mel, _, ali, has_eos = model.inference(inp, lab, ret_has_eos=True) - aud = waveglow.infer(mel, sigma=0.666) - aud_dn = denoiser(aud, strength=strength).squeeze(1) - return mel, aud, aud_dn, has_eos - - -def load_tacotron(tacotron_model_path, max_decoder_steps): - ckpt_dict = torch.load(tacotron_model_path) - hparams = ckpt_dict["hparams"] - hparams.max_decoder_steps = max_decoder_steps - sr = hparams.sampling_rate - model = Tacotron2(hparams) - model.load_state_dict(ckpt_dict["model_dict"]) - model = model.cuda().eval().half() - return model, sr, hparams - - -def load_waveglow(waveglow_path): - waveglow = torch.load(waveglow_path)["model"] - waveglow = waveglow.cuda().eval().half() - for k in waveglow.convinv: - k.float() - denoiser = Denoiser(waveglow) - return waveglow, denoiser diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh deleted file mode 100644 index d8f5d596b4b4ec55f11a82dbbf83bad4a22c0b6c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -timit_root=$1 # assume it is the upper-cased version -tgt_dir=$2 -model=$3 - -set -eu - -setups="matched unmatched" -splits="test valid train train_text" - -tgt_dir=$(realpath $tgt_dir) -sph2wav=$KALDI_ROOT/tools/sph2pipe_v2.5/sph2pipe -wav_dir=$tgt_dir/wav - - -mkdir -p $tgt_dir $wav_dir -find $timit_root/{TRAIN,TEST} -iname "*.WAV" > $tgt_dir/all_sph.flist -cat $tgt_dir/all_sph.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).WAV#\1_\2#g' > $tgt_dir/all.uid -paste -d' ' $tgt_dir/{all_sph.flist,all.uid} | \ - awk -v sph2wav=$sph2wav -v wav_dir=$wav_dir '{print sph2wav " -f wav " $1 " > " wav_dir "/" $2 ".wav"}' \ - > $tgt_dir/sph2wav.sh -bash $tgt_dir/sph2wav.sh -cat $tgt_dir/all.uid | awk -v wav_dir=$(pwd)/$wav_dir '{print $1" "wav_dir"/"$1".wav"}' | sort > $tgt_dir/all_wav.scp -cut -d' ' -f2 $tgt_dir/all_wav.scp | xargs -I{} soxi -s {} > $tgt_dir/all.dur -paste -d' ' $tgt_dir/{all_wav.scp,all.dur} > $tgt_dir/all_wav_dur.scp -rm $tgt_dir/{all.uid,all_sph.flist,sph2wav.sh} - -find $timit_root/{TRAIN,TEST} -iname "*.PHN" > $tgt_dir/all_phn60.flist -while read line; do - if [ ! 
-f $line ]; then - >&2 echo "Cannot find transcription file '$line'" && exit 1; - fi - cut -f3 -d' ' "$line" | tr '\n' ' ' | perl -ape 's: *$:\n:;' -done < $tgt_dir/all_phn60.flist > $tgt_dir/all.phn60 -cat $tgt_dir/all_phn60.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).PHN#\1_\2#g' | \ - paste -d' ' - $tgt_dir/all.phn60 | \ - $KALDI_ROOT/egs/timit/s5/local/timit_norm_trans.pl -i - -m $KALDI_ROOT/egs/timit/s5/conf/phones.60-48-39.map -to 39 | \ - sort > $tgt_dir/all.phn -echo "done preparing wav and 39-phone transcripts" - - -for s in $setups; do - mkdir -p $tgt_dir/$s - for x in $splits; do - uid_path=config/timit_${s}/${x}.uid - grep -w -f $uid_path $tgt_dir/all.phn | cut -d' ' -f2- > $tgt_dir/$s/$x.phn - ln -sf $(realpath $tgt_dir/$s/$x.phn) $tgt_dir/$s/$x.wrd - - echo "/" > $tgt_dir/$s/$x.tsv && grep -w -f $uid_path $tgt_dir/all_wav_dur.scp | cut -d' ' -f2- | sed 's# #\t#' >> $tgt_dir/$s/$x.tsv - done - - for x in $splits; do - cat $tgt_dir/$s/$x.phn - done | tr ' ' '\n' | sort -u | awk '{print $1" "1}' > $tgt_dir/$s/dict.phn.txt - ln -sf $(realpath $tgt_dir/$s/dict.phn.txt) $tgt_dir/$s/dict.wrd.txt -done -echo "done preparing unmatched and matched setups for TIMIT" - - -for s in $setups; do - zsh scripts/prepare_audio.sh $tgt_dir/$s $tgt_dir/$s/feat $model - - lm_dir=$tgt_dir/$s/phones - fst_dir=$tgt_dir/$s/fst/phn_to_phn - - python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $tgt_dir/$s/train_text.phn --workers 10 --only-source --destdir $lm_dir --srcdict $tgt_dir/$s/dict.phn.txt - $KENLM_ROOT/lmplz -o 3 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.03.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.03.arpa $lm_dir/train_text_phn.03.bin - $KENLM_ROOT/lmplz -o 4 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.04.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.04.arpa $lm_dir/train_text_phn.04.bin - - python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$fst_dir lm_arpa=$lm_dir/train_text_phn.03.arpa data_dir=$tgt_dir/$s in_labels=phn -done -echo "done preprocessing audio and text for wav2vec-U" diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_file_io.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_file_io.py deleted file mode 100644 index 425812bf1672489093941e5fa09f9da3171559ee..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_file_io.py +++ /dev/null @@ -1,58 +0,0 @@ -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -import shutil -import sys -import tempfile -import unittest -from typing import Optional -from unittest.mock import MagicMock - - -class TestFileIO(unittest.TestCase): - - _tmpdir: Optional[str] = None - _tmpfile: Optional[str] = None - _tmpfile_contents = "Hello, World" - - @classmethod - def setUpClass(cls) -> None: - cls._tmpdir = tempfile.mkdtemp() - with open(os.path.join(cls._tmpdir, "test.txt"), "w") as f: - cls._tmpfile = f.name - f.write(cls._tmpfile_contents) - f.flush() - - @classmethod - def tearDownClass(cls) -> None: - # Cleanup temp working dir. 
- if cls._tmpdir is not None: - shutil.rmtree(cls._tmpdir) # type: ignore - - def test_file_io(self): - from fairseq.file_io import PathManager - - with PathManager.open(os.path.join(self._tmpdir, "test.txt"), "r") as f: - s = f.read() - self.assertEqual(s, self._tmpfile_contents) - - def test_file_io_oss(self): - # Mock iopath to simulate oss environment. - sys.modules["iopath"] = MagicMock() - from fairseq.file_io import PathManager - - with PathManager.open(os.path.join(self._tmpdir, "test.txt"), "r") as f: - s = f.read() - self.assertEqual(s, self._tmpfile_contents) - - def test_file_io_async(self): - # ioPath `PathManager` is initialized after the first `opena` call. - try: - from fairseq.file_io import IOPathManager, PathManager - _asyncfile = os.path.join(self._tmpdir, "async.txt") - f = PathManager.opena(_asyncfile, "wb") - f.close() - - finally: - self.assertTrue(PathManager.async_close()) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/clib/libnat_cuda/edit_dist.h b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/clib/libnat_cuda/edit_dist.h deleted file mode 100644 index 5220c52fd80529b90a67ba74e9ca73c668dab099..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/clib/libnat_cuda/edit_dist.h +++ /dev/null @@ -1,25 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#pragma once - -#include - -torch::Tensor LevenshteinDistanceCuda( - torch::Tensor source, - torch::Tensor target, - torch::Tensor source_length, - torch::Tensor target_length); - -torch::Tensor GenerateDeletionLabelCuda( - torch::Tensor source, - torch::Tensor operations); - -std::pair GenerateInsertionLabelCuda( - torch::Tensor source, - torch::Tensor operations); diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/huggingface/hf_gpt2.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/huggingface/hf_gpt2.py deleted file mode 100644 index 3a8eb78198f5808557092f814e92f1c9d72933ec..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/huggingface/hf_gpt2.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
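-# Wraps Hugging Face's GPT2LMHeadModel as a fairseq language model. The task
-# dictionary sizes the vocabulary, the embeddings for the padding symbol are
-# zeroed, and incremental decoding caches the transformer's `past` states.
-# Registered as "hf_gpt2", with medium/large/xl presets defined at the bottom,
-# so (as a sketch) it can be selected with something like:
-#   fairseq-train data-bin/my-lm-data --task language_modeling --arch hf_gpt2 ...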
- -import logging -import os -import sys -from typing import Dict, List, Optional - -import torch -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -logger = logging.getLogger(__name__) - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@register_model("hf_gpt2") -class HuggingFaceGPT2LanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--embed-dim', type=int, metavar='N', - help='embedding dimension') - parser.add_argument('--num-attention-heads', type=int, metavar='N', - help='num attention heads') - parser.add_argument('--num-layers', type=int, metavar='N', - help='num layers') - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability for all fully connected layers ' - 'in the embeddings, encoder, and pooler') - parser.add_argument('--attention-dropout', type=float, metavar='D', - help='dropout probability for attention weights') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - default_architecture(args) - return cls(HuggingFaceGPT2Decoder(args, task)) - - -class HuggingFaceGPT2Decoder(FairseqIncrementalDecoder): - def __init__(self, args, task): - try: - from transformers import GPT2Config, GPT2LMHeadModel - except ImportError: - raise ImportError( - "\n\nPlease install huggingface/transformers with:" - "\n\n pip install transformers" - ) - - super().__init__(task.target_dictionary) - - config = GPT2Config( - vocab_size=len(task.target_dictionary), - n_positions=args.max_target_positions + 1, - n_ctx=args.max_target_positions, - n_embd=args.embed_dim, - n_layer=args.num_layers, - n_head=args.num_attention_heads, - resid_pdrop=args.dropout, - embd_pdrop=args.dropout, - attn_pdrop=args.attention_dropout, - layer_norm_epsilon=1e-6, - ) - self.model = GPT2LMHeadModel(config) - - # set zero embedding for padding symbol - self.pad_idx = task.target_dictionary.pad() - self.model.transformer.wte.weight.data[self.pad_idx].zero_() - self.model.transformer.wpe.weight.data[0].zero_() - - def forward( - self, - prev_output_tokens, - src_lengths=None, - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - encoder_out=None, - ): - features = self.extract_features(prev_output_tokens, incremental_state) - lm_logits = self.model.lm_head(features) - return (lm_logits,) - - def extract_features( - self, - prev_output_tokens, - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - ): - if incremental_state: - past = self.get_incremental_state("past") - else: - past = None - - # don't attend to padding symbols - attention_mask = prev_output_tokens.ne(self.pad_idx).int() - - # set position ids to exclude padding symbols - position_ids = attention_mask * ( - torch.arange(1, 1 + prev_output_tokens.size(1)) - .to(prev_output_tokens) - .repeat(prev_output_tokens.size(0), 1) - ) - - outputs = self.model.transformer( - input_ids=prev_output_tokens, - past=past, - attention_mask=attention_mask, - position_ids=position_ids, - ) - last_hidden_states = outputs[0] - - if incremental_state: - self.set_incremental_state(incremental_state, "past", outputs[1]) - - return last_hidden_states - - def max_positions(self): - return self.model.config.n_positions - 1 - - -@register_model_architecture("hf_gpt2", "hf_gpt2") -def default_architecture(args): - if 
getattr(args, "max_target_positions", None) is None: - args.max_target_positions = getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - args.embed_dim = getattr(args, "embed_dim", 768) - args.num_attention_heads = getattr(args, "num_attention_heads", 12) - args.num_layers = getattr(args, "num_layers", 12) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - - -@register_model_architecture("hf_gpt2", "hf_gpt2_medium") -def hf_gpt2_medium(args): - args.embed_dim = getattr(args, "embed_dim", 1024) - args.num_attention_heads = getattr(args, "num_attention_heads", 16) - args.num_layers = getattr(args, "num_layers", 24) - default_architecture(args) - - -@register_model_architecture("hf_gpt2", "hf_gpt2_large") -def hf_gpt2_large(args): - args.embed_dim = getattr(args, "embed_dim", 1280) - args.num_attention_heads = getattr(args, "num_attention_heads", 20) - args.num_layers = getattr(args, "num_layers", 36) - default_architecture(args) - - -@register_model_architecture("hf_gpt2", "hf_gpt2_xl") -def hf_gpt2_xl(args): - args.embed_dim = getattr(args, "embed_dim", 1600) - args.num_attention_heads = getattr(args, "num_attention_heads", 25) - args.num_layers = getattr(args, "num_layers", 48) - default_architecture(args) diff --git a/spaces/OFA-Sys/OFA-vqa/trainer.py b/spaces/OFA-Sys/OFA-vqa/trainer.py deleted file mode 100644 index 17adcbd9d31937c25c965f1ec7d9a2be90c7253d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/trainer.py +++ /dev/null @@ -1,1531 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs. -""" - -import contextlib -import logging -import sys -import time -from argparse import Namespace -from itertools import chain -from typing import Any, Dict, List - -import torch -from fairseq import models, optim, utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics -from fairseq.models.ema import build_ema -from fairseq.nan_detector import NanDetector -from fairseq.optim import lr_scheduler -from omegaconf import OmegaConf - -from utils import checkpoint_utils - -logger = logging.getLogger(__name__) - - -class Trainer(object): - """Main class for data parallel training. - - This class supports synchronous distributed data parallel training, - where multiple workers each have a full model replica and gradients - are accumulated across workers before each update. We use - :class:`~torch.nn.parallel.DistributedDataParallel` to handle - communication of the gradients across workers. - """ - - def __init__(self, cfg: FairseqConfig, task, model, criterion, quantizer=None): - - if isinstance(cfg, Namespace): - logger.warning( - "argparse.Namespace configuration is deprecated! 
Automatically converting to OmegaConf" - ) - cfg = convert_namespace_to_omegaconf(cfg) - - self.cfg = cfg - self.task = task - - # catalog shared parameters - shared_params = _catalog_shared_params(model) - self.tpu = cfg.common.tpu - self.cuda = torch.cuda.is_available() and not cfg.common.cpu and not self.tpu - if self.cuda: - self.device = torch.device("cuda") - elif self.tpu: - self.device = utils.get_tpu_device() - else: - self.device = torch.device("cpu") - - if self.is_fsdp: - import fairscale - if self.cfg.common.bf16: - raise ValueError( - "FullyShardedDataParallel is not compatible with --bf16 or " - "--memory-efficient-bf16" - ) - if self.cfg.distributed_training.zero_sharding != "none": - raise ValueError( - "FullyShardedDataParallel is not compatible with --zero-sharding " - "option (it's already built in)" - ) - if max(self.cfg.optimization.update_freq) > 1 and fairscale.__version__ < "0.4.0": - raise RuntimeError( - "Please update to fairscale 0.4.0 or newer when combining " - "--update-freq with FullyShardedDataParallel" - ) - else: - if ( - hasattr(self.cfg.distributed_training, "cpu_offload") - and self.cfg.distributed_training.cpu_offload - ): - raise ValueError("--cpu-offload requires --ddp-backend=fully_sharded") - - # copy model and criterion to current device/dtype - self._criterion = criterion - self._model = model - if not self.is_fsdp: - if cfg.common.fp16: - assert not cfg.common.amp, "Cannot use fp16 and AMP together" - self._criterion = self._criterion.half() - self._model = self._model.half() - elif cfg.common.bf16: - self._criterion = self._criterion.to(dtype=torch.bfloat16) - self._model = self._model.to(dtype=torch.bfloat16) - elif cfg.common.amp: - self._amp_retries = 0 - if ( - not cfg.distributed_training.pipeline_model_parallel - # the DistributedFairseqModel wrapper will handle moving to device, - # so only handle cases which don't use the wrapper - and not self.use_distributed_wrapper - ): - self._criterion = self._criterion.to(device=self.device) - self._model = self._model.to(device=self.device) - self.pipeline_model_parallel = cfg.distributed_training.pipeline_model_parallel - self.last_device = None - if self.cuda and self.pipeline_model_parallel: - self.last_device = torch.device( - cfg.distributed_training.pipeline_devices[-1] - ) - - # check that shared parameters are preserved after device transfer - for shared_param in shared_params: - ref = _get_module_by_path(self._model, shared_param[0]) - for path in shared_param[1:]: - logger.info( - "detected shared parameter: {} <- {}".format(shared_param[0], path) - ) - _set_module_by_path(self._model, path, ref) - - self._dummy_batch = None # indicates we don't have a dummy batch at first - self._lr_scheduler = None - self._num_updates = 0 - self._num_xla_compiles = 0 # for TPUs - self._optim_history = None - self._optimizer = None - self._warn_once = set() - self._wrapped_criterion = None - self._wrapped_model = None - self._ema = None - - # TODO(myleott): support tpu - if self.cuda and self.data_parallel_world_size > 1: - self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size) - else: - self._grad_norm_buf = None - - self.quantizer = quantizer - if self.quantizer is not None: - self.quantizer.set_trainer(self) - - # get detailed cuda environment - if self.cuda: - self.cuda_env = utils.CudaEnvironment() - if self.data_parallel_world_size > 1: - self.cuda_env_arr = distributed_utils.all_gather_list( - self.cuda_env, group=distributed_utils.get_global_group() - ) - else: - 
self.cuda_env_arr = [self.cuda_env] - if self.data_parallel_rank == 0: - utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr) - else: - self.cuda_env = None - self.cuda_env_arr = None - - metrics.log_start_time("wall", priority=790, round=0) - - self._start_time = time.time() - self._previous_training_time = 0 - self._cumulative_training_time = None - - def reinitialize(self): - """Reinitialize the Trainer, typically after model params change.""" - self._lr_scheduler = None - self._optimizer = None - self._wrapped_criterion = None - self._wrapped_model = None - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_process_group(self): - return distributed_utils.get_data_parallel_group() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - @property - def is_data_parallel_master(self): - # NOTE: this returns true for all model parallel replicas with data - # parallel rank 0 - return self.data_parallel_rank == 0 - - @property - def use_distributed_wrapper(self) -> bool: - return ( - self.data_parallel_world_size > 1 and not self.cfg.optimization.use_bmuf - ) or ( - self.is_fsdp and self.cfg.distributed_training.cpu_offload - ) - - @property - def should_save_checkpoint_on_current_rank(self) -> bool: - """Indicates whether to save checkpoints on the current DDP rank.""" - if ( - self.is_fsdp and self.cfg.distributed_training.use_sharded_state - ) or getattr(self.cfg.model, "base_layers", 0) > 0: - return True - else: - return self.is_data_parallel_master - - @property - def always_call_state_dict_during_save_checkpoint(self) -> bool: - if self.is_fsdp and not self.cfg.distributed_training.use_sharded_state: - # FSDP calls communication collective when consolidating checkpoints - return True - else: - return False - - @property - def checkpoint_suffix(self) -> str: - """Suffix to add to the checkpoint file name.""" - if self.is_fsdp and self.cfg.distributed_training.use_sharded_state: - return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format( - self.data_parallel_rank - ) - else: - return self.cfg.checkpoint.checkpoint_suffix or "" - - @property - def criterion(self): - if self._wrapped_criterion is None: - if utils.has_parameters(self._criterion) and self.use_distributed_wrapper: - self._wrapped_criterion = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._criterion, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_criterion = self._criterion - return self._wrapped_criterion - - @property - def model(self): - if self._wrapped_model is None: - if self.use_distributed_wrapper: - self._wrapped_model = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._model, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_model = self._model - return self._wrapped_model - - @property - def ema(self): - if self._ema is None: - self._build_ema() - return self._ema - - def _build_ema(self): - if self.cfg.ema.store_ema: - self._ema = build_ema(self._model, self.cfg.ema, self.device) - logger.info( - "Exponential Moving Average Shadow Model is initialized." 
- ) - - @property - def optimizer(self): - if self._optimizer is None: - self._build_optimizer() - return self._optimizer - - @property - def lr_scheduler(self): - if self._lr_scheduler is None: - self._build_optimizer() # this will initialize self._lr_scheduler - return self._lr_scheduler - - def _build_optimizer(self): - params = list( - filter( - lambda p: p.requires_grad, - chain(self.model.parameters(), self.criterion.parameters()), - ) - ) - - if self.is_fsdp and self.cfg.common.fp16: - # FullyShardedDataParallel always uses MemoryEfficientFP16 wrapper, - # mostly for the grad scaling. But if we don't have the - # --memory-efficient-fp16 flag set, then we're effectively doing - # regular --fp16 and can allow the use of optimizers that would - # otherwise be unsupported by MemoryEfficientFP16Optimizer. - allow_unsupported = not self.cfg.common.memory_efficient_fp16 - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params, allow_unsupported=allow_unsupported - ) - elif self.cfg.common.fp16 or self.cfg.common.bf16 or self.cfg.common.amp: - if self.cuda and torch.cuda.get_device_capability(0)[0] < 7: - logger.info( - "NOTE: your device does NOT support faster training with --fp16 or --amp, " - "please switch to FP32 which is likely to be faster" - ) - if ( - self.cfg.common.memory_efficient_fp16 - or self.cfg.common.memory_efficient_bf16 - ): - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params - ) - elif self.cfg.common.amp: - self._optimizer = optim.AMPOptimizer.build_optimizer(self.cfg, params) - else: - self._optimizer = optim.FP16Optimizer.build_optimizer(self.cfg, params) - else: - if self.cuda and torch.cuda.get_device_capability(0)[0] >= 7: - logger.info("NOTE: your device may support faster training with --fp16 or --amp") - self._optimizer = optim.build_optimizer(self.cfg.optimizer, params) - - if self.is_fsdp: - assert ( - not self.cfg.optimization.use_bmuf - ), "--ddp-backend=fully_sharded is not compatible with BMUF" - assert self._optimizer.supports_flat_params, ( - "--ddp-backend=fully_sharded is only compatible with pointwise " - "optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.). " - "However, the sharding will result in slightly different results when " - "using non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB)" - ) - - if self.cfg.optimization.use_bmuf: - self._optimizer = optim.FairseqBMUF( - self.cfg.bmuf, - self._optimizer, - ) - - if self.cfg.distributed_training.zero_sharding == "os": - if ( - self.cfg.common.fp16 - and not self.cfg.common.memory_efficient_fp16 - and not self.cfg.common.memory_efficient_bf16 - ) and not self.cfg.common.fp16_no_flatten_grads: - raise ValueError( - "ZeRO is incomptabile with fp16 and flattened grads. " - "Please use --fp16-no-flatten-grads" - ) - else: - optim.shard_(self._optimizer, self.data_parallel_process_group) - - # We should initialize the learning rate scheduler immediately after - # building the optimizer, so that the initial learning rate is set. 
- self._lr_scheduler = lr_scheduler.build_lr_scheduler( - self.cfg.lr_scheduler, - self.optimizer, - ) - self._lr_scheduler.step_update(0) - - @property - def is_fsdp(self): - return self.cfg.distributed_training.ddp_backend == "fully_sharded" - - def consolidate_optimizer(self): - """For OSS, we need to consolidate the state dict.""" - if self.cfg.checkpoint.no_save_optimizer_state: - return - self._gathered_optim_state = None - if hasattr(self.optimizer.optimizer, "consolidate_state_dict"): - self.optimizer.optimizer.consolidate_state_dict() - elif self.is_fsdp and not self.model.use_sharded_state: - st = self.model.gather_full_optim_state_dict( - self.optimizer - ) # only returns on rank 0 - self._gathered_optim_state = st - - def state_dict(self): - state_dict = { - "args": None, # legacy - "cfg": ( - OmegaConf.to_container(self.cfg, resolve=True, enum_to_str=True) - if OmegaConf.is_config(self.cfg) - else self.cfg - ), - "model": self.model.state_dict(), - "criterion": ( - self.criterion.state_dict() - if utils.has_parameters(self.criterion) - else None - ), - "optimizer_history": (self._optim_history or []) - + [ - { - "criterion_name": self.get_criterion().__class__.__name__, - "optimizer_name": self.optimizer.__class__.__name__, - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - } - ], - "task_state": self.task.state_dict() if self.task is not None else {}, - "extra_state": { - "metrics": metrics.state_dict(), - "previous_training_time": self.cumulative_training_time(), - }, - } - if self.cfg.ema.store_ema: - # Save EMA model state as extra state - state_dict["extra_state"]["ema"] = self.ema.get_model().state_dict() - if self.cfg.ema.ema_fp32: - # Save EMA params in fp32 - state_dict["extra_state"]["ema_fp32_params"] = self.ema.fp32_params - if not self.cfg.checkpoint.no_save_optimizer_state: - if self._gathered_optim_state is not None: - state_dict["last_optimizer_state"] = self._gathered_optim_state - self._gathered_optim_state = None - else: - state_dict["last_optimizer_state"] = self.optimizer.state_dict() - if self.is_fsdp: - # save meta data for recombining checkpoint upon loading - state_dict["fsdp_metadata"] = self.model.local_metadata_dict() - return state_dict - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - logger.info(f"Saving checkpoint to {filename}") - # call state_dict on all ranks in case it needs internal communication - state_dict = utils.move_to_cpu(self.state_dict()) - state_dict["extra_state"].update(extra_state) - if self.should_save_checkpoint_on_current_rank: - checkpoint_utils.torch_persistent_save( - state_dict, - filename, - async_write=self.cfg.checkpoint.write_checkpoints_asynchronously, - ) - logger.info(f"Finished saving checkpoint to {filename}") - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - """ - Load all training state from a checkpoint file. - rank = 0 will load the checkpoint, and then broadcast it to all - other ranks. 
- """ - extra_state, self._optim_history, last_optim_state = None, [], None - - logger.info(f"Preparing to load checkpoint {filename}") - is_distributed = self.data_parallel_world_size > 1 - bexists = PathManager.isfile(filename) - if bexists: - load_on_all_ranks = ( - self.cfg.checkpoint.load_checkpoint_on_all_dp_ranks - # TPUs don't support broadcast yet, so load checkpoints - # on every worker for now - or self.tpu - # FSDP requires loading checkpoint shards on all ranks - or (self.is_fsdp and self.cfg.distributed_training.use_sharded_state) - or getattr(self.cfg.model, "base_layers", 0) > 0 - ) - - if load_on_all_ranks or self.data_parallel_rank == 0: - state = checkpoint_utils.load_checkpoint_to_cpu( - filename, load_on_all_ranks=load_on_all_ranks - ) - last_optim_state = state.get("last_optimizer_state", None) - - # If doing zero_sharding, do not broadcast global optimizer - # state. Later we will broadcast sharded states to each rank - # to avoid memory from exploding. - if ( - not load_on_all_ranks - and self.cfg.distributed_training.zero_sharding == "os" - and "last_optimizer_state" in state - and is_distributed - ): - state["last_optimizer_state"] = "SHARDED" - else: - last_optim_state = None - state = None - - if is_distributed and not load_on_all_ranks: - state = distributed_utils.broadcast_object( - state, - src_rank=0, - group=self.data_parallel_process_group, - dist_device=self.device, - ) - if self.data_parallel_rank > 0: - last_optim_state = state.get("last_optimizer_state", None) - - # load model parameters - try: - if self.cfg.checkpoint.use_ema_weights_to_init_param and "extra_state" in state and "ema" in state["extra_state"]: - logger.info("use_ema_weights_to_init_param = True, will use EMA weights in the ckpt to init the model param...") - ema_state_dict = state["extra_state"]["ema_fp32_params"] if "ema_fp32_params" in state["extra_state"] else state["extra_state"]["ema"] - self.model.load_state_dict( - ema_state_dict, strict=True, model_cfg=self.cfg.model - ) - else: - self.model.load_state_dict( - state["model"], strict=True, model_cfg=self.cfg.model - ) - # save memory for later steps - if not (self.cfg.ema.store_ema and (self.cfg.checkpoint.use_latest_weights_to_init_ema or not ("extra_state" in state and "ema" in state["extra_state"]))): - del state["model"] - if utils.has_parameters(self.get_criterion()): - self.get_criterion().load_state_dict( - state["criterion"], strict=True - ) - del state["criterion"] - - except Exception: - raise Exception( - "Cannot load model parameters from checkpoint {}; " - "please ensure that the architectures match.".format(filename) - ) - extra_state = state["extra_state"] - self._optim_history = state["optimizer_history"] - - if last_optim_state is not None and not reset_optimizer: - # rebuild optimizer after loading model, since params may have changed - self._build_optimizer() - - # only reload optimizer and lr_scheduler if they match - last_optim = self._optim_history[-1] - assert ( - last_optim["criterion_name"] == self.get_criterion().__class__.__name__ - ), f"Criterion does not match; please reset the optimizer (--reset-optimizer). {last_optim['criterion_name']} vs {self.get_criterion().__class__.__name__}" - assert ( - last_optim["optimizer_name"] == self.optimizer.__class__.__name__ - ), f"Optimizer does not match; please reset the optimizer (--reset-optimizer). 
{last_optim['optimizer_name']} vs {self.optimizer.__class__.__name__}" - - if not reset_lr_scheduler: - self.lr_scheduler.load_state_dict(last_optim["lr_scheduler_state"]) - - if self.is_fsdp and not self.model.use_sharded_state: - # if use_sharded_state, the last_optim_state is already sharded, skip this - last_optim_state = self.model.get_shard_from_optim_state_dict( - last_optim_state - ) - elif not load_on_all_ranks and is_distributed: - last_optim_state = self.optimizer.broadcast_global_state_dict( - last_optim_state - ) - - self.optimizer.load_state_dict(last_optim_state, optimizer_overrides) - - self.set_num_updates(last_optim["num_updates"]) - - if extra_state is not None: - itr_state = extra_state["train_iterator"] - epoch = itr_state["epoch"] - - if "previous_training_time" in extra_state: - self._previous_training_time = extra_state["previous_training_time"] - self._start_time = time.time() - - self.lr_step(epoch) - - if ( - itr_state.get("version", 1) >= 2 - and itr_state["iterations_in_epoch"] == 0 - ): - # reset meters at start of epoch - reset_meters = True - - if "metrics" in extra_state and not reset_meters: - metrics.load_state_dict(extra_state["metrics"]) - - # reset TimeMeters, since their start times don't make sense anymore - for meter in metrics.get_meters("default"): - if isinstance(meter, meters.TimeMeter): - meter.reset() - - if self.cfg.ema.store_ema: - if self.cfg.checkpoint.use_latest_weights_to_init_ema or "ema" not in extra_state: - if "ema" not in extra_state: - logger.warn( - "EMA not found in checkpoint. But store_ema is True. " - "EMA is re-initialized from checkpoint." - ) - elif self.cfg.checkpoint.use_latest_weights_to_init_ema: - logger.info( - "use_latest_weights_to_init_ema = True. EMA is re-initialized from checkpoint." 
- ) - self.ema.restore(state["model"], build_fp32_params=self.cfg.ema.ema_fp32) - del state["model"] - else: - logger.info( - "Loading EMA from checkpoint" - ) - self.ema.restore(extra_state["ema"], build_fp32_params=False) - - if self.cfg.ema.ema_fp32: - if "ema_fp32_params" in extra_state: - logger.info( - "Loading EMA fp32 params from checkpoint" - ) - self.ema.build_fp32_params(extra_state["ema_fp32_params"]) - else: - logger.info( - "Building EMA fp32 params from EMA model in checkpoint" - ) - self.ema.build_fp32_params() - - logger.info( - "Loaded checkpoint {} (epoch {} @ {} updates)".format( - filename, epoch, self.get_num_updates() - ) - ) - - else: - logger.info("No existing checkpoint found {}".format(filename)) - - return extra_state - - def get_train_iterator( - self, - epoch, - combine=True, - load_dataset=True, - data_selector=None, - shard_batch_itr=True, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over the training set for a given epoch.""" - if load_dataset: - logger.info("loading train data for epoch {}".format(epoch)) - self.task.load_dataset( - self.cfg.dataset.train_subset, - epoch=epoch, - combine=combine, - data_selector=data_selector, - tpu=self.tpu, - ) - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.train_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - self.cfg.dataset.max_tokens, - ), - ignore_invalid_inputs=True, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size if shard_batch_itr else 1, - shard_id=self.data_parallel_rank if shard_batch_itr else 0, - num_workers=self.cfg.dataset.num_workers, - epoch=epoch, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - batch_iterator.dataset.dataset._seek() - return batch_iterator - - def get_valid_iterator( - self, - subset, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over given validation subset for a given epoch.""" - self.task.dataset(subset).dataset._seek() - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(subset), - max_tokens=self.cfg.dataset.max_tokens_valid, - max_sentences=self.cfg.dataset.batch_size_valid, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - ), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - # always pass a fixed "epoch" to keep validation data consistent - # across training epochs - epoch=1, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - batch_iterator.dataset.dataset._seek() - return batch_iterator - - def begin_epoch(self, epoch): - """Called at the beginning of each epoch.""" - logger.info("begin training epoch {}".format(epoch)) - - self.lr_step_begin_epoch(epoch) - - if self.quantizer is not None: - self.quantizer.begin_epoch(epoch) - - # task specific setup per epoch - 
self.task.begin_epoch(epoch, self.get_model()) - - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("begin_epoch") # wait for all workers - xm.mark_step() - - def begin_valid_epoch(self, epoch): - """Called at the beginning of each validation epoch.""" - - # task specific setup per validation epoch - self.task.begin_valid_epoch(epoch, self.get_model()) - - def reset_dummy_batch(self, batch): - self._dummy_batch = batch - - @metrics.aggregate("train") - def train_step(self, samples, raise_oom=False): - """Do forward, backward and parameter update.""" - self._set_seed() - self.model.train() - self.criterion.train() - self.zero_grad() - - metrics.log_start_time("train_wall", priority=800, round=0) - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. - extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - # forward and backward pass - logging_outputs, sample_size, ooms = [], 0, 0 - for i, sample in enumerate(samples): # delayed update loop - sample, is_dummy_batch = self._prepare_sample(sample) - - def maybe_no_sync(): - """ - Whenever *samples* contains more than one mini-batch, we - want to accumulate gradients locally and only call - all-reduce in the last backwards pass. - """ - if ( - self.data_parallel_world_size > 1 - and hasattr(self.model, "no_sync") - and i < len(samples) - 1 - # The no_sync context manager results in increased memory - # usage with FSDP, since full-size gradients will be - # accumulated on each GPU. It's typically a better tradeoff - # to do the extra communication with FSDP. - and not self.is_fsdp - ): - return self.model.no_sync() - else: - return contextlib.ExitStack() # dummy contextmanager - - try: - with maybe_no_sync(): - # forward and backward - loss, sample_size_i, logging_output = self.task.train_step( - sample=sample, - model=self.model, - criterion=self.criterion, - optimizer=self.optimizer, - update_num=self.get_num_updates(), - ignore_grad=is_dummy_batch, - **extra_kwargs, - ) - del loss - - logging_outputs.append(logging_output) - sample_size += sample_size_i - - # emptying the CUDA cache after the first step can - # reduce the chance of OOM - if self.cuda and self.get_num_updates() == 0: - torch.cuda.empty_cache() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if raise_oom: - raise e - logger.warning( - "attempting to recover from OOM in forward/backward pass" - ) - ooms += 1 - self.zero_grad() - if self.cuda: - torch.cuda.empty_cache() - if self.cfg.distributed_training.distributed_world_size == 1: - return None - else: - raise e - - if self.tpu and i < len(samples) - 1: - # tpu-comment: every XLA operation before marking step is - # appended to the IR graph, and processing too many batches - # before marking step can lead to OOM errors. 
- # To handle gradient accumulation use case, we explicitly - # mark step here for every forward pass without a backward pass - self._xla_markstep_and_send_to_cpu() - - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - if torch.is_tensor(sample_size): - sample_size = sample_size.float() - else: - sample_size = float(sample_size) - - # gather logging outputs from all replicas - if self._sync_stats(): - train_time = self._local_cumulative_training_time() - logging_outputs, ( - sample_size, - ooms, - total_train_time, - ) = self._aggregate_logging_outputs( - logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch - ) - self._cumulative_training_time = ( - total_train_time / self.data_parallel_world_size - ) - - overflow = False - try: - with torch.autograd.profiler.record_function("reduce-grads"): - # reduce gradients across workers - self.optimizer.all_reduce_grads(self.model) - if utils.has_parameters(self.criterion): - self.optimizer.all_reduce_grads(self.criterion) - - with torch.autograd.profiler.record_function("multiply-grads"): - # multiply gradients by (data_parallel_size / sample_size) since - # DDP normalizes by the number of data parallel workers for - # improved fp16 precision. - # Thus we get (sum_of_gradients / sample_size) at the end. - # In case of fp16, this step also undoes loss scaling. - # (Debugging note: Some optimizers perform this scaling on the - # fly, so inspecting model.parameters() or optimizer.params may - # still show the original, unscaled gradients.) - numer = ( - self.data_parallel_world_size - if not self.cfg.optimization.use_bmuf or self._sync_stats() - else 1 - ) - self.optimizer.multiply_grads(numer / (sample_size or 1.0)) - # Note: (sample_size or 1.0) handles the case of a zero gradient, in a - # way that avoids CPU/device transfers in case sample_size is a GPU or - # TPU object. The assumption is that the gradient itself is also 0. 
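- # Worked example of the scaling above: with 8 data-parallel workers, all-reduce leaves (sum_of_gradients / 8) on each replica and sample_size has already been summed across the 8 replicas, so multiplying by 8 / sample_size recovers (sum_of_gradients / sample_size).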
- - with torch.autograd.profiler.record_function("clip-grads"): - # clip grads - grad_norm = self.clip_grad_norm(self.cfg.optimization.clip_norm) - - # check that grad norms are consistent across workers - # on tpu check tensor is slow - if not self.tpu: - if ( - not self.cfg.optimization.use_bmuf - and self.cfg.distributed_training.ddp_backend != "slow_mo" - ): - self._check_grad_norms(grad_norm) - if not torch.isfinite(grad_norm).all(): - # in case of AMP, if gradients are Nan/Inf then - # optimizer step is still required - if self.cfg.common.amp: - overflow = True - else: - # check local gradnorm single GPU case, trigger NanDetector - raise FloatingPointError("gradients are Nan/Inf") - - with torch.autograd.profiler.record_function("optimizer"): - # take an optimization step - self.task.optimizer_step( - self.optimizer, model=self.model, update_num=self.get_num_updates() - ) - if self.cfg.common.amp and overflow: - if self._amp_retries == self.cfg.common.amp_batch_retries: - logger.info("AMP: skipping this batch.") - self._amp_retries = 0 - else: - self._amp_retries += 1 - return self.train_step(samples, raise_oom) # recursion to feed in same batch - - except FloatingPointError: - # re-run the forward and backward pass with hooks attached to print - # out where it fails - self.zero_grad() - with NanDetector(self.get_model()): - for _, sample in enumerate(samples): - sample, _ = self._prepare_sample(sample) - self.task.train_step( - sample, - self.model, - self.criterion, - self.optimizer, - self.get_num_updates(), - ignore_grad=False, - **extra_kwargs, - ) - raise - except OverflowError as e: - overflow = True - logger.info( - f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}" - ) - grad_norm = torch.tensor(0.0).cuda() - self.zero_grad() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - logger.error("OOM during optimization, irrecoverable") - raise e - - # Some distributed wrappers (e.g., SlowMo) need access to the optimizer - # after the step - if hasattr(self.model, "perform_additional_optimizer_actions"): - if hasattr(self.optimizer, "fp32_params"): - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer, self.optimizer.fp32_params - ) - else: - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer - ) - - logging_output = None - if not overflow or self.cfg.distributed_training.ddp_backend == "slow_mo": - self.set_num_updates(self.get_num_updates() + 1) - - if self.cfg.ema.store_ema: - # Step EMA forward with new model. 
- self.ema.step( - self.get_model(), - self.get_num_updates(), - ) - metrics.log_scalar( - "ema_decay", - self.ema.get_decay(), - priority=10000, - round=5, - weight=0, - ) - - if self.tpu: - import torch_xla.core.xla_model as xm - - # mark step on TPUs - self._xla_markstep_and_send_to_cpu() - - # only log stats every log_interval steps - # this causes wps to be misreported when log_interval > 1 - logging_output = {} - if self.get_num_updates() % self.cfg.common.log_interval == 0: - # log memory usage - mem_info = xm.get_memory_info(self.device) - gb_free = mem_info["kb_free"] / 1024 / 1024 - gb_total = mem_info["kb_total"] / 1024 / 1024 - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - metrics.log_scalar( - "gb_total", gb_total, priority=1600, round=1, weight=0 - ) - logging_outputs = self._xla_markstep_and_send_to_cpu( - logging_outputs - ) - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # log whenever there's an XLA compilation, since these - # slow down training and may indicate opportunities for - # optimization - self._check_xla_compilation() - else: - if self.cuda and self.cuda_env is not None: - # log minimum free memory over the iteration - gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024 - torch.cuda.reset_peak_memory_stats() - gb_free = self.cuda_env.total_memory_in_GB - gb_used - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - - # log stats - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # clear CUDA cache to reduce memory fragmentation - if ( - self.cuda - and self.cfg.common.empty_cache_freq > 0 - and ( - (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1) - % self.cfg.common.empty_cache_freq - ) - == 0 - ): - torch.cuda.empty_cache() - - if self.cfg.common.fp16 or self.cfg.common.amp: - metrics.log_scalar( - "loss_scale", - ( - self.optimizer.scaler.loss_scale - if self.cfg.common.fp16 - else self.optimizer.scaler.get_scale() - ), - priority=700, - round=4, - weight=0, - ) - - metrics.log_stop_time("train_wall") - return logging_output - - @metrics.aggregate("valid") - def valid_step(self, sample, raise_oom=False): - """Do forward pass in evaluation mode.""" - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("valid_step") # wait for all workers - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. 
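- # (Same pattern as in train_step above: when the task opts in via uses_ema, validation can be run against the EMA weights instead of the live model.)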
- extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - with torch.no_grad(): - self.model.eval() - self.criterion.eval() - - sample, is_dummy_batch = self._prepare_sample(sample) - - try: - _loss, sample_size, logging_output = self.task.valid_step( - sample, self.model, self.criterion, **extra_kwargs - ) - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if not raise_oom: - logger.warning( - "ran out of memory in validation step, retrying batch" - ) - for p in self.model.parameters(): - if p.grad is not None: - p.grad = None # free some memory - if self.cuda: - torch.cuda.empty_cache() - return self.valid_step(sample, raise_oom=True) - raise e - - logging_outputs = [logging_output] - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - logging_outputs, (sample_size,) = self._aggregate_logging_outputs( - logging_outputs, - sample_size, - ignore=is_dummy_batch, - ) - - # log validation stats - if self.tpu: - logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs) - logging_output = self._reduce_and_log_stats(logging_outputs, sample_size) - - return logging_output - - def zero_grad(self): - self.optimizer.zero_grad() - - def lr_step_begin_epoch(self, epoch): - """Adjust the learning rate at the beginning of the epoch.""" - self.lr_scheduler.step_begin_epoch(epoch) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_reinit(self, total_updates, num_updates): - self.lr_scheduler.reinit(total_updates, num_updates) - - def lr_step(self, epoch, val_loss=None): - """Adjust the learning rate at the end of the epoch.""" - self.lr_scheduler.step(epoch, val_loss) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step_update(self): - """Update the learning rate after each update.""" - new_lr = self.lr_scheduler.step_update(self.get_num_updates()) - if isinstance(new_lr, dict): - for k, v in new_lr.items(): - metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300) - new_lr = new_lr.get("default", next(iter(new_lr.values()))) - else: - metrics.log_scalar("lr", new_lr, weight=0, priority=300) - return new_lr - - def get_lr(self): - """Get the current learning rate.""" - return self.optimizer.get_lr() - - def get_model(self): - """Get the (non-wrapped) model instance.""" - return self._model - - def get_criterion(self): - """Get the (non-wrapped) criterion instance.""" - return self._criterion - - def get_meter(self, name): - """[deprecated] Get a specific meter by name.""" - from fairseq import meters - - if "get_meter" not in self._warn_once: - self._warn_once.add("get_meter") - utils.deprecation_warning( - "Trainer.get_meter is deprecated. Please use fairseq.metrics instead." 
- ) - - train_meters = metrics.get_meters("train") - if train_meters is None: - train_meters = {} - - if name == "train_loss" and "loss" in train_meters: - return train_meters["loss"] - elif name == "train_nll_loss": - # support for legacy train.py, which assumed this meter is - # always initialized - m = train_meters.get("nll_loss", None) - return m or meters.AverageMeter() - elif name == "wall": - # support for legacy train.py, which assumed this meter is - # always initialized - m = metrics.get_meter("default", "wall") - return m or meters.TimeMeter() - elif name == "wps": - m = metrics.get_meter("train", "wps") - return m or meters.TimeMeter() - elif name in {"valid_loss", "valid_nll_loss"}: - # support for legacy train.py, which assumed these meters - # are always initialized - k = name[len("valid_") :] - m = metrics.get_meter("valid", k) - return m or meters.AverageMeter() - elif name == "oom": - return meters.AverageMeter() - elif name in train_meters: - return train_meters[name] - return None - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - self.lr_step_update() - if self.quantizer: - self.quantizer.step_update(self._num_updates) - metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200) - - def clip_grad_norm(self, clip_norm): - def agg_norm_fn(total_norm): - total_norm = total_norm.cuda().float() ** 2 - total_norm = distributed_utils.all_reduce( - total_norm, group=self.data_parallel_process_group - ) - return total_norm ** 0.5 - - should_agg_norm = ( - self.is_fsdp - and ( - self.data_parallel_process_group is not None - or torch.distributed.is_initialized() - ) - ) - return self.optimizer.clip_grad_norm( - clip_norm, aggregate_norm_fn=agg_norm_fn if should_agg_norm else None - ) - - def cumulative_training_time(self): - if self._cumulative_training_time is None: - # single GPU - return self._local_cumulative_training_time() - else: - return self._cumulative_training_time - - def _local_cumulative_training_time(self): - """Aggregate training time in seconds.""" - return time.time() - self._start_time + self._previous_training_time - - def _fp_convert_sample(self, sample): - def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - def apply_bfloat16(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.bfloat16) - return t - - if self.cfg.common.fp16: - sample = utils.apply_to_sample(apply_half, sample) - - if self.cfg.common.bf16: - sample = utils.apply_to_sample(apply_bfloat16, sample) - - return sample - - def _prepare_sample(self, sample, is_dummy=False): - if sample == "DUMMY": - raise Exception( - "Trying to use an uninitialized 'dummy' batch. This usually indicates " - "that the total number of batches is smaller than the number of " - "participating GPUs. Try reducing the batch size or using fewer GPUs." - ) - - if sample is None or len(sample) == 0: - assert ( - self._dummy_batch is not None and len(self._dummy_batch) > 0 - ), "Invalid dummy batch: {}".format(self._dummy_batch) - sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True) - return sample, True - - # Given that PCIe/NVLink bandwidth is significantly smaller than DRAM bandwidth - # it makes sense to do the format conversion on the CPU and then transfer - # a smaller buffer to the device. This also saves GPU memory capacity. 
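- # e.g. an fp32 -> fp16 cast halves the bytes that must cross the bus, so when on_cpu_convert_precision is set the cast below runs before utils.move_to_cuda.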
- - if self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self.cuda: - if self.pipeline_model_parallel: - if 'target' in sample: - sample['target'] = utils.move_to_cuda(sample['target'], device=self.last_device) - else: - sample = utils.move_to_cuda(sample) - elif self.tpu and is_dummy: - # the dummy batch may not be on the appropriate device - sample = utils.move_to_cuda(sample, device=self.device) - - if not self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self._dummy_batch == "DUMMY": - self._dummy_batch = sample - - return sample, False - - def _set_seed(self): - # Set seed based on args.seed and the update number so that we get - # reproducible results when resuming from checkpoints - seed = self.cfg.common.seed + self.get_num_updates() - utils.set_torch_seed(seed) - - def _sync_stats(self): - # Return True if it's using multiple GPUs and DDP or multiple GPUs with - # BMUF and it's a bmuf sync with warmup iterations completed before. - if self.data_parallel_world_size == 1: - return False - elif self.cfg.optimization.use_bmuf: - return ( - self.get_num_updates() + 1 - ) % self.cfg.bmuf.global_sync_iter == 0 and ( - self.get_num_updates() + 1 - ) > self.cfg.bmuf.warmup_iterations - else: - return True - - def _log_oom(self, exc): - msg = "OOM: Ran out of memory with exception: {}".format(exc) - logger.warning(msg) - if torch.cuda.is_available() and hasattr(torch.cuda, "memory_summary"): - for device_idx in range(torch.cuda.device_count()): - logger.warning(torch.cuda.memory_summary(device=device_idx)) - sys.stderr.flush() - - def _aggregate_logging_outputs( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()): - return self._fast_stat_sync_sum( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - else: - return self._all_gather_list_sync( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - - def _all_gather_list_sync( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. all_gather_list_sync is - suitable when logging outputs are complex types. - """ - if self.tpu: - raise NotImplementedError - if ignore: - logging_outputs = [] - results = list( - zip( - *distributed_utils.all_gather_list( - [logging_outputs] + list(extra_stats_to_sum), - max_size=getattr(self.cfg.common, "all_gather_list_size", 16384), - group=self.data_parallel_process_group, - ) - ) - ) - logging_outputs, extra_stats_to_sum = results[0], results[1:] - logging_outputs = list(chain.from_iterable(logging_outputs)) - extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum] - return logging_outputs, extra_stats_to_sum - - def _fast_stat_sync_sum( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. fast_stat_sync_sum is - faster than all_gather_list_sync, but is only suitable when - logging outputs are scalars and can be summed. Note that - *logging_outputs* cannot contain any nested dicts/lists. 
- """ - data = {} - for i, stat in enumerate(extra_stats_to_sum): - data["extra_stats_" + str(i)] = stat - if len(logging_outputs) > 0: - log_keys = list(logging_outputs[0].keys()) - for k in log_keys: - if not ignore: - v = sum(log[k] for log in logging_outputs if k in log) - else: - v = logging_outputs[0][k] - v = torch.zeros_like(v) if torch.is_tensor(v) else 0 - data["logging_outputs_" + k] = v - else: - log_keys = None - - data = distributed_utils.all_reduce_dict( - data, device=self.device, group=self.data_parallel_process_group - ) - - extra_stats_to_sum = [ - data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum)) - ] - if log_keys is not None: - logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}] - else: - logging_outputs = [] - return logging_outputs, extra_stats_to_sum - - def _check_grad_norms(self, grad_norm): - """Check that grad norms are consistent across workers.""" - if self._grad_norm_buf is not None: - self._grad_norm_buf.zero_() - self._grad_norm_buf[self.data_parallel_rank] = grad_norm - distributed_utils.all_reduce( - self._grad_norm_buf, group=self.data_parallel_process_group - ) - - def is_consistent(tensor): - max_abs_diff = torch.max(torch.abs(tensor - tensor[0])) - return ( - (torch.isfinite(tensor).all() - and (max_abs_diff / (tensor[0] + 1e-6) < 1e-6).all()) - or - (self.cfg.common.amp and not torch.isfinite(tensor).all()) - # in case of amp non-finite grads are fine - ) - - if not is_consistent(self._grad_norm_buf): - pretty_detail = "\n".join( - "rank {:3d} = {:.8f}".format(r, n) - for r, n in enumerate(self._grad_norm_buf.tolist()) - ) - error_detail = "grad_norm across the workers:\n{}\n".format( - pretty_detail - ) - # use FloatingPointError to trigger NanDetector - raise FloatingPointError( - "Fatal error: gradients are inconsistent between workers. " - "Try --ddp-backend=legacy_ddp. " - "Or are you mixing up different generation of GPUs in training?" 
- + "\n" - + "-" * 80 - + "\n{}\n".format(error_detail) - + "-" * 80 - ) - - def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None): - if grad_norm is not None and ( - not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm) - ): - metrics.log_speed("ups", 1.0, priority=100, round=2) - metrics.log_scalar("gnorm", grad_norm, priority=400, round=3) - if self.cfg.optimization.clip_norm > 0: - metrics.log_scalar( - "clip", - torch.where( - grad_norm > self.cfg.optimization.clip_norm, - grad_norm.new_tensor(100), - grad_norm.new_tensor(0), - ), - priority=500, - round=1, - ) - - with metrics.aggregate() as agg: - if logging_outputs is not None: - self.task.reduce_metrics(logging_outputs, self.get_criterion()) - del logging_outputs - - # extra warning for criterions that don't properly log a loss value - if "loss" not in agg: - if "loss" not in self._warn_once: - self._warn_once.add("loss") - logger.warning( - "Criterion.reduce_metrics did not log a 'loss' value, " - "which may break some functionality" - ) - metrics.log_scalar("loss", -1) - - # support legacy interface - if self.tpu: - logging_output = {} - else: - logging_output = agg.get_smoothed_values() - logging_output["sample_size"] = sample_size - for key_to_delete in ["ppl", "wps", "wpb", "bsz"]: - if key_to_delete in logging_output: - del logging_output[key_to_delete] - return logging_output - - def _check_xla_compilation(self): - import torch_xla.debug.metrics as met - - compile_stats = met.metric_data("CompileTime") - if compile_stats is None: - return - num_xla_compiles = compile_stats[0] - if num_xla_compiles > self._num_xla_compiles: - logger.warning( - "XLA compilation detected on device #{}; too many of these can lead " - "to slow training, but we expect a few in the beginning".format( - self.cfg.distributed_training.distributed_rank - ) - ) - self._num_xla_compiles = num_xla_compiles - - def _xla_markstep_and_send_to_cpu(self, data=None): - import torch_xla.core.xla_model as xm - - xm.mark_step() - if data is not None: - from fairseq.utils import xla_device_to_cpu - - return xla_device_to_cpu(data) - - -def _catalog_shared_params(module, memo=None, prefix=""): - if memo is None: - first_call = True - memo = {} - else: - first_call = False - for name, param in module._parameters.items(): - param_prefix = prefix + ("." if prefix else "") + name - if param not in memo: - memo[param] = [] - memo[param].append(param_prefix) - for name, m in module._modules.items(): - if m is None: - continue - submodule_prefix = prefix + ("." 
if prefix else "") + name - _catalog_shared_params(m, memo, submodule_prefix) - if first_call: - return [x for x in memo.values() if len(x) > 1] - - -def _get_module_by_path(module, path): - path = path.split(".") - for name in path: - module = getattr(module, name) - return module - - -def _set_module_by_path(module, path, value): - path = path.split(".") - for name in path[:-1]: - module = getattr(module, name) - setattr(module, path[-1], value) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/structures/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/structures/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PHZane/emrwa/generate.py b/spaces/PHZane/emrwa/generate.py deleted file mode 100644 index 96bff115fc6580b7399c4adf40e7f8cf71bddcfc..0000000000000000000000000000000000000000 --- a/spaces/PHZane/emrwa/generate.py +++ /dev/null @@ -1,209 +0,0 @@ -#coding:utf-8 -import torch -import torch.nn.functional as F -import os -import argparse -from tqdm import trange -from transformers import GPT2LMHeadModel -import numpy as np -import random - - -class generate: - def __init__(self, model_name): - self.model_config = 'config/model_config_small.json' # 选择模型参数 - self.tokenizer_path = 'cache/vocab_small.txt' # 选择词库 - self.model_path = 'models/{}'.format(model_name) - self.save_path = 'generated/'.format(model_name) - self.articles_per_title = 5 # 每个标题生成多少篇文章 - self.titles = "入院初诊:" - self.Fix_seeds(1) # 设置随机种子 - self.main() # 文本生成 - - # Fix random seed for reproducibility - def Fix_seeds(self, seed): - torch.manual_seed(seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - np.random.seed(seed) - random.seed(seed) - torch.backends.cudnn.benchmark = False - torch.backends.cudnn.deterministic = True - - def is_word(self, word): - for item in list(word): - if item not in 'qwertyuiopasdfghjklzxcvbnm': - return False - return True - - def _is_chinese_char(self, char): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - cp = ord(char) - if ((cp >= 0x4E00 and cp <= 0x9FFF) or # - (cp >= 0x3400 and cp <= 0x4DBF) or # - (cp >= 0x20000 and cp <= 0x2A6DF) or # - (cp >= 0x2A700 and cp <= 0x2B73F) or # - (cp >= 0x2B740 and cp <= 0x2B81F) or # - (cp >= 0x2B820 and cp <= 0x2CEAF) or - (cp >= 0xF900 and cp <= 0xFAFF) or # - (cp >= 0x2F800 and cp <= 0x2FA1F)): # - return True - - return False - - def top_k_top_p_filtering(self, logits, top_k=0, top_p=0.0, filter_value=-float('Inf')): - """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering - Args: - logits: logits distribution shape (vocabulary size) - top_k > 0: keep only top k tokens with highest probability (top-k filtering). - top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering). - Nucleus filtering is described in Holtzman et al. 
(http://arxiv.org/abs/1904.09751) - From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317 - """ - assert logits.dim() == 1 # batch size 1 for now - could be updated for more but the code would be less clear - top_k = min(top_k, logits.size(-1)) # Safety check - if top_k > 0: - # Remove all tokens with a probability less than the last token of the top-k - indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None] - logits[indices_to_remove] = filter_value - - if top_p > 0.0: - sorted_logits, sorted_indices = torch.sort(logits, descending=True) - cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) - - # Remove tokens with cumulative probability above the threshold - sorted_indices_to_remove = cumulative_probs > top_p - # Shift the indices to the right to keep also the first token above the threshold - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone() - sorted_indices_to_remove[..., 0] = 0 - - indices_to_remove = sorted_indices[sorted_indices_to_remove] - logits[indices_to_remove] = filter_value - return logits - - def sample_sequence(self, model, context, length, n_ctx, tokenizer, temperature=1.0, top_k=30, top_p=0.0, repitition_penalty=1.0, - device='cpu'): - context = torch.tensor(context, dtype=torch.long, device=device) - context = context.unsqueeze(0) - generated = context - with torch.no_grad(): - for _ in trange(length): - inputs = {'input_ids': generated[0][-(n_ctx - 1):].unsqueeze(0)} - outputs = model( - **inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet (cached hidden-states) - next_token_logits = outputs[0][0, -1, :] - for id in set(generated): - next_token_logits[id] /= repitition_penalty - next_token_logits = next_token_logits / temperature - next_token_logits[tokenizer.convert_tokens_to_ids('[UNK]')] = -float('Inf') - filtered_logits = self.top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) - next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1) - generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) - return generated - - def main(self): - parser = argparse.ArgumentParser() - parser.add_argument('--device', default='0,1,2,3', type=str, required=False, help='设置使用哪些显卡') - parser.add_argument('--length', default=-1, type=int, required=False, help='生成长度') - parser.add_argument('--temperature', default=1, type=float, required=False, help='生成温度,越高越随机') - parser.add_argument('--topk', default=8, type=int, required=False, help='生成的时候最高几选一') - parser.add_argument('--topp', default=0, type=float, required=False, help='生成的时候积累概率最高多少') - parser.add_argument('--model_config', default=self.model_config, type=str, required=False, - help='模型参数路径') - parser.add_argument('--tokenizer_path', default=self.tokenizer_path, type=str, required=False, help='词表路径') - parser.add_argument('--model_path', default=self.model_path, type=str, required=False, help='模型路径') - parser.add_argument('--save_path', default=self.save_path, type=str, required=False, help='存放生成的文件的路径') - parser.add_argument('--articles_per_title', default=self.articles_per_title, type=int, required=False, help='每个标题生成多少篇文章') - parser.add_argument('--titles', default=self.titles, type=str, required=False, help='标题列表,是一个字符串,用空格分开') - parser.add_argument('--titles_file', default='', type=str, required=False, - help='标题列表文件,文件中每行一个标题。如果这个选项有值则titles无效') - parser.add_argument('--no_wordpiece', action='store_true', help='不做word piece切词') - parser.add_argument('--segment', 
action='store_true', help='中文以词为单位') - parser.add_argument('--repetition_penalty', default=1.0, type=float, required=False) - - args = parser.parse_args(args=[]) - print('args:\n' + args.__repr__()) - - if args.segment: - from tokenizations import tokenization_bert_word_level as tokenization_bert - else: - from tokenizations import tokenization_bert - - os.environ["CUDA_VISIBLE_DEVICES"] = args.device # 此处设置程序使用哪些显卡 - length = args.length - temperature = args.temperature - topk = args.topk - topp = args.topp - repetition_penalty = args.repetition_penalty - - titles = args.titles.split() # 列表,里面每个元素是一个生成的标题 - if args.titles_file: - with open(args.titles_file, 'r') as f: - titles = [line.strip('\n') for line in f.readlines()] - articles_per_title = args.articles_per_title # 这里定义一个标题生成多少篇文章 - save_path = args.save_path # 设置存到哪 - - device = "cuda" if torch.cuda.is_available() else "cpu" - - tokenizer = tokenization_bert.BertTokenizer(vocab_file=args.tokenizer_path) - model = GPT2LMHeadModel.from_pretrained(args.model_path) - model.to(device) - model.eval() - - n_ctx = model.config.n_ctx - - if not os.path.exists(save_path): - os.mkdir(save_path) - if length == -1: - length = model.config.n_ctx - - for i, title in enumerate(titles): - for j in range(articles_per_title): - with open(save_path + title.replace('入院初诊:', '') + '-' + str(j) + '.txt', 'w') as f: - context_tokens = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(title)) - generated = 0 - out = self.sample_sequence( - n_ctx=n_ctx, - model=model, length=length, - context=context_tokens, tokenizer=tokenizer, - temperature=temperature, top_k=topk, top_p=topp, repitition_penalty=repetition_penalty, - device=device - ) - out = out.tolist()[0] - - generated += 1 - text = tokenizer.convert_ids_to_tokens(out) - - for i, item in enumerate(text[:-1]): # 确保英文前后有空格 - if self.is_word(item) and self.is_word(text[i + 1]): - text[i] = item + ' ' - - for i, item in enumerate(text): - if item == '[MASK]': - text[i] = '' - if item == '[CLS]' or item == '[SEP]': - text[i] = '\n' - - print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40) - text = ''.join(text).replace('##', '').strip() - # text = ''.join(text.split('\n')[:-1]) - print(text) - f.write(text + '\n') - print("=" * 80) - - - - - - - - diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/upfirdn2d.cpp b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff 
--git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/domain_specific_deblur.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/domain_specific_deblur.py deleted file mode 100644 index e45dcd256b61ad94f92bbdc886f08459ab31c8a6..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/domain_specific_deblur.py +++ /dev/null @@ -1,89 +0,0 @@ -import argparse -from math import ceil, log10 -from pathlib import Path - -import torchvision -import yaml -from PIL import Image -from torch.nn import DataParallel -from torch.utils.data import DataLoader, Dataset - - -class Images(Dataset): - def __init__(self, root_dir, duplicates): - self.root_path = Path(root_dir) - self.image_list = list(self.root_path.glob("*.png")) - self.duplicates = ( - duplicates # Number of times to duplicate the image in the dataset to produce multiple HR images - ) - - def __len__(self): - return self.duplicates * len(self.image_list) - - def __getitem__(self, idx): - img_path = self.image_list[idx // self.duplicates] - image = torchvision.transforms.ToTensor()(Image.open(img_path)) - if self.duplicates == 1: - return image, img_path.stem - else: - return image, img_path.stem + f"_{(idx % self.duplicates)+1}" - - -parser = argparse.ArgumentParser(description="PULSE") - -# I/O arguments -parser.add_argument("--input_dir", type=str, default="imgs/blur_faces", help="input data directory") -parser.add_argument( - "--output_dir", type=str, default="experiments/domain_specific_deblur/results", help="output data directory" -) -parser.add_argument( - "--cache_dir", - type=str, - default="experiments/domain_specific_deblur/cache", - help="cache directory for model weights", -) -parser.add_argument( - "--yml_path", type=str, default="options/domain_specific_deblur/stylegan2.yml", help="configuration file" -) - -kwargs = vars(parser.parse_args()) - -with open(kwargs["yml_path"], "rb") as f: - opt = yaml.safe_load(f) - -dataset = Images(kwargs["input_dir"], duplicates=opt["duplicates"]) -out_path = Path(kwargs["output_dir"]) -out_path.mkdir(parents=True, exist_ok=True) - -dataloader = DataLoader(dataset, batch_size=opt["batch_size"]) - -if opt["stylegan_ver"] == 1: - from models.dsd.dsd_stylegan import DSDStyleGAN - - model = DSDStyleGAN(opt=opt, cache_dir=kwargs["cache_dir"]) -else: - from models.dsd.dsd_stylegan2 import DSDStyleGAN2 - - model = DSDStyleGAN2(opt=opt, cache_dir=kwargs["cache_dir"]) - -model = DataParallel(model) - -toPIL = torchvision.transforms.ToPILImage() - -for ref_im, ref_im_name in dataloader: - if opt["save_intermediate"]: - padding = ceil(log10(100)) - for i in range(opt["batch_size"]): - int_path_HR = Path(out_path / ref_im_name[i] / "HR") - int_path_LR = Path(out_path / ref_im_name[i] / "LR") - int_path_HR.mkdir(parents=True, exist_ok=True) - int_path_LR.mkdir(parents=True, exist_ok=True) - for j, (HR, LR) in enumerate(model(ref_im)): - for i in range(opt["batch_size"]): - toPIL(HR[i].cpu().detach().clamp(0, 1)).save(int_path_HR / f"{ref_im_name[i]}_{j:0{padding}}.png") - toPIL(LR[i].cpu().detach().clamp(0, 1)).save(int_path_LR / f"{ref_im_name[i]}_{j:0{padding}}.png") - else: - # out_im = model(ref_im,**kwargs) - for j, (HR, LR) in enumerate(model(ref_im)): - for i in range(opt["batch_size"]): - toPIL(HR[i].cpu().detach().clamp(0, 1)).save(out_path / f"{ref_im_name[i]}.png") diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/correlation.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/correlation.py deleted file 
mode 100644 index 3d0b79c301b29915dfaf4d2b1846c59be73127d3..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/correlation.py +++ /dev/null @@ -1,196 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import Tensor, nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['correlation_forward', 'correlation_backward']) - - -class CorrelationFunction(Function): - - @staticmethod - def forward(ctx, - input1, - input2, - kernel_size=1, - max_displacement=1, - stride=1, - padding=1, - dilation=1, - dilation_patch=1): - - ctx.save_for_backward(input1, input2) - - kH, kW = ctx.kernel_size = _pair(kernel_size) - patch_size = max_displacement * 2 + 1 - ctx.patch_size = patch_size - dH, dW = ctx.stride = _pair(stride) - padH, padW = ctx.padding = _pair(padding) - dilationH, dilationW = ctx.dilation = _pair(dilation) - dilation_patchH, dilation_patchW = ctx.dilation_patch = _pair( - dilation_patch) - - output_size = CorrelationFunction._output_size(ctx, input1) - - output = input1.new_zeros(output_size) - - ext_module.correlation_forward( - input1, - input2, - output, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input1, input2 = ctx.saved_tensors - - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilation_patchH, dilation_patchW = ctx.dilation_patch - dH, dW = ctx.stride - grad_input1 = torch.zeros_like(input1) - grad_input2 = torch.zeros_like(input2) - - ext_module.correlation_backward( - grad_output, - input1, - input2, - grad_input1, - grad_input2, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - return grad_input1, grad_input2, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input1): - iH, iW = input1.size(2), input1.size(3) - batch_size = input1.size(0) - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - dH, dW = ctx.stride - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilatedKH = (kH - 1) * dilationH + 1 - dilatedKW = (kW - 1) * dilationW + 1 - - oH = int((iH + 2 * padH - dilatedKH) / dH + 1) - oW = int((iW + 2 * padW - dilatedKW) / dW + 1) - - output_size = (batch_size, patch_size, patch_size, oH, oW) - return output_size - - -class Correlation(nn.Module): - r"""Correlation operator - - This correlation operator works for optical flow correlation computation. - - There are two batched tensors with shape :math:`(N, C, H, W)`, - and the correlation output's shape is :math:`(N, max\_displacement \times - 2 + 1, max\_displacement * 2 + 1, H_{out}, W_{out})` - - where - - .. math:: - H_{out} = \left\lfloor\frac{H_{in} + 2 \times padding - - dilation \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - .. 
math:: - W_{out} = \left\lfloor\frac{W_{in} + 2 \times padding - dilation - \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - the correlation item :math:`(N_i, dy, dx)` is formed by taking the sliding - window convolution between input1 and shifted input2, - - .. math:: - Corr(N_i, dx, dy) = - \sum_{c=0}^{C-1} - input1(N_i, c) \star - \mathcal{S}(input2(N_i, c), dy, dx) - - where :math:`\star` is the valid 2d sliding window convolution operator, - and :math:`\mathcal{S}` means shifting the input features (auto-complete - zero marginal), and :math:`dx, dy` are shifting distance, :math:`dx, dy \in - [-max\_displacement \times dilation\_patch, max\_displacement \times - dilation\_patch]`. - - Args: - kernel_size (int): The size of sliding window i.e. local neighborhood - representing the center points and involved in correlation - computation. Defaults to 1. - max_displacement (int): The radius for computing correlation volume, - but the actual working space can be dilated by dilation_patch. - Defaults to 1. - stride (int): The stride of the sliding blocks in the input spatial - dimensions. Defaults to 1. - padding (int): Zero padding added to all four sides of the input1. - Defaults to 0. - dilation (int): The spacing of local neighborhood that will involved - in correlation. Defaults to 1. - dilation_patch (int): The spacing between position need to compute - correlation. Defaults to 1. - """ - - def __init__(self, - kernel_size: int = 1, - max_displacement: int = 1, - stride: int = 1, - padding: int = 0, - dilation: int = 1, - dilation_patch: int = 1) -> None: - super().__init__() - self.kernel_size = kernel_size - self.max_displacement = max_displacement - self.stride = stride - self.padding = padding - self.dilation = dilation - self.dilation_patch = dilation_patch - - def forward(self, input1: Tensor, input2: Tensor) -> Tensor: - return CorrelationFunction.apply(input1, input2, self.kernel_size, - self.max_displacement, self.stride, - self.padding, self.dilation, - self.dilation_patch) - - def __repr__(self) -> str: - s = self.__class__.__name__ - s += f'(kernel_size={self.kernel_size}, ' - s += f'max_displacement={self.max_displacement}, ' - s += f'stride={self.stride}, ' - s += f'padding={self.padding}, ' - s += f'dilation={self.dilation}, ' - s += f'dilation_patch={self.dilation_patch})' - return s diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/registry.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/registry.py deleted file mode 100644 index c3204e14148fe3341307c5d24ba9154c07449511..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/registry.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. - - -def _register_generic(module_dict, module_name, module): - assert module_name not in module_dict - module_dict[module_name] = module - - -class Registry(dict): - ''' - A helper class for managing registering modules, it extends a dictionary - and provides a register functions. - - Eg. creeting a registry: - some_registry = Registry({"default": default_module}) - - There're two ways of registering new modules: - 1): normal way is just calling register function: - def foo(): - ... - some_registry.register("foo_module", foo) - 2): used as decorator when declaring the module: - @some_registry.register("foo_module") - @some_registry.register("foo_modeul_nickname") - def foo(): - ... 
- - Access of module is just like using a dictionary, eg: - f = some_registry["foo_modeul"] - ''' - def __init__(self, *args, **kwargs): - super(Registry, self).__init__(*args, **kwargs) - - def register(self, module_name, module=None): - # used as function call - if module is not None: - _register_generic(self, module_name, module) - return - - # used as decorator - def register_fn(fn): - _register_generic(self, module_name, fn) - return fn - - return register_fn diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/sanskrit.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/Potato-ML/Spaceship_Titanic/app.py b/spaces/Potato-ML/Spaceship_Titanic/app.py deleted file mode 100644 index 32e93e3624c87e43a7bb436ebe460939805fe0a7..0000000000000000000000000000000000000000 --- a/spaces/Potato-ML/Spaceship_Titanic/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!" - -demo = gr.Interface(fn=greet, inputs="text", outputs="text") - -demo.launch() \ No newline at end of file diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Qosmo/GPT-Infinite-Radio/app.py b/spaces/Qosmo/GPT-Infinite-Radio/app.py deleted file mode 100644 index 067f95ee69d168ea081c74b035e5363e8f4bb257..0000000000000000000000000000000000000000 --- a/spaces/Qosmo/GPT-Infinite-Radio/app.py +++ /dev/null @@ -1,15 +0,0 @@ -#%% -import git -import os -import subprocess -subprocess.call("git lfs install", shell=True) - -# Clone private repo -git_securet = os.environ["SECRET_GIT"] -git.Git().clone("https://%s@huggingface.co/spaces/qosmoinc/ChatGPT-Composer.git" % git_securet) - -# Run! 
-app_dir = os.path.join(os.getcwd(), "ChatGPT-Composer") -os.chdir(app_dir) -subprocess.call("python app.py", shell=True) -# %% diff --git a/spaces/RamAnanth1/Youtube-to-HF-Dataset/dataset/hf_dataset.py b/spaces/RamAnanth1/Youtube-to-HF-Dataset/dataset/hf_dataset.py deleted file mode 100644 index f7e60ea9578cdbfc8c0cf9e214f511c7f4b9854c..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/Youtube-to-HF-Dataset/dataset/hf_dataset.py +++ /dev/null @@ -1,39 +0,0 @@ -from abc import ABC, abstractmethod -from datasets import load_dataset, Dataset -from datasets.data_files import EmptyDatasetError - -class HFDataset(ABC): - """ - Create a dataset to save the transcripts from Youtube. - """ - def __init__(self, name) -> None: - self.name = name - if name != "": - self._init_dataset() - else: - self.dataset = Dataset.from_dict({}) - self.exist = False - self.is_empty = True - - @abstractmethod - def generate_dataset(): - pass - - def _init_dataset(self): - try: - self.dataset = load_dataset(self.name) - self.exist = True - self.is_empty = False - except EmptyDatasetError: - self.dataset = Dataset.from_dict({}) - self.exist = True - self.is_empty = True - pass - except FileNotFoundError: - self.dataset = Dataset.from_dict({}) - self.exist = False - self.is_empty = True - pass - - def upload(self, token): - self.dataset.push_to_hub(self.name, token = token) \ No newline at end of file diff --git a/spaces/Rbrq/DeticChatGPT/detic/data/datasets/lvis_v1.py b/spaces/Rbrq/DeticChatGPT/detic/data/datasets/lvis_v1.py deleted file mode 100644 index 4b9b279f17663def1c4913321efbb7490d591e90..0000000000000000000000000000000000000000 --- a/spaces/Rbrq/DeticChatGPT/detic/data/datasets/lvis_v1.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import os - -from fvcore.common.timer import Timer -from detectron2.structures import BoxMode -from fvcore.common.file_io import PathManager -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.lvis import get_lvis_instances_meta - -logger = logging.getLogger(__name__) - -__all__ = ["custom_load_lvis_json", "custom_register_lvis_instances"] - - -def custom_register_lvis_instances(name, metadata, json_file, image_root): - """ - """ - DatasetCatalog.register(name, lambda: custom_load_lvis_json( - json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, - evaluator_type="lvis", **metadata - ) - - -def custom_load_lvis_json(json_file, image_root, dataset_name=None): - ''' - Modifications: - use `file_name` - convert neg_category_ids - add pos_category_ids - ''' - from lvis import LVIS - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format( - json_file, timer.seconds())) - - catid2contid = {x['id']: i for i, x in enumerate( - sorted(lvis_api.dataset['categories'], key=lambda x: x['id']))} - if len(lvis_api.dataset['categories']) == 1203: - for x in lvis_api.dataset['categories']: - assert catid2contid[x['id']] == x['id'] - 1 - img_ids = sorted(lvis_api.imgs.keys()) - imgs = lvis_api.load_imgs(img_ids) - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), \ - "Annotation ids in '{}' are not unique".format(json_file) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in the LVIS v1 format from {}".format( - len(imgs_anns), json_file)) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - if "file_name" in img_dict: - file_name = img_dict["file_name"] - if img_dict["file_name"].startswith("COCO"): - file_name = file_name[-16:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'coco_url' in img_dict: - # e.g., http://images.cocodataset.org/train2017/000000391895.jpg - file_name = img_dict["coco_url"][30:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'tar_index' in img_dict: - record['tar_index'] = img_dict['tar_index'] - - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get( - "not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - # NOTE: modified by Xingyi: convert to 0-based - record["neg_category_ids"] = [ - catid2contid[x] for x in record["neg_category_ids"]] - if 'pos_category_ids' in img_dict: - record['pos_category_ids'] = [ - catid2contid[x] for x in img_dict.get("pos_category_ids", [])] - if 'captions' in img_dict: - record['captions'] = img_dict['captions'] - if 'caption_features' in img_dict: - record['caption_features'] = img_dict['caption_features'] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - if anno.get('iscrowd', 0) > 0: - continue - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - obj["category_id"] = catid2contid[anno['category_id']] - if 'segmentation' in anno: - segm = anno["segmentation"] - valid_segm = [poly for poly in segm \ - if len(poly) % 2 == 0 and len(poly) >= 6] - # assert len(segm) == len( 
- # valid_segm - # ), "Annotation contains an invalid polygon with < 3 points" - if not len(segm) == len(valid_segm): - print('Annotation contains an invalid polygon with < 3 points') - assert len(segm) > 0 - obj["segmentation"] = segm - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - return dataset_dicts - -_CUSTOM_SPLITS_LVIS = { - "lvis_v1_train+coco": ("coco/", "lvis/lvis_v1_train+coco_mask.json"), - "lvis_v1_train_norare": ("coco/", "lvis/lvis_v1_train_norare.json"), -} - - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items(): - custom_register_lvis_instances( - key, - get_lvis_instances_meta(key), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - - -def get_lvis_22k_meta(): - from .lvis_22k_categories import CATEGORIES - cat_ids = [k["id"] for k in CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["name"] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - -_CUSTOM_SPLITS_LVIS_22K = { - "lvis_v1_train_22k": ("coco/", "lvis/lvis_v1_train_lvis-22k.json"), -} - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS_22K.items(): - custom_register_lvis_instances( - key, - get_lvis_22k_meta(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/RealKintaro/Offensive-Speech-Detection-From-Arabic-Dialects/Deployment/app.py b/spaces/RealKintaro/Offensive-Speech-Detection-From-Arabic-Dialects/Deployment/app.py deleted file mode 100644 index e898d07b5e799868ffac4666475487ba7153b877..0000000000000000000000000000000000000000 --- a/spaces/RealKintaro/Offensive-Speech-Detection-From-Arabic-Dialects/Deployment/app.py +++ /dev/null @@ -1,292 +0,0 @@ -# Delete all objects from memory - -keys = list(globals().keys()) - -for o in keys: - if not o.startswith('_'): - print(o) - del globals()[o] - -# Imort from a file called Bert-medium.py - -from Bert_medium import MediumBert -from Offensive_Bert import BertClassifier -from data_cleaning import cleaning_content -from Dialect_Bert import Dialect_Detection - -import torch -device = torch.device("cpu") - - -from transformers import BertTokenizer, AutoTokenizer, BertTokenizerFast -import streamlit as st - -# file path -import os - -path_file = os.path.dirname(os.path.abspath(__file__)) -parent_path = os.path.dirname(path_file) - -##########################FUNCTIONS######################## - -def predict_off(review_text,model,device,tokenizer): - - encoded_review = tokenizer.encode_plus( - review_text, - max_length=256, - add_special_tokens=True, - return_token_type_ids=False, - padding='longest', - return_attention_mask=True, - return_tensors='pt', - ) - - input_ids = encoded_review['input_ids'].to(device) - attention_mask = encoded_review['attention_mask'].to(device) - output = model(input_ids, attention_mask) - _, prediction = torch.max(output, dim=1) - #print(f'Review text: {review_text}') - index = output.cpu().data.numpy().argmax() - #print(f'Sentiment : {index}') - # decode the output of the model to get the predicted label - pred = index - - return pred -#########################################"" -def predict_other(review_text,model,device,tokenizer): - - 
encoded_review = tokenizer.encode_plus( - review_text, - max_length=217, - add_special_tokens=True, - return_token_type_ids=False, - padding='longest', - return_attention_mask=True, - return_tensors='pt', - ) - - input_ids = encoded_review['input_ids'].to(device) - attention_mask = encoded_review['attention_mask'].to(device) - output = model(input_ids, attention_mask) - _, prediction = torch.max(output, dim=1) - #print(f'Review text: {review_text}') - index = output.cpu().data.numpy().argmax() - #print(f'Sentiment : {index}') - # decode the output of the model to get the predicted label - - return index -#########################"################## - -def predict_dialect(review_text,model,device,tokenizer): - - encoded_review = tokenizer.encode_plus( - review_text, - max_length=123, - add_special_tokens=True, - return_token_type_ids=False, - padding='longest', - return_attention_mask=True, - return_tensors='pt', - ) - - input_ids = encoded_review['input_ids'].to(device) - attention_mask = encoded_review['attention_mask'].to(device) - output = model(input_ids, attention_mask) - _, prediction = torch.max(output, dim=1) - #print(f'Review text: {review_text}') - index = output.cpu().data.numpy().argmax() - #print(f'Sentiment : {index}') - pred = index - return pred - - -# Main prediction function - -def predict(text,device,offensive_model,offensive_tokenizer,racism_model,misogyny_model,verbalabuse_model,dialect_model,religionhate_model,tokenizer_dialect,other_tokenizer,off_dictionary,racism_dict,misogyny_dict,verbalabuse_dict,dialect_dict,religionhate_dict): - # clean text - text = cleaning_content(text) - - # predict using offensive model - off_pred = off_dictionary[predict_off(text,offensive_model,device,offensive_tokenizer)] - - if off_pred == 'offensive': - # predict using racism model - rac_pred = racism_dict[predict_other(text,racism_model,device,other_tokenizer)] - # predict using misogyny model - misog_pred = misogyny_dict[predict_other(text,misogyny_model,device,other_tokenizer)] - # predict using verbal abuse model - ver_pred = verbalabuse_dict[predict_other(text,verbalabuse_model,device,other_tokenizer)] - # predict using dialect model - dialect_pred = dialect_dict[predict_dialect(text,dialect_model,device,tokenizer_dialect)] - # predict using religion hate model - Religion_Hate_pred = religionhate_dict[predict_other(text,religionhate_model,device,other_tokenizer)] - # return the prediction - return {"Offensiveness": off_pred, "Dialect": dialect_pred, "Misogyny": misog_pred, "Racism": rac_pred, "Verbal Abuse": ver_pred, "Religion Hate": Religion_Hate_pred} - - # predict using misogyny model - misog_pred = misogyny_dict[predict_other(text,misogyny_model,device,other_tokenizer)] - # predict using dialect model - dialect_pred = dialect_dict[predict_dialect(text,dialect_model,device,tokenizer_dialect)] - - # return the prediction as a dataframe row - return {"Offensiveness": off_pred, "Dialect": dialect_pred, "Misogyny": misog_pred, "Racism": "Not_Racism", "Verbal Abuse": "Not Verbal Abuse", "Religion Hate": "Not Religion Hate"} -############################################### - -from geopy.geocoders import Nominatim -import numpy as np -import pandas as pd - -geolocator = Nominatim(user_agent="NLP") - -def geolocate(country): - try: - # Geolocate the center of the country - loc = geolocator.geocode(country) - # And return latitude and longitude - return (loc.latitude, loc.longitude) - except: - # Return missing value - return np.nan - -# Stream lit app - -st.title("Arabic Hate Speech 
Detection") - -st.write("This app detects hate speech in Arabic dialect text") - -st.write("Please enter your text below") - - -# Session state -if 'Loaded' not in st.session_state: - st.markdown('### Loading models ...') - st.session_state['Loaded'] = False -else: - print('Model already loaded') - st.session_state['Loaded'] = True - - -if st.session_state['Loaded'] == False: - - # Offensiveness detection model - - offensive_model = BertClassifier() - offensive_model.load_state_dict(torch.load(os.path.join(parent_path,'models/modelv3.pt'), map_location=torch.device('cpu'))) - offensive_tokenizer = BertTokenizer.from_pretrained('aubmindlab/bert-base-arabertv02', do_lower_case=True) - - #send model to device - - offensive_model = offensive_model.to(device) - st.session_state['Offensive_model'] = offensive_model - st.session_state['Offensive_tokenizer'] = offensive_tokenizer - print('Offensive model loaded') - off_dictionary = {1: 'offensive', 0: 'non_offensive'} - st.session_state['Offensive_dictionary'] = off_dictionary - - ############################################################################################################################## - - # Other four models - - other_tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-medium-arabic") - st.session_state['Other_tokenizer'] = other_tokenizer - - racism_model,religionhate_model,verbalabuse_model,misogyny_model = MediumBert(),MediumBert(),MediumBert(),MediumBert() - ################################################################ - - racism_model.load_state_dict(torch.load(os.path.join(parent_path,'models/racism/racism_arabert.pt'), map_location=torch.device('cpu'))) - racism_dict = {0: 'non_racist', 1: 'racist'} - - racism_model = racism_model.to(device) - - st.session_state['Racism_model'] = racism_model - st.session_state['Racism_dictionary'] = racism_dict - - print('Racism model loaded') - ################################################################ - - religionhate_model.load_state_dict(torch.load(os.path.join(parent_path,'models/religion_hate/religion_hate_params.pt'), map_location=torch.device('cpu'))) - religionhate_dict = {0: 'Religion Hate', 1: 'Not Religion Hate'} - - religionhate_model = religionhate_model.to(device) - - st.session_state['Religion_hate_model'] = religionhate_model - st.session_state['Religion_hate_dictionary'] = religionhate_dict - - print('Religion Hate model loaded') - ################################################################ - - verbalabuse_model.load_state_dict(torch.load(os.path.join(parent_path,'models/verbal_abuse/verbal_abuse_arabert.pt'), map_location=torch.device('cpu'))) - verbalabuse_dict = {0: 'Verbal Abuse', 1: 'Not Verbal Abuse'} - - verbalabuse_model=verbalabuse_model.to(device) - - st.session_state['Verbal_abuse_model'] = verbalabuse_model - st.session_state['Verbal_abuse_dictionary'] = verbalabuse_dict - - print('Verbal Abuse model loaded') - ################################################################ - - misogyny_model.load_state_dict(torch.load(os.path.join(parent_path,'models/misogyny/misogyny.pt'), map_location=torch.device('cpu'))) - misogyny_dict = {0: 'misogyny', 1: 'non_misogyny'} - - misogyny_model=misogyny_model.to(device) - - st.session_state['Misogyny_model'] = misogyny_model - st.session_state['Misogyny_dictionary'] = misogyny_dict - - - print('Misogyny model loaded') - ################################################################ - - # Dialect detection model - - dialect_model = Dialect_Detection(10) - 
dialect_model.load_state_dict(torch.load(os.path.join(parent_path,'models/dialect_classifier.pt'), map_location=torch.device('cpu'))) - - dialect_model = dialect_model.to(device) - - st.session_state['Dialect_model'] = dialect_model - - print('Dialect model loaded') - - tokenizer_dialect = BertTokenizerFast.from_pretrained('alger-ia/dziribert') - - st.session_state['Dialect_tokenizer'] = tokenizer_dialect - - # load the model - dialect_dict = {0: 'lebanon', 1: 'egypt', 2: 'morocco', 3: 'tunisia', 4: 'algeria', 5: 'qatar', 6: 'iraq', 7: 'saudi arabia', 8: 'libya', 9: 'jordan'} - - st.session_state['Dialect_dictionary'] = dialect_dict - - st.session_state['Loaded'] = True - -text = st.text_area("Enter Text") - -if st.button("Predict") and text != '': - result = predict(text = text, device = device, - offensive_model= st.session_state['Offensive_model'], - offensive_tokenizer= st.session_state['Offensive_tokenizer'], - racism_model= st.session_state['Racism_model'], - misogyny_model=st.session_state['Misogyny_model'], - verbalabuse_model= st.session_state['Verbal_abuse_model'], - dialect_model=st.session_state['Dialect_model'], - religionhate_model=st.session_state['Religion_hate_model'], - tokenizer_dialect=st.session_state['Dialect_tokenizer'], - other_tokenizer=st.session_state['Other_tokenizer'], - off_dictionary=st.session_state['Offensive_dictionary'], - racism_dict=st.session_state['Racism_dictionary'], - misogyny_dict=st.session_state['Misogyny_dictionary'], - verbalabuse_dict=st.session_state['Verbal_abuse_dictionary'], - dialect_dict=st.session_state['Dialect_dictionary'], - religionhate_dict=st.session_state['Religion_hate_dictionary']) - - st.write(result) - - location = geolocate(result['Dialect']) - - # map with contry highlited - location = pd.DataFrame({'lat': [location[0]], 'lon': [location[1]]}) - st.map(data= location , zoom=5) - -elif text == '': - st.write('Please enter text to predict') diff --git a/spaces/Realcat/image-matching-webui/hloc/extractors/darkfeat.py b/spaces/Realcat/image-matching-webui/hloc/extractors/darkfeat.py deleted file mode 100644 index 80cee30c2327e49efea8ad615496c992a9c6291e..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/extractors/darkfeat.py +++ /dev/null @@ -1,60 +0,0 @@ -import sys -from pathlib import Path -import subprocess -from ..utils.base_model import BaseModel -from .. import logger - -darkfeat_path = Path(__file__).parent / "../../third_party/DarkFeat" -sys.path.append(str(darkfeat_path)) -from darkfeat import DarkFeat as DarkFeat_ - - -class DarkFeat(BaseModel): - default_conf = { - "model_name": "DarkFeat.pth", - "max_keypoints": 1000, - "detection_threshold": 0.5, - "sub_pixel": False, - } - weight_urls = { - "DarkFeat.pth": "https://drive.google.com/uc?id=1Thl6m8NcmQ7zSAF-1_xaFs3F4H8UU6HX&confirm=t", - } - proxy = "http://localhost:1080" - required_inputs = ["image"] - - def _init(self, conf): - model_path = darkfeat_path / "checkpoints" / conf["model_name"] - link = self.weight_urls[conf["model_name"]] - if not model_path.exists(): - model_path.parent.mkdir(exist_ok=True) - cmd_wo_proxy = ["gdown", link, "-O", str(model_path)] - cmd = ["gdown", link, "-O", str(model_path), "--proxy", self.proxy] - logger.info( - f"Downloading the DarkFeat model with `{cmd_wo_proxy}`." 
- ) - try: - subprocess.run(cmd_wo_proxy, check=True) - except subprocess.CalledProcessError as e: - logger.info(f"Downloading the DarkFeat model with `{cmd}`.") - try: - subprocess.run(cmd, check=True) - except subprocess.CalledProcessError as e: - logger.error(f"Failed to download the DarkFeat model.") - raise e - - self.net = DarkFeat_(model_path) - - def _forward(self, data): - pred = self.net({"image": data["image"]}) - keypoints = pred["keypoints"] - descriptors = pred["descriptors"] - scores = pred["scores"] - idxs = scores.argsort()[-self.conf["max_keypoints"] or None :] - keypoints = keypoints[idxs, :2] - descriptors = descriptors[:, idxs] - scores = scores[idxs] - return { - "keypoints": keypoints[None], # 1 x N x 2 - "scores": scores[None], # 1 x N - "descriptors": descriptors[None], # 1 x 128 x N - } diff --git a/spaces/Realcat/image-matching-webui/hloc/match_features.py b/spaces/Realcat/image-matching-webui/hloc/match_features.py deleted file mode 100644 index b22ba461342dac33916b2bbb5c25dd647e6b7c2e..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/match_features.py +++ /dev/null @@ -1,405 +0,0 @@ -import argparse -from typing import Union, Optional, Dict, List, Tuple -from pathlib import Path -import pprint -from queue import Queue -from threading import Thread -from functools import partial -from tqdm import tqdm -import h5py -import torch - -from . import matchers, logger -from .utils.base_model import dynamic_load -from .utils.parsers import names_to_pair, names_to_pair_old, parse_retrieval -import numpy as np - -""" -A set of standard configurations that can be directly selected from the command -line using their name. Each is a dictionary with the following entries: - - output: the name of the match file that will be generated. - - model: the model configuration, as passed to a feature matcher. 
-""" -confs = { - "superglue": { - "output": "matches-superglue", - "model": { - "name": "superglue", - "weights": "outdoor", - "sinkhorn_iterations": 50, - "match_threshold": 0.2, - }, - "preprocessing": { - "grayscale": True, - "resize_max": 1024, - "dfactor": 8, - "force_resize": False, - }, - }, - "superglue-fast": { - "output": "matches-superglue-it5", - "model": { - "name": "superglue", - "weights": "outdoor", - "sinkhorn_iterations": 5, - "match_threshold": 0.2, - }, - }, - "superpoint-lightglue": { - "output": "matches-lightglue", - "model": { - "name": "lightglue", - "match_threshold": 0.2, - "width_confidence": 0.99, # for point pruning - "depth_confidence": 0.95, # for early stopping, - "features": "superpoint", - "model_name": "superpoint_lightglue.pth", - }, - "preprocessing": { - "grayscale": True, - "resize_max": 1024, - "dfactor": 8, - "force_resize": False, - }, - }, - "disk-lightglue": { - "output": "matches-lightglue", - "model": { - "name": "lightglue", - "match_threshold": 0.2, - "width_confidence": 0.99, # for point pruning - "depth_confidence": 0.95, # for early stopping, - "features": "disk", - "model_name": "disk_lightglue.pth", - }, - "preprocessing": { - "grayscale": True, - "resize_max": 1024, - "dfactor": 8, - "force_resize": False, - }, - }, - "sgmnet": { - "output": "matches-sgmnet", - "model": { - "name": "sgmnet", - "seed_top_k": [256, 256], - "seed_radius_coe": 0.01, - "net_channels": 128, - "layer_num": 9, - "head": 4, - "seedlayer": [0, 6], - "use_mc_seeding": True, - "use_score_encoding": False, - "conf_bar": [1.11, 0.1], - "sink_iter": [10, 100], - "detach_iter": 1000000, - "match_threshold": 0.2, - }, - "preprocessing": { - "grayscale": True, - "resize_max": 1024, - "dfactor": 8, - "force_resize": False, - }, - }, - "NN-superpoint": { - "output": "matches-NN-mutual-dist.7", - "model": { - "name": "nearest_neighbor", - "do_mutual_check": True, - "distance_threshold": 0.7, - "match_threshold": 0.2, - }, - }, - "NN-ratio": { - "output": "matches-NN-mutual-ratio.8", - "model": { - "name": "nearest_neighbor", - "do_mutual_check": True, - "ratio_threshold": 0.8, - "match_threshold": 0.2, - }, - }, - "NN-mutual": { - "output": "matches-NN-mutual", - "model": { - "name": "nearest_neighbor", - "do_mutual_check": True, - "match_threshold": 0.2, - }, - }, - "Dual-Softmax": { - "output": "matches-Dual-Softmax", - "model": { - "name": "dual_softmax", - "do_mutual_check": True, - "match_threshold": 0.2, # TODO - }, - }, - "adalam": { - "output": "matches-adalam", - "model": { - "name": "adalam", - "match_threshold": 0.2, - }, - }, -} - - -class WorkQueue: - def __init__(self, work_fn, num_threads=1): - self.queue = Queue(num_threads) - self.threads = [ - Thread(target=self.thread_fn, args=(work_fn,)) - for _ in range(num_threads) - ] - for thread in self.threads: - thread.start() - - def join(self): - for thread in self.threads: - self.queue.put(None) - for thread in self.threads: - thread.join() - - def thread_fn(self, work_fn): - item = self.queue.get() - while item is not None: - work_fn(item) - item = self.queue.get() - - def put(self, data): - self.queue.put(data) - - -class FeaturePairsDataset(torch.utils.data.Dataset): - def __init__(self, pairs, feature_path_q, feature_path_r): - self.pairs = pairs - self.feature_path_q = feature_path_q - self.feature_path_r = feature_path_r - - def __getitem__(self, idx): - name0, name1 = self.pairs[idx] - data = {} - with h5py.File(self.feature_path_q, "r") as fd: - grp = fd[name0] - for k, v in grp.items(): - data[k + 
"0"] = torch.from_numpy(v.__array__()).float() - # some matchers might expect an image but only use its size - data["image0"] = torch.empty((1,) + tuple(grp["image_size"])[::-1]) - with h5py.File(self.feature_path_r, "r") as fd: - grp = fd[name1] - for k, v in grp.items(): - data[k + "1"] = torch.from_numpy(v.__array__()).float() - data["image1"] = torch.empty((1,) + tuple(grp["image_size"])[::-1]) - return data - - def __len__(self): - return len(self.pairs) - - -def writer_fn(inp, match_path): - pair, pred = inp - with h5py.File(str(match_path), "a", libver="latest") as fd: - if pair in fd: - del fd[pair] - grp = fd.create_group(pair) - matches = pred["matches0"][0].cpu().short().numpy() - grp.create_dataset("matches0", data=matches) - if "matching_scores0" in pred: - scores = pred["matching_scores0"][0].cpu().half().numpy() - grp.create_dataset("matching_scores0", data=scores) - - -def main( - conf: Dict, - pairs: Path, - features: Union[Path, str], - export_dir: Optional[Path] = None, - matches: Optional[Path] = None, - features_ref: Optional[Path] = None, - overwrite: bool = False, -) -> Path: - if isinstance(features, Path) or Path(features).exists(): - features_q = features - if matches is None: - raise ValueError( - "Either provide both features and matches as Path" - " or both as names." - ) - else: - if export_dir is None: - raise ValueError( - "Provide an export_dir if features is not" - f" a file path: {features}." - ) - features_q = Path(export_dir, features + ".h5") - if matches is None: - matches = Path( - export_dir, f'{features}_{conf["output"]}_{pairs.stem}.h5' - ) - - if features_ref is None: - features_ref = features_q - match_from_paths(conf, pairs, matches, features_q, features_ref, overwrite) - - return matches - - -def find_unique_new_pairs(pairs_all: List[Tuple[str]], match_path: Path = None): - """Avoid to recompute duplicates to save time.""" - pairs = set() - for i, j in pairs_all: - if (j, i) not in pairs: - pairs.add((i, j)) - pairs = list(pairs) - if match_path is not None and match_path.exists(): - with h5py.File(str(match_path), "r", libver="latest") as fd: - pairs_filtered = [] - for i, j in pairs: - if ( - names_to_pair(i, j) in fd - or names_to_pair(j, i) in fd - or names_to_pair_old(i, j) in fd - or names_to_pair_old(j, i) in fd - ): - continue - pairs_filtered.append((i, j)) - return pairs_filtered - return pairs - - -@torch.no_grad() -def match_from_paths( - conf: Dict, - pairs_path: Path, - match_path: Path, - feature_path_q: Path, - feature_path_ref: Path, - overwrite: bool = False, -) -> Path: - logger.info( - "Matching local features with configuration:" - f"\n{pprint.pformat(conf)}" - ) - - if not feature_path_q.exists(): - raise FileNotFoundError(f"Query feature file {feature_path_q}.") - if not feature_path_ref.exists(): - raise FileNotFoundError(f"Reference feature file {feature_path_ref}.") - match_path.parent.mkdir(exist_ok=True, parents=True) - - assert pairs_path.exists(), pairs_path - pairs = parse_retrieval(pairs_path) - pairs = [(q, r) for q, rs in pairs.items() for r in rs] - pairs = find_unique_new_pairs(pairs, None if overwrite else match_path) - if len(pairs) == 0: - logger.info("Skipping the matching.") - return - - device = "cuda" if torch.cuda.is_available() else "cpu" - Model = dynamic_load(matchers, conf["model"]["name"]) - model = Model(conf["model"]).eval().to(device) - - dataset = FeaturePairsDataset(pairs, feature_path_q, feature_path_ref) - loader = torch.utils.data.DataLoader( - dataset, num_workers=5, batch_size=1, 
shuffle=False, pin_memory=True - ) - writer_queue = WorkQueue(partial(writer_fn, match_path=match_path), 5) - - for idx, data in enumerate(tqdm(loader, smoothing=0.1)): - data = { - k: v if k.startswith("image") else v.to(device, non_blocking=True) - for k, v in data.items() - } - pred = model(data) - pair = names_to_pair(*pairs[idx]) - writer_queue.put((pair, pred)) - writer_queue.join() - logger.info("Finished exporting matches.") - - -def scale_keypoints(kpts, scale): - if np.any(scale != 1.0): - kpts *= kpts.new_tensor(scale) - return kpts - - -@torch.no_grad() -def match_images(model, feat0, feat1): - # forward pass to match keypoints - desc0 = feat0["descriptors"][0] - desc1 = feat1["descriptors"][0] - if len(desc0.shape) == 2: - desc0 = desc0.unsqueeze(0) - if len(desc1.shape) == 2: - desc1 = desc1.unsqueeze(0) - if isinstance(feat0["keypoints"], list): - feat0["keypoints"] = feat0["keypoints"][0][None] - if isinstance(feat1["keypoints"], list): - feat1["keypoints"] = feat1["keypoints"][0][None] - - pred = model( - { - "image0": feat0["image"], - "keypoints0": feat0["keypoints"], - "scores0": feat0["scores"][0].unsqueeze(0), - "descriptors0": desc0, - "image1": feat1["image"], - "keypoints1": feat1["keypoints"], - "scores1": feat1["scores"][0].unsqueeze(0), - "descriptors1": desc1, - } - ) - pred = { - k: v.cpu().detach()[0] if isinstance(v, torch.Tensor) else v - for k, v in pred.items() - } - kpts0, kpts1 = ( - feat0["keypoints"][0].cpu().numpy(), - feat1["keypoints"][0].cpu().numpy(), - ) - matches, confid = pred["matches0"], pred["matching_scores0"] - # Keep the matching keypoints. - valid = matches > -1 - mkpts0 = kpts0[valid] - mkpts1 = kpts1[matches[valid]] - mconfid = confid[valid] - # rescale the keypoints to their original size - s0 = feat0["original_size"] / feat0["size"] - s1 = feat1["original_size"] / feat1["size"] - kpts0_origin = scale_keypoints(torch.from_numpy(kpts0 + 0.5), s0) - 0.5 - kpts1_origin = scale_keypoints(torch.from_numpy(kpts1 + 0.5), s1) - 0.5 - - mkpts0_origin = scale_keypoints(torch.from_numpy(mkpts0 + 0.5), s0) - 0.5 - mkpts1_origin = scale_keypoints(torch.from_numpy(mkpts1 + 0.5), s1) - 0.5 - - ret = { - "image0_orig": feat0["image_orig"], - "image1_orig": feat1["image_orig"], - "keypoints0": kpts0_origin.numpy(), - "keypoints1": kpts1_origin.numpy(), - "keypoints0_orig": mkpts0_origin.numpy(), - "keypoints1_orig": mkpts1_origin.numpy(), - "mconf": mconfid, - } - del feat0, feat1, desc0, desc1, kpts0, kpts1, kpts0_origin, kpts1_origin - torch.cuda.empty_cache() - - return ret - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--pairs", type=Path, required=True) - parser.add_argument("--export_dir", type=Path) - parser.add_argument( - "--features", type=str, default="feats-superpoint-n4096-r1024" - ) - parser.add_argument("--matches", type=Path) - parser.add_argument( - "--conf", type=str, default="superglue", choices=list(confs.keys()) - ) - args = parser.parse_args() - main(confs[args.conf], args.pairs, args.features, args.export_dir) diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/fine_matching.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/fine_matching.py deleted file mode 100644 index 7156e3e1f22e2e341062565e5ad6baee41dd9bc6..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/fine_matching.py +++ /dev/null @@ -1,100 +0,0 @@ -import math -import torch -import torch.nn 
as nn -import torch.nn.functional as F - -from kornia.geometry.subpix import dsnt -from kornia.utils.grid import create_meshgrid - - -class FineMatching(nn.Module): - """FineMatching with s2d paradigm""" - - def __init__(self): - super().__init__() - - def forward(self, feat_f0, feat_f1, data): - """ - Args: - feat0 (torch.Tensor): [M, WW, C] - feat1 (torch.Tensor): [M, WW, C] - data (dict) - Update: - data (dict):{ - 'expec_f' (torch.Tensor): [M, 3], - 'mkpts0_f' (torch.Tensor): [M, 2], - 'mkpts1_f' (torch.Tensor): [M, 2]} - """ - M, WW, C = feat_f0.shape - W = int(math.sqrt(WW)) - scale = data["hw0_i"][0] / data["hw0_f"][0] - self.M, self.W, self.WW, self.C, self.scale = M, W, WW, C, scale - - # corner case: if no coarse matches found - if M == 0: - assert ( - self.training == False - ), "M is always >0, when training, see coarse_matching.py" - # logger.warning('No matches found in coarse-level.') - data.update( - { - "expec_f": torch.empty(0, 3, device=feat_f0.device), - "mkpts0_f": data["mkpts0_c"], - "mkpts1_f": data["mkpts1_c"], - } - ) - return - - feat_f0_picked = feat_f0[:, WW // 2, :] - - sim_matrix = torch.einsum("mc,mrc->mr", feat_f0_picked, feat_f1) - softmax_temp = 1.0 / C**0.5 - heatmap = torch.softmax(softmax_temp * sim_matrix, dim=1) - feat_f1_picked = (feat_f1 * heatmap.unsqueeze(-1)).sum(dim=1) # [M, C] - heatmap = heatmap.view(-1, W, W) - - # compute coordinates from heatmap - coords1_normalized = dsnt.spatial_expectation2d(heatmap[None], True)[ - 0 - ] # [M, 2] - grid_normalized = create_meshgrid(W, W, True, heatmap.device).reshape( - 1, -1, 2 - ) # [1, WW, 2] - - # compute std over - var = ( - torch.sum(grid_normalized**2 * heatmap.view(-1, WW, 1), dim=1) - - coords1_normalized**2 - ) # [M, 2] - std = torch.sum( - torch.sqrt(torch.clamp(var, min=1e-10)), -1 - ) # [M] clamp needed for numerical stability - - # for fine-level supervision - data.update( - { - "expec_f": torch.cat([coords1_normalized, std.unsqueeze(1)], -1), - "descriptors0": feat_f0_picked.detach(), - "descriptors1": feat_f1_picked.detach(), - } - ) - - # compute absolute kpt coords - self.get_fine_match(coords1_normalized, data) - - @torch.no_grad() - def get_fine_match(self, coords1_normed, data): - W, WW, C, scale = self.W, self.WW, self.C, self.scale - - # mkpts0_f and mkpts1_f - # scale0 = scale * data['scale0'][data['b_ids']] if 'scale0' in data else scale - mkpts0_f = data[ - "mkpts0_c" - ] # + (coords0_normed * (W // 2) * scale0 )[:len(data['mconf'])] - scale1 = scale * data["scale1"][data["b_ids"]] if "scale1" in data else scale - mkpts1_f = ( - data["mkpts1_c"] - + (coords1_normed * (W // 2) * scale1)[: len(data["mconf"])] - ) - - data.update({"mkpts0_f": mkpts0_f, "mkpts1_f": mkpts1_f}) diff --git a/spaces/Redgon/bingo/src/pages/api/sydney.ts b/spaces/Redgon/bingo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - 
const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/readme.md b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/readme.md deleted file mode 100644 index c0f2bce780fe2d7a9239c944b165eee7bcdeb9cb..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/readme.md +++ /dev/null @@ -1,7 +0,0 @@ -# StyleGAN 2 in PyTorch - -Implementation of Analyzing and Improving the Image Quality of StyleGAN (https://arxiv.org/abs/1912.04958) in PyTorch - -Fork from [https://github.com/rosinality/stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch) - -In VToonify, we modify it to accept z+ latent codes. diff --git a/spaces/Riksarkivet/htr_demo/helper/text/overview/changelog_roadmap/changelog.md b/spaces/Riksarkivet/htr_demo/helper/text/overview/changelog_roadmap/changelog.md deleted file mode 100644 index 25ff1d03257c4bc5355eb9f58d648e643e8f119c..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/helper/text/overview/changelog_roadmap/changelog.md +++ /dev/null @@ -1,14 +0,0 @@ -## Changelog - -All notable changes to HTRFLOW will be documented here. - -### [0.1.0] - 2023-11-08 - -#### Added - -- Support for TROCR -> Latin and Eng model -- New feature! 
Compare different runs with GT, see tab **Fast track** > **Compare** - -#### Fixed - -- Fixed bug for Docker and running app locally, [issue](https://github.com/Riksarkivet/HTRFLOW/issues/2) diff --git a/spaces/RitaParadaRamos/SmallCapDemo/app.py b/spaces/RitaParadaRamos/SmallCapDemo/app.py deleted file mode 100644 index 45a9368c3774beb36dad8dbf1b7d543cd2e29bad..0000000000000000000000000000000000000000 --- a/spaces/RitaParadaRamos/SmallCapDemo/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import requests - -import gradio as gr - -import torch -from transformers import ViTFeatureExtractor, AutoTokenizer, CLIPFeatureExtractor, AutoModel, AutoModelForCausalLM -from transformers.models.auto.configuration_auto import AutoConfig -from src.vision_encoder_decoder import SmallCap, SmallCapConfig -from src.gpt2 import ThisGPT2Config, ThisGPT2LMHeadModel -from src.utils import prep_strings, postprocess_preds -import json - -from src.retrieve_caps import * -from PIL import Image -from torchvision import transforms - -from src.opt import ThisOPTConfig, ThisOPTForCausalLM - - -device = "cuda" if torch.cuda.is_available() else "cpu" - -# load feature extractor -feature_extractor = CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32") - -# load and configure tokenizer -tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m") -tokenizer.pad_token = '!' -tokenizer.eos_token = '.' - -# load model -# AutoConfig.register("this_gpt2", ThisGPT2Config) -# AutoModel.register(ThisGPT2Config, ThisGPT2LMHeadModel) -# AutoModelForCausalLM.register(ThisGPT2Config, ThisGPT2LMHeadModel) -# AutoConfig.register("smallcap", SmallCapConfig) -# AutoModel.register(SmallCapConfig, SmallCap) -# model = AutoModel.from_pretrained("Yova/SmallCap7M") - - -AutoConfig.register("this_opt", ThisOPTConfig) -AutoModel.register(ThisOPTConfig, ThisOPTForCausalLM) -AutoModelForCausalLM.register(ThisOPTConfig, ThisOPTForCausalLM) -AutoConfig.register("smallcap", SmallCapConfig) -AutoModel.register(SmallCapConfig, SmallCap) -model = AutoModel.from_pretrained("Yova/SmallCapOPT7M") - -model= model.to(device) - -template = open('src/template.txt').read().strip() + ' ' - -# precompute captions for retrieval -captions = json.load(open('coco_index_captions.json')) -retrieval_model, feature_extractor_retrieval = clip.load("RN50x64", device=device) -retrieval_index = faiss.read_index('coco_index') -#res = faiss.StandardGpuResources() -#retrieval_index = faiss.index_cpu_to_gpu(res, 0, retrieval_index) - -# Download human-readable labels for ImageNet. 
-response = requests.get("https://git.io/JJkYN") -labels = response.text.split("\n") - - -def retrieve_caps(image_embedding, index, k=4): - xq = image_embedding.astype(np.float32) - faiss.normalize_L2(xq) - D, I = index.search(xq, k) - return I - -def classify_image(image): - inp = transforms.ToTensor()(image) - - pixel_values_retrieval = feature_extractor_retrieval(image).to(device) - with torch.no_grad(): - image_embedding = retrieval_model.encode_image(pixel_values_retrieval.unsqueeze(0)).cpu().numpy() - - nns = retrieve_caps(image_embedding, retrieval_index)[0] - caps = [captions[i] for i in nns][:4] - - # prepare prompt - decoder_input_ids = prep_strings('', tokenizer, template=template, retrieved_caps=caps, k=4, is_test=True) - - # generate caption - pixel_values = feature_extractor(image, return_tensors="pt").pixel_values - with torch.no_grad(): - pred = model.generate(pixel_values.to(device), - decoder_input_ids=torch.tensor([decoder_input_ids]).to(device), - max_new_tokens=25, no_repeat_ngram_size=0, length_penalty=0, - min_length=1, num_beams=3, eos_token_id=tokenizer.eos_token_id) - #inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) - #prediction = inception_net.predict(inp).flatten() - retrieved_caps="Retrieved captions: \n{}\n{}\n{}\n{}".format(*caps) - #return retrieved_caps + "\n\n\n Generated caption:\n" + str(postprocess_preds(tokenizer.decode(pred[0]), tokenizer)) - - return str(postprocess_preds(tokenizer.decode(pred[0]), tokenizer)) + "\n\n\n"+ retrieved_caps - -image = gr.Image(type="pil") - -textbox = gr.Textbox(placeholder="Generated caption and retrieved captions...", lines=4) - -title = "SmallCap Demo" -gr.Interface( - fn=classify_image, inputs=image, outputs=textbox, title=title -).launch() \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test_config_g.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test_config_g.py deleted file mode 100644 index 365549336f936ea2865640d1c467f1936be2157c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test_config_g.py +++ /dev/null @@ -1,49 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from UniFormer repo: From https://github.com/Sense-X/UniFormer - * Apache-2.0 license -''' -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[5, 8, 20, 7], - head_dim=64, - drop_path_rate=0.4, - windows=False, - hybrid=False, - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/__init__.py deleted file mode 100644 index 2051b85f7e59bff7bdbaa131849ce8cd31f059a4..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/spaces/Rongjiehuang/GenerSpeech/tasks/tts/tts_utils.py b/spaces/Rongjiehuang/GenerSpeech/tasks/tts/tts_utils.py deleted file mode 100644 index e13439ee72e4fda220605c5868b3159110d9129b..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/tasks/tts/tts_utils.py +++ /dev/null @@ -1,54 +0,0 @@ -import importlib - -from data_gen.tts.base_binarizer import BaseBinarizer -from data_gen.tts.base_preprocess import BasePreprocessor -from data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls -from utils.hparams import hparams - - -def parse_dataset_configs(): - max_tokens = hparams['max_tokens'] - max_sentences = hparams['max_sentences'] - max_valid_tokens = hparams['max_valid_tokens'] - if max_valid_tokens == -1: - hparams['max_valid_tokens'] = max_valid_tokens = max_tokens - max_valid_sentences = hparams['max_valid_sentences'] - if max_valid_sentences == -1: - hparams['max_valid_sentences'] = max_valid_sentences = max_sentences - return max_tokens, max_sentences, max_valid_tokens, max_valid_sentences - - -def parse_mel_losses(): - mel_losses = hparams['mel_losses'].split("|") - loss_and_lambda = {} - for i, l in enumerate(mel_losses): - if l == '': - continue - if ':' in l: - l, lbd = l.split(":") - lbd = float(lbd) - else: - lbd = 1.0 - loss_and_lambda[l] = lbd - print("| Mel losses:", loss_and_lambda) - return loss_and_lambda - - -def load_data_preprocessor(): - preprocess_cls = hparams["preprocess_cls"] - pkg = ".".join(preprocess_cls.split(".")[:-1]) - cls_name = preprocess_cls.split(".")[-1] - preprocessor: BasePreprocessor = getattr(importlib.import_module(pkg), cls_name)() - preprocess_args = {} - preprocess_args.update(hparams['preprocess_args']) - return preprocessor, preprocess_args - - -def load_data_binarizer(): - binarizer_cls = hparams['binarizer_cls'] - pkg = ".".join(binarizer_cls.split(".")[:-1]) - cls_name = binarizer_cls.split(".")[-1] - binarizer: BaseBinarizer = getattr(importlib.import_module(pkg), cls_name)() - binarization_args = {} - binarization_args.update(hparams['binarization_args']) - return binarizer, binarization_args \ No newline at end of file diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/docstore/in_memory.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/docstore/in_memory.py deleted file mode 100644 index b15396939975b665ee7c2dbecd1b284654f70aae..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/docstore/in_memory.py +++ /dev/null @@ -1,27 +0,0 @@ -"""Simple in memory docstore in the form of a dict.""" -from typing import Dict, Union - -from langchain.docstore.base import AddableMixin, Docstore -from langchain.docstore.document import Document - - -class InMemoryDocstore(Docstore, AddableMixin): - """Simple in memory docstore in the form of a dict.""" - - def __init__(self, dict_: Dict[str, Document]): - """Initialize with dict.""" - self.dict_ = dict_ - - def add(self, texts: Dict[str, Document]) 
-> None: - """Add texts to in memory dictionary.""" - overlapping = set(texts).intersection(self.dict_) - if overlapping: - raise ValueError(f"Tried to add ids that already exist: {overlapping}") - self.dict_ = dict(self.dict_, **texts) - - def search(self, search: str) -> Union[str, Document]: - """Search via direct lookup.""" - if search not in self.dict_: - return f"ID {search} not found." - else: - return self.dict_[search] diff --git "a/spaces/SarthakSidhant/Go-Cattle/pages/1_\360\237\220\256_Prediction.py" "b/spaces/SarthakSidhant/Go-Cattle/pages/1_\360\237\220\256_Prediction.py" deleted file mode 100644 index bece13ab7784b5346f0c44c2da66a27a13fc929f..0000000000000000000000000000000000000000 --- "a/spaces/SarthakSidhant/Go-Cattle/pages/1_\360\237\220\256_Prediction.py" +++ /dev/null @@ -1,114 +0,0 @@ -import streamlit as st -import pandas as pd -from sklearn.ensemble import RandomForestClassifier -import joblib -import numpy as np - - - - -df = pd.read_csv('augean.csv') -df_s = df.drop(columns=['Disease']) -# Calculate the sum of each column -column_sums = df_s.sum() -del df_s -# Select the top ten columns with the highest sums -top_columns = column_sums.nlargest(10).index.tolist() - -symptoms = df.columns -symptoms2 = symptoms[1:] -symptoms = sorted(symptoms[1:]) - -def highlight_list_elements(lst): - highlighted_list = "" - for item,n in enumerate(lst): - highlighted_list += f'{item+1}). {n} \n' - return highlighted_list - -with st.sidebar: - st.subheader(":green[List of Symptoms]") - st.markdown("##### **Select from the Symptoms below**") - st.markdown(f'
    ', unsafe_allow_html=True) - selected_items = [] - highlighted_elements = highlight_list_elements(symptoms) - # for symptom in symptoms: - # st.markdown(f"{symptom}",unsafe_allow_html=True) - for item in symptoms: - checkbox = st.checkbox(item) - if checkbox: - selected_items.append(item) - # st.markdown(highlighted_elements, unsafe_allow_html=True) - -#model part -# Initialize the Random Forest Classifier -classifier = RandomForestClassifier() -st.markdown("## :green[Go-Cattle's ]:orange[ Disease Analyzer]") -model = joblib.load('model.pkl') -## Select symtoms part -st.markdown('### :green[**Select**] :red[**Symptoms:**]') -pred_symptoms=st.multiselect(label=" ", options=symptoms2) -pred_symptoms.extend(selected_items) -pred_df = pd.DataFrame(0,index=[0],columns=symptoms2) -pred_df[pred_symptoms]=1 -# st.table(pred_df) -result = model.predict(pred_df) - -if pred_symptoms == []: - st.markdown(f"No Symptoms Selected{', '.join(pred_symptoms)}",unsafe_allow_html=True) - st.subheader(":red[Please Select a Symptom to begin with]") -else: - st.markdown(f"Selected Symtomps are: {', '.join(pred_symptoms)}",unsafe_allow_html=True) - st.subheader(f":orange[Most Probable Disease:] :red[{result[0]}]") - try : - # Read the markdown file as plain text - file_path = f"diseases/{result[0]}.md" - # st.markdown(file_path) - with open(file_path, "r") as file: - markdown_text = file.read() - - # Display the markdown content - st.write(markdown_text) - except FileNotFoundError as FE: - - st.markdown("Information on the disease not avalable at the moment") - top_five = model.predict_proba(pred_df) - # Get the classes from the random forest model - classes = model.classes_ - # Get the indices of the top five classes based on probability - top_class_indices = np.argsort(top_five, axis=1)[:, -5:] - # Get the top five classes and their probabilities - top_classes = classes[top_class_indices][:, ::-1] - top_probabilities = np.take_along_axis(top_five, top_class_indices, axis=1)[:, ::-1] - - - - # st.subheader(f"List of most probable Deseases ") - - # displaying the Top diseses part - data = [] - for i, row in enumerate(top_classes): - input_row = [] - for j, cls in enumerate(row): - probability = top_probabilities[i][j] - input_row.append((cls, probability)) - data.append(input_row) - - n_dis = 3 - column_names = [f'Top {i+1} Class' for i in range(5)] + [f'Top {i+1} Probability' for i in range(5)] - - - df3 = pd.DataFrame(data[0], columns = ['Disease','Probability(%)']) - df3['Probability(%)'] = df3['Probability(%)']*100 - df3.index = range(1, len(df3) + 1) - - col1,col2 = st.columns([3,1]) - - with col1: - st.text("\n") - # st.markdown(f"Most Probable Diseases",unsafe_allow_html=True) - st.markdown("#### :violet[Other Probable Diseases]") - with col2: - n_dis = st.selectbox(" ",(3,5)) - - st.dataframe(df3.head(n_dis),1000) - # st.dataframe(st.dataframe(df3.style.highlight_max(axis=0))) diff --git a/spaces/SeyedAli/Image-Object-Detection/README.md b/spaces/SeyedAli/Image-Object-Detection/README.md deleted file mode 100644 index a105a8e79c98b9b137a0cc0dc63fece8753b446b..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Image-Object-Detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Object Detection -emoji: 🕵️🖼️ -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.45.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/Sonnt/Fracture_Webapp/pages/3_Fracture_Training_Models.py b/spaces/Sonnt/Fracture_Webapp/pages/3_Fracture_Training_Models.py deleted file mode 100644 index c9015c60f3d33ba0843de150492cda673de76217..0000000000000000000000000000000000000000 --- a/spaces/Sonnt/Fracture_Webapp/pages/3_Fracture_Training_Models.py +++ /dev/null @@ -1,388 +0,0 @@ -import numpy as np -import pandas as pd - -import altair as alt -import lightgbm as lgb -import matplotlib.pyplot as plt -import pickle, os, datetime -import bz2file as bz2 - -from sklearn.model_selection import train_test_split, StratifiedShuffleSplit -from sklearn.metrics import roc_auc_score, balanced_accuracy_score, f1_score, recall_score, precision_score -from sklearn.metrics import roc_curve, precision_recall_curve, auc - -import streamlit as st - -from ui import * -from mLogsFunctions import * - - -#------------------------------------------------------------------------------------------ -# processing pipeline -def remove_negative_val(df, col): - return df.drop(index=df[df[col] < 0].index) -def rel_depth(df): - dfx = [] - for well in df.WELL.unique(): - df_ = df[df.WELL==well].sort_values(by="DEPTH", ascending=True) - dfx.append(df_.assign(rel_depth=df_.DEPTH / df_.DEPTH.values[0])) - return pd.concat(dfx).reset_index(drop=True) - -def tweak_data_S(df): - return ( - df.assign( - FRACTURE_ZONE=df.FRACTURE_ZONE.replace({-9999: 0, np.nan: 0}).astype('int8'), - GR=df.GR.replace({-9999.:0.}).astype('float32'), - DCALI_FINAL=df.DCALI_FINAL.replace({-9999.:0.}).astype('float32'), - LLD=df.LLD.replace({-9999.:0.}).astype('float32'), - LLS=df.LLS.replace({-9999.:0.}).astype('float32'), - NPHI=df.NPHI.replace({-9999.:0.}).astype('float32'), - RHOB=df.RHOB.replace({-9999.:0.}).astype('float32'), - DTC=df.DTC.replace({-9999.:0.}).astype('float32'), - DTS=df.DTS.replace({-9999.:0.}).astype('float32'), - DEPTH=df.DEPTH.astype('float32') - ) - .pipe(remove_negative_val, "RHOB") - .pipe(remove_negative_val, "DTC") - .pipe(remove_negative_val, "DTS") - .pipe(remove_negative_val, "GR") - .pipe(remove_negative_val, "LLD") - ).pipe(rel_depth) - -#Streamlit Dashboard------------------------------------------------------------------------------------------ -pagetile = """

    TRAINING SITE

    """ -set_page_config(page='custom') -hide_menu_button() -condense_layout() - -logo_site, info_site = st.columns([1.5, 8.5]) -with logo_site: - st.image("https://i.ibb.co/Yd42K98/LogoVPI.png", use_column_width='auto') -with info_site: - st.set_option('deprecation.showfileUploaderEncoding', False) - # st.set_option('maxUploadSize', 200*1024) # 200 MB - st.markdown(pagetile, unsafe_allow_html=True) - # Option 1: CSV File Loading - st.write('You can load your csv file using the file upload or selection from LAS Exploration option below.') - - st.subheader("1. CSV File Loading") - df = csv_uploader() - # df = tweak_data(df,resample=False, reindex=True) - - # Option 2: CSV from LAS Exploration - st.subheader("2. CSV from LAS Exploration") - dir_path = '/work/2022_VPIMLogs_WebApp/data/merged/' - csv_files = [filename for filename in os.listdir(dir_path) if filename.endswith('.csv')] - selected_csv_file= st.multiselect('Select a CSV file', csv_files, key = 'st.session_state.selected_well_multi') - - # # Đọc file csv được chọn vào DataFrame - if selected_csv_file: # Nếu người dùng đã chọn file CSV - # Đọc file csv được chọn vào DataFrame - file_path = '/work/2022_VPIMLogs_WebApp/data/merged/' - merged_data = pd.concat([pd.read_csv(file_path + f) for f in selected_csv_file]) - # df = tweak_data_S(merged_data, resample=False, reindex=True) - df = merged_data - else: # Nếu người dùng không chọn file CSV - merged_data = df - -# df = tweak_data(merged_data, resample=False, reindex=True) -#------------------------------------------------------------------------------------------ -if df is not None: - st.caption("Data Preparation") - # Processing data - # df = tweak_data_S(df) - try: - df = tweak_data_S(df) - except AttributeError as e: - print(" ") - st.info("Tweak Data") - i1, i2 = st.columns(2) - for i, v in enumerate(["FRACTURE_ZONE", "GR", "DCAL", "LLD", "LLS", "NPHI", "RHOB", "DTC", "DTS", "DEPTH"]): - if i%2==0: - with i1: - st.success(f"{v}: Replaced nan values by 0") - if i%2==1: - with i2: - st.success(f"{v}: Replaced nan values by 0") - st.info(" Negative values removal in RHOB, DTC, DTS, GR, LLD: Done!") - st.write("---") - - #-------------------------------------------------------------------------------------- - # define training/testing data - st.write("Please to slectect Curves input for Traning Model") - feature_names_dict = [col for col in df.columns if col not in ["WELL", - "DEPTH", - # "Fracture Intensity", - # "FRACTURE_ZONE", - ]] - feature_names = st.multiselect('Select curves', feature_names_dict, key = 'st.session_state.selected_well_multi_curves') - feature_names_label = [col for col in df.columns if col not in ["WELL", - "DEPTH", - # "Fracture Intensity", - # "FRACTURE_ZONE", - ]] - st.write("Please to slectect a Label input for Traning Model") - feature_names_label = st.selectbox('Select a curves', feature_names_label, key = 'st.session_state.selected_well_multi_label') - - label_name = feature_names_label - st.caption("Features Selection") - st.info(f"Label names: {label_name}") - st.info(f"Feature names: {feature_names}") - st.write("---") - #-------------------------------------------------------------------------------------- - st.caption("Split Data") - - ## split data - ### some data for test model after deploy - ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) - - for _train_inx, _test_inx in ss.split(df, df["WELL"]): - train_df, test_df = df.loc[_train_inx, :], df.loc[_test_inx, :] - - X_train, X_test, y_train, y_test = train_test_split( - 
train_df[feature_names], - train_df[label_name], - stratify=train_df[label_name], - train_size=0.9, - random_state=42, - ) - - ### create lgb dataset - train_set = lgb.Dataset(X_train, - label=y_train, - feature_name=feature_names, - ) - valid_set = lgb.Dataset(X_test, - label=y_test, - reference=train_set, - feature_name=feature_names, - ) - - st.info(f"Size of FULL Dataset: {len(df)}") - st.info(f"Size of TRAINING set: {train_set.construct().num_data()}") - st.info(f"Size of VALIDATION set: {valid_set.construct().num_data()}") - st.info(f"Size of TESTING set: {len(test_df)}") - st.write("---") -#Traning model-------------------------------------------------------------------------------------- - if st.button("Start Train"): - # Modeling - ## custom metric - st.caption("Training") - from sklearn.metrics import recall_score, precision_score, accuracy_score, f1_score, roc_auc_score, precision_recall_curve - model = lgb.train( - params={"boosting_type": "gbdt", - "objective": "cross_entropy", - "metric": ["rmse","recall"], - "is_unbalance": True, - }, - train_set=lgb.Dataset(data=X_train, label=y_train, - feature_name=feature_names), - num_boost_round=2000, - valid_sets=lgb.Dataset(data=X_test, label=y_test, - feature_name=feature_names), - early_stopping_rounds=5, - verbose_eval=0, - ) - st.success("Finished Training!") - st.success("Saved Model!") - st.write("---") - - now = datetime.datetime.now() - current_time = now.strftime("%m_%d_%Y_%H_%M_%S") - link= f"/work/2022_VPIMLogs_WebApp/models/{current_time}_model_LGBM.json" - model.save_model(filename= link) - - with open(link, 'r') as f: - file_content = f.read() - if st.download_button(label='Download JSON File', - data=file_content, - file_name=link, - mime='application/json'): - pass - else: - st.text(" ") -#Scores-------------------------------------------------------------------------------------- - ## using model to make prediction - st.caption("Modeling Scores") - - threshold = 0.5 - #Make label Prediction - predictions = (model.predict(df[feature_names])> threshold).astype(int) - df['FRACTURE_ZONE_PRED'] = predictions - test_preds = model.predict(test_df[feature_names]) - train_preds = model.predict(X_train) - valid_preds = model.predict(X_test) - - valid_recall = recall_score(y_test, valid_preds >= threshold, average = 'weighted') - valid_precision = precision_score(y_test, valid_preds >= threshold, average = 'weighted') - valid_acc = accuracy_score(y_test, valid_preds >= threshold) - valid_f1 = f1_score(y_test, valid_preds >= threshold, average = 'weighted') - valid_aoc = roc_auc_score(y_test, valid_preds >= threshold) - - train_recall = recall_score(y_train, train_preds >= threshold, average = 'weighted') - train_precision = precision_score(y_train, train_preds >= threshold, average = 'weighted') - train_acc = accuracy_score(y_train, train_preds >= threshold) - train_aoc = roc_auc_score(y_train, train_preds >= threshold) - train_f1 = f1_score(y_train, train_preds >= threshold, average = 'weighted') - - test_recall = recall_score(test_df[label_name], test_preds >= threshold, average = 'weighted') - test_precision = precision_score(test_df[label_name], test_preds >= threshold, average = 'weighted') - test_acc = accuracy_score(test_df[label_name], test_preds >= threshold) - test_aoc = roc_auc_score(test_df[label_name], test_preds >= threshold) - test_f1 = f1_score(test_df[label_name], test_preds >= threshold, average = 'weighted') - - sc1, sc2, sc3 = st.columns(3) - with sc1: - st.info(f"Training score (RECALL): {train_recall}") - 
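(For context, the train / threshold-at-0.5 / score pattern used throughout this page can be exercised end-to-end on synthetic data. The sketch below is a simplified stand-in with made-up features, no early stopping, and ROC-AUC computed from the raw probabilities rather than thresholded labels; it is not the app's exact pipeline.)

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score, f1_score, roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 8))                                            # stand-in for the selected log curves
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000) > 0).astype(int)  # stand-in fracture label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, train_size=0.9, random_state=42)

clf = LGBMClassifier(n_estimators=500, is_unbalance=True)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)              # same 0.5 threshold as above

print("recall   ", recall_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("f1       ", f1_score(y_te, pred))
print("roc_auc  ", roc_auc_score(y_te, proba))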
st.info(f"Training score (PRECISION): {train_precision}") - st.info(f"Training score (ACC): {train_acc}") - st.info(f"Training score (F1): {train_f1}") - st.info(f"Training score (AOC): {train_aoc}") - - with sc2: - st.info(f"Validation score (RECALL): {valid_recall}") - st.info(f"Validation score (PRECISION): {valid_precision}") - st.info(f"Validation score (ACC): {valid_acc}") - st.info(f"Validation score (F1): {valid_f1}") - st.info(f"Validation score (AOC): {valid_aoc}") - - with sc3: - st.info(f"Testing score (RECALL): {test_recall}") - st.info(f"Testing score (PRECISION): {test_precision}") - st.info(f"Testing score (ACC): {test_acc}") - st.info(f"Testing score (F1): {test_f1}") - st.info(f"Testing score (AOC): {test_aoc}") - st.write("---") -#Measure Scores-------------------------------------------------------------------------------------- - st.caption("Scores plotting charts") - - ## roc valid - fpr_valid, tpr_valid, threshold_valid = roc_curve(y_test, valid_preds) - roc_auc_valid = auc(fpr_valid, tpr_valid) - ## precision recall valid - pr_valid, rc_valid, threshold_valid= precision_recall_curve(y_test, valid_preds) - ## roc training - tfpr_train, ttpr_train, tthreshold_train = roc_curve(y_train, train_preds) - troc_auc_train = auc(tfpr_train, ttpr_train) - ## precision recall training - tpr_train, trc_train, tthreshold_train = precision_recall_curve(y_train, train_preds) - ## roc test - tfpr_test, ttpr_test, tthreshold = roc_curve(test_df[label_name], test_preds) - troc_auc_test = auc(tfpr_test, ttpr_test) - ## precision recall testing - tpr_test, trc_test, tthreshold_test = precision_recall_curve(test_df[label_name], test_preds) - -#Plot Scores-------------------------------------------------------------------------------------- - fig, ax = plt.subplots(figsize=(40,40)) - ax1 = plt.subplot2grid((7,7), (0,0), rowspan=1, colspan = 1) - ax2 = plt.subplot2grid((7,7), (0,1), rowspan=1, colspan = 1) - ax3 = plt.subplot2grid((7,7), (0,2), rowspan=1, colspan = 1) - ax4 = plt.subplot2grid((7,7), (1,0), rowspan=1, colspan = 1) - ax5 = plt.subplot2grid((7,7), (1,1), rowspan=1, colspan = 1) - ax6 = plt.subplot2grid((7,7), (1,2), rowspan=1, colspan = 1) - - def set_ax(ax, - x, y, color, label, legend, - line:bool=False, - title:str=None, - x_label:str=None, - y_label:str=None, - ): - ax.plot(x, y, color, label=label) - ax.set_title(title) - ax.legend(loc = legend) - if line == True: - ax.plot([0, 1], [0, 1],'r--') - ax.set_xlim([0, 1]) - ax.set_ylim([0, 1]) - ax.set_ylabel(y_label) - ax.set_xlabel(x_label) - p1, p2, p3 = st.columns([1,14,1]) - with p2: - ## roc valid - set_ax(ax2, fpr_valid, tpr_valid, 'b', - label = 'AUC = %0.2f' % roc_auc_valid, - legend='lower right', - title='Receiver Operating Characteristic - Validation', - line=True, - x_label='False Positive Rate', - y_label='True Positive Rate', - ) - - ## precision recall valid - set_ax(ax5, pr_valid, rc_valid, 'orange', - label = 'PR Curve', - legend='lower right', - title='Precision Recall Curve - Validation', - line=True, - x_label='Recall', - y_label='Precision', - ) - - ## roc training - set_ax(ax1, tfpr_train, ttpr_train, 'b', - label = 'AUC = %0.2f' % troc_auc_train, - legend='lower right', - title='Receiver Operating Characteristic - Training', - line=True, - x_label='False Positive Rate', - y_label='True Positive Rate', - ) - - ## precision recall training - set_ax(ax4, tpr_train, trc_train, 'orange', - label = 'PR Curve', - legend='lower right', - title='Precision Recall Curve - Training', - line=True, - 
x_label='Recall', - y_label='Precision', - ) - - ## roc test - set_ax(ax3, tfpr_test, ttpr_test, 'b', - label = 'AUC = %0.2f' % troc_auc_test, - legend='lower right', - title='Receiver Operating Characteristic - Blind test', - line=True, - x_label='False Positive Rate', - y_label='True Positive Rate', - ) - - ## precision recall testing - set_ax(ax6, tpr_test, trc_test, 'orange', - label = 'PR Curve', - legend='lower right', - title='Precision Recall Curve - Blind test', - line=True, - x_label='Recall', - y_label='Precision', - ) - - st.pyplot(fig) - -#Plot Data------------------------------------------------------------------ - plotting_curves = [c for c in df.columns.unique() if c not in ["DEPTH", "WELL", "TVD", "DCALI_FINAL", "INCL", "AZIM_TN", "rel_depth"]] - plotting_curves.sort() - if "FRACTURE_ZONE_PRED" in df.columns.unique(): - plotting_curves.append("FRACTURE_ZONE_PRED") - for well in df.WELL.unique(): - st.write('---') - st.write(f"{well} Logs: \n") - well_plot = df[df.WELL == well] - charts_dict={} - for i, c in enumerate(plotting_curves): - charts_dict[i] = curve_plot(data=well_plot,filted_data=None, x_column=c) - #Show Curve----------------------------------------------------------------------- - st.write(alt.concat(*charts_dict.values(), columns = 12).configure(autosize='fit')) - # st.snow() -#DOWNLOAD----------------------------------------------------------------- - # Define the download button - - # if st.download_button('Download Modeling (with format JSON)'): - # with open(filename, 'r') as f: - # data = json.load(f) - # href = f"data:text/json;charset=utf-8,{json.dumps(data, indent=2)}" - # st.markdown(f'Download Modeling (with format JSON)') -hide_menu_button() -condense_layout() \ No newline at end of file diff --git a/spaces/SpacesExamples/test-docker-go/README.md b/spaces/SpacesExamples/test-docker-go/README.md deleted file mode 100644 index f74cbd5a82be32cf3dd95ba8f43698b8ff4d201a..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/test-docker-go/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Test Docker Go -emoji: 🐨 -colorFrom: blue -colorTo: purple -sdk: docker -app_port: 8080 -app_file: app.py -pinned: false -duplicated_from: XciD/test-docker-go ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/pt_inputhooks/wx.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/pt_inputhooks/wx.py deleted file mode 100644 index a0f4442c7717dc588510813753adc7b10bd3785b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/pt_inputhooks/wx.py +++ /dev/null @@ -1,219 +0,0 @@ -"""Enable wxPython to be used interactively in prompt_toolkit -""" - -import sys -import signal -import time -from timeit import default_timer as clock -import wx - - -def ignore_keyboardinterrupts(func): - """Decorator which causes KeyboardInterrupt exceptions to be ignored during - execution of the decorated function. - - This is used by the inputhook functions to handle the event where the user - presses CTRL+C while IPython is idle, and the inputhook loop is running. In - this case, we want to ignore interrupts. 
- """ - def wrapper(*args, **kwargs): - try: - func(*args, **kwargs) - except KeyboardInterrupt: - pass - return wrapper - - -@ignore_keyboardinterrupts -def inputhook_wx1(context): - """Run the wx event loop by processing pending events only. - - This approach seems to work, but its performance is not great as it - relies on having PyOS_InputHook called regularly. - """ - app = wx.GetApp() - if app is not None: - assert wx.Thread_IsMain() - - # Make a temporary event loop and process system events until - # there are no more waiting, then allow idle events (which - # will also deal with pending or posted wx events.) - evtloop = wx.EventLoop() - ea = wx.EventLoopActivator(evtloop) - while evtloop.Pending(): - evtloop.Dispatch() - app.ProcessIdle() - del ea - return 0 - - -class EventLoopTimer(wx.Timer): - - def __init__(self, func): - self.func = func - wx.Timer.__init__(self) - - def Notify(self): - self.func() - - -class EventLoopRunner(object): - - def Run(self, time, input_is_ready): - self.input_is_ready = input_is_ready - self.evtloop = wx.EventLoop() - self.timer = EventLoopTimer(self.check_stdin) - self.timer.Start(time) - self.evtloop.Run() - - def check_stdin(self): - if self.input_is_ready(): - self.timer.Stop() - self.evtloop.Exit() - - -@ignore_keyboardinterrupts -def inputhook_wx2(context): - """Run the wx event loop, polling for stdin. - - This version runs the wx eventloop for an undetermined amount of time, - during which it periodically checks to see if anything is ready on - stdin. If anything is ready on stdin, the event loop exits. - - The argument to elr.Run controls how often the event loop looks at stdin. - This determines the responsiveness at the keyboard. A setting of 1000 - enables a user to type at most 1 char per second. I have found that a - setting of 10 gives good keyboard response. We can shorten it further, - but eventually performance would suffer from calling select/kbhit too - often. - """ - app = wx.GetApp() - if app is not None: - assert wx.Thread_IsMain() - elr = EventLoopRunner() - # As this time is made shorter, keyboard response improves, but idle - # CPU load goes up. 10 ms seems like a good compromise. - elr.Run(time=10, # CHANGE time here to control polling interval - input_is_ready=context.input_is_ready) - return 0 - - -@ignore_keyboardinterrupts -def inputhook_wx3(context): - """Run the wx event loop by processing pending events only. - - This is like inputhook_wx1, but it keeps processing pending events - until stdin is ready. After processing all pending events, a call to - time.sleep is inserted. This is needed, otherwise, CPU usage is at 100%. - This sleep time should be tuned though for best performance. - """ - app = wx.GetApp() - if app is not None: - assert wx.Thread_IsMain() - - # The import of wx on Linux sets the handler for signal.SIGINT - # to 0. This is a bug in wx or gtk. We fix by just setting it - # back to the Python default. - if not callable(signal.getsignal(signal.SIGINT)): - signal.signal(signal.SIGINT, signal.default_int_handler) - - evtloop = wx.EventLoop() - ea = wx.EventLoopActivator(evtloop) - t = clock() - while not context.input_is_ready(): - while evtloop.Pending(): - t = clock() - evtloop.Dispatch() - app.ProcessIdle() - # We need to sleep at this point to keep the idle CPU load - # low. However, if sleep to long, GUI response is poor. As - # a compromise, we watch how often GUI events are being processed - # and switch between a short and long sleep time. 
Here are some - # stats useful in helping to tune this. - # time CPU load - # 0.001 13% - # 0.005 3% - # 0.01 1.5% - # 0.05 0.5% - used_time = clock() - t - if used_time > 10.0: - # print 'Sleep for 1 s' # dbg - time.sleep(1.0) - elif used_time > 0.1: - # Few GUI events coming in, so we can sleep longer - # print 'Sleep for 0.05 s' # dbg - time.sleep(0.05) - else: - # Many GUI events coming in, so sleep only very little - time.sleep(0.001) - del ea - return 0 - - -@ignore_keyboardinterrupts -def inputhook_wxphoenix(context): - """Run the wx event loop until the user provides more input. - - This input hook is suitable for use with wxPython >= 4 (a.k.a. Phoenix). - - It uses the same approach to that used in - ipykernel.eventloops.loop_wx. The wx.MainLoop is executed, and a wx.Timer - is used to periodically poll the context for input. As soon as input is - ready, the wx.MainLoop is stopped. - """ - - app = wx.GetApp() - - if app is None: - return - - if context.input_is_ready(): - return - - assert wx.IsMainThread() - - # Wx uses milliseconds - poll_interval = 100 - - # Use a wx.Timer to periodically check whether input is ready - as soon as - # it is, we exit the main loop - timer = wx.Timer() - - def poll(ev): - if context.input_is_ready(): - timer.Stop() - app.ExitMainLoop() - - timer.Start(poll_interval) - timer.Bind(wx.EVT_TIMER, poll) - - # The import of wx on Linux sets the handler for signal.SIGINT to 0. This - # is a bug in wx or gtk. We fix by just setting it back to the Python - # default. - if not callable(signal.getsignal(signal.SIGINT)): - signal.signal(signal.SIGINT, signal.default_int_handler) - - # The SetExitOnFrameDelete call allows us to run the wx mainloop without - # having a frame open. - app.SetExitOnFrameDelete(False) - app.MainLoop() - - -# Get the major wx version number to figure out what input hook we should use. -major_version = 3 - -try: - major_version = int(wx.__version__[0]) -except Exception: - pass - -# Use the phoenix hook on all platforms for wxpython >= 4 -if major_version >= 4: - inputhook = inputhook_wxphoenix -# On OSX, evtloop.Pending() always returns True, regardless of there being -# any events pending. As such we can't use implementations 1 or 3 of the -# inputhook as those depend on a pending/dispatch loop. -elif sys.platform == 'darwin': - inputhook = inputhook_wx2 -else: - inputhook = inputhook_wx3 diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/anchor_generator.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/anchor_generator.py deleted file mode 100644 index 04127c4af440b4623427b4c0911ee299166d1d7d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/anchor_generator.py +++ /dev/null @@ -1,386 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import collections -import math -from typing import List -import torch -from torch import nn - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.layers import ShapeSpec, move_device_like -from annotator.oneformer.detectron2.structures import Boxes, RotatedBoxes -from annotator.oneformer.detectron2.utils.registry import Registry - -ANCHOR_GENERATOR_REGISTRY = Registry("ANCHOR_GENERATOR") -ANCHOR_GENERATOR_REGISTRY.__doc__ = """ -Registry for modules that creates object detection anchors for feature maps. - -The registered object will be called with `obj(cfg, input_shape)`. 
-""" - - -class BufferList(nn.Module): - """ - Similar to nn.ParameterList, but for buffers - """ - - def __init__(self, buffers): - super().__init__() - for i, buffer in enumerate(buffers): - # Use non-persistent buffer so the values are not saved in checkpoint - self.register_buffer(str(i), buffer, persistent=False) - - def __len__(self): - return len(self._buffers) - - def __iter__(self): - return iter(self._buffers.values()) - - -def _create_grid_offsets( - size: List[int], stride: int, offset: float, target_device_tensor: torch.Tensor -): - grid_height, grid_width = size - shifts_x = move_device_like( - torch.arange(offset * stride, grid_width * stride, step=stride, dtype=torch.float32), - target_device_tensor, - ) - shifts_y = move_device_like( - torch.arange(offset * stride, grid_height * stride, step=stride, dtype=torch.float32), - target_device_tensor, - ) - - shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x) - shift_x = shift_x.reshape(-1) - shift_y = shift_y.reshape(-1) - return shift_x, shift_y - - -def _broadcast_params(params, num_features, name): - """ - If one size (or aspect ratio) is specified and there are multiple feature - maps, we "broadcast" anchors of that single size (or aspect ratio) - over all feature maps. - - If params is list[float], or list[list[float]] with len(params) == 1, repeat - it num_features time. - - Returns: - list[list[float]]: param for each feature - """ - assert isinstance( - params, collections.abc.Sequence - ), f"{name} in anchor generator has to be a list! Got {params}." - assert len(params), f"{name} in anchor generator cannot be empty!" - if not isinstance(params[0], collections.abc.Sequence): # params is list[float] - return [params] * num_features - if len(params) == 1: - return list(params) * num_features - assert len(params) == num_features, ( - f"Got {name} of length {len(params)} in anchor generator, " - f"but the number of input features is {num_features}!" - ) - return params - - -@ANCHOR_GENERATOR_REGISTRY.register() -class DefaultAnchorGenerator(nn.Module): - """ - Compute anchors in the standard ways described in - "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". - """ - - box_dim: torch.jit.Final[int] = 4 - """ - the dimension of each anchor box. - """ - - @configurable - def __init__(self, *, sizes, aspect_ratios, strides, offset=0.5): - """ - This interface is experimental. - - Args: - sizes (list[list[float]] or list[float]): - If ``sizes`` is list[list[float]], ``sizes[i]`` is the list of anchor sizes - (i.e. sqrt of anchor area) to use for the i-th feature map. - If ``sizes`` is list[float], ``sizes`` is used for all feature maps. - Anchor sizes are given in absolute lengths in units of - the input image; they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]] or list[float]): list of aspect ratios - (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. - strides (list[int]): stride of each input feature. - offset (float): Relative offset between the center of the first anchor and the top-left - corner of the image. Value has to be in [0, 1). - Recommend to use 0.5, which means half stride. 
- """ - super().__init__() - - self.strides = strides - self.num_features = len(self.strides) - sizes = _broadcast_params(sizes, self.num_features, "sizes") - aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") - self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios) - - self.offset = offset - assert 0.0 <= self.offset < 1.0, self.offset - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - return { - "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, - "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, - "strides": [x.stride for x in input_shape], - "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, - } - - def _calculate_anchors(self, sizes, aspect_ratios): - cell_anchors = [ - self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios) - ] - return BufferList(cell_anchors) - - @property - @torch.jit.unused - def num_cell_anchors(self): - """ - Alias of `num_anchors`. - """ - return self.num_anchors - - @property - @torch.jit.unused - def num_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel - location, on that feature map. - For example, if at every pixel we use anchors of 3 aspect - ratios and 5 sizes, the number of anchors is 15. - (See also ANCHOR_GENERATOR.SIZES and ANCHOR_GENERATOR.ASPECT_RATIOS in config) - - In standard RPN models, `num_anchors` on every feature map is the same. - """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def _grid_anchors(self, grid_sizes: List[List[int]]): - """ - Returns: - list[Tensor]: #featuremap tensors, each is (#locations x #cell_anchors) x 4 - """ - anchors = [] - # buffers() not supported by torchscript. use named_buffers() instead - buffers: List[torch.Tensor] = [x[1] for x in self.cell_anchors.named_buffers()] - for size, stride, base_anchors in zip(grid_sizes, self.strides, buffers): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors) - shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1) - - anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4)) - - return anchors - - def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)): - """ - Generate a tensor storing canonical anchor boxes, which are all anchor - boxes of different sizes and aspect_ratios centered at (0, 0). - We can later build the set of anchors for a full feature map by - shifting and tiling these tensors (see `meth:_grid_anchors`). - - Args: - sizes (tuple[float]): - aspect_ratios (tuple[float]]): - - Returns: - Tensor of shape (len(sizes) * len(aspect_ratios), 4) storing anchor boxes - in XYXY format. - """ - - # This is different from the anchor generator defined in the original Faster R-CNN - # code or Detectron. They yield the same AP, however the old version defines cell - # anchors in a less natural way with a shift relative to the feature grid and - # quantization that results in slightly different sizes for different aspect ratios. - # See also https://github.com/facebookresearch/Detectron/issues/227 - - anchors = [] - for size in sizes: - area = size**2.0 - for aspect_ratio in aspect_ratios: - # s * s = w * h - # a = h / w - # ... some algebra ... 
- # w = sqrt(s * s / a) - # h = a * w - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0 - anchors.append([x0, y0, x1, y1]) - return torch.tensor(anchors) - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of backbone feature maps on which to generate anchors. - - Returns: - list[Boxes]: a list of Boxes containing all the anchors for each feature map - (i.e. the cell anchors repeated over all locations in the feature map). - The number of anchors of each feature map is Hi x Wi x num_cell_anchors, - where Hi, Wi are resolution of the feature map divided by anchor stride. - """ - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) - return [Boxes(x) for x in anchors_over_all_feature_maps] - - -@ANCHOR_GENERATOR_REGISTRY.register() -class RotatedAnchorGenerator(nn.Module): - """ - Compute rotated anchors used by Rotated RPN (RRPN), described in - "Arbitrary-Oriented Scene Text Detection via Rotation Proposals". - """ - - box_dim: int = 5 - """ - the dimension of each anchor box. - """ - - @configurable - def __init__(self, *, sizes, aspect_ratios, strides, angles, offset=0.5): - """ - This interface is experimental. - - Args: - sizes (list[list[float]] or list[float]): - If sizes is list[list[float]], sizes[i] is the list of anchor sizes - (i.e. sqrt of anchor area) to use for the i-th feature map. - If sizes is list[float], the sizes are used for all feature maps. - Anchor sizes are given in absolute lengths in units of - the input image; they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]] or list[float]): list of aspect ratios - (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. - strides (list[int]): stride of each input feature. - angles (list[list[float]] or list[float]): list of angles (in degrees CCW) - to use for anchors. Same "broadcast" rule for `sizes` applies. - offset (float): Relative offset between the center of the first anchor and the top-left - corner of the image. Value has to be in [0, 1). - Recommend to use 0.5, which means half stride. - """ - super().__init__() - - self.strides = strides - self.num_features = len(self.strides) - sizes = _broadcast_params(sizes, self.num_features, "sizes") - aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") - angles = _broadcast_params(angles, self.num_features, "angles") - self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios, angles) - - self.offset = offset - assert 0.0 <= self.offset < 1.0, self.offset - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - return { - "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, - "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, - "strides": [x.stride for x in input_shape], - "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, - "angles": cfg.MODEL.ANCHOR_GENERATOR.ANGLES, - } - - def _calculate_anchors(self, sizes, aspect_ratios, angles): - cell_anchors = [ - self.generate_cell_anchors(size, aspect_ratio, angle).float() - for size, aspect_ratio, angle in zip(sizes, aspect_ratios, angles) - ] - return BufferList(cell_anchors) - - @property - def num_cell_anchors(self): - """ - Alias of `num_anchors`. 
- """ - return self.num_anchors - - @property - def num_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel - location, on that feature map. - For example, if at every pixel we use anchors of 3 aspect - ratios, 2 sizes and 5 angles, the number of anchors is 30. - (See also ANCHOR_GENERATOR.SIZES, ANCHOR_GENERATOR.ASPECT_RATIOS - and ANCHOR_GENERATOR.ANGLES in config) - - In standard RRPN models, `num_anchors` on every feature map is the same. - """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def _grid_anchors(self, grid_sizes): - anchors = [] - for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors) - zeros = torch.zeros_like(shift_x) - shifts = torch.stack((shift_x, shift_y, zeros, zeros, zeros), dim=1) - - anchors.append((shifts.view(-1, 1, 5) + base_anchors.view(1, -1, 5)).reshape(-1, 5)) - - return anchors - - def generate_cell_anchors( - self, - sizes=(32, 64, 128, 256, 512), - aspect_ratios=(0.5, 1, 2), - angles=(-90, -60, -30, 0, 30, 60, 90), - ): - """ - Generate a tensor storing canonical anchor boxes, which are all anchor - boxes of different sizes, aspect_ratios, angles centered at (0, 0). - We can later build the set of anchors for a full feature map by - shifting and tiling these tensors (see `meth:_grid_anchors`). - - Args: - sizes (tuple[float]): - aspect_ratios (tuple[float]]): - angles (tuple[float]]): - - Returns: - Tensor of shape (len(sizes) * len(aspect_ratios) * len(angles), 5) - storing anchor boxes in (x_ctr, y_ctr, w, h, angle) format. - """ - anchors = [] - for size in sizes: - area = size**2.0 - for aspect_ratio in aspect_ratios: - # s * s = w * h - # a = h / w - # ... some algebra ... - # w = sqrt(s * s / a) - # h = a * w - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - anchors.extend([0, 0, w, h, a] for a in angles) - - return torch.tensor(anchors) - - def forward(self, features): - """ - Args: - features (list[Tensor]): list of backbone feature maps on which to generate anchors. - - Returns: - list[RotatedBoxes]: a list of Boxes containing all the anchors for each feature map - (i.e. the cell anchors repeated over all locations in the feature map). - The number of anchors of each feature map is Hi x Wi x num_cell_anchors, - where Hi, Wi are resolution of the feature map divided by anchor stride. - """ - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) - return [RotatedBoxes(x) for x in anchors_over_all_feature_maps] - - -def build_anchor_generator(cfg, input_shape): - """ - Built an anchor generator from `cfg.MODEL.ANCHOR_GENERATOR.NAME`. - """ - anchor_generator = cfg.MODEL.ANCHOR_GENERATOR.NAME - return ANCHOR_GENERATOR_REGISTRY.get(anchor_generator)(cfg, input_shape) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py deleted file mode 100644 index c52dda18b41705705b47dd0e995b124048c16fba..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd.function import Function, once_differentiable - -from annotator.uniformer.mmcv import deprecated_api_warning -from annotator.uniformer.mmcv.cnn import constant_init, xavier_init -from annotator.uniformer.mmcv.cnn.bricks.registry import ATTENTION -from annotator.uniformer.mmcv.runner import BaseModule -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['ms_deform_attn_backward', 'ms_deform_attn_forward']) - - -class MultiScaleDeformableAttnFunction(Function): - - @staticmethod - def forward(ctx, value, value_spatial_shapes, value_level_start_index, - sampling_locations, attention_weights, im2col_step): - """GPU version of multi-scale deformable attention. - - Args: - value (Tensor): The value has shape - (bs, num_keys, mum_heads, embed_dims//num_heads) - value_spatial_shapes (Tensor): Spatial shape of - each feature map, has shape (num_levels, 2), - last dimension 2 represent (h, w) - sampling_locations (Tensor): The location of sampling points, - has shape - (bs ,num_queries, num_heads, num_levels, num_points, 2), - the last dimension 2 represent (x, y). - attention_weights (Tensor): The weight of sampling points used - when calculate the attention, has shape - (bs ,num_queries, num_heads, num_levels, num_points), - im2col_step (Tensor): The step used in image to column. - - Returns: - Tensor: has shape (bs, num_queries, embed_dims) - """ - - ctx.im2col_step = im2col_step - output = ext_module.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step=ctx.im2col_step) - ctx.save_for_backward(value, value_spatial_shapes, - value_level_start_index, sampling_locations, - attention_weights) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - """GPU version of backward function. - - Args: - grad_output (Tensor): Gradient - of output tensor of forward. - - Returns: - Tuple[Tensor]: Gradient - of input tensors in forward. - """ - value, value_spatial_shapes, value_level_start_index,\ - sampling_locations, attention_weights = ctx.saved_tensors - grad_value = torch.zeros_like(value) - grad_sampling_loc = torch.zeros_like(sampling_locations) - grad_attn_weight = torch.zeros_like(attention_weights) - - ext_module.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output.contiguous(), - grad_value, - grad_sampling_loc, - grad_attn_weight, - im2col_step=ctx.im2col_step) - - return grad_value, None, None, \ - grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch(value, value_spatial_shapes, - sampling_locations, attention_weights): - """CPU version of multi-scale deformable attention. - - Args: - value (Tensor): The value has shape - (bs, num_keys, mum_heads, embed_dims//num_heads) - value_spatial_shapes (Tensor): Spatial shape of - each feature map, has shape (num_levels, 2), - last dimension 2 represent (h, w) - sampling_locations (Tensor): The location of sampling points, - has shape - (bs ,num_queries, num_heads, num_levels, num_points, 2), - the last dimension 2 represent (x, y). 
- attention_weights (Tensor): The weight of sampling points used - when calculate the attention, has shape - (bs ,num_queries, num_heads, num_levels, num_points), - - Returns: - Tensor: has shape (bs, num_queries, embed_dims) - """ - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ =\ - sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], - dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape( - bs * num_heads, embed_dims, H_, W_) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, - level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, - sampling_grid_l_, - mode='bilinear', - padding_mode='zeros', - align_corners=False) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points) - output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * - attention_weights).sum(-1).view(bs, num_heads * embed_dims, - num_queries) - return output.transpose(1, 2).contiguous() - - -@ATTENTION.register_module() -class MultiScaleDeformableAttention(BaseModule): - """An attention module used in Deformable-Detr. - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dims (int): The embedding dimension of Attention. - Default: 256. - num_heads (int): Parallel attention heads. Default: 64. - num_levels (int): The number of feature map used in - Attention. Default: 4. - num_points (int): The number of sampling points for - each query in each head. Default: 4. - im2col_step (int): The step used in image_to_column. - Default: 64. - dropout (float): A Dropout layer on `inp_identity`. - Default: 0.1. - batch_first (bool): Key, Query and Value are shape of - (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - norm_cfg (dict): Config dict for normalization layer. - Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - embed_dims=256, - num_heads=8, - num_levels=4, - num_points=4, - im2col_step=64, - dropout=0.1, - batch_first=False, - norm_cfg=None, - init_cfg=None): - super().__init__(init_cfg) - if embed_dims % num_heads != 0: - raise ValueError(f'embed_dims must be divisible by num_heads, ' - f'but got {embed_dims} and {num_heads}') - dim_per_head = embed_dims // num_heads - self.norm_cfg = norm_cfg - self.dropout = nn.Dropout(dropout) - self.batch_first = batch_first - - # you'd better set dim_per_head to a power of 2 - # which is more efficient in the CUDA implementation - def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError( - 'invalid input for _is_power_of_2: {} (type: {})'.format( - n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - if not _is_power_of_2(dim_per_head): - warnings.warn( - "You'd better set embed_dims in " - 'MultiScaleDeformAttention to make ' - 'the dimension of each attention head a power of 2 ' - 'which is more efficient in our CUDA implementation.') - - self.im2col_step = im2col_step - self.embed_dims = embed_dims - self.num_levels = num_levels - self.num_heads = num_heads - self.num_points = num_points - self.sampling_offsets = nn.Linear( - embed_dims, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dims, - num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dims, embed_dims) - self.output_proj = nn.Linear(embed_dims, embed_dims) - self.init_weights() - - def init_weights(self): - """Default initialization for Parameters of Module.""" - constant_init(self.sampling_offsets, 0.) - thetas = torch.arange( - self.num_heads, - dtype=torch.float32) * (2.0 * math.pi / self.num_heads) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = (grid_init / - grid_init.abs().max(-1, keepdim=True)[0]).view( - self.num_heads, 1, 1, - 2).repeat(1, self.num_levels, self.num_points, 1) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - - self.sampling_offsets.bias.data = grid_init.view(-1) - constant_init(self.attention_weights, val=0., bias=0.) - xavier_init(self.value_proj, distribution='uniform', bias=0.) - xavier_init(self.output_proj, distribution='uniform', bias=0.) - self._is_init = True - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiScaleDeformableAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_padding_mask=None, - reference_points=None, - spatial_shapes=None, - level_start_index=None, - **kwargs): - """Forward Function of MultiScaleDeformAttention. - - Args: - query (Tensor): Query of Transformer with shape - (num_query, bs, embed_dims). - key (Tensor): The key tensor with shape - `(num_key, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_key, bs, embed_dims)`. - identity (Tensor): The tensor used for addition, with the - same shape as `query`. Default None. If None, - `query` will be used. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. Default - None. - reference_points (Tensor): The normalized reference - points with shape (bs, num_query, num_levels, 2), - all elements is range in [0, 1], top-left (0,0), - bottom-right (1, 1), including padding area. - or (N, Length_{query}, num_levels, 4), add - additional two dimensions is (w, h) to - form reference boxes. 
- key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_key]. - spatial_shapes (Tensor): Spatial shape of features in - different levels. With shape (num_levels, 2), - last dimension represents (h, w). - level_start_index (Tensor): The start index of each level. - A tensor has shape ``(num_levels, )`` and can be represented - as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...]. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - - if value is None: - value = query - - if identity is None: - identity = query - if query_pos is not None: - query = query + query_pos - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], 0.0) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points) - attention_weights = attention_weights.softmax(-1) - - attention_weights = attention_weights.view(bs, num_query, - self.num_heads, - self.num_levels, - self.num_points) - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack( - [spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = reference_points[:, :, None, :, None, :] \ - + sampling_offsets \ - / offset_normalizer[None, None, None, :, None, :] - elif reference_points.shape[-1] == 4: - sampling_locations = reference_points[:, :, None, :, None, :2] \ - + sampling_offsets / self.num_points \ - * reference_points[:, :, None, :, None, 2:] \ - * 0.5 - else: - raise ValueError( - f'Last dim of reference_points must be' - f' 2 or 4, but get {reference_points.shape[-1]} instead.') - if torch.cuda.is_available() and value.is_cuda: - output = MultiScaleDeformableAttnFunction.apply( - value, spatial_shapes, level_start_index, sampling_locations, - attention_weights, self.im2col_step) - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights) - - output = self.output_proj(output) - - if not self.batch_first: - # (num_query, bs ,embed_dims) - output = output.permute(1, 0, 2) - - return self.dropout(output) + identity diff --git a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/text/chinese_bert.py b/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/text/chinese_bert.py deleted file mode 100644 index 8159425df4bf7e577008b22f44e84f3147fdce14..0000000000000000000000000000000000000000 --- a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/text/chinese_bert.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import sys -from transformers import AutoTokenizer, AutoModelForMaskedLM - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") - -models = dict() - - -def get_bert_feature(text, word2ph, device=None): - if ( - sys.platform == "darwin" - and torch.backends.mps.is_available() - and device == "cpu" - ): - device = "mps" - if not device: - device = "cuda" - if device not in models.keys(): - models[device] = AutoModelForMaskedLM.from_pretrained( - "./bert/chinese-roberta-wwm-ext-large" - ).to(device) - with torch.no_grad(): - 
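        # inference only: the hidden states are read out as features, so no gradients are needed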
inputs = tokenizer(text, return_tensors="pt") - for i in inputs: - inputs[i] = inputs[i].to(device) - res = models[device](**inputs, output_hidden_states=True) - res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text) + 2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - return phone_level_feature.T - - -if __name__ == "__main__": - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [ - 1, - 2, - 1, - 2, - 2, - 1, - 2, - 2, - 1, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 2, - 1, - 1, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 1, - ] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) diff --git a/spaces/TabPFN/TabPFNPrediction/TabPFN/priors/flexible_categorical.py b/spaces/TabPFN/TabPFNPrediction/TabPFN/priors/flexible_categorical.py deleted file mode 100644 index c26481cac3b01edeed8c9915cbf16dad6f11e583..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNPrediction/TabPFN/priors/flexible_categorical.py +++ /dev/null @@ -1,286 +0,0 @@ -import time -import random - -import torch -from torch import nn - -from .utils import get_batch_to_dataloader -from utils import normalize_data, nan_handling_missing_for_unknown_reason_value, nan_handling_missing_for_no_reason_value, nan_handling_missing_for_a_reason_value, to_ranking_low_mem, remove_outliers -from .utils import normalize_by_used_features_f, randomize_classes, CategoricalActivation -from .utils import uniform_int_sampler_f - -time_it = False - -class BalancedBinarize(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - return (x > torch.median(x)).float() - -def class_sampler_f(min_, max_): - def s(): - if random.random() > 0.5: - return uniform_int_sampler_f(min_, max_)() - return 2 - return s - -class RegressionNormalized(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - # x has shape (T,B) - - # TODO: Normalize to -1, 1 or gaussian normal - maxima = torch.max(x, 0)[0] - minima = torch.min(x, 0)[0] - norm = (x - minima) / (maxima-minima) - - return norm - -class MulticlassRank(nn.Module): - def __init__(self, num_classes, ordered_p=0.5): - super().__init__() - self.num_classes = class_sampler_f(2, num_classes)() - self.ordered_p = ordered_p - - def forward(self, x): - # x has shape (T,B,H) - - # CAUTION: This samples the same idx in sequence for each class boundary in a batch - class_boundaries = torch.randint(0, x.shape[0], (self.num_classes - 1,)) - class_boundaries = x[class_boundaries].unsqueeze(1) - - d = (x > class_boundaries).sum(axis=0) - - randomized_classes = torch.rand((d.shape[1], )) > self.ordered_p - d[:, randomized_classes] = randomize_classes(d[:, randomized_classes], self.num_classes) - reverse_classes = torch.rand((d.shape[1],)) > 0.5 - d[:, reverse_classes] = self.num_classes - 1 - d[:, reverse_classes] - return d - -class 
MulticlassValue(nn.Module): - def __init__(self, num_classes, ordered_p=0.5): - super().__init__() - self.num_classes = class_sampler_f(2, num_classes)() - self.classes = nn.Parameter(torch.randn(self.num_classes-1), requires_grad=False) - self.ordered_p = ordered_p - - def forward(self, x): - # x has shape (T,B,H) - d = (x > (self.classes.unsqueeze(-1).unsqueeze(-1))).sum(axis=0) - - randomized_classes = torch.rand((d.shape[1],)) > self.ordered_p - d[:, randomized_classes] = randomize_classes(d[:, randomized_classes], self.num_classes) - reverse_classes = torch.rand((d.shape[1],)) > 0.5 - d[:, reverse_classes] = self.num_classes - 1 - d[:, reverse_classes] - return d - -class MulticlassMultiNode(nn.Module): - def __init__(self, num_classes, ordered_p=0.5): - super().__init__() - self.num_classes = class_sampler_f(2, num_classes)() - self.classes = nn.Parameter(torch.randn(num_classes-1), requires_grad=False) - self.alt_multi_class = MulticlassValue(num_classes, ordered_p) - - def forward(self, x): - # x has shape T, B, H - if len(x.shape) == 2: - return self.alt_multi_class(x) - T = 3 - x[torch.isnan(x)] = 0.00001 - d = torch.multinomial(torch.pow(0.00001+torch.sigmoid(x[:, :, 0:self.num_classes]).reshape(-1, self.num_classes), T), 1, replacement=True).reshape(x.shape[0], x.shape[1])#.float() - return d - - -class FlexibleCategorical(torch.nn.Module): - def __init__(self, get_batch, hyperparameters, args): - super(FlexibleCategorical, self).__init__() - - self.h = {k: hyperparameters[k]() if callable(hyperparameters[k]) else hyperparameters[k] for k in - hyperparameters.keys()} - self.args = args - self.args_passed = {**self.args} - self.args_passed.update({'num_features': self.h['num_features_used']}) - self.get_batch = get_batch - - if self.h['num_classes'] == 0: - self.class_assigner = RegressionNormalized() - else: - if self.h['num_classes'] > 1 and not self.h['balanced']: - if self.h['multiclass_type'] == 'rank': - self.class_assigner = MulticlassRank(self.h['num_classes'] - , ordered_p=self.h['output_multiclass_ordered_p'] - ) - elif self.h['multiclass_type'] == 'value': - self.class_assigner = MulticlassValue(self.h['num_classes'] - , ordered_p=self.h['output_multiclass_ordered_p'] - ) - elif self.h['multiclass_type'] == 'multi_node': - self.class_assigner = MulticlassMultiNode(self.h['num_classes']) - else: - raise ValueError("Unknow Multiclass type") - elif self.h['num_classes'] == 2 and self.h['balanced']: - self.class_assigner = BalancedBinarize() - elif self.h['num_classes'] > 2 and self.h['balanced']: - raise NotImplementedError("Balanced multiclass training is not possible") - - def drop_for_reason(self, x, v): - nan_prob_sampler = CategoricalActivation(ordered_p=0.0 - , categorical_p=1.0 - , keep_activation_size=False, - num_classes_sampler=lambda: 20) - d = nan_prob_sampler(x) - # TODO: Make a different ordering for each activation - x[d < torch.rand((1,), device=x.device) * 20 * self.h['nan_prob_no_reason'] * random.random()] = v - return x - - def drop_for_no_reason(self, x, v): - x[torch.rand(x.shape, device=self.args['device']) < random.random() * self.h['nan_prob_no_reason']] = v - return x - - def forward(self, batch_size): - start = time.time() - x, y, y_ = self.get_batch(hyperparameters=self.h, **self.args_passed) - if time_it: - print('Flex Forward Block 1', round(time.time() - start, 3)) - - start = time.time() - - if self.h['nan_prob_no_reason']+self.h['nan_prob_a_reason']+self.h['nan_prob_unknown_reason'] > 0 and random.random() > 0.5: # Only one out of two 
datasets should have nans - if random.random() < self.h['nan_prob_no_reason']: # Missing for no reason - x = self.drop_for_no_reason(x, nan_handling_missing_for_no_reason_value(self.h['set_value_to_nan'])) - - if self.h['nan_prob_a_reason'] > 0 and random.random() > 0.5: # Missing for a reason - x = self.drop_for_reason(x, nan_handling_missing_for_a_reason_value(self.h['set_value_to_nan'])) - - if self.h['nan_prob_unknown_reason'] > 0: # Missing for unknown reason and random.random() > 0.5 - if random.random() < self.h['nan_prob_unknown_reason_reason_prior']: - x = self.drop_for_no_reason(x, nan_handling_missing_for_unknown_reason_value(self.h['set_value_to_nan'])) - else: - x = self.drop_for_reason(x, nan_handling_missing_for_unknown_reason_value(self.h['set_value_to_nan'])) - - # Categorical features - if 'categorical_feature_p' in self.h and random.random() < self.h['categorical_feature_p']: - p = random.random() - for col in range(x.shape[2]): - num_unique_features = max(round(random.gammavariate(1,10)),2) - m = MulticlassRank(num_unique_features, ordered_p=0.3) - if random.random() < p: - x[:, :, col] = m(x[:, :, col]) - - if time_it: - print('Flex Forward Block 2', round(time.time() - start, 3)) - start = time.time() - - if self.h['normalize_to_ranking']: - x = to_ranking_low_mem(x) - else: - x = remove_outliers(x) - x, y = normalize_data(x), normalize_data(y) - - if time_it: - print('Flex Forward Block 3', round(time.time() - start, 3)) - start = time.time() - - # Cast to classification if enabled - y = self.class_assigner(y).float() - - if time_it: - print('Flex Forward Block 4', round(time.time() - start, 3)) - start = time.time() - if self.h['normalize_by_used_features']: - x = normalize_by_used_features_f(x, self.h['num_features_used'], self.args['num_features'], normalize_with_sqrt=self.h.get('normalize_with_sqrt',False)) - if time_it: - print('Flex Forward Block 5', round(time.time() - start, 3)) - - start = time.time() - # Append empty features if enabled - x = torch.cat( - [x, torch.zeros((x.shape[0], x.shape[1], self.args['num_features'] - self.h['num_features_used']), - device=self.args['device'])], -1) - if time_it: - print('Flex Forward Block 6', round(time.time() - start, 3)) - - if torch.isnan(y).sum() > 0: - print('Nans in target!') - - if self.h['check_is_compatible']: - for b in range(y.shape[1]): - is_compatible, N = False, 0 - while not is_compatible and N < 10: - targets_in_train = torch.unique(y[:self.args['single_eval_pos'], b], sorted=True) - targets_in_eval = torch.unique(y[self.args['single_eval_pos']:, b], sorted=True) - - is_compatible = len(targets_in_train) == len(targets_in_eval) and ( - targets_in_train == targets_in_eval).all() and len(targets_in_train) > 1 - - if not is_compatible: - randperm = torch.randperm(x.shape[0]) - x[:, b], y[:, b] = x[randperm, b], y[randperm, b] - N = N + 1 - if not is_compatible: - if not is_compatible: - # todo check that it really does this and how many together - y[:, b] = -100 # Relies on CE having `ignore_index` set to -100 (default) - - if self.h['normalize_labels']: - #assert self.h['output_multiclass_ordered_p'] == 0., "normalize_labels destroys ordering of labels anyways." 
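-            # Per-dataset (batch column) label normalization: entries already set to
-            # -100 are cross-entropy "ignore" targets and are left untouched unless
-            # normalize_ignore_label_too is set. Every other label is replaced by its
-            # rank among the unique labels, giving a dense 0..K-1 class range, and
-            # rotate_normalized_labels then applies a random cyclic shift so the
-            # numeric class ids carry no ordering information.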
- for b in range(y.shape[1]): - valid_labels = y[:,b] != -100 - if self.h.get('normalize_ignore_label_too', False): - valid_labels[:] = True - y[valid_labels, b] = (y[valid_labels, b] > y[valid_labels, b].unique().unsqueeze(1)).sum(axis=0).unsqueeze(0).float() - - if y[valid_labels, b].numel() != 0 and self.h.get('rotate_normalized_labels', True): - num_classes_float = (y[valid_labels, b].max() + 1).cpu() - num_classes = num_classes_float.int().item() - assert num_classes == num_classes_float.item() - random_shift = torch.randint(0, num_classes, (1,), device=self.args['device']) - y[valid_labels, b] = (y[valid_labels, b] + random_shift) % num_classes - - return x, y, y # x.shape = (T,B,H) - -import torch.cuda as cutorch - -@torch.no_grad() -def get_batch(batch_size, seq_len, num_features, get_batch, device, hyperparameters=None, batch_size_per_gp_sample=None, **kwargs): - batch_size_per_gp_sample = batch_size_per_gp_sample or (min(32, batch_size)) - num_models = batch_size // batch_size_per_gp_sample - assert num_models > 0, f'Batch size ({batch_size}) is too small for batch_size_per_gp_sample ({batch_size_per_gp_sample})' - assert num_models * batch_size_per_gp_sample == batch_size, f'Batch size ({batch_size}) not divisible by batch_size_per_gp_sample ({batch_size_per_gp_sample})' - - # Sample one seq_len for entire batch - seq_len = hyperparameters['seq_len_used']() if callable(hyperparameters['seq_len_used']) else seq_len - - args = {'device': device, 'seq_len': seq_len, 'num_features': num_features, 'batch_size': batch_size_per_gp_sample, **kwargs} - - models = [FlexibleCategorical(get_batch, hyperparameters, args).to(device) for _ in range(num_models)] - - sample = [model(batch_size=batch_size_per_gp_sample) for model in models] - - x, y, y_ = zip(*sample) - x, y, y_ = torch.cat(x, 1).detach(), torch.cat(y, 1).detach(), torch.cat(y_, 1).detach() - - return x, y, y_ - -# num_features_used = num_features_used_sampler() -# prior_outputscale = prior_outputscale_sampler() -# prior_lengthscale = prior_lengthscale_sampler() -# -# x, sample = normalize_data(x), normalize_data(sample) -# -# if is_binary_classification: -# sample = (sample > torch.median(sample, dim=0)[0]).float() -# -# if normalize_by_used_features: -# x = normalize_by_used_features_f(x, num_features_used, num_features) -# -# # # if is_binary_classification and order_y: -# # # x, sample = order_by_y(x, sample) -# # -# # Append empty features if enabled -# x = torch.cat([x, torch.zeros((x.shape[0], x.shape[1], num_features - num_features_used), device=device)], -1) - -DataLoader = get_batch_to_dataloader(get_batch) \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/wheel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/wheel.py deleted file mode 100644 index e5e3f34ed81453ce759c6ade8b2def733e9063e2..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/wheel.py +++ /dev/null @@ -1,136 +0,0 @@ -"""Support functions for working with wheel files. 
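-
-Includes helpers to locate a wheel's .dist-info directory, read and parse its
-WHEEL metadata, and check Wheel-Version compatibility.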
-""" - -import logging -from email.message import Message -from email.parser import Parser -from typing import Tuple -from zipfile import BadZipFile, ZipFile - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.exceptions import UnsupportedWheel - -VERSION_COMPATIBLE = (1, 0) - - -logger = logging.getLogger(__name__) - - -def parse_wheel(wheel_zip: ZipFile, name: str) -> Tuple[str, Message]: - """Extract information from the provided wheel, ensuring it meets basic - standards. - - Returns the name of the .dist-info directory and the parsed WHEEL metadata. - """ - try: - info_dir = wheel_dist_info_dir(wheel_zip, name) - metadata = wheel_metadata(wheel_zip, info_dir) - version = wheel_version(metadata) - except UnsupportedWheel as e: - raise UnsupportedWheel("{} has an invalid wheel, {}".format(name, str(e))) - - check_compatibility(version, name) - - return info_dir, metadata - - -def wheel_dist_info_dir(source: ZipFile, name: str) -> str: - """Returns the name of the contained .dist-info directory. - - Raises AssertionError or UnsupportedWheel if not found, >1 found, or - it doesn't match the provided name. - """ - # Zip file path separators must be / - subdirs = {p.split("/", 1)[0] for p in source.namelist()} - - info_dirs = [s for s in subdirs if s.endswith(".dist-info")] - - if not info_dirs: - raise UnsupportedWheel(".dist-info directory not found") - - if len(info_dirs) > 1: - raise UnsupportedWheel( - "multiple .dist-info directories found: {}".format(", ".join(info_dirs)) - ) - - info_dir = info_dirs[0] - - info_dir_name = canonicalize_name(info_dir) - canonical_name = canonicalize_name(name) - if not info_dir_name.startswith(canonical_name): - raise UnsupportedWheel( - ".dist-info directory {!r} does not start with {!r}".format( - info_dir, canonical_name - ) - ) - - return info_dir - - -def read_wheel_metadata_file(source: ZipFile, path: str) -> bytes: - try: - return source.read(path) - # BadZipFile for general corruption, KeyError for missing entry, - # and RuntimeError for password-protected files - except (BadZipFile, KeyError, RuntimeError) as e: - raise UnsupportedWheel(f"could not read {path!r} file: {e!r}") - - -def wheel_metadata(source: ZipFile, dist_info_dir: str) -> Message: - """Return the WHEEL metadata of an extracted wheel, if possible. - Otherwise, raise UnsupportedWheel. - """ - path = f"{dist_info_dir}/WHEEL" - # Zip file path separators must be / - wheel_contents = read_wheel_metadata_file(source, path) - - try: - wheel_text = wheel_contents.decode() - except UnicodeDecodeError as e: - raise UnsupportedWheel(f"error decoding {path!r}: {e!r}") - - # FeedParser (used by Parser) does not raise any exceptions. The returned - # message may have .defects populated, but for backwards-compatibility we - # currently ignore them. - return Parser().parsestr(wheel_text) - - -def wheel_version(wheel_data: Message) -> Tuple[int, ...]: - """Given WHEEL metadata, return the parsed Wheel-Version. - Otherwise, raise UnsupportedWheel. - """ - version_text = wheel_data["Wheel-Version"] - if version_text is None: - raise UnsupportedWheel("WHEEL is missing Wheel-Version") - - version = version_text.strip() - - try: - return tuple(map(int, version.split("."))) - except ValueError: - raise UnsupportedWheel(f"invalid Wheel-Version: {version!r}") - - -def check_compatibility(version: Tuple[int, ...], name: str) -> None: - """Raises errors or warns if called with an incompatible Wheel-Version. 
- - pip should refuse to install a Wheel-Version that's a major series - ahead of what it's compatible with (e.g 2.0 > 1.1); and warn when - installing a version only minor version ahead (e.g 1.2 > 1.1). - - version: a 2-tuple representing a Wheel-Version (Major, Minor) - name: name of wheel or package to raise exception about - - :raises UnsupportedWheel: when an incompatible Wheel-Version is given - """ - if version[0] > VERSION_COMPATIBLE[0]: - raise UnsupportedWheel( - "{}'s Wheel-Version ({}) is not compatible with this version " - "of pip".format(name, ".".join(map(str, version))) - ) - elif version > VERSION_COMPATIBLE: - logger.warning( - "Installing from a newer Wheel-Version (%s)", - ".".join(map(str, version)), - ) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/cookies.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/cookies.py deleted file mode 100644 index bf54ab237e410603061b8cec8fd195912d3cfb08..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/cookies.py +++ /dev/null @@ -1,561 +0,0 @@ -""" -requests.cookies -~~~~~~~~~~~~~~~~ - -Compatibility code to be able to use `cookielib.CookieJar` with requests. - -requests.utils imports from here, so be careful with imports. -""" - -import calendar -import copy -import time - -from ._internal_utils import to_native_string -from .compat import Morsel, MutableMapping, cookielib, urlparse, urlunparse - -try: - import threading -except ImportError: - import dummy_threading as threading - - -class MockRequest: - """Wraps a `requests.Request` to mimic a `urllib2.Request`. - - The code in `cookielib.CookieJar` expects this interface in order to correctly - manage cookie policies, i.e., determine whether a cookie can be set, given the - domains of the request and the cookie. - - The original request object is read-only. The client is responsible for collecting - the new headers via `get_new_headers()` and interpreting them appropriately. You - probably want `get_cookie_header`, defined below. 
- """ - - def __init__(self, request): - self._r = request - self._new_headers = {} - self.type = urlparse(self._r.url).scheme - - def get_type(self): - return self.type - - def get_host(self): - return urlparse(self._r.url).netloc - - def get_origin_req_host(self): - return self.get_host() - - def get_full_url(self): - # Only return the response's URL if the user hadn't set the Host - # header - if not self._r.headers.get("Host"): - return self._r.url - # If they did set it, retrieve it and reconstruct the expected domain - host = to_native_string(self._r.headers["Host"], encoding="utf-8") - parsed = urlparse(self._r.url) - # Reconstruct the URL as we expect it - return urlunparse( - [ - parsed.scheme, - host, - parsed.path, - parsed.params, - parsed.query, - parsed.fragment, - ] - ) - - def is_unverifiable(self): - return True - - def has_header(self, name): - return name in self._r.headers or name in self._new_headers - - def get_header(self, name, default=None): - return self._r.headers.get(name, self._new_headers.get(name, default)) - - def add_header(self, key, val): - """cookielib has no legitimate use for this method; add it back if you find one.""" - raise NotImplementedError( - "Cookie headers should be added with add_unredirected_header()" - ) - - def add_unredirected_header(self, name, value): - self._new_headers[name] = value - - def get_new_headers(self): - return self._new_headers - - @property - def unverifiable(self): - return self.is_unverifiable() - - @property - def origin_req_host(self): - return self.get_origin_req_host() - - @property - def host(self): - return self.get_host() - - -class MockResponse: - """Wraps a `httplib.HTTPMessage` to mimic a `urllib.addinfourl`. - - ...what? Basically, expose the parsed HTTP headers from the server response - the way `cookielib` expects to see them. - """ - - def __init__(self, headers): - """Make a MockResponse for `cookielib` to read. - - :param headers: a httplib.HTTPMessage or analogous carrying the headers - """ - self._headers = headers - - def info(self): - return self._headers - - def getheaders(self, name): - self._headers.getheaders(name) - - -def extract_cookies_to_jar(jar, request, response): - """Extract the cookies from the response into a CookieJar. - - :param jar: cookielib.CookieJar (not necessarily a RequestsCookieJar) - :param request: our own requests.Request object - :param response: urllib3.HTTPResponse object - """ - if not (hasattr(response, "_original_response") and response._original_response): - return - # the _original_response field is the wrapped httplib.HTTPResponse object, - req = MockRequest(request) - # pull out the HTTPMessage with the headers and put it in the mock: - res = MockResponse(response._original_response.msg) - jar.extract_cookies(res, req) - - -def get_cookie_header(jar, request): - """ - Produce an appropriate Cookie header string to be sent with `request`, or None. - - :rtype: str - """ - r = MockRequest(request) - jar.add_cookie_header(r) - return r.get_new_headers().get("Cookie") - - -def remove_cookie_by_name(cookiejar, name, domain=None, path=None): - """Unsets a cookie by name, by default over all domains and paths. - - Wraps CookieJar.clear(), is O(n). 
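-
-    :param cookiejar: the ``CookieJar`` to remove the cookie from
-    :param name: name of the cookie to remove
-    :param domain: (optional) only clear cookies matching this domain
-    :param path: (optional) only clear cookies matching this path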
- """ - clearables = [] - for cookie in cookiejar: - if cookie.name != name: - continue - if domain is not None and domain != cookie.domain: - continue - if path is not None and path != cookie.path: - continue - clearables.append((cookie.domain, cookie.path, cookie.name)) - - for domain, path, name in clearables: - cookiejar.clear(domain, path, name) - - -class CookieConflictError(RuntimeError): - """There are two cookies that meet the criteria specified in the cookie jar. - Use .get and .set and include domain and path args in order to be more specific. - """ - - -class RequestsCookieJar(cookielib.CookieJar, MutableMapping): - """Compatibility class; is a cookielib.CookieJar, but exposes a dict - interface. - - This is the CookieJar we create by default for requests and sessions that - don't specify one, since some clients may expect response.cookies and - session.cookies to support dict operations. - - Requests does not use the dict interface internally; it's just for - compatibility with external client code. All requests code should work - out of the box with externally provided instances of ``CookieJar``, e.g. - ``LWPCookieJar`` and ``FileCookieJar``. - - Unlike a regular CookieJar, this class is pickleable. - - .. warning:: dictionary operations that are normally O(1) may be O(n). - """ - - def get(self, name, default=None, domain=None, path=None): - """Dict-like get() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - - .. warning:: operation is O(n), not O(1). - """ - try: - return self._find_no_duplicates(name, domain, path) - except KeyError: - return default - - def set(self, name, value, **kwargs): - """Dict-like set() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - """ - # support client code that unsets cookies by assignment of a None value: - if value is None: - remove_cookie_by_name( - self, name, domain=kwargs.get("domain"), path=kwargs.get("path") - ) - return - - if isinstance(value, Morsel): - c = morsel_to_cookie(value) - else: - c = create_cookie(name, value, **kwargs) - self.set_cookie(c) - return c - - def iterkeys(self): - """Dict-like iterkeys() that returns an iterator of names of cookies - from the jar. - - .. seealso:: itervalues() and iteritems(). - """ - for cookie in iter(self): - yield cookie.name - - def keys(self): - """Dict-like keys() that returns a list of names of cookies from the - jar. - - .. seealso:: values() and items(). - """ - return list(self.iterkeys()) - - def itervalues(self): - """Dict-like itervalues() that returns an iterator of values of cookies - from the jar. - - .. seealso:: iterkeys() and iteritems(). - """ - for cookie in iter(self): - yield cookie.value - - def values(self): - """Dict-like values() that returns a list of values of cookies from the - jar. - - .. seealso:: keys() and items(). - """ - return list(self.itervalues()) - - def iteritems(self): - """Dict-like iteritems() that returns an iterator of name-value tuples - from the jar. - - .. seealso:: iterkeys() and itervalues(). - """ - for cookie in iter(self): - yield cookie.name, cookie.value - - def items(self): - """Dict-like items() that returns a list of name-value tuples from the - jar. Allows client-code to call ``dict(RequestsCookieJar)`` and get a - vanilla python dict of key value pairs. - - .. seealso:: keys() and values(). 
- """ - return list(self.iteritems()) - - def list_domains(self): - """Utility method to list all the domains in the jar.""" - domains = [] - for cookie in iter(self): - if cookie.domain not in domains: - domains.append(cookie.domain) - return domains - - def list_paths(self): - """Utility method to list all the paths in the jar.""" - paths = [] - for cookie in iter(self): - if cookie.path not in paths: - paths.append(cookie.path) - return paths - - def multiple_domains(self): - """Returns True if there are multiple domains in the jar. - Returns False otherwise. - - :rtype: bool - """ - domains = [] - for cookie in iter(self): - if cookie.domain is not None and cookie.domain in domains: - return True - domains.append(cookie.domain) - return False # there is only one domain in jar - - def get_dict(self, domain=None, path=None): - """Takes as an argument an optional domain and path and returns a plain - old Python dict of name-value pairs of cookies that meet the - requirements. - - :rtype: dict - """ - dictionary = {} - for cookie in iter(self): - if (domain is None or cookie.domain == domain) and ( - path is None or cookie.path == path - ): - dictionary[cookie.name] = cookie.value - return dictionary - - def __contains__(self, name): - try: - return super().__contains__(name) - except CookieConflictError: - return True - - def __getitem__(self, name): - """Dict-like __getitem__() for compatibility with client code. Throws - exception if there are more than one cookie with name. In that case, - use the more explicit get() method instead. - - .. warning:: operation is O(n), not O(1). - """ - return self._find_no_duplicates(name) - - def __setitem__(self, name, value): - """Dict-like __setitem__ for compatibility with client code. Throws - exception if there is already a cookie of that name in the jar. In that - case, use the more explicit set() method instead. - """ - self.set(name, value) - - def __delitem__(self, name): - """Deletes a cookie given a name. Wraps ``cookielib.CookieJar``'s - ``remove_cookie_by_name()``. - """ - remove_cookie_by_name(self, name) - - def set_cookie(self, cookie, *args, **kwargs): - if ( - hasattr(cookie.value, "startswith") - and cookie.value.startswith('"') - and cookie.value.endswith('"') - ): - cookie.value = cookie.value.replace('\\"', "") - return super().set_cookie(cookie, *args, **kwargs) - - def update(self, other): - """Updates this jar with cookies from another CookieJar or dict-like""" - if isinstance(other, cookielib.CookieJar): - for cookie in other: - self.set_cookie(copy.copy(cookie)) - else: - super().update(other) - - def _find(self, name, domain=None, path=None): - """Requests uses this method internally to get cookie values. - - If there are conflicting cookies, _find arbitrarily chooses one. - See _find_no_duplicates if you want an exception thrown if there are - conflicting cookies. - - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :return: cookie.value - """ - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - return cookie.value - - raise KeyError(f"name={name!r}, domain={domain!r}, path={path!r}") - - def _find_no_duplicates(self, name, domain=None, path=None): - """Both ``__get_item__`` and ``get`` call this function: it's never - used elsewhere in Requests. 
- - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :raises KeyError: if cookie is not found - :raises CookieConflictError: if there are multiple cookies - that match name and optionally domain and path - :return: cookie.value - """ - toReturn = None - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - if toReturn is not None: - # if there are multiple cookies that meet passed in criteria - raise CookieConflictError( - f"There are multiple cookies with name, {name!r}" - ) - # we will eventually return this as long as no cookie conflict - toReturn = cookie.value - - if toReturn: - return toReturn - raise KeyError(f"name={name!r}, domain={domain!r}, path={path!r}") - - def __getstate__(self): - """Unlike a normal CookieJar, this class is pickleable.""" - state = self.__dict__.copy() - # remove the unpickleable RLock object - state.pop("_cookies_lock") - return state - - def __setstate__(self, state): - """Unlike a normal CookieJar, this class is pickleable.""" - self.__dict__.update(state) - if "_cookies_lock" not in self.__dict__: - self._cookies_lock = threading.RLock() - - def copy(self): - """Return a copy of this RequestsCookieJar.""" - new_cj = RequestsCookieJar() - new_cj.set_policy(self.get_policy()) - new_cj.update(self) - return new_cj - - def get_policy(self): - """Return the CookiePolicy instance used.""" - return self._policy - - -def _copy_cookie_jar(jar): - if jar is None: - return None - - if hasattr(jar, "copy"): - # We're dealing with an instance of RequestsCookieJar - return jar.copy() - # We're dealing with a generic CookieJar instance - new_jar = copy.copy(jar) - new_jar.clear() - for cookie in jar: - new_jar.set_cookie(copy.copy(cookie)) - return new_jar - - -def create_cookie(name, value, **kwargs): - """Make a cookie from underspecified parameters. - - By default, the pair of `name` and `value` will be set for the domain '' - and sent on every request (this is sometimes called a "supercookie"). 
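-
-    :param name: cookie name
-    :param value: cookie value
-    :param kwargs: optional cookie attributes (e.g. ``domain``, ``path``, ``secure``,
-        ``expires``, ``rest``); unexpected keyword arguments raise ``TypeError``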
- """ - result = { - "version": 0, - "name": name, - "value": value, - "port": None, - "domain": "", - "path": "/", - "secure": False, - "expires": None, - "discard": True, - "comment": None, - "comment_url": None, - "rest": {"HttpOnly": None}, - "rfc2109": False, - } - - badargs = set(kwargs) - set(result) - if badargs: - raise TypeError( - f"create_cookie() got unexpected keyword arguments: {list(badargs)}" - ) - - result.update(kwargs) - result["port_specified"] = bool(result["port"]) - result["domain_specified"] = bool(result["domain"]) - result["domain_initial_dot"] = result["domain"].startswith(".") - result["path_specified"] = bool(result["path"]) - - return cookielib.Cookie(**result) - - -def morsel_to_cookie(morsel): - """Convert a Morsel object into a Cookie containing the one k/v pair.""" - - expires = None - if morsel["max-age"]: - try: - expires = int(time.time() + int(morsel["max-age"])) - except ValueError: - raise TypeError(f"max-age: {morsel['max-age']} must be integer") - elif morsel["expires"]: - time_template = "%a, %d-%b-%Y %H:%M:%S GMT" - expires = calendar.timegm(time.strptime(morsel["expires"], time_template)) - return create_cookie( - comment=morsel["comment"], - comment_url=bool(morsel["comment"]), - discard=False, - domain=morsel["domain"], - expires=expires, - name=morsel.key, - path=morsel["path"], - port=None, - rest={"HttpOnly": morsel["httponly"]}, - rfc2109=False, - secure=bool(morsel["secure"]), - value=morsel.value, - version=morsel["version"] or 0, - ) - - -def cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True): - """Returns a CookieJar from a key/value dictionary. - - :param cookie_dict: Dict of key/values to insert into CookieJar. - :param cookiejar: (optional) A cookiejar to add the cookies to. - :param overwrite: (optional) If False, will not replace cookies - already in the jar with new ones. - :rtype: CookieJar - """ - if cookiejar is None: - cookiejar = RequestsCookieJar() - - if cookie_dict is not None: - names_from_jar = [cookie.name for cookie in cookiejar] - for name in cookie_dict: - if overwrite or (name not in names_from_jar): - cookiejar.set_cookie(create_cookie(name, cookie_dict[name])) - - return cookiejar - - -def merge_cookies(cookiejar, cookies): - """Add cookies to cookiejar and returns a merged CookieJar. - - :param cookiejar: CookieJar object to add the cookies to. - :param cookies: Dictionary or CookieJar object to be added. 
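-    :raises ValueError: if ``cookiejar`` is not a ``cookielib.CookieJar``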
- :rtype: CookieJar - """ - if not isinstance(cookiejar, cookielib.CookieJar): - raise ValueError("You can only merge into CookieJar") - - if isinstance(cookies, dict): - cookiejar = cookiejar_from_dict(cookies, cookiejar=cookiejar, overwrite=False) - elif isinstance(cookies, cookielib.CookieJar): - try: - cookiejar.update(cookies) - except AttributeError: - for cookie_in_jar in cookies: - cookiejar.set_cookie(cookie_in_jar) - - return cookiejar diff --git a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/models_onnx.py b/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: 
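-            # When an F0/pitch sequence is supplied, its embedding is summed with the
-            # projected phone (content) features before the attention encoder.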
- x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) 
in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) 
* 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, 
padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - 
self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if 
use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Toaster496/HugChatWithPlugin/app.py b/spaces/Toaster496/HugChatWithPlugin/app.py deleted file mode 100644 index f435febd1927e0a6e4fe13eb1f1e8bae52ef6e92..0000000000000000000000000000000000000000 --- a/spaces/Toaster496/HugChatWithPlugin/app.py +++ /dev/null @@ -1,997 +0,0 @@ -import io -import random -import shutil -import string -from zipfile import ZipFile -import streamlit as st -from streamlit_extras.colored_header import colored_header -from streamlit_extras.add_vertical_space import add_vertical_space -from hugchat import hugchat -from hugchat.login import Login -import pandas as pd -import asyncio -loop = asyncio.new_event_loop() -asyncio.set_event_loop(loop) -import sketch -from langchain.text_splitter import CharacterTextSplitter -from promptTemplate import prompt4conversation, prompt4Data, prompt4Code, prompt4Context, prompt4Audio, prompt4YT -from promptTemplate import prompt4conversationInternet -# FOR DEVELOPMENT NEW PLUGIN -# from promptTemplate import yourPLUGIN -from exportchat import export_chat -from langchain.vectorstores import Chroma -from langchain.chains import RetrievalQA -from HuggingChatAPI import HuggingChat -from langchain.embeddings import HuggingFaceHubEmbeddings -from youtube_transcript_api import YouTubeTranscriptApi -import requests -from bs4 import BeautifulSoup -import speech_recognition as sr -import pdfplumber -import docx2txt -from duckduckgo_search import DDGS -from itertools import islice -from os import path -from pydub import AudioSegment -import os - - -hf = None -repo_id = "sentence-transformers/all-mpnet-base-v2" - -if 'hf_token' in st.session_state: - if 'hf' not in st.session_state: - hf = HuggingFaceHubEmbeddings( - repo_id=repo_id, - task="feature-extraction", - huggingfacehub_api_token=st.session_state['hf_token'], - ) # type: ignore - st.session_state['hf'] = hf - - - -st.set_page_config( - page_title="Talk with ToastGPT💬", page_icon="✅", layout="wide", initial_sidebar_state="expanded" -) - -st.markdown('', unsafe_allow_html=True) - - - - - - -# Sidebar contents for logIN, choose plugin, and export chat -with st.sidebar: - st.title("🤗💬 Product Description Masterpiece") - - if 'hf_email' not in st.session_state or 'hf_pass' not in st.session_state: - with st.expander("ℹ️ Login in Hugging Face", expanded=True): - st.write("⚠️ You need to login 
in Hugging Face to use this app. You can register [here](https://huggingface.co/join).") - st.header('Hugging Face Login') - hf_email = st.text_input('Enter E-mail:') - hf_pass = st.text_input('Enter password:', type='password') - hf_token = st.text_input('Enter API Token:', type='password') - if st.button('Login 🚀') and hf_email and hf_pass and hf_token: - with st.spinner('🚀 Logging in...'): - st.session_state['hf_email'] = hf_email - st.session_state['hf_pass'] = hf_pass - st.session_state['hf_token'] = hf_token - - try: - - sign = Login(st.session_state['hf_email'], st.session_state['hf_pass']) - cookies = sign.login() - chatbot = hugchat.ChatBot(cookies=cookies.get_dict()) - except Exception as e: - st.error(e) - st.info("⚠️ Please check your credentials and try again.") - st.error("⚠️ dont abuse the ToastGPT") - st.warning("⚠️ If you don't have an account, you can register [here](https://huggingface.co/join).") - from time import sleep - sleep(3) - del st.session_state['hf_email'] - del st.session_state['hf_pass'] - del st.session_state['hf_token'] - st.experimental_rerun() - - st.session_state['chatbot'] = chatbot - - id = st.session_state['chatbot'].new_conversation() - st.session_state['chatbot'].change_conversation(id) - - st.session_state['conversation'] = id - # Generate empty lists for generated and past. - ## generated stores AI generated responses - if 'generated' not in st.session_state: - st.session_state['generated'] = ["I'm **ToastGPT**, How may I help you ? "] - ## past stores User's questions - if 'past' not in st.session_state: - st.session_state['past'] = ['Hi!'] - - st.session_state['LLM'] = HuggingChat(email=st.session_state['hf_email'], psw=st.session_state['hf_pass']) - - st.experimental_rerun() - - - else: - with st.expander("ℹ️ Advanced Settings"): - #temperature: Optional[float]. Default is 0.5 - #top_p: Optional[float]. Default is 0.95 - #repetition_penalty: Optional[float]. Default is 1.2 - #top_k: Optional[int]. Default is 50 - #max_new_tokens: Optional[int]. 
Default is 1024 - - temperature = st.slider('🌡 Temperature', min_value=0.1, max_value=1.0, value=0.5, step=0.01) - top_p = st.slider('💡 Top P', min_value=0.1, max_value=1.0, value=0.95, step=0.01) - repetition_penalty = st.slider('🖌 Repetition Penalty', min_value=1.0, max_value=2.0, value=1.2, step=0.01) - top_k = st.slider('❄️ Top K', min_value=1, max_value=100, value=50, step=1) - max_new_tokens = st.slider('📝 Max New Tokens', min_value=1, max_value=1024, value=1024, step=1) - - - # FOR DEVELOPMENT NEW PLUGIN YOU MUST ADD IT HERE INTO THE LIST - # YOU NEED ADD THE NAME AT 144 LINE - - #plugins for conversation - plugins = ["🛑 No PLUGIN","🌐 Web Search", "🔗 Talk with Website" , "📋 Talk with your DATA", "📝 Talk with your DOCUMENTS", "🎧 Talk with your AUDIO", "🎥 Talk with YT video", "🧠 GOD MODE" ,"💾 Upload saved VectorStore"] - if 'plugin' not in st.session_state: - st.session_state['plugin'] = st.selectbox('🔌 Plugins', plugins, index=0) - else: - if st.session_state['plugin'] == "🛑 No PLUGIN": - st.session_state['plugin'] = st.selectbox('🔌 Plugins', plugins, index=plugins.index(st.session_state['plugin'])) - - -# FOR DEVELOPMENT NEW PLUGIN FOLLOW THIS TEMPLATE -# PLUGIN TEMPLATE -# if st.session_state['plugin'] == "🔌 PLUGIN NAME" and 'PLUGIN NAME' not in st.session_state: -# # PLUGIN SETTINGS -# with st.expander("🔌 PLUGIN NAME Settings", expanded=True): -# if 'PLUGIN NAME' not in st.session_state or st.session_state['PLUGIN NAME'] == False: -# # PLUGIN CODE -# st.session_state['PLUGIN NAME'] = True -# elif st.session_state['PLUGIN NAME'] == True: -# # PLUGIN CODE -# if st.button('🔌 Disable PLUGIN NAME'): -# st.session_state['plugin'] = "🛑 No PLUGIN" -# st.session_state['PLUGIN NAME'] = False -# del ALL SESSION STATE VARIABLES RELATED TO PLUGIN -# st.experimental_rerun() -# # PLUGIN UPLOADER -# if st.session_state['PLUGIN NAME'] == True: -# with st.expander("🔌 PLUGIN NAME Uploader", expanded=True): -# # PLUGIN UPLOADER CODE -# load file -# if load file and st.button('🔌 Upload PLUGIN NAME'): -# qa = RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) -# st.session_state['PLUGIN DB'] = qa -# st.experimental_rerun() -# - - - -# WEB SEARCH PLUGIN - if st.session_state['plugin'] == "🌐 Web Search" and 'web_search' not in st.session_state: - # web search settings - with st.expander("🌐 Web Search Settings", expanded=True): - if 'web_search' not in st.session_state or st.session_state['web_search'] == False: - reg = ['us-en', 'uk-en', 'it-it'] - sf = ['on', 'moderate', 'off'] - tl = ['d', 'w', 'm', 'y'] - if 'region' not in st.session_state: - st.session_state['region'] = st.selectbox('🗺 Region', reg, index=1) - else: - st.session_state['region'] = st.selectbox('🗺 Region', reg, index=reg.index(st.session_state['region'])) - if 'safesearch' not in st.session_state: - st.session_state['safesearch'] = st.selectbox('🚨 Safe Search', sf, index=1) - else: - st.session_state['safesearch'] = st.selectbox('🚨 Safe Search', sf, index=sf.index(st.session_state['safesearch'])) - if 'timelimit' not in st.session_state: - st.session_state['timelimit'] = st.selectbox('📅 Time Limit', tl, index=1) - else: - st.session_state['timelimit'] = st.selectbox('📅 Time Limit', tl, index=tl.index(st.session_state['timelimit'])) - if 'max_results' not in st.session_state: - st.session_state['max_results'] = st.slider('📊 Max Results', min_value=1, max_value=5, value=2, step=1) - else: - st.session_state['max_results'] = st.slider('📊 Max Results', 
min_value=1, max_value=5, value=st.session_state['max_results'], step=1) - if st.button('🌐 Save change'): - st.session_state['web_search'] = "True" - st.experimental_rerun() - - elif st.session_state['plugin'] == "🌐 Web Search" and st.session_state['web_search'] == 'True': - with st.expander("🌐 Web Search Settings", expanded=True): - st.write('🚀 Web Search is enabled') - st.write('🗺 Region: ', st.session_state['region']) - st.write('🚨 Safe Search: ', st.session_state['safesearch']) - st.write('📅 Time Limit: ', st.session_state['timelimit']) - if st.button('🌐🛑 Disable Web Search'): - del st.session_state['web_search'] - del st.session_state['region'] - del st.session_state['safesearch'] - del st.session_state['timelimit'] - del st.session_state['max_results'] - del st.session_state['plugin'] - st.experimental_rerun() - -# GOD MODE PLUGIN - if st.session_state['plugin'] == "🧠 GOD MODE" and 'god_mode' not in st.session_state: - with st.expander("🧠 GOD MODE Settings", expanded=True): - if 'god_mode' not in st.session_state or st.session_state['god_mode'] == False: - topic = st.text_input('🔎 Topic', "What is ToastGPT?") - web_result = st.checkbox('🌐 Web Search', value=True, disabled=True) - yt_result = st.checkbox('🎥 YT Search', value=True, disabled=True) - website_result = st.checkbox('🔗 Website Search', value=True, disabled=True) - deep_of_search = st.slider('📊 Deep of Search', min_value=1, max_value=100, value=2, step=1) - if st.button('🧠✅ Give knowledge to the model'): - full_text = [] - links = [] - news = [] - yt_ids = [] - source = [] - if web_result == True: - internet_result = "" - internet_answer = "" - with DDGS() as ddgs: - with st.spinner('🌐 Searching on the web...'): - ddgs_gen = ddgs.text(topic, region="us-en") - for r in islice(ddgs_gen, deep_of_search): - l = r['href'] - source.append(l) - links.append(l) - internet_result += str(r) + "\n\n" - - fast_answer = ddgs.news(topic) - for r in islice(fast_answer, deep_of_search): - internet_answer += str(r) + "\n\n" - l = r['url'] - source.append(l) - news.append(r) - - - full_text.append(internet_result) - full_text.append(internet_answer) - - if yt_result == True: - with st.spinner('🎥 Searching on YT...'): - from youtubesearchpython import VideosSearch - videosSearch = VideosSearch(topic, limit = deep_of_search) - yt_result = videosSearch.result() - for i in yt_result['result']: # type: ignore - duration = i['duration'] # type: ignore - duration = duration.split(':') - if len(duration) == 3: - #skip videos longer than 1 hour - if int(duration[0]) > 1: - continue - if len(duration) == 2: - #skip videos longer than 30 minutes - if int(duration[0]) > 30: - continue - yt_ids.append(i['id']) # type: ignore - source.append("https://www.youtube.com/watch?v="+i['id']) # type: ignore - full_text.append(i['title']) # type: ignore - - - if website_result == True: - for l in links: - try: - with st.spinner(f'👨‍💻 Scraping website : {l}'): - r = requests.get(l) - soup = BeautifulSoup(r.content, 'html.parser') - full_text.append(soup.get_text()+"\n\n") - except: - pass - - for id in yt_ids: - try: - yt_video_txt= [] - with st.spinner(f'👨‍💻 Scraping YT video : {id}'): - transcript_list = YouTubeTranscriptApi.list_transcripts(id) - transcript_en = None - last_language = "" - for transcript in transcript_list: - if transcript.language_code == 'en': - transcript_en = transcript - break - else: - last_language = transcript.language_code - if transcript_en is None: - transcript_en = transcript_list.find_transcript([last_language]) - transcript_en = 
transcript_en.translate('en') - - text = transcript_en.fetch() - yt_video_txt.append(text) - - for i in range(len(yt_video_txt)): - for j in range(len(yt_video_txt[i])): - full_text.append(yt_video_txt[i][j]['text']) - - - except: - pass - - with st.spinner('🧠 Building vectorstore with knowledge...'): - full_text = "\n".join(full_text) - st.session_state['god_text'] = [full_text] - text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.create_documents([full_text]) - # Select embeddings - embeddings = st.session_state['hf'] - # Create a vectorstore from documents - random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10)) - db = Chroma.from_documents(texts, embeddings, persist_directory="./chroma_db_" + random_str) - - with st.spinner('🔨 Saving vectorstore...'): - # save vectorstore - db.persist() - #create .zip file of directory to download - shutil.make_archive("./chroma_db_" + random_str, 'zip', "./chroma_db_" + random_str) - # save in session state and download - st.session_state['db'] = "./chroma_db_" + random_str + ".zip" - - with st.spinner('🔨 Creating QA chain...'): - # Create retriever interface - retriever = db.as_retriever() - # Create QA chain - qa = RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) - st.session_state['god_mode'] = qa - st.session_state['god_mode_source'] = source - st.session_state['god_mode_info'] = "🧠 GOD MODE have builded a vectorstore about **" + topic + f"**. The knowledge is based on\n- {len(news)} news🗞\n- {len(yt_ids)} YT videos📺\n- {len(links)} websites🌐 \n" - - st.experimental_rerun() - - - if st.session_state['plugin'] == "🧠 GOD MODE" and 'god_mode' in st.session_state: - with st.expander("**✅ GOD MODE is enabled 🚀**", expanded=True): - st.markdown(st.session_state['god_mode_info']) - if 'db' in st.session_state: - # leave ./ from name for download - file_name = st.session_state['db'][2:] - st.download_button( - label="📩 Download vectorstore", - data=open(file_name, 'rb').read(), - file_name=file_name, - mime='application/zip' - ) - if st.button('🧠🛑 Disable GOD MODE'): - del st.session_state['god_mode'] - del st.session_state['db'] - del st.session_state['god_text'] - del st.session_state['god_mode_info'] - del st.session_state['god_mode_source'] - del st.session_state['plugin'] - st.experimental_rerun() - - -# DATA PLUGIN - if st.session_state['plugin'] == "📋 Talk with your DATA" and 'df' not in st.session_state: - with st.expander("📋 Talk with your DATA", expanded= True): - upload_csv = st.file_uploader("Upload your CSV", type=['csv']) - if upload_csv is not None: - df = pd.read_csv(upload_csv) - st.session_state['df'] = df - st.experimental_rerun() - if st.session_state['plugin'] == "📋 Talk with your DATA": - if st.button('🛑📋 Remove DATA from context'): - if 'df' in st.session_state: - del st.session_state['df'] - del st.session_state['plugin'] - st.experimental_rerun() - - - -# DOCUMENTS PLUGIN - if st.session_state['plugin'] == "📝 Talk with your DOCUMENTS" and 'documents' not in st.session_state: - with st.expander("📝 Talk with your DOCUMENT", expanded=True): - upload_pdf = st.file_uploader("Upload your DOCUMENT", type=['txt', 'pdf', 'docx'], accept_multiple_files=True) - if upload_pdf is not None and st.button('📝✅ Load Documents'): - documents = [] - with st.spinner('🔨 Reading documents...'): - for upload_pdf in upload_pdf: - print(upload_pdf.type) - if upload_pdf.type == 'text/plain': - documents += 
[upload_pdf.read().decode()] - elif upload_pdf.type == 'application/pdf': - with pdfplumber.open(upload_pdf) as pdf: - documents += [page.extract_text() for page in pdf.pages] - elif upload_pdf.type == "application/vnd.openxmlformats-officedocument.wordprocessingml.document": - text = docx2txt.process(upload_pdf) - documents += [text] - st.session_state['documents'] = documents - # Split documents into chunks - with st.spinner('🔨 Creating vectorstore...'): - text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.create_documents(documents) - # Select embeddings - embeddings = st.session_state['hf'] - # Create a vectorstore from documents - random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10)) - db = Chroma.from_documents(texts, embeddings, persist_directory="./chroma_db_" + random_str) - - with st.spinner('🔨 Saving vectorstore...'): - # save vectorstore - db.persist() - #create .zip file of directory to download - shutil.make_archive("./chroma_db_" + random_str, 'zip', "./chroma_db_" + random_str) - # save in session state and download - st.session_state['db'] = "./chroma_db_" + random_str + ".zip" - - with st.spinner('🔨 Creating QA chain...'): - # Create retriever interface - retriever = db.as_retriever() - # Create QA chain - qa = RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) - st.session_state['pdf'] = qa - - st.experimental_rerun() - - if st.session_state['plugin'] == "📝 Talk with your DOCUMENTS": - if 'db' in st.session_state: - # leave ./ from name for download - file_name = st.session_state['db'][2:] - st.download_button( - label="📩 Download vectorstore", - data=open(file_name, 'rb').read(), - file_name=file_name, - mime='application/zip' - ) - if st.button('🛑📝 Remove PDF from context'): - if 'pdf' in st.session_state: - del st.session_state['db'] - del st.session_state['pdf'] - del st.session_state['documents'] - del st.session_state['plugin'] - - st.experimental_rerun() - -# AUDIO PLUGIN - if st.session_state['plugin'] == "🎧 Talk with your AUDIO" and 'audio' not in st.session_state: - with st.expander("🎙 Talk with your AUDIO", expanded=True): - f = st.file_uploader("Upload your AUDIO", type=['wav', 'mp3']) - if f is not None: - if f.type == 'audio/mpeg': - #convert mp3 to wav - with st.spinner('🔨 Converting mp3 to wav...'): - #save mp3 - with open('audio.mp3', 'wb') as out: - out.write(f.read()) - #convert to wav - sound = AudioSegment.from_mp3("audio.mp3") - sound.export("audio.wav", format="wav") - file_name = 'audio.wav' - else: - with open(f.name, 'wb') as out: - out.write(f.read()) - - bytes_data = f.read() - file_name = f.name - - r = sr.Recognizer() - #Given audio file must be a filename string or a file-like object - - - with st.spinner('🔨 Reading audio...'): - with sr.AudioFile(file_name) as source: - # listen for the data (load audio to memory) - audio_data = r.record(source) - # recognize (convert from speech to text) - text = r.recognize_google(audio_data) - data = [text] - # data = query(bytes_data) - with st.spinner('🎙 Creating Vectorstore...'): - - #split text into chunks - text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.create_documents(text) - - embeddings = st.session_state['hf'] - # Create a vectorstore from documents - random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10)) - db = Chroma.from_documents(texts, embeddings, 
persist_directory="./chroma_db_" + random_str) - # save vectorstore - - with st.spinner('🎙 Saving Vectorstore...'): - db.persist() - #create .zip file of directory to download - shutil.make_archive("./chroma_db_" + random_str, 'zip', "./chroma_db_" + random_str) - # save in session state and download - st.session_state['db'] = "./chroma_db_" + random_str + ".zip" - - with st.spinner('🎙 Creating QA chain...'): - # Create retriever interface - retriever = db.as_retriever() - # Create QA chain - qa = RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) - st.session_state['audio'] = qa - st.session_state['audio_text'] = text - st.experimental_rerun() - - if st.session_state['plugin'] == "🎧 Talk with your AUDIO": - if 'db' in st.session_state: - # leave ./ from name for download - file_name = st.session_state['db'][2:] - st.download_button( - label="📩 Download vectorstore", - data=open(file_name, 'rb').read(), - file_name=file_name, - mime='application/zip' - ) - if st.button('🛑🎙 Remove AUDIO from context'): - if 'audio' in st.session_state: - del st.session_state['db'] - del st.session_state['audio'] - del st.session_state['audio_text'] - del st.session_state['plugin'] - st.experimental_rerun() - - -# YT PLUGIN - if st.session_state['plugin'] == "🎥 Talk with YT video" and 'yt' not in st.session_state: - with st.expander("🎥 Talk with YT video", expanded=True): - yt_url = st.text_input("1.📺 Enter a YouTube URL") - yt_url2 = st.text_input("2.📺 Enter a YouTube URL") - yt_url3 = st.text_input("3.📺 Enter a YouTube URL") - if yt_url is not None and st.button('🎥✅ Add YouTube video to context'): - if yt_url != "": - video = 1 - yt_url = yt_url.split("=")[1] - if yt_url2 != "": - yt_url2 = yt_url2.split("=")[1] - video = 2 - if yt_url3 != "": - yt_url3 = yt_url3.split("=")[1] - video = 3 - - text_yt = [] - text_list = [] - for i in range(video): - with st.spinner(f'🎥 Extracting TEXT from YouTube video {str(i)} ...'): - #get en subtitles - transcript_list = YouTubeTranscriptApi.list_transcripts(yt_url) - transcript_en = None - last_language = "" - for transcript in transcript_list: - if transcript.language_code == 'en': - transcript_en = transcript - break - else: - last_language = transcript.language_code - if transcript_en is None: - transcript_en = transcript_list.find_transcript([last_language]) - transcript_en = transcript_en.translate('en') - - text = transcript_en.fetch() - text_yt.append(text) - - for i in range(len(text_yt)): - for j in range(len(text_yt[i])): - text_list.append(text_yt[i][j]['text']) - - # creating a vectorstore - - with st.spinner('🎥 Creating Vectorstore...'): - text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.create_documents(text_list) - # Select embeddings - embeddings = st.session_state['hf'] - # Create a vectorstore from documents - random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10)) - db = Chroma.from_documents(texts, embeddings, persist_directory="./chroma_db_" + random_str) - - with st.spinner('🎥 Saving Vectorstore...'): - # save vectorstore - db.persist() - #create .zip file of directory to download - shutil.make_archive("./chroma_db_" + random_str, 'zip', "./chroma_db_" + random_str) - # save in session state and download - st.session_state['db'] = "./chroma_db_" + random_str + ".zip" - - with st.spinner('🎥 Creating QA chain...'): - # Create retriever interface - retriever = db.as_retriever() - # Create QA chain - qa 
= RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) - st.session_state['yt'] = qa - st.session_state['yt_text'] = text_list - st.experimental_rerun() - - if st.session_state['plugin'] == "🎥 Talk with YT video": - if 'db' in st.session_state: - # leave ./ from name for download - file_name = st.session_state['db'][2:] - st.download_button( - label="📩 Download vectorstore", - data=open(file_name, 'rb').read(), - file_name=file_name, - mime='application/zip' - ) - - if st.button('🛑🎥 Remove YT video from context'): - if 'yt' in st.session_state: - del st.session_state['db'] - del st.session_state['yt'] - del st.session_state['yt_text'] - del st.session_state['plugin'] - st.experimental_rerun() - -# WEBSITE PLUGIN - if st.session_state['plugin'] == "🔗 Talk with Website" and 'web_sites' not in st.session_state: - with st.expander("🔗 Talk with Website", expanded=True): - web_url = st.text_area("🔗 Enter a website URLs , one for each line") - if web_url is not None and st.button('🔗✅ Add website to context'): - if web_url != "": - text = [] - #max 10 websites - with st.spinner('🔗 Extracting TEXT from Websites ...'): - for url in web_url.split("\n")[:10]: - page = requests.get(url) - soup = BeautifulSoup(page.content, 'html.parser') - text.append(soup.get_text()) - # creating a vectorstore - - with st.spinner('🔗 Creating Vectorstore...'): - text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.create_documents(text) - # Select embeddings - embeddings = st.session_state['hf'] - # Create a vectorstore from documents - random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10)) - db = Chroma.from_documents(texts, embeddings, persist_directory="./chroma_db_" + random_str) - - with st.spinner('🔗 Saving Vectorstore...'): - # save vectorstore - db.persist() - #create .zip file of directory to download - shutil.make_archive("./chroma_db_" + random_str, 'zip', "./chroma_db_" + random_str) - # save in session state and download - st.session_state['db'] = "./chroma_db_" + random_str + ".zip" - - with st.spinner('🔗 Creating QA chain...'): - # Create retriever interface - retriever = db.as_retriever() - # Create QA chain - qa = RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) - st.session_state['web_sites'] = qa - st.session_state['web_text'] = text - st.experimental_rerun() - - if st.session_state['plugin'] == "🔗 Talk with Website": - if 'db' in st.session_state: - # leave ./ from name for download - file_name = st.session_state['db'][2:] - st.download_button( - label="📩 Download vectorstore", - data=open(file_name, 'rb').read(), - file_name=file_name, - mime='application/zip' - ) - - if st.button('🛑🔗 Remove Website from context'): - if 'web_sites' in st.session_state: - del st.session_state['db'] - del st.session_state['web_sites'] - del st.session_state['web_text'] - del st.session_state['plugin'] - st.experimental_rerun() - - -# UPLOAD PREVIUS VECTORSTORE - if st.session_state['plugin'] == "💾 Upload saved VectorStore" and 'old_db' not in st.session_state: - with st.expander("💾 Upload saved VectorStore", expanded=True): - db_file = st.file_uploader("Upload a saved VectorStore", type=["zip"]) - if db_file is not None and st.button('✅💾 Add saved VectorStore to context'): - if db_file != "": - with st.spinner('💾 Extracting VectorStore...'): - # unzip file in a new directory - with 
ZipFile(db_file, 'r') as zipObj: - # Extract all the contents of zip file in different directory - random_str = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10)) - zipObj.extractall("chroma_db_" + random_str) - # save in session state the path of the directory - st.session_state['old_db'] = "chroma_db_" + random_str - hf = st.session_state['hf'] - # Create retriever interface - db = Chroma("chroma_db_" + random_str, embedding_function=hf) - - with st.spinner('💾 Creating QA chain...'): - retriever = db.as_retriever() - # Create QA chain - qa = RetrievalQA.from_chain_type(llm=st.session_state['LLM'], chain_type='stuff', retriever=retriever, return_source_documents=True) - st.session_state['old_db'] = qa - st.experimental_rerun() - - if st.session_state['plugin'] == "💾 Upload saved VectorStore": - if st.button('🛑💾 Remove VectorStore from context'): - if 'old_db' in st.session_state: - del st.session_state['old_db'] - del st.session_state['plugin'] - st.experimental_rerun() - - -# END OF PLUGIN - add_vertical_space(4) - if 'hf_email' in st.session_state: - if st.button('🗑 Logout'): - keys = list(st.session_state.keys()) - for key in keys: - del st.session_state[key] - st.experimental_rerun() - - export_chat() - add_vertical_space(5) - -##### End of sidebar - - -# User input -# Layout of input/response containers -input_container = st.container() -response_container = st.container() -data_view_container = st.container() -loading_container = st.container() - - - -## Applying the user input box -with input_container: - input_text = st.chat_input("🧑‍💻 Write here 👇", key="input") - -with data_view_container: - if 'df' in st.session_state: - with st.expander("🤖 View your **DATA**"): - st.data_editor(st.session_state['df'], use_container_width=True) - if 'pdf' in st.session_state: - with st.expander("🤖 View your **DOCUMENTs**"): - st.write(st.session_state['documents']) - if 'audio' in st.session_state: - with st.expander("🤖 View your **AUDIO**"): - st.write(st.session_state['audio_text']) - if 'yt' in st.session_state: - with st.expander("🤖 View your **YT video**"): - st.write(st.session_state['yt_text']) - if 'web_text' in st.session_state: - with st.expander("🤖 View the **Website content**"): - st.write(st.session_state['web_text']) - if 'old_db' in st.session_state: - with st.expander("🗂 View your **saved VectorStore**"): - st.success("📚 VectorStore loaded") - if 'god_mode_source' in st.session_state: - with st.expander("🌍 View source"): - for s in st.session_state['god_mode_source']: - st.markdown("- " + s) - -# Response output -## Function for taking user prompt as input followed by producing AI generated responses -def generate_response(prompt): - final_prompt = "" - make_better = True - source = "" - - with loading_container: - - # FOR DEVELOPMENT PLUGIN - # if st.session_state['plugin'] == "🔌 PLUGIN NAME" and 'PLUGIN DB' in st.session_state: - # with st.spinner('🚀 Using PLUGIN NAME...'): - # solution = st.session_state['PLUGIN DB']({"query": prompt}) - # final_prompt = YourCustomPrompt(prompt, context) - - - if st.session_state['plugin'] == "📋 Talk with your DATA" and 'df' in st.session_state: - #get only last message - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - if prompt.find('python') != -1 or prompt.find('Code') != -1 or prompt.find('code') != -1 or prompt.find('Python') != -1: - with st.spinner('🚀 Using tool for python code...'): - solution = "\n```python\n" - solution += st.session_state['df'].sketch.howto(prompt, 
call_display=False) - solution += "\n```\n\n" - final_prompt = prompt4Code(prompt, context, solution) - else: - with st.spinner('🚀 Using tool to get information...'): - solution = st.session_state['df'].sketch.ask(prompt, call_display=False) - final_prompt = prompt4Data(prompt, context, solution) - - - elif st.session_state['plugin'] == "📝 Talk with your DOCUMENTS" and 'pdf' in st.session_state: - #get only last message - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - with st.spinner('🚀 Using tool to get information...'): - result = st.session_state['pdf']({"query": prompt}) - solution = result["result"] - if len(solution.split()) > 110: - make_better = False - final_prompt = solution - if 'source_documents' in result and len(result["source_documents"]) > 0: - final_prompt += "\n\n✅Source:\n" - for d in result["source_documents"]: - final_prompt += "- " + str(d) + "\n" - else: - final_prompt = prompt4Context(prompt, context, solution) - if 'source_documents' in result and len(result["source_documents"]) > 0: - source += "\n\n✅Source:\n" - for d in result["source_documents"]: - source += "- " + str(d) + "\n" - - - elif st.session_state['plugin'] == "🧠 GOD MODE" and 'god_mode' in st.session_state: - #get only last message - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - with st.spinner('🚀 Using tool to get information...'): - result = st.session_state['god_mode']({"query": prompt}) - solution = result["result"] - if len(solution.split()) > 110: - make_better = False - final_prompt = solution - if 'source_documents' in result and len(result["source_documents"]) > 0: - final_prompt += "\n\n✅Source:\n" - for d in result["source_documents"]: - final_prompt += "- " + str(d) + "\n" - else: - final_prompt = prompt4Context(prompt, context, solution) - if 'source_documents' in result and len(result["source_documents"]) > 0: - source += "\n\n✅Source:\n" - for d in result["source_documents"]: - source += "- " + str(d) + "\n" - - - elif st.session_state['plugin'] == "🔗 Talk with Website" and 'web_sites' in st.session_state: - #get only last message - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - with st.spinner('🚀 Using tool to get information...'): - result = st.session_state['web_sites']({"query": prompt}) - solution = result["result"] - if len(solution.split()) > 110: - make_better = False - final_prompt = solution - if 'source_documents' in result and len(result["source_documents"]) > 0: - final_prompt += "\n\n✅Source:\n" - for d in result["source_documents"]: - final_prompt += "- " + str(d) + "\n" - else: - final_prompt = prompt4Context(prompt, context, solution) - if 'source_documents' in result and len(result["source_documents"]) > 0: - source += "\n\n✅Source:\n" - for d in result["source_documents"]: - source += "- " + str(d) + "\n" - - - - elif st.session_state['plugin'] == "💾 Upload saved VectorStore" and 'old_db' in st.session_state: - #get only last message - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - with st.spinner('🚀 Using tool to get information...'): - result = st.session_state['old_db']({"query": prompt}) - solution = result["result"] - if len(solution.split()) > 110: - make_better = False - final_prompt = solution - if 'source_documents' in result and len(result["source_documents"]) > 0: - final_prompt += "\n\n✅Source:\n" - for d in result["source_documents"]: - final_prompt += "- " + 
str(d) + "\n" - else: - final_prompt = prompt4Context(prompt, context, solution) - if 'source_documents' in result and len(result["source_documents"]) > 0: - source += "\n\n✅Source:\n" - for d in result["source_documents"]: - source += "- " + str(d) + "\n" - - - elif st.session_state['plugin'] == "🎧 Talk with your AUDIO" and 'audio' in st.session_state: - #get only last message - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - with st.spinner('🚀 Using tool to get information...'): - result = st.session_state['audio']({"query": prompt}) - solution = result["result"] - if len(solution.split()) > 110: - make_better = False - final_prompt = solution - if 'source_documents' in result and len(result["source_documents"]) > 0: - final_prompt += "\n\n✅Source:\n" - for d in result["source_documents"]: - final_prompt += "- " + str(d) + "\n" - else: - final_prompt = prompt4Audio(prompt, context, solution) - if 'source_documents' in result and len(result["source_documents"]) > 0: - source += "\n\n✅Source:\n" - for d in result["source_documents"]: - source += "- " + str(d) + "\n" - - - elif st.session_state['plugin'] == "🎥 Talk with YT video" and 'yt' in st.session_state: - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - with st.spinner('🚀 Using tool to get information...'): - result = st.session_state['yt']({"query": prompt}) - solution = result["result"] - if len(solution.split()) > 110: - make_better = False - final_prompt = solution - if 'source_documents' in result and len(result["source_documents"]) > 0: - final_prompt += "\n\n✅Source:\n" - for d in result["source_documents"]: - final_prompt += "- " + str(d) + "\n" - else: - final_prompt = prompt4YT(prompt, context, solution) - if 'source_documents' in result and len(result["source_documents"]) > 0: - source += "\n\n✅Source:\n" - for d in result["source_documents"]: - source += "- " + str(d) + "\n" - - - else: - #get last message if exists - if len(st.session_state['past']) == 1: - context = f"User: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - else: - context = f"User: {st.session_state['past'][-2]}\nBot: {st.session_state['generated'][-2]}\nUser: {st.session_state['past'][-1]}\nBot: {st.session_state['generated'][-1]}\n" - - if 'web_search' in st.session_state: - if st.session_state['web_search'] == "True": - with st.spinner('🚀 Using internet to get information...'): - internet_result = "" - internet_answer = "" - with DDGS() as ddgs: - ddgs_gen = ddgs.text(prompt, region=st.session_state['region'], safesearch=st.session_state['safesearch'], timelimit=st.session_state['timelimit']) - for r in islice(ddgs_gen, st.session_state['max_results']): - internet_result += str(r) + "\n\n" - fast_answer = ddgs.answers(prompt) - for r in islice(fast_answer, 2): - internet_answer += str(r) + "\n\n" - - final_prompt = prompt4conversationInternet(prompt, context, internet_result, internet_answer) - else: - final_prompt = prompt4conversation(prompt, context) - else: - final_prompt = prompt4conversation(prompt, context) - - if make_better: - with st.spinner('🚀 Generating response...'): - print(final_prompt) - response = st.session_state['chatbot'].chat(final_prompt, temperature=temperature, top_p=top_p, repetition_penalty=repetition_penalty, top_k=top_k, max_new_tokens=max_new_tokens) - response += source - else: - print(final_prompt) - response = final_prompt - - return response - -## Conditional display of AI generated responses as a 
function of user provided prompts -with response_container: - if input_text and 'hf_email' in st.session_state and 'hf_pass' in st.session_state: - response = generate_response(input_text) - st.session_state.past.append(input_text) - st.session_state.generated.append(response) - - - #print message in normal order, frist user then bot - if 'generated' in st.session_state: - if st.session_state['generated']: - for i in range(len(st.session_state['generated'])): - with st.chat_message(name="user"): - st.markdown(st.session_state['past'][i]) - - with st.chat_message(name="assistant"): - if len(st.session_state['generated'][i].split("✅Source:")) > 1: - source = st.session_state['generated'][i].split("✅Source:")[1] - mess = st.session_state['generated'][i].split("✅Source:")[0] - - st.markdown(mess) - with st.expander("📚 Source of message number " + str(i+1)): - st.markdown(source) - - else: - st.markdown(st.session_state['generated'][i]) - - st.markdown('', unsafe_allow_html=True) - - - else: - st.info("👋 Hey , we are very happy to see you here 🤗") - st.info("👉 Please Login to continue, click on top left corner to login 🚀") - st.error("👉 If you are not registered on Hugging Face, please register first and then login 🤗") diff --git a/spaces/VectorologyArt/Sygil-Sygil-Diffusion/README.md b/spaces/VectorologyArt/Sygil-Sygil-Diffusion/README.md deleted file mode 100644 index 3c88c3b771900bcec56355dd2f1ba19bbf1c42e5..0000000000000000000000000000000000000000 --- a/spaces/VectorologyArt/Sygil-Sygil-Diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sygil Sygil Diffusion -emoji: 🐠 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/audiocraft_package.py b/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/audiocraft_package.py deleted file mode 100644 index 501d015d94087715030be4d4cf448416181ddd57..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/audiocraft_package.py +++ /dev/null @@ -1,8 +0,0 @@ -from setup_tools.magicinstaller.requirement import SimpleRequirement - - -class AudioCraft(SimpleRequirement): - package_name = 'audiocraft' - - def install(self) -> tuple[int, str, str]: - return self.install_pip('git+https://github.com/facebookresearch/audiocraft.git', 'audiocraft') diff --git a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/pytube_package.py b/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/pytube_package.py deleted file mode 100644 index 5721770cf192d7717438851a3e4ff1cb4d68979a..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/pytube_package.py +++ /dev/null @@ -1,5 +0,0 @@ -from setup_tools.magicinstaller.requirement import SimpleRequirement - - -class PyTube(SimpleRequirement): - package_name = 'pytube' diff --git a/spaces/Wayben/ChatGPT/README.md b/spaces/Wayben/ChatGPT/README.md deleted file mode 100644 index bd206234a71b7d72e756319a70fb5feb1dbafb54..0000000000000000000000000000000000000000 --- a/spaces/Wayben/ChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: 
syjs10/ChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/CHANGELOG.md b/spaces/Wrathless/Dkrotzer-MusicalMagic/CHANGELOG.md deleted file mode 100644 index a685bcae80d0c64e64f5f51a9b9aa9245cec4b9e..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/CHANGELOG.md +++ /dev/null @@ -1,9 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.1a] - TBD - -Initial release, with model evaluation only. \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/modules/test_transformer.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/modules/test_transformer.py deleted file mode 100644 index 8c9953d9e8f139db7b8ce3063e3d5a78d2f5d088..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/modules/test_transformer.py +++ /dev/null @@ -1,247 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import StreamingMultiheadAttention, StreamingTransformer - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. - for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) 
- tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), (y - y2).norm() - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. - for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly yhe same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. 
- y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7) - - # Now let's check that streaming is working properly. - with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/XzJosh/Jianmo-Bert-VITS2/losses.py b/spaces/XzJosh/Jianmo-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jianmo-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, 
logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Yusin/docker_test/aberconnect.sh b/spaces/Yusin/docker_test/aberconnect.sh deleted file mode 100644 index 21c976cc6a6823404a6a86095f1c8c46d4d71ed2..0000000000000000000000000000000000000000 --- a/spaces/Yusin/docker_test/aberconnect.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/expect -f - -set timeout -1 - -set username yuxing.chen@unitn.it -set password cyx521301 - -spawn ./openconnect --protocol=gp vpn.icts.unitn.it --user=$username - -expect "Password:*" -send -- "$password\r" - -expect eof - - -##!/bin/bash -#/usr/bin/expect < 1, 'Res2Net degenerates to ResNet when scales = 1.' - width = int(math.floor(self.planes * (base_width / base_channels))) - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width * scales, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width * scales, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - - if stage_type == 'stage' and self.conv2_stride != 1: - self.pool = nn.AvgPool2d( - kernel_size=3, stride=self.conv2_stride, padding=1) - convs = [] - bns = [] - - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width * scales, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.stage_type = stage_type - self.scales = scales - self.width = width - delattr(self, 'conv2') - delattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - spx = torch.split(out, self.width, 1) - sp = self.convs[0](spx[0].contiguous()) - sp = self.relu(self.bns[0](sp)) - out = sp - for i in range(1, self.scales - 1): - if self.stage_type == 'stage': - sp = spx[i] - else: - sp = sp + spx[i] - sp = self.convs[i](sp.contiguous()) - sp = self.relu(self.bns[i](sp)) - out = torch.cat((out, sp), 1) - - if self.stage_type == 'normal' or self.conv2_stride == 1: - out = torch.cat((out, spx[self.scales - 1]), 1) - elif self.stage_type == 'stage': - out = torch.cat((out, 
self.pool(spx[self.scales - 1])), 1) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Res2Layer(nn.Sequential): - """Res2Layer to build Res2Net style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - scales=4, - base_width=26, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False), - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=1, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1], - ) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - stage_type='stage', - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - **kwargs)) - super(Res2Layer, self).__init__(*layers) - - -@BACKBONES.register_module() -class Res2Net(ResNet): - """Res2Net backbone. - - Args: - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - depth (int): Depth of res2net, from {50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Res2net stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). 
Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet.models import Res2Net - >>> import torch - >>> self = Res2Net(depth=50, scales=4, base_width=26) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottle2neck, (3, 4, 6, 3)), - 101: (Bottle2neck, (3, 4, 23, 3)), - 152: (Bottle2neck, (3, 8, 36, 3)) - } - - def __init__(self, - scales=4, - base_width=26, - style='pytorch', - deep_stem=True, - avg_down=True, - **kwargs): - self.scales = scales - self.base_width = base_width - super(Res2Net, self).__init__( - style='pytorch', deep_stem=True, avg_down=True, **kwargs) - - def make_res_layer(self, **kwargs): - return Res2Layer( - scales=self.scales, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottle2neck): - # dcn in Res2Net bottle2neck is in ModuleList - for n in m.convs: - if hasattr(n, 'conv_offset'): - constant_init(n.conv_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottle2neck): - constant_init(m.norm3, 0) - else: - raise TypeError('pretrained must be a str or None') diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/corner_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/corner_head.py deleted file mode 100644 index 50cdb49a29f2ced1a31a50e654a3bdc14f5f5004..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/corner_head.py +++ /dev/null @@ -1,1074 +0,0 @@ -from logging import warning -from math import ceil, log - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, bias_init_with_prob -from mmcv.ops import CornerPool, batched_nms - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from ..utils import gaussian_radius, gen_gaussian_target -from .base_dense_head import BaseDenseHead - - -class BiCornerPool(nn.Module): - """Bidirectional Corner Pooling Module (TopLeft, BottomRight, etc.) - - Args: - in_channels (int): Input channels of module. - out_channels (int): Output channels of module. 
- feat_channels (int): Feature channels of module. - directions (list[str]): Directions of two CornerPools. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - in_channels, - directions, - feat_channels=128, - out_channels=128, - norm_cfg=dict(type='BN', requires_grad=True)): - super(BiCornerPool, self).__init__() - self.direction1_conv = ConvModule( - in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg) - self.direction2_conv = ConvModule( - in_channels, feat_channels, 3, padding=1, norm_cfg=norm_cfg) - - self.aftpool_conv = ConvModule( - feat_channels, - out_channels, - 3, - padding=1, - norm_cfg=norm_cfg, - act_cfg=None) - - self.conv1 = ConvModule( - in_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.conv2 = ConvModule( - in_channels, out_channels, 3, padding=1, norm_cfg=norm_cfg) - - self.direction1_pool = CornerPool(directions[0]) - self.direction2_pool = CornerPool(directions[1]) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward features from the upstream network. - - Args: - x (tensor): Input feature of BiCornerPool. - - Returns: - conv2 (tensor): Output feature of BiCornerPool. - """ - direction1_conv = self.direction1_conv(x) - direction2_conv = self.direction2_conv(x) - direction1_feat = self.direction1_pool(direction1_conv) - direction2_feat = self.direction2_pool(direction2_conv) - aftpool_conv = self.aftpool_conv(direction1_feat + direction2_feat) - conv1 = self.conv1(x) - relu = self.relu(aftpool_conv + conv1) - conv2 = self.conv2(relu) - return conv2 - - -@HEADS.register_module() -class CornerHead(BaseDenseHead): - """Head of CornerNet: Detecting Objects as Paired Keypoints. - - Code is modified from the `official github repo - `_ . - - More details can be found in the `paper - `_ . - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - num_feat_levels (int): Levels of feature from the previous module. 2 - for HourglassNet-104 and 1 for HourglassNet-52. Because - HourglassNet-104 outputs the final feature and intermediate - supervision feature and HourglassNet-52 only outputs the final - feature. Default: 2. - corner_emb_channels (int): Channel of embedding vector. Default: 1. - train_cfg (dict | None): Training config. Useless in CornerHead, - but we keep this variable for SingleStageDetector. Default: None. - test_cfg (dict | None): Testing config of CornerHead. Default: None. - loss_heatmap (dict | None): Config of corner heatmap loss. Default: - GaussianFocalLoss. - loss_embedding (dict | None): Config of corner embedding loss. Default: - AssociativeEmbeddingLoss. - loss_offset (dict | None): Config of corner offset loss. Default: - SmoothL1Loss. 
- """ - - def __init__(self, - num_classes, - in_channels, - num_feat_levels=2, - corner_emb_channels=1, - train_cfg=None, - test_cfg=None, - loss_heatmap=dict( - type='GaussianFocalLoss', - alpha=2.0, - gamma=4.0, - loss_weight=1), - loss_embedding=dict( - type='AssociativeEmbeddingLoss', - pull_weight=0.25, - push_weight=0.25), - loss_offset=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1)): - super(CornerHead, self).__init__() - self.num_classes = num_classes - self.in_channels = in_channels - self.corner_emb_channels = corner_emb_channels - self.with_corner_emb = self.corner_emb_channels > 0 - self.corner_offset_channels = 2 - self.num_feat_levels = num_feat_levels - self.loss_heatmap = build_loss( - loss_heatmap) if loss_heatmap is not None else None - self.loss_embedding = build_loss( - loss_embedding) if loss_embedding is not None else None - self.loss_offset = build_loss( - loss_offset) if loss_offset is not None else None - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self._init_layers() - - def _make_layers(self, out_channels, in_channels=256, feat_channels=256): - """Initialize conv sequential for CornerHead.""" - return nn.Sequential( - ConvModule(in_channels, feat_channels, 3, padding=1), - ConvModule( - feat_channels, out_channels, 1, norm_cfg=None, act_cfg=None)) - - def _init_corner_kpt_layers(self): - """Initialize corner keypoint layers. - - Including corner heatmap branch and corner offset branch. Each branch - has two parts: prefix `tl_` for top-left and `br_` for bottom-right. - """ - self.tl_pool, self.br_pool = nn.ModuleList(), nn.ModuleList() - self.tl_heat, self.br_heat = nn.ModuleList(), nn.ModuleList() - self.tl_off, self.br_off = nn.ModuleList(), nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_pool.append( - BiCornerPool( - self.in_channels, ['top', 'left'], - out_channels=self.in_channels)) - self.br_pool.append( - BiCornerPool( - self.in_channels, ['bottom', 'right'], - out_channels=self.in_channels)) - - self.tl_heat.append( - self._make_layers( - out_channels=self.num_classes, - in_channels=self.in_channels)) - self.br_heat.append( - self._make_layers( - out_channels=self.num_classes, - in_channels=self.in_channels)) - - self.tl_off.append( - self._make_layers( - out_channels=self.corner_offset_channels, - in_channels=self.in_channels)) - self.br_off.append( - self._make_layers( - out_channels=self.corner_offset_channels, - in_channels=self.in_channels)) - - def _init_corner_emb_layers(self): - """Initialize corner embedding layers. - - Only include corner embedding branch with two parts: prefix `tl_` for - top-left and `br_` for bottom-right. - """ - self.tl_emb, self.br_emb = nn.ModuleList(), nn.ModuleList() - - for _ in range(self.num_feat_levels): - self.tl_emb.append( - self._make_layers( - out_channels=self.corner_emb_channels, - in_channels=self.in_channels)) - self.br_emb.append( - self._make_layers( - out_channels=self.corner_emb_channels, - in_channels=self.in_channels)) - - def _init_layers(self): - """Initialize layers for CornerHead. - - Including two parts: corner keypoint layers and corner embedding layers - """ - self._init_corner_kpt_layers() - if self.with_corner_emb: - self._init_corner_emb_layers() - - def init_weights(self): - """Initialize weights of the head.""" - bias_init = bias_init_with_prob(0.1) - for i in range(self.num_feat_levels): - # The initialization of parameters are different between nn.Conv2d - # and ConvModule. 
Our experiments show that using the original - # initialization of nn.Conv2d increases the final mAP by about 0.2% - self.tl_heat[i][-1].conv.reset_parameters() - self.tl_heat[i][-1].conv.bias.data.fill_(bias_init) - self.br_heat[i][-1].conv.reset_parameters() - self.br_heat[i][-1].conv.bias.data.fill_(bias_init) - self.tl_off[i][-1].conv.reset_parameters() - self.br_off[i][-1].conv.reset_parameters() - if self.with_corner_emb: - self.tl_emb[i][-1].conv.reset_parameters() - self.br_emb[i][-1].conv.reset_parameters() - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of corner heatmaps, offset heatmaps and - embedding heatmaps. - - tl_heats (list[Tensor]): Top-left corner heatmaps for all - levels, each is a 4D-tensor, the channels number is - num_classes. - - br_heats (list[Tensor]): Bottom-right corner heatmaps for all - levels, each is a 4D-tensor, the channels number is - num_classes. - - tl_embs (list[Tensor] | list[None]): Top-left embedding - heatmaps for all levels, each is a 4D-tensor or None. - If not None, the channels number is corner_emb_channels. - - br_embs (list[Tensor] | list[None]): Bottom-right embedding - heatmaps for all levels, each is a 4D-tensor or None. - If not None, the channels number is corner_emb_channels. - - tl_offs (list[Tensor]): Top-left offset heatmaps for all - levels, each is a 4D-tensor. The channels number is - corner_offset_channels. - - br_offs (list[Tensor]): Bottom-right offset heatmaps for all - levels, each is a 4D-tensor. The channels number is - corner_offset_channels. - """ - lvl_ind = list(range(self.num_feat_levels)) - return multi_apply(self.forward_single, feats, lvl_ind) - - def forward_single(self, x, lvl_ind, return_pool=False): - """Forward feature of a single level. - - Args: - x (Tensor): Feature of a single level. - lvl_ind (int): Level index of current feature. - return_pool (bool): Return corner pool feature or not. - - Returns: - tuple[Tensor]: A tuple of CornerHead's output for current feature - level. Containing the following Tensors: - - - tl_heat (Tensor): Predicted top-left corner heatmap. - - br_heat (Tensor): Predicted bottom-right corner heatmap. - - tl_emb (Tensor | None): Predicted top-left embedding heatmap. - None for `self.with_corner_emb == False`. - - br_emb (Tensor | None): Predicted bottom-right embedding - heatmap. None for `self.with_corner_emb == False`. - - tl_off (Tensor): Predicted top-left offset heatmap. - - br_off (Tensor): Predicted bottom-right offset heatmap. - - tl_pool (Tensor): Top-left corner pool feature. Not must - have. - - br_pool (Tensor): Bottom-right corner pool feature. Not must - have. - """ - tl_pool = self.tl_pool[lvl_ind](x) - tl_heat = self.tl_heat[lvl_ind](tl_pool) - br_pool = self.br_pool[lvl_ind](x) - br_heat = self.br_heat[lvl_ind](br_pool) - - tl_emb, br_emb = None, None - if self.with_corner_emb: - tl_emb = self.tl_emb[lvl_ind](tl_pool) - br_emb = self.br_emb[lvl_ind](br_pool) - - tl_off = self.tl_off[lvl_ind](tl_pool) - br_off = self.br_off[lvl_ind](br_pool) - - result_list = [tl_heat, br_heat, tl_emb, br_emb, tl_off, br_off] - if return_pool: - result_list.append(tl_pool) - result_list.append(br_pool) - - return result_list - - def get_targets(self, - gt_bboxes, - gt_labels, - feat_shape, - img_shape, - with_corner_emb=False, - with_guiding_shift=False, - with_centripetal_shift=False): - """Generate corner targets. 
- - Including corner heatmap, corner offset. - - Optional: corner embedding, corner guiding shift, centripetal shift. - - For CornerNet, we generate corner heatmap, corner offset and corner - embedding from this function. - - For CentripetalNet, we generate corner heatmap, corner offset, guiding - shift and centripetal shift from this function. - - Args: - gt_bboxes (list[Tensor]): Ground truth bboxes of each image, each - has shape (num_gt, 4). - gt_labels (list[Tensor]): Ground truth labels of each box, each has - shape (num_gt,). - feat_shape (list[int]): Shape of output feature, - [batch, channel, height, width]. - img_shape (list[int]): Shape of input image, - [height, width, channel]. - with_corner_emb (bool): Generate corner embedding target or not. - Default: False. - with_guiding_shift (bool): Generate guiding shift target or not. - Default: False. - with_centripetal_shift (bool): Generate centripetal shift target or - not. Default: False. - - Returns: - dict: Ground truth of corner heatmap, corner offset, corner - embedding, guiding shift and centripetal shift. Containing the - following keys: - - - topleft_heatmap (Tensor): Ground truth top-left corner - heatmap. - - bottomright_heatmap (Tensor): Ground truth bottom-right - corner heatmap. - - topleft_offset (Tensor): Ground truth top-left corner offset. - - bottomright_offset (Tensor): Ground truth bottom-right corner - offset. - - corner_embedding (list[list[list[int]]]): Ground truth corner - embedding. Not must have. - - topleft_guiding_shift (Tensor): Ground truth top-left corner - guiding shift. Not must have. - - bottomright_guiding_shift (Tensor): Ground truth bottom-right - corner guiding shift. Not must have. - - topleft_centripetal_shift (Tensor): Ground truth top-left - corner centripetal shift. Not must have. - - bottomright_centripetal_shift (Tensor): Ground truth - bottom-right corner centripetal shift. Not must have. - """ - batch_size, _, height, width = feat_shape - img_h, img_w = img_shape[:2] - - width_ratio = float(width / img_w) - height_ratio = float(height / img_h) - - gt_tl_heatmap = gt_bboxes[-1].new_zeros( - [batch_size, self.num_classes, height, width]) - gt_br_heatmap = gt_bboxes[-1].new_zeros( - [batch_size, self.num_classes, height, width]) - gt_tl_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width]) - gt_br_offset = gt_bboxes[-1].new_zeros([batch_size, 2, height, width]) - - if with_corner_emb: - match = [] - - # Guiding shift is a kind of offset, from center to corner - if with_guiding_shift: - gt_tl_guiding_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - gt_br_guiding_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - # Centripetal shift is also a kind of offset, from center to corner - # and normalized by log. 
- if with_centripetal_shift: - gt_tl_centripetal_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - gt_br_centripetal_shift = gt_bboxes[-1].new_zeros( - [batch_size, 2, height, width]) - - for batch_id in range(batch_size): - # Ground truth of corner embedding per image is a list of coord set - corner_match = [] - for box_id in range(len(gt_labels[batch_id])): - left, top, right, bottom = gt_bboxes[batch_id][box_id] - center_x = (left + right) / 2.0 - center_y = (top + bottom) / 2.0 - label = gt_labels[batch_id][box_id] - - # Use coords in the feature level to generate ground truth - scale_left = left * width_ratio - scale_right = right * width_ratio - scale_top = top * height_ratio - scale_bottom = bottom * height_ratio - scale_center_x = center_x * width_ratio - scale_center_y = center_y * height_ratio - - # Int coords on feature map/ground truth tensor - left_idx = int(min(scale_left, width - 1)) - right_idx = int(min(scale_right, width - 1)) - top_idx = int(min(scale_top, height - 1)) - bottom_idx = int(min(scale_bottom, height - 1)) - - # Generate gaussian heatmap - scale_box_width = ceil(scale_right - scale_left) - scale_box_height = ceil(scale_bottom - scale_top) - radius = gaussian_radius((scale_box_height, scale_box_width), - min_overlap=0.3) - radius = max(0, int(radius)) - gt_tl_heatmap[batch_id, label] = gen_gaussian_target( - gt_tl_heatmap[batch_id, label], [left_idx, top_idx], - radius) - gt_br_heatmap[batch_id, label] = gen_gaussian_target( - gt_br_heatmap[batch_id, label], [right_idx, bottom_idx], - radius) - - # Generate corner offset - left_offset = scale_left - left_idx - top_offset = scale_top - top_idx - right_offset = scale_right - right_idx - bottom_offset = scale_bottom - bottom_idx - gt_tl_offset[batch_id, 0, top_idx, left_idx] = left_offset - gt_tl_offset[batch_id, 1, top_idx, left_idx] = top_offset - gt_br_offset[batch_id, 0, bottom_idx, right_idx] = right_offset - gt_br_offset[batch_id, 1, bottom_idx, - right_idx] = bottom_offset - - # Generate corner embedding - if with_corner_emb: - corner_match.append([[top_idx, left_idx], - [bottom_idx, right_idx]]) - # Generate guiding shift - if with_guiding_shift: - gt_tl_guiding_shift[batch_id, 0, top_idx, - left_idx] = scale_center_x - left_idx - gt_tl_guiding_shift[batch_id, 1, top_idx, - left_idx] = scale_center_y - top_idx - gt_br_guiding_shift[batch_id, 0, bottom_idx, - right_idx] = right_idx - scale_center_x - gt_br_guiding_shift[ - batch_id, 1, bottom_idx, - right_idx] = bottom_idx - scale_center_y - # Generate centripetal shift - if with_centripetal_shift: - gt_tl_centripetal_shift[batch_id, 0, top_idx, - left_idx] = log(scale_center_x - - scale_left) - gt_tl_centripetal_shift[batch_id, 1, top_idx, - left_idx] = log(scale_center_y - - scale_top) - gt_br_centripetal_shift[batch_id, 0, bottom_idx, - right_idx] = log(scale_right - - scale_center_x) - gt_br_centripetal_shift[batch_id, 1, bottom_idx, - right_idx] = log(scale_bottom - - scale_center_y) - - if with_corner_emb: - match.append(corner_match) - - target_result = dict( - topleft_heatmap=gt_tl_heatmap, - topleft_offset=gt_tl_offset, - bottomright_heatmap=gt_br_heatmap, - bottomright_offset=gt_br_offset) - - if with_corner_emb: - target_result.update(corner_embedding=match) - if with_guiding_shift: - target_result.update( - topleft_guiding_shift=gt_tl_guiding_shift, - bottomright_guiding_shift=gt_br_guiding_shift) - if with_centripetal_shift: - target_result.update( - topleft_centripetal_shift=gt_tl_centripetal_shift, - 
bottomright_centripetal_shift=gt_br_centripetal_shift) - - return target_result - - def loss(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [left, top, right, bottom] format. - gt_labels (list[Tensor]): Class indices corresponding to each box. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. Containing the - following losses: - - - det_loss (list[Tensor]): Corner keypoint losses of all - feature levels. - - pull_loss (list[Tensor]): Part one of AssociativeEmbedding - losses of all feature levels. - - push_loss (list[Tensor]): Part two of AssociativeEmbedding - losses of all feature levels. - - off_loss (list[Tensor]): Corner offset losses of all feature - levels. - """ - targets = self.get_targets( - gt_bboxes, - gt_labels, - tl_heats[-1].shape, - img_metas[0]['pad_shape'], - with_corner_emb=self.with_corner_emb) - mlvl_targets = [targets for _ in range(self.num_feat_levels)] - det_losses, pull_losses, push_losses, off_losses = multi_apply( - self.loss_single, tl_heats, br_heats, tl_embs, br_embs, tl_offs, - br_offs, mlvl_targets) - loss_dict = dict(det_loss=det_losses, off_loss=off_losses) - if self.with_corner_emb: - loss_dict.update(pull_loss=pull_losses, push_loss=push_losses) - return loss_dict - - def loss_single(self, tl_hmp, br_hmp, tl_emb, br_emb, tl_off, br_off, - targets): - """Compute losses for single level. - - Args: - tl_hmp (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_hmp (Tensor): Bottom-right corner heatmap for current level with - shape (N, num_classes, H, W). - tl_emb (Tensor): Top-left corner embedding for current level with - shape (N, corner_emb_channels, H, W). - br_emb (Tensor): Bottom-right corner embedding for current level - with shape (N, corner_emb_channels, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - targets (dict): Corner target generated by `get_targets`. - - Returns: - tuple[torch.Tensor]: Losses of the head's differnet branches - containing the following losses: - - - det_loss (Tensor): Corner keypoint loss. - - pull_loss (Tensor): Part one of AssociativeEmbedding loss. - - push_loss (Tensor): Part two of AssociativeEmbedding loss. - - off_loss (Tensor): Corner offset loss. 
- """ - gt_tl_hmp = targets['topleft_heatmap'] - gt_br_hmp = targets['bottomright_heatmap'] - gt_tl_off = targets['topleft_offset'] - gt_br_off = targets['bottomright_offset'] - gt_embedding = targets['corner_embedding'] - - # Detection loss - tl_det_loss = self.loss_heatmap( - tl_hmp.sigmoid(), - gt_tl_hmp, - avg_factor=max(1, - gt_tl_hmp.eq(1).sum())) - br_det_loss = self.loss_heatmap( - br_hmp.sigmoid(), - gt_br_hmp, - avg_factor=max(1, - gt_br_hmp.eq(1).sum())) - det_loss = (tl_det_loss + br_det_loss) / 2.0 - - # AssociativeEmbedding loss - if self.with_corner_emb and self.loss_embedding is not None: - pull_loss, push_loss = self.loss_embedding(tl_emb, br_emb, - gt_embedding) - else: - pull_loss, push_loss = None, None - - # Offset loss - # We only compute the offset loss at the real corner position. - # The value of real corner would be 1 in heatmap ground truth. - # The mask is computed in class agnostic mode and its shape is - # batch * 1 * width * height. - tl_off_mask = gt_tl_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_tl_hmp) - br_off_mask = gt_br_hmp.eq(1).sum(1).gt(0).unsqueeze(1).type_as( - gt_br_hmp) - tl_off_loss = self.loss_offset( - tl_off, - gt_tl_off, - tl_off_mask, - avg_factor=max(1, tl_off_mask.sum())) - br_off_loss = self.loss_offset( - br_off, - gt_br_off, - br_off_mask, - avg_factor=max(1, br_off_mask.sum())) - - off_loss = (tl_off_loss + br_off_loss) / 2.0 - - return det_loss, pull_loss, push_loss, off_loss - - def get_bboxes(self, - tl_heats, - br_heats, - tl_embs, - br_embs, - tl_offs, - br_offs, - img_metas, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - tl_heats (list[Tensor]): Top-left corner heatmaps for each level - with shape (N, num_classes, H, W). - br_heats (list[Tensor]): Bottom-right corner heatmaps for each - level with shape (N, num_classes, H, W). - tl_embs (list[Tensor]): Top-left corner embeddings for each level - with shape (N, corner_emb_channels, H, W). - br_embs (list[Tensor]): Bottom-right corner embeddings for each - level with shape (N, corner_emb_channels, H, W). - tl_offs (list[Tensor]): Top-left corner offsets for each level - with shape (N, corner_offset_channels, H, W). - br_offs (list[Tensor]): Bottom-right corner offsets for each level - with shape (N, corner_offset_channels, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - """ - assert tl_heats[-1].shape[0] == br_heats[-1].shape[0] == len(img_metas) - result_list = [] - for img_id in range(len(img_metas)): - result_list.append( - self._get_bboxes_single( - tl_heats[-1][img_id:img_id + 1, :], - br_heats[-1][img_id:img_id + 1, :], - tl_offs[-1][img_id:img_id + 1, :], - br_offs[-1][img_id:img_id + 1, :], - img_metas[img_id], - tl_emb=tl_embs[-1][img_id:img_id + 1, :], - br_emb=br_embs[-1][img_id:img_id + 1, :], - rescale=rescale, - with_nms=with_nms)) - - return result_list - - def _get_bboxes_single(self, - tl_heat, - br_heat, - tl_off, - br_off, - img_meta, - tl_emb=None, - br_emb=None, - tl_centripetal_shift=None, - br_centripetal_shift=None, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into bbox predictions. - - Args: - tl_heat (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). 
- br_heat (Tensor): Bottom-right corner heatmap for current level - with shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - tl_emb (Tensor): Top-left corner embedding for current level with - shape (N, corner_emb_channels, H, W). - br_emb (Tensor): Bottom-right corner embedding for current level - with shape (N, corner_emb_channels, H, W). - tl_centripetal_shift: Top-left corner's centripetal shift for - current level with shape (N, 2, H, W). - br_centripetal_shift: Bottom-right corner's centripetal shift for - current level with shape (N, 2, H, W). - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - """ - if isinstance(img_meta, (list, tuple)): - img_meta = img_meta[0] - - batch_bboxes, batch_scores, batch_clses = self.decode_heatmap( - tl_heat=tl_heat.sigmoid(), - br_heat=br_heat.sigmoid(), - tl_off=tl_off, - br_off=br_off, - tl_emb=tl_emb, - br_emb=br_emb, - tl_centripetal_shift=tl_centripetal_shift, - br_centripetal_shift=br_centripetal_shift, - img_meta=img_meta, - k=self.test_cfg.corner_topk, - kernel=self.test_cfg.local_maximum_kernel, - distance_threshold=self.test_cfg.distance_threshold) - - if rescale: - batch_bboxes /= batch_bboxes.new_tensor(img_meta['scale_factor']) - - bboxes = batch_bboxes.view([-1, 4]) - scores = batch_scores.view([-1, 1]) - clses = batch_clses.view([-1, 1]) - - idx = scores.argsort(dim=0, descending=True) - bboxes = bboxes[idx].view([-1, 4]) - scores = scores[idx].view(-1) - clses = clses[idx].view(-1) - - detections = torch.cat([bboxes, scores.unsqueeze(-1)], -1) - keepinds = (detections[:, -1] > -0.1) - detections = detections[keepinds] - labels = clses[keepinds] - - if with_nms: - detections, labels = self._bboxes_nms(detections, labels, - self.test_cfg) - - return detections, labels - - def _bboxes_nms(self, bboxes, labels, cfg): - if labels.numel() == 0: - return bboxes, labels - - if 'nms_cfg' in cfg: - warning.warn('nms_cfg in test_cfg will be deprecated. ' - 'Please rename it as nms') - if 'nms' not in cfg: - cfg.nms = cfg.nms_cfg - - out_bboxes, keep = batched_nms(bboxes[:, :4], bboxes[:, -1], labels, - cfg.nms) - out_labels = labels[keep] - - if len(out_bboxes) > 0: - idx = torch.argsort(out_bboxes[:, -1], descending=True) - idx = idx[:cfg.max_per_img] - out_bboxes = out_bboxes[idx] - out_labels = out_labels[idx] - - return out_bboxes, out_labels - - def _gather_feat(self, feat, ind, mask=None): - """Gather feature according to index. - - Args: - feat (Tensor): Target feature map. - ind (Tensor): Target coord index. - mask (Tensor | None): Mask of featuremap. Default: None. - - Returns: - feat (Tensor): Gathered feature. - """ - dim = feat.size(2) - ind = ind.unsqueeze(2).repeat(1, 1, dim) - feat = feat.gather(1, ind) - if mask is not None: - mask = mask.unsqueeze(2).expand_as(feat) - feat = feat[mask] - feat = feat.view(-1, dim) - return feat - - def _local_maximum(self, heat, kernel=3): - """Extract local maximum pixel with given kernel. - - Args: - heat (Tensor): Target heatmap. - kernel (int): Kernel size of max pooling. Default: 3. 
- - Returns: - heat (Tensor): A heatmap where local maximum pixels maintain its - own value and other positions are 0. - """ - pad = (kernel - 1) // 2 - hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad) - keep = (hmax == heat).float() - return heat * keep - - def _transpose_and_gather_feat(self, feat, ind): - """Transpose and gather feature according to index. - - Args: - feat (Tensor): Target feature map. - ind (Tensor): Target coord index. - - Returns: - feat (Tensor): Transposed and gathered feature. - """ - feat = feat.permute(0, 2, 3, 1).contiguous() - feat = feat.view(feat.size(0), -1, feat.size(3)) - feat = self._gather_feat(feat, ind) - return feat - - def _topk(self, scores, k=20): - """Get top k positions from heatmap. - - Args: - scores (Tensor): Target heatmap with shape - [batch, num_classes, height, width]. - k (int): Target number. Default: 20. - - Returns: - tuple[torch.Tensor]: Scores, indexes, categories and coords of - topk keypoint. Containing following Tensors: - - - topk_scores (Tensor): Max scores of each topk keypoint. - - topk_inds (Tensor): Indexes of each topk keypoint. - - topk_clses (Tensor): Categories of each topk keypoint. - - topk_ys (Tensor): Y-coord of each topk keypoint. - - topk_xs (Tensor): X-coord of each topk keypoint. - """ - batch, _, height, width = scores.size() - topk_scores, topk_inds = torch.topk(scores.view(batch, -1), k) - topk_clses = topk_inds // (height * width) - topk_inds = topk_inds % (height * width) - topk_ys = topk_inds // width - topk_xs = (topk_inds % width).int().float() - return topk_scores, topk_inds, topk_clses, topk_ys, topk_xs - - def decode_heatmap(self, - tl_heat, - br_heat, - tl_off, - br_off, - tl_emb=None, - br_emb=None, - tl_centripetal_shift=None, - br_centripetal_shift=None, - img_meta=None, - k=100, - kernel=3, - distance_threshold=0.5, - num_dets=1000): - """Transform outputs for a single batch item into raw bbox predictions. - - Args: - tl_heat (Tensor): Top-left corner heatmap for current level with - shape (N, num_classes, H, W). - br_heat (Tensor): Bottom-right corner heatmap for current level - with shape (N, num_classes, H, W). - tl_off (Tensor): Top-left corner offset for current level with - shape (N, corner_offset_channels, H, W). - br_off (Tensor): Bottom-right corner offset for current level with - shape (N, corner_offset_channels, H, W). - tl_emb (Tensor | None): Top-left corner embedding for current - level with shape (N, corner_emb_channels, H, W). - br_emb (Tensor | None): Bottom-right corner embedding for current - level with shape (N, corner_emb_channels, H, W). - tl_centripetal_shift (Tensor | None): Top-left centripetal shift - for current level with shape (N, 2, H, W). - br_centripetal_shift (Tensor | None): Bottom-right centripetal - shift for current level with shape (N, 2, H, W). - img_meta (dict): Meta information of current image, e.g., - image size, scaling factor, etc. - k (int): Get top k corner keypoints from heatmap. - kernel (int): Max pooling kernel for extract local maximum pixels. - distance_threshold (float): Distance threshold. Top-left and - bottom-right corner keypoints with feature distance less than - the threshold will be regarded as keypoints from same object. - num_dets (int): Num of raw boxes before doing nms. - - Returns: - tuple[torch.Tensor]: Decoded output of CornerHead, containing the - following Tensors: - - - bboxes (Tensor): Coords of each box. - - scores (Tensor): Scores of each box. - - clses (Tensor): Categories of each box. 
- """ - with_embedding = tl_emb is not None and br_emb is not None - with_centripetal_shift = ( - tl_centripetal_shift is not None - and br_centripetal_shift is not None) - assert with_embedding + with_centripetal_shift == 1 - batch, _, height, width = tl_heat.size() - inp_h, inp_w, _ = img_meta['pad_shape'] - - # perform nms on heatmaps - tl_heat = self._local_maximum(tl_heat, kernel=kernel) - br_heat = self._local_maximum(br_heat, kernel=kernel) - - tl_scores, tl_inds, tl_clses, tl_ys, tl_xs = self._topk(tl_heat, k=k) - br_scores, br_inds, br_clses, br_ys, br_xs = self._topk(br_heat, k=k) - - # We use repeat instead of expand here because expand is a - # shallow-copy function. Thus it could cause unexpected testing result - # sometimes. Using expand will decrease about 10% mAP during testing - # compared to repeat. - tl_ys = tl_ys.view(batch, k, 1).repeat(1, 1, k) - tl_xs = tl_xs.view(batch, k, 1).repeat(1, 1, k) - br_ys = br_ys.view(batch, 1, k).repeat(1, k, 1) - br_xs = br_xs.view(batch, 1, k).repeat(1, k, 1) - - tl_off = self._transpose_and_gather_feat(tl_off, tl_inds) - tl_off = tl_off.view(batch, k, 1, 2) - br_off = self._transpose_and_gather_feat(br_off, br_inds) - br_off = br_off.view(batch, 1, k, 2) - - tl_xs = tl_xs + tl_off[..., 0] - tl_ys = tl_ys + tl_off[..., 1] - br_xs = br_xs + br_off[..., 0] - br_ys = br_ys + br_off[..., 1] - - if with_centripetal_shift: - tl_centripetal_shift = self._transpose_and_gather_feat( - tl_centripetal_shift, tl_inds).view(batch, k, 1, 2).exp() - br_centripetal_shift = self._transpose_and_gather_feat( - br_centripetal_shift, br_inds).view(batch, 1, k, 2).exp() - - tl_ctxs = tl_xs + tl_centripetal_shift[..., 0] - tl_ctys = tl_ys + tl_centripetal_shift[..., 1] - br_ctxs = br_xs - br_centripetal_shift[..., 0] - br_ctys = br_ys - br_centripetal_shift[..., 1] - - # all possible boxes based on top k corners (ignoring class) - tl_xs *= (inp_w / width) - tl_ys *= (inp_h / height) - br_xs *= (inp_w / width) - br_ys *= (inp_h / height) - - if with_centripetal_shift: - tl_ctxs *= (inp_w / width) - tl_ctys *= (inp_h / height) - br_ctxs *= (inp_w / width) - br_ctys *= (inp_h / height) - - x_off = img_meta['border'][2] - y_off = img_meta['border'][0] - - tl_xs -= x_off - tl_ys -= y_off - br_xs -= x_off - br_ys -= y_off - - tl_xs *= tl_xs.gt(0.0).type_as(tl_xs) - tl_ys *= tl_ys.gt(0.0).type_as(tl_ys) - br_xs *= br_xs.gt(0.0).type_as(br_xs) - br_ys *= br_ys.gt(0.0).type_as(br_ys) - - bboxes = torch.stack((tl_xs, tl_ys, br_xs, br_ys), dim=3) - area_bboxes = ((br_xs - tl_xs) * (br_ys - tl_ys)).abs() - - if with_centripetal_shift: - tl_ctxs -= x_off - tl_ctys -= y_off - br_ctxs -= x_off - br_ctys -= y_off - - tl_ctxs *= tl_ctxs.gt(0.0).type_as(tl_ctxs) - tl_ctys *= tl_ctys.gt(0.0).type_as(tl_ctys) - br_ctxs *= br_ctxs.gt(0.0).type_as(br_ctxs) - br_ctys *= br_ctys.gt(0.0).type_as(br_ctys) - - ct_bboxes = torch.stack((tl_ctxs, tl_ctys, br_ctxs, br_ctys), - dim=3) - area_ct_bboxes = ((br_ctxs - tl_ctxs) * (br_ctys - tl_ctys)).abs() - - rcentral = torch.zeros_like(ct_bboxes) - # magic nums from paper section 4.1 - mu = torch.ones_like(area_bboxes) / 2.4 - mu[area_bboxes > 3500] = 1 / 2.1 # large bbox have smaller mu - - bboxes_center_x = (bboxes[..., 0] + bboxes[..., 2]) / 2 - bboxes_center_y = (bboxes[..., 1] + bboxes[..., 3]) / 2 - rcentral[..., 0] = bboxes_center_x - mu * (bboxes[..., 2] - - bboxes[..., 0]) / 2 - rcentral[..., 1] = bboxes_center_y - mu * (bboxes[..., 3] - - bboxes[..., 1]) / 2 - rcentral[..., 2] = bboxes_center_x + mu * (bboxes[..., 2] - - 
bboxes[..., 0]) / 2 - rcentral[..., 3] = bboxes_center_y + mu * (bboxes[..., 3] - - bboxes[..., 1]) / 2 - area_rcentral = ((rcentral[..., 2] - rcentral[..., 0]) * - (rcentral[..., 3] - rcentral[..., 1])).abs() - dists = area_ct_bboxes / area_rcentral - - tl_ctx_inds = (ct_bboxes[..., 0] <= rcentral[..., 0]) | ( - ct_bboxes[..., 0] >= rcentral[..., 2]) - tl_cty_inds = (ct_bboxes[..., 1] <= rcentral[..., 1]) | ( - ct_bboxes[..., 1] >= rcentral[..., 3]) - br_ctx_inds = (ct_bboxes[..., 2] <= rcentral[..., 0]) | ( - ct_bboxes[..., 2] >= rcentral[..., 2]) - br_cty_inds = (ct_bboxes[..., 3] <= rcentral[..., 1]) | ( - ct_bboxes[..., 3] >= rcentral[..., 3]) - - if with_embedding: - tl_emb = self._transpose_and_gather_feat(tl_emb, tl_inds) - tl_emb = tl_emb.view(batch, k, 1) - br_emb = self._transpose_and_gather_feat(br_emb, br_inds) - br_emb = br_emb.view(batch, 1, k) - dists = torch.abs(tl_emb - br_emb) - - tl_scores = tl_scores.view(batch, k, 1).repeat(1, 1, k) - br_scores = br_scores.view(batch, 1, k).repeat(1, k, 1) - - scores = (tl_scores + br_scores) / 2 # scores for all possible boxes - - # tl and br should have same class - tl_clses = tl_clses.view(batch, k, 1).repeat(1, 1, k) - br_clses = br_clses.view(batch, 1, k).repeat(1, k, 1) - cls_inds = (tl_clses != br_clses) - - # reject boxes based on distances - dist_inds = dists > distance_threshold - - # reject boxes based on widths and heights - width_inds = (br_xs <= tl_xs) - height_inds = (br_ys <= tl_ys) - - scores[cls_inds] = -1 - scores[width_inds] = -1 - scores[height_inds] = -1 - scores[dist_inds] = -1 - if with_centripetal_shift: - scores[tl_ctx_inds] = -1 - scores[tl_cty_inds] = -1 - scores[br_ctx_inds] = -1 - scores[br_cty_inds] = -1 - - scores = scores.view(batch, -1) - scores, inds = torch.topk(scores, num_dets) - scores = scores.unsqueeze(2) - - bboxes = bboxes.view(batch, -1, 4) - bboxes = self._gather_feat(bboxes, inds) - - clses = tl_clses.contiguous().view(batch, -1, 1) - clses = self._gather_feat(clses, inds).float() - - return bboxes, scores, clses diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/__init__.py deleted file mode 100644 index be4ea28a86e3c165cc2556f860079305f316294e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -# from .cityscapes import CityscapesDataset -# from .coco import CocoDataset -# from .custom import CustomDataset -# from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, -# RepeatDataset) -# from .deepfashion import DeepFashionDataset -# from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset -# from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler -# from .utils import (NumClassCheckHook, get_loading_pipeline, -# replace_ImageToTensor) -# from .voc import VOCDataset -# from .wider_face import WIDERFaceDataset -# from .xml_style import XMLDataset - -# __all__ = [ -# 'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset', -# 'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset', -# 'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler', -# 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', -# 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES', -# 'build_dataset', 
'replace_ImageToTensor', 'get_loading_pipeline', -# 'NumClassCheckHook' -# ] -from .utils import replace_ImageToTensor -__all__ = ['replace_ImageToTensor'] \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/seg/sampler/base_pixel_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/seg/sampler/base_pixel_sampler.py deleted file mode 100644 index b75b1566c9f18169cee51d4b55d75e0357b69c57..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/seg/sampler/base_pixel_sampler.py +++ /dev/null @@ -1,12 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BasePixelSampler(metaclass=ABCMeta): - """Base class of pixel sampler.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def sample(self, seg_logit, seg_label): - """Placeholder for sample function.""" diff --git a/spaces/abidlabs/Lime/app.py b/spaces/abidlabs/Lime/app.py deleted file mode 100644 index 206c03e4658f2489bd92089f01d13bde77a24aab..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/Lime/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='abidlabs/Lime') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Lime` - To use this theme, set `theme='abidlabs/Lime'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/evaluator_wrapper.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/evaluator_wrapper.py deleted file mode 100644 index fe4558a13ccc2ce0579540b8b77f958096e9984c..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/evaluator_wrapper.py +++ /dev/null @@ -1,92 +0,0 @@ - -import torch -from os.path import join as pjoin -import numpy as np -from models.modules import MovementConvEncoder, TextEncoderBiGRUCo, MotionEncoderBiGRUCo -from utils.word_vectorizer import POS_enumerator - -def build_models(opt): - movement_enc = MovementConvEncoder(opt.dim_pose-4, opt.dim_movement_enc_hidden, opt.dim_movement_latent) - text_enc = TextEncoderBiGRUCo(word_size=opt.dim_word, - pos_size=opt.dim_pos_ohot, - hidden_size=opt.dim_text_hidden, - output_size=opt.dim_coemb_hidden, - device=opt.device) - - motion_enc = MotionEncoderBiGRUCo(input_size=opt.dim_movement_latent, - hidden_size=opt.dim_motion_hidden, - output_size=opt.dim_coemb_hidden, - 
device=opt.device) - - checkpoint = torch.load(pjoin(opt.checkpoints_dir, opt.dataset_name, 'text_mot_match', 'model', 'finest.tar'), - map_location=opt.device) - movement_enc.load_state_dict(checkpoint['movement_encoder']) - text_enc.load_state_dict(checkpoint['text_encoder']) - motion_enc.load_state_dict(checkpoint['motion_encoder']) - print('Loading Evaluation Model Wrapper (Epoch %d) Completed!!' % (checkpoint['epoch'])) - return text_enc, motion_enc, movement_enc - - -class EvaluatorModelWrapper(object): - - def __init__(self, opt): - - if opt.dataset_name == 't2m': - opt.dim_pose = 263 - elif opt.dataset_name == 'kit': - opt.dim_pose = 251 - else: - raise KeyError('Dataset not Recognized!!!') - - opt.dim_word = 300 - opt.max_motion_length = 196 - opt.dim_pos_ohot = len(POS_enumerator) - opt.dim_motion_hidden = 1024 - opt.max_text_len = 20 - opt.dim_text_hidden = 512 - opt.dim_coemb_hidden = 512 - - # print(opt) - - self.text_encoder, self.motion_encoder, self.movement_encoder = build_models(opt) - self.opt = opt - self.device = opt.device - - self.text_encoder.to(opt.device) - self.motion_encoder.to(opt.device) - self.movement_encoder.to(opt.device) - - self.text_encoder.eval() - self.motion_encoder.eval() - self.movement_encoder.eval() - - # Please note that the results does not following the order of inputs - def get_co_embeddings(self, word_embs, pos_ohot, cap_lens, motions, m_lens): - with torch.no_grad(): - word_embs = word_embs.detach().to(self.device).float() - pos_ohot = pos_ohot.detach().to(self.device).float() - motions = motions.detach().to(self.device).float() - - '''Movement Encoding''' - movements = self.movement_encoder(motions[..., :-4]).detach() - m_lens = m_lens // self.opt.unit_length - motion_embedding = self.motion_encoder(movements, m_lens) - - '''Text Encoding''' - text_embedding = self.text_encoder(word_embs, pos_ohot, cap_lens) - return text_embedding, motion_embedding - - # Please note that the results does not following the order of inputs - def get_motion_embeddings(self, motions, m_lens): - with torch.no_grad(): - motions = motions.detach().to(self.device).float() - - align_idx = np.argsort(m_lens.data.tolist())[::-1].copy() - motions = motions[align_idx] - m_lens = m_lens[align_idx] - - '''Movement Encoding''' - movements = self.movement_encoder(motions[..., :-4]).detach() - m_lens = m_lens // self.opt.unit_length - motion_embedding = self.motion_encoder(movements, m_lens) - return motion_embedding diff --git a/spaces/ad2/youtube-whisper/README.md b/spaces/ad2/youtube-whisper/README.md deleted file mode 100644 index c3180680339155aaf1d27f629129b68d12cac021..0000000000000000000000000000000000000000 --- a/spaces/ad2/youtube-whisper/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ahmedghani/Editing-Tools/watermark_remover.py b/spaces/ahmedghani/Editing-Tools/watermark_remover.py deleted file mode 100644 index 8aa3de19a8da8dc836aaca32b2856c65baed0d6a..0000000000000000000000000000000000000000 --- a/spaces/ahmedghani/Editing-Tools/watermark_remover.py +++ /dev/null @@ -1,147 +0,0 @@ -import glob -import os -from PIL import Image -import shutil -import concurrent.futures -import gradio as gr -import cv2 -import re -import numpy as np -import torch -from 
lama_cleaner.helper import ( - norm_img, - get_cache_path_by_url, - load_jit_model, -) -from lama_cleaner.model.base import InpaintModel -from lama_cleaner.schema import Config - -LAMA_MODEL_URL = os.environ.get( - "LAMA_MODEL_URL", - "https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt", -) -LAMA_MODEL_MD5 = os.environ.get("LAMA_MODEL_MD5", "e3aa4aaa15225a33ec84f9f4bc47e500") - -class LaMa(InpaintModel): - name = "lama" - pad_mod = 8 - - def init_model(self, device, **kwargs): - self.model = load_jit_model(LAMA_MODEL_URL, device, LAMA_MODEL_MD5).eval() - - @staticmethod - def is_downloaded() -> bool: - return os.path.exists(get_cache_path_by_url(LAMA_MODEL_URL)) - - def forward(self, image, mask, config: Config): - """Input image and output image have same size - image: [H, W, C] RGB - mask: [H, W] - return: BGR IMAGE - """ - image = norm_img(image) - mask = norm_img(mask) - - mask = (mask > 0) * 1 - image = torch.from_numpy(image).unsqueeze(0).to(self.device) - mask = torch.from_numpy(mask).unsqueeze(0).to(self.device) - - inpainted_image = self.model(image, mask) - - cur_res = inpainted_image[0].permute(1, 2, 0).detach().cpu().numpy() - cur_res = np.clip(cur_res * 255, 0, 255).astype("uint8") - cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR) - return cur_res - -lama_model = LaMa("cuda" if torch.cuda.is_available() else "cpu") -config = Config(hd_strategy_crop_margin=196, ldm_steps=25, hd_strategy='Original', hd_strategy_crop_trigger_size=1280, hd_strategy_resize_limit=2048) - -def remove_image_watermark(inputs): - alpha_channel = None - image, mask = inputs["image"], inputs["mask"] - if image.mode == "RGBA": - image = np.array(image) - alpha_channel = image[:, :, -1] - image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB) - else: - image = np.array(image) - mask = cv2.threshold(np.array(mask.convert("L")), 127, 255, cv2.THRESH_BINARY)[1] - output = lama_model(image, mask, config) - output = cv2.cvtColor(output.astype(np.uint8), cv2.COLOR_BGR2RGB) - if alpha_channel is not None: - if alpha_channel.shape[:2] != output.shape[:2]: - alpha_channel = cv2.resize( - alpha_channel, dsize=(output.shape[1], output.shape[0]) - ) - output = np.concatenate( - (output, alpha_channel[:, :, np.newaxis]), axis=-1 - ) - return Image.fromarray(output) - -def process_image(mask_data, image_path): - output = remove_image_watermark({"image": Image.open(image_path), "mask": mask_data}) - output_image_path = os.path.join('output_images', os.path.splitext(os.path.basename(image_path))[0] + '_inpainted' + os.path.splitext(image_path)[1]) - output.save(output_image_path) - return output_image_path - -def remove_video_watermark(sketch, images_path='frames', output_path='output_images'): - if os.path.exists('output_images'): - shutil.rmtree('output_images') - os.makedirs('output_images') - - image_paths = glob.glob(f'{images_path}/*.*') - - with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor: - executor.map(lambda image_path: process_image(sketch["mask"], image_path), image_paths) - - return gr.File.update(value=convert_frames_to_video('output_images'), visible=True), gr.Button.update(value='Done!') - -def convert_video_to_frames(video): - if os.path.exists('input_video.mp4'): - os.remove('input_video.mp4') - # save the video to the current directory from temporary file - with open(video, 'rb') as f: - with open('input_video.mp4', 'wb') as f2: - f2.write(f.read()) - # os.system(f"ffmpeg -i {video} input_video.mp4") - video_path = 'input_video.mp4' - - if 
os.path.exists('frames'): - shutil.rmtree('frames') - os.makedirs('frames') - - video_name = os.path.splitext(os.path.basename(video_path))[0] - vidcap = cv2.VideoCapture(video_path) - success, image = vidcap.read() - count = 1 - while success: - cv2.imwrite(f"frames/{video_name}_{count}.jpg", image) - success, image = vidcap.read() - count += 1 - - return gr.Image.update(value=f"{os.getcwd()}/frames/{video_name}_1.jpg", interactive=True), gr.Button.update(interactive=True) - -def convert_frames_to_video(frames_path): - if os.path.exists('output_video.mp4'): - os.remove('output_video.mp4') - - img_array = [] - filelist = glob.glob(f"{frames_path}/*.jpg") - - # Sort frames by number - frame_numbers = [int(re.findall(r'\d+', os.path.basename(frame))[0]) for frame in filelist] - sorted_frames = [frame for _, frame in sorted(zip(frame_numbers, filelist), key=lambda pair: pair[0])] - - for filename in sorted_frames: - img = cv2.imread(filename) - height, width, layers = img.shape - size = (width, height) - img_array.append(img) - - out = cv2.VideoWriter('output_video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 25, size) - - for i in range(len(img_array)): - out.write(img_array[i]) - out.release() - - return 'output_video.mp4' \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/src/vision.cpp b/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/src/vision.cpp deleted file mode 100644 index 4a08821e0121a77556aa7a263ec8ebfa928b13b6..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/src/vision.cpp +++ /dev/null @@ -1,21 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -/*! -* Copyright (c) Facebook, Inc. and its affiliates. -* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR -*/ - -#include "ms_deform_attn.h" - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward"); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward"); -} diff --git a/spaces/akhaliq/deeplab2/model/layers/__init__.py b/spaces/akhaliq/deeplab2/model/layers/__init__.py deleted file mode 100644 index 35e4ce02ff422f3aa84ab644b88d65b13e0cbc03..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/layers/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - diff --git a/spaces/akhaliq/lama/bin/evaluate_predicts.py b/spaces/akhaliq/lama/bin/evaluate_predicts.py deleted file mode 100644 index a4c182a50bc0cc3e2e03c713c2c0be2a804b04b8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/evaluate_predicts.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env python3 - -import os - -import pandas as pd - -from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset -from saicinpainting.evaluation.evaluator import InpaintingEvaluator, lpips_fid100_f1 -from saicinpainting.evaluation.losses.base_loss import SegmentationAwareSSIM, \ - SegmentationClassStats, SSIMScore, LPIPSScore, FIDScore, SegmentationAwareLPIPS, SegmentationAwareFID -from saicinpainting.evaluation.utils import load_yaml - - -def main(args): - config = load_yaml(args.config) - - dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs) - - metrics = { - 'ssim': SSIMScore(), - 'lpips': LPIPSScore(), - 'fid': FIDScore() - } - enable_segm = config.get('segmentation', dict(enable=False)).get('enable', False) - if enable_segm: - weights_path = os.path.expandvars(config.segmentation.weights_path) - metrics.update(dict( - segm_stats=SegmentationClassStats(weights_path=weights_path), - segm_ssim=SegmentationAwareSSIM(weights_path=weights_path), - segm_lpips=SegmentationAwareLPIPS(weights_path=weights_path), - segm_fid=SegmentationAwareFID(weights_path=weights_path) - )) - evaluator = InpaintingEvaluator(dataset, scores=metrics, - integral_title='lpips_fid100_f1', integral_func=lpips_fid100_f1, - **config.evaluator_kwargs) - - os.makedirs(os.path.dirname(args.outpath), exist_ok=True) - - results = evaluator.evaluate() - - results = pd.DataFrame(results).stack(1).unstack(0) - results.dropna(axis=1, how='all', inplace=True) - results.to_csv(args.outpath, sep='\t', float_format='%.4f') - - if enable_segm: - only_short_results = results[[c for c in results.columns if not c[0].startswith('segm_')]].dropna(axis=1, how='all') - only_short_results.to_csv(args.outpath + '_short', sep='\t', float_format='%.4f') - - print(only_short_results) - - segm_metrics_results = results[['segm_ssim', 'segm_lpips', 'segm_fid']].dropna(axis=1, how='all').transpose().unstack(0).reorder_levels([1, 0], axis=1) - segm_metrics_results.drop(['mean', 'std'], axis=0, inplace=True) - - segm_stats_results = results['segm_stats'].dropna(axis=1, how='all').transpose() - segm_stats_results.index = pd.MultiIndex.from_tuples(n.split('/') for n in segm_stats_results.index) - segm_stats_results = segm_stats_results.unstack(0).reorder_levels([1, 0], axis=1) - segm_stats_results.sort_index(axis=1, inplace=True) - segm_stats_results.dropna(axis=0, how='all', inplace=True) - - segm_results = pd.concat([segm_metrics_results, segm_stats_results], axis=1, sort=True) - segm_results.sort_values(('mask_freq', 'total'), ascending=False, inplace=True) - - segm_results.to_csv(args.outpath + '_segm', sep='\t', float_format='%.4f') - else: - print(results) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to evaluation config') - aparser.add_argument('datadir', type=str, - help='Path to folder with images and masks (output of gen_mask_dataset.py)') - aparser.add_argument('predictdir', type=str, - help='Path to folder with predicts (e.g. 
predict_hifill_baseline.py)') - aparser.add_argument('outpath', type=str, help='Where to put results') - - main(aparser.parse_args()) diff --git a/spaces/akhaliq/yolov7/utils/autoanchor.py b/spaces/akhaliq/yolov7/utils/autoanchor.py deleted file mode 100644 index bec9017711006fae4f34cf96e1a28ebbafd0c516..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/yolov7/utils/autoanchor.py +++ /dev/null @@ -1,160 +0,0 @@ -# Auto-anchor utils - -import numpy as np -import torch -import yaml -from scipy.cluster.vq import kmeans -from tqdm import tqdm - -from utils.general import colorstr - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print('Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) - - -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - prefix = colorstr('autoanchor: ') - print(f'\n{prefix}Analyzing anchors... ', end='') - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1. / thr).float().mean() # best possible recall - return bpr, aat - - anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors - bpr, aat = metric(anchors) - print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='') - if bpr < 0.98: # threshold to recompute - print('. Attempting to improve anchors, please wait...') - na = m.anchor_grid.numel() // 2 # number of anchors - try: - anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - except Exception as e: - print(f'{prefix}ERROR: {e}') - new_bpr = metric(anchors)[0] - if new_bpr > bpr: # replace anchors - anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) - m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference - m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss - check_anchor_order(m) - print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.') - else: - print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.') - print('') # newline - - -def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - path: path to dataset *.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - verbose: print all results - - Return: - k: kmeans evolved anchors - - Usage: - from utils.autoanchor import *; _ = kmean_anchors() - """ - thr = 1. 
/ thr - prefix = colorstr('autoanchor: ') - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def anchor_fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr') - print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' - f'past_thr={x[x > thr].mean():.3f}-mean: ', end='') - for i, x in enumerate(k): - print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg - return k - - if isinstance(path, str): # *.yaml file - with open(path) as f: - data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict - from utils.datasets import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - else: - dataset = path # dataset - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.') - wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels - # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - - # Kmeans calculation - print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...') - s = wh.std(0) # sigmas for whitening - k, dist = kmeans(wh / s, n, iter=30) # points, mean distance - assert len(k) == n, print(f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}') - k *= s - wh = torch.tensor(wh, dtype=torch.float32) # filtered - wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered - k = print_results(k) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.savefig('wh.png', dpi=200) - - # Evolve - npr = np.random - f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = anchor_fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' - if verbose: - print_results(k) - - return print_results(k) diff --git a/spaces/akuysal/SMS-spam-English-sklearn/app.py b/spaces/akuysal/SMS-spam-English-sklearn/app.py deleted file mode 100644 index 
316fa761d7ae7b9c1ff6e70ac64851fadec01b5c..0000000000000000000000000000000000000000 --- a/spaces/akuysal/SMS-spam-English-sklearn/app.py +++ /dev/null @@ -1,49 +0,0 @@ -from sklearn.feature_extraction.text import TfidfVectorizer -# import for loading python objects (scikit-learn models) -import pickle -import nltk -from nltk.data import load -from nltk.stem import PorterStemmer -import streamlit as st -import sklearn - -nltk.download('punkt') - -def custom_tokenizer_with_English_stemmer(text): - # my text was unicode so I had to use the unicode-specific translate function. If your documents are strings, you will need to use a different `translate` function here. `Translated` here just does search-replace. See the trans_table: any matching character in the set is replaced with `None` - tokens = [word for word in nltk.word_tokenize(text)] - stems = [stemmerEN.stem(item.lower()) for item in tokens] - return stems - -def predictSMSdata(test_text): - categories = ["legitimate", "spam"] - categories.sort() - - # load model - filename1 = "LinearSVC_SMS_spam_EN.pickle" - file_handle1 = open(filename1, "rb") - classifier = pickle.load(file_handle1) - file_handle1.close() - - # load tfidf_vectorizer for transforming test text data - filename2 = "tfidf_vectorizer_EN.pickle" - file_handle2 = open(filename2, "rb") - tfidf_vectorizer = pickle.load(file_handle2) - file_handle2.close() - - test_list=[test_text] - tfidf_vectorizer_vectors_test = tfidf_vectorizer.transform(test_list) - predicted = classifier.predict(tfidf_vectorizer_vectors_test) - print(categories[predicted[0]]) - return categories[predicted[0]] - -# Porter Stemmer for English -stemmerEN = PorterStemmer() - -# adding the text that will show in the text box -default_value = "ASKED 3MOBILE IF 0870 CHATLINES INCLU IN FREE MINS. INDIA CUST SERVs SED YES. L8ER GOT MEGA BILL. 3 DONT GIV A SHIT. BAILIFF DUE IN DAYS. I O £250 3 WANT £800" -text = st.text_area("enter some text!", default_value) -if text: - out = predictSMSdata(text) - st.write("The category of SMS = " + out.upper()) - \ No newline at end of file diff --git a/spaces/alex-mindspace/gpt-agents/gradio_app/interface.py b/spaces/alex-mindspace/gpt-agents/gradio_app/interface.py deleted file mode 100644 index 47ebea3791ab29d93fc8370f95480f02104704bb..0000000000000000000000000000000000000000 --- a/spaces/alex-mindspace/gpt-agents/gradio_app/interface.py +++ /dev/null @@ -1,126 +0,0 @@ -import sys -import gradio as gr -import json -import threading -import subprocess -from pathlib import Path -import time - -root_dir = Path(__file__).parent.parent -sys.path.append(str(root_dir)) -from gradio_app.interacton_with_swarm import * - -SWARM_IS_RUNNING = False -SWARM_THREAD = None - -def display_logs(): - return read_swarm_logs() - -def display_output(): - return read_swarm_output() - -def swarm_interface(): -# def swarm_interface(swarm_role, swarm_global_goal, swarm_goals, n_managers, n_analysts, n_googlers): - global SWARM_IS_RUNNING, SWARM_THREAD - # please, don't judge me for this hardcoding. 
it's 3am and it's the first time i use gradio =))) - # Call the necessary set_ functions with the user inputs - # set_swarm_role(swarm_role) - # set_swarm_global_goal(swarm_global_goal) - # set_swarm_goals(swarm_goals) - # agents_config = [ - # {"type": "manager", "n": n_managers}, - # {"type": "analyst", "n": n_analysts}, - # {"type": "googler", "n": n_googlers} - # ] - # set_swarm_agents_config(agents_config) - - SWARM_THREAD = threading.Thread(target=execute_swarm, args=(SWARM_IS_RUNNING,)) - SWARM_THREAD.start() - SWARM_IS_RUNNING = True - - print(f"Swarm is running. SWARM_IS_RUNNING = {SWARM_IS_RUNNING}") - -def create_gradio_interface(): - global SWARM_IS_RUNNING, SWARM_THREAD - title = """ -

    🐝🐝 Swarm Intelligence 🐝🐝

    -
    - -
    -

    ⚠️Attention please⚠️! Due to the heavy load on the server, this version does not accept arbitrary input cases. We use the fixed case 'VC evaluates the investment potential of Brainamics'

    -

    Role: professional venture capital agency with a proven track record of consistently funding successful startups

    -

    Goal: A new startup has just sent us their pitch. Determine whether the startup is worth investing in. The startup is called Brainamics and it operates in the space of brain-computer interfaces.

    - -
    -
    - ⚠️⚠️We highly recommend using a duplicate space!⚠️⚠️ -
    Due to huggingface's API limitations, there can only be one instance of the swarm running at a time. You can see it's output, but your input will be ignored. The swarm resets every 10 minutes, so maybe you get lucky. - Duplicate Space -
    - """ - - #display message for themes feature - theme_addon_msg = """ - The swarm of agents combines a huge number of parallel agents divided into roles, including (for now) managers, analytics, and googlers. - The agents all interact with each other through the shared memory and the task queue. - """ - - #Modifying existing Gradio Theme - theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green", - text_size=gr.themes.sizes.text_lg) - - with gr.Blocks() as demo: - # Create a container on the left for the inputs - gr.HTML(title) - gr.HTML(theme_addon_msg) - - # layout - with gr.Row(): - # with gr.Column(variant="panel", scale=0.4): - # with gr.Accordion(label="Swarm goals (can leave empty for default)", open=False): - # # Create a textbox for swarm role - # swarm_role = gr.Textbox(placeholder=get_swarm_role(), label="Swarm role", disabled=True) - # # Create a textbox for swarm global goal - # swarm_global_goal = gr.Textbox(placeholder=get_swarm_global_goal(), label="Swarm global goal", disabled=True, description="⚠️Attention please⚠️! This version is specifically limited to the input of any case due to the heavy load on the server. We use the case \"VC evaluates the investment potential of Brainamics\"") - # # Create a list for swarm goals - # swarm_goals = gr.List(headers=None, col_count=(1, "fixed"), max_cols=1) - # with gr.Accordion(label="Agents Setup:", open=False): - # Create a textbox for number of manager agents - # n_managers = gr.Textbox(placeholder=get_swarm_agents_config()[0]["n"], label="Number of manager agents", disabled=True) - # Create a textbox for number of analyst agents - # n_analysts = gr.Textbox(placeholder=get_swarm_agents_config()[1]["n"], label="Number of analyst agents", disabled=True) - # Create a textbox for number of googler agents - # n_googlers = gr.Textbox(placeholder=get_swarm_agents_config()[2]["n"], label="Number of googler agents", disabled=True) - # create a submit button - - # Create a container on the right for the outputs - with gr.Column(variant="panel", scale=1): - submit = gr.Button(value="Start the Swarm 🚀") - # Create a textbox for output - output_textbox = gr.Textbox(label="Output", lines=20) - # Create a textbox for logs - logs_textbox = gr.Textbox(label="Logs", lines=8) - update_view_button = gr.Button(value="Update Results Display 🔄") - gr.HTML("""

    (If someone knows how to update the display dynamically, please save us, that's embarrassing 😳)

    """) - - #Event handling - def update_view_callback(): - return display_logs(), display_output() - - def submit_callback(): - # def submit_callback(swarm_role, swarm_global_goal, swarm_goals, n_managers, n_analysts, n_googlers): - global SWARM_IS_RUNNING, SWARM_THREAD - if isinstance(SWARM_THREAD, threading.Thread): - SWARM_IS_RUNNING = SWARM_THREAD.is_alive() - print(f"Swarm is running. SWARM_IS_RUNNING = {SWARM_IS_RUNNING}") - if not SWARM_IS_RUNNING: - swarm_interface() - # swarm_interface(swarm_role, swarm_global_goal, swarm_goals) - # swarm_interface(swarm_role, swarm_global_goal, swarm_goals, n_managers, n_analysts, n_googlers) - return display_logs(), display_output() - - submit.click(submit_callback, outputs=[logs_textbox, output_textbox]) - # submit.click(submit_callback, inputs=[swarm_role, swarm_global_goal, swarm_goals, n_managers, n_analysts, n_googlers], outputs=[logs_textbox, output_textbox]) - update_view_button.click(update_view_callback, outputs=[logs_textbox, output_textbox]) - - return demo \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pep517/meta.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pep517/meta.py deleted file mode 100644 index d525de5c6c8791def8dbb2f460027e200c934874..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pep517/meta.py +++ /dev/null @@ -1,92 +0,0 @@ -"""Build metadata for a project using PEP 517 hooks. -""" -import argparse -import logging -import os -import shutil -import functools - -try: - import importlib.metadata as imp_meta -except ImportError: - import importlib_metadata as imp_meta - -try: - from zipfile import Path -except ImportError: - from zipp import Path - -from .envbuild import BuildEnvironment -from .wrappers import Pep517HookCaller, quiet_subprocess_runner -from .dirtools import tempdir, mkdir_p, dir_to_zipfile -from .build import validate_system, load_system, compat_system - -log = logging.getLogger(__name__) - - -def _prep_meta(hooks, env, dest): - reqs = hooks.get_requires_for_build_wheel({}) - log.info('Got build requires: %s', reqs) - - env.pip_install(reqs) - log.info('Installed dynamic build dependencies') - - with tempdir() as td: - log.info('Trying to build metadata in %s', td) - filename = hooks.prepare_metadata_for_build_wheel(td, {}) - source = os.path.join(td, filename) - shutil.move(source, os.path.join(dest, os.path.basename(filename))) - - -def build(source_dir='.', dest=None, system=None): - system = system or load_system(source_dir) - dest = os.path.join(source_dir, dest or 'dist') - mkdir_p(dest) - validate_system(system) - hooks = Pep517HookCaller( - source_dir, system['build-backend'], system.get('backend-path') - ) - - with hooks.subprocess_runner(quiet_subprocess_runner): - with BuildEnvironment() as env: - env.pip_install(system['requires']) - _prep_meta(hooks, env, dest) - - -def build_as_zip(builder=build): - with tempdir() as out_dir: - builder(dest=out_dir) - return dir_to_zipfile(out_dir) - - -def load(root): - """ - Given a source directory (root) of a package, - return an importlib.metadata.Distribution object - with metadata build from that package. 
- """ - root = os.path.expanduser(root) - system = compat_system(root) - builder = functools.partial(build, source_dir=root, system=system) - path = Path(build_as_zip(builder)) - return imp_meta.PathDistribution(path) - - -parser = argparse.ArgumentParser() -parser.add_argument( - 'source_dir', - help="A directory containing pyproject.toml", -) -parser.add_argument( - '--out-dir', '-o', - help="Destination in which to save the builds relative to source dir", -) - - -def main(): - args = parser.parse_args() - build(args.source_dir, args.out_dir) - - -if __name__ == '__main__': - main() diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py deleted file mode 100644 index b3dfa5706378331a971fe402642590099018909b..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py +++ /dev/null @@ -1,538 +0,0 @@ -# -*- coding: utf-8 -*- - -""" -requests.adapters -~~~~~~~~~~~~~~~~~ - -This module contains the transport adapters that Requests uses to define -and maintain connections. -""" - -import os.path -import socket - -from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url -from pip._vendor.urllib3.response import HTTPResponse -from pip._vendor.urllib3.util import parse_url -from pip._vendor.urllib3.util import Timeout as TimeoutSauce -from pip._vendor.urllib3.util.retry import Retry -from pip._vendor.urllib3.exceptions import ClosedPoolError -from pip._vendor.urllib3.exceptions import ConnectTimeoutError -from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError -from pip._vendor.urllib3.exceptions import InvalidHeader as _InvalidHeader -from pip._vendor.urllib3.exceptions import MaxRetryError -from pip._vendor.urllib3.exceptions import NewConnectionError -from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError -from pip._vendor.urllib3.exceptions import ProtocolError -from pip._vendor.urllib3.exceptions import ReadTimeoutError -from pip._vendor.urllib3.exceptions import SSLError as _SSLError -from pip._vendor.urllib3.exceptions import ResponseError -from pip._vendor.urllib3.exceptions import LocationValueError - -from .models import Response -from .compat import urlparse, basestring -from .utils import (DEFAULT_CA_BUNDLE_PATH, extract_zipped_paths, - get_encoding_from_headers, prepend_scheme_if_needed, - get_auth_from_url, urldefragauth, select_proxy) -from .structures import CaseInsensitiveDict -from .cookies import extract_cookies_to_jar -from .exceptions import (ConnectionError, ConnectTimeout, ReadTimeout, SSLError, - ProxyError, RetryError, InvalidSchema, InvalidProxyURL, - InvalidURL, InvalidHeader) -from .auth import _basic_auth_str - -try: - from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager -except ImportError: - def SOCKSProxyManager(*args, **kwargs): - raise InvalidSchema("Missing dependencies for SOCKS support.") - -DEFAULT_POOLBLOCK = False -DEFAULT_POOLSIZE = 10 -DEFAULT_RETRIES = 0 -DEFAULT_POOL_TIMEOUT = None - - -class BaseAdapter(object): - """The Base Transport Adapter""" - - def __init__(self): - super(BaseAdapter, self).__init__() - - def send(self, request, stream=False, timeout=None, verify=True, - cert=None, proxies=None): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. 
- :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - """ - raise NotImplementedError - - def close(self): - """Cleans up adapter specific items.""" - raise NotImplementedError - - -class HTTPAdapter(BaseAdapter): - """The built-in HTTP Adapter for urllib3. - - Provides a general-case interface for Requests sessions to contact HTTP and - HTTPS urls by implementing the Transport Adapter interface. This class will - usually be created by the :class:`Session ` class under the - covers. - - :param pool_connections: The number of urllib3 connection pools to cache. - :param pool_maxsize: The maximum number of connections to save in the pool. - :param max_retries: The maximum number of retries each connection - should attempt. Note, this applies only to failed DNS lookups, socket - connections and connection timeouts, never to requests where data has - made it to the server. By default, Requests does not retry failed - connections. If you need granular control over the conditions under - which we retry a request, import urllib3's ``Retry`` class and pass - that instead. - :param pool_block: Whether the connection pool should block for connections. - - Usage:: - - >>> import requests - >>> s = requests.Session() - >>> a = requests.adapters.HTTPAdapter(max_retries=3) - >>> s.mount('http://', a) - """ - __attrs__ = ['max_retries', 'config', '_pool_connections', '_pool_maxsize', - '_pool_block'] - - def __init__(self, pool_connections=DEFAULT_POOLSIZE, - pool_maxsize=DEFAULT_POOLSIZE, max_retries=DEFAULT_RETRIES, - pool_block=DEFAULT_POOLBLOCK): - if max_retries == DEFAULT_RETRIES: - self.max_retries = Retry(0, read=False) - else: - self.max_retries = Retry.from_int(max_retries) - self.config = {} - self.proxy_manager = {} - - super(HTTPAdapter, self).__init__() - - self._pool_connections = pool_connections - self._pool_maxsize = pool_maxsize - self._pool_block = pool_block - - self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block) - - def __getstate__(self): - return {attr: getattr(self, attr, None) for attr in self.__attrs__} - - def __setstate__(self, state): - # Can't handle by adding 'proxy_manager' to self.__attrs__ because - # self.poolmanager uses a lambda function, which isn't pickleable. - self.proxy_manager = {} - self.config = {} - - for attr, value in state.items(): - setattr(self, attr, value) - - self.init_poolmanager(self._pool_connections, self._pool_maxsize, - block=self._pool_block) - - def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs): - """Initializes a urllib3 PoolManager. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param connections: The number of urllib3 connection pools to cache. - :param maxsize: The maximum number of connections to save in the pool. - :param block: Block when no free connections are available. - :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager. 
- """ - # save these values for pickling - self._pool_connections = connections - self._pool_maxsize = maxsize - self._pool_block = block - - self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize, - block=block, strict=True, **pool_kwargs) - - def proxy_manager_for(self, proxy, **proxy_kwargs): - """Return urllib3 ProxyManager for the given proxy. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The proxy to return a urllib3 ProxyManager for. - :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager. - :returns: ProxyManager - :rtype: urllib3.ProxyManager - """ - if proxy in self.proxy_manager: - manager = self.proxy_manager[proxy] - elif proxy.lower().startswith('socks'): - username, password = get_auth_from_url(proxy) - manager = self.proxy_manager[proxy] = SOCKSProxyManager( - proxy, - username=username, - password=password, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs - ) - else: - proxy_headers = self.proxy_headers(proxy) - manager = self.proxy_manager[proxy] = proxy_from_url( - proxy, - proxy_headers=proxy_headers, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs) - - return manager - - def cert_verify(self, conn, url, verify, cert): - """Verify a SSL certificate. This method should not be called from user - code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param conn: The urllib3 connection object associated with the cert. - :param url: The requested URL. - :param verify: Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: The SSL certificate to verify. - """ - if url.lower().startswith('https') and verify: - - cert_loc = None - - # Allow self-specified cert location. - if verify is not True: - cert_loc = verify - - if not cert_loc: - cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH) - - if not cert_loc or not os.path.exists(cert_loc): - raise IOError("Could not find a suitable TLS CA certificate bundle, " - "invalid path: {}".format(cert_loc)) - - conn.cert_reqs = 'CERT_REQUIRED' - - if not os.path.isdir(cert_loc): - conn.ca_certs = cert_loc - else: - conn.ca_cert_dir = cert_loc - else: - conn.cert_reqs = 'CERT_NONE' - conn.ca_certs = None - conn.ca_cert_dir = None - - if cert: - if not isinstance(cert, basestring): - conn.cert_file = cert[0] - conn.key_file = cert[1] - else: - conn.cert_file = cert - conn.key_file = None - if conn.cert_file and not os.path.exists(conn.cert_file): - raise IOError("Could not find the TLS certificate file, " - "invalid path: {}".format(conn.cert_file)) - if conn.key_file and not os.path.exists(conn.key_file): - raise IOError("Could not find the TLS key file, " - "invalid path: {}".format(conn.key_file)) - - def build_response(self, req, resp): - """Builds a :class:`Response ` object from a urllib3 - response. This should not be called from user code, and is only exposed - for use when subclassing the - :class:`HTTPAdapter ` - - :param req: The :class:`PreparedRequest ` used to generate the response. - :param resp: The urllib3 response object. - :rtype: requests.Response - """ - response = Response() - - # Fallback to None if there's no status_code, for whatever reason. 
- response.status_code = getattr(resp, 'status', None) - - # Make headers case-insensitive. - response.headers = CaseInsensitiveDict(getattr(resp, 'headers', {})) - - # Set encoding. - response.encoding = get_encoding_from_headers(response.headers) - response.raw = resp - response.reason = response.raw.reason - - if isinstance(req.url, bytes): - response.url = req.url.decode('utf-8') - else: - response.url = req.url - - # Add new cookies from the server. - extract_cookies_to_jar(response.cookies, req, resp) - - # Give the Response some context. - response.request = req - response.connection = self - - return response - - def get_connection(self, url, proxies=None): - """Returns a urllib3 connection for the given URL. This should not be - called from user code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param url: The URL to connect to. - :param proxies: (optional) A Requests-style dictionary of proxies used on this request. - :rtype: urllib3.ConnectionPool - """ - proxy = select_proxy(url, proxies) - - if proxy: - proxy = prepend_scheme_if_needed(proxy, 'http') - proxy_url = parse_url(proxy) - if not proxy_url.host: - raise InvalidProxyURL("Please check proxy URL. It is malformed" - " and could be missing the host.") - proxy_manager = self.proxy_manager_for(proxy) - conn = proxy_manager.connection_from_url(url) - else: - # Only scheme should be lower case - parsed = urlparse(url) - url = parsed.geturl() - conn = self.poolmanager.connection_from_url(url) - - return conn - - def close(self): - """Disposes of any internal state. - - Currently, this closes the PoolManager and any active ProxyManager, - which closes any pooled connections. - """ - self.poolmanager.clear() - for proxy in self.proxy_manager.values(): - proxy.clear() - - def request_url(self, request, proxies): - """Obtain the url to use when making the final request. - - If the message is being sent through a HTTP proxy, the full URL has to - be used. Otherwise, we should only use the path portion of the URL. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` being sent. - :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs. - :rtype: str - """ - proxy = select_proxy(request.url, proxies) - scheme = urlparse(request.url).scheme - - is_proxied_http_request = (proxy and scheme != 'https') - using_socks_proxy = False - if proxy: - proxy_scheme = urlparse(proxy).scheme.lower() - using_socks_proxy = proxy_scheme.startswith('socks') - - url = request.path_url - if is_proxied_http_request and not using_socks_proxy: - url = urldefragauth(request.url) - - return url - - def add_headers(self, request, **kwargs): - """Add any headers needed by the connection. As of v2.0 this does - nothing by default, but is left for overriding by users that subclass - the :class:`HTTPAdapter `. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` to add headers to. - :param kwargs: The keyword arguments from the call to send(). - """ - pass - - def proxy_headers(self, proxy): - """Returns a dictionary of the headers to add to any request sent - through a proxy. This works with urllib3 magic to ensure that they are - correctly sent to the proxy, rather than in a tunnelled request if - CONNECT is being used. 
- - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The url of the proxy being used for this request. - :rtype: dict - """ - headers = {} - username, password = get_auth_from_url(proxy) - - if username: - headers['Proxy-Authorization'] = _basic_auth_str(username, - password) - - return headers - - def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. - :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple or urllib3 Timeout object - :param verify: (optional) Either a boolean, in which case it controls whether - we verify the server's TLS certificate, or a string, in which case it - must be a path to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - :rtype: requests.Response - """ - - try: - conn = self.get_connection(request.url, proxies) - except LocationValueError as e: - raise InvalidURL(e, request=request) - - self.cert_verify(conn, request.url, verify, cert) - url = self.request_url(request, proxies) - self.add_headers(request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies) - - chunked = not (request.body is None or 'Content-Length' in request.headers) - - if isinstance(timeout, tuple): - try: - connect, read = timeout - timeout = TimeoutSauce(connect=connect, read=read) - except ValueError as e: - # this may raise a string formatting error. - err = ("Invalid timeout {}. Pass a (connect, read) " - "timeout tuple, or a single float to set " - "both timeouts to the same value".format(timeout)) - raise ValueError(err) - elif isinstance(timeout, TimeoutSauce): - pass - else: - timeout = TimeoutSauce(connect=timeout, read=timeout) - - try: - if not chunked: - resp = conn.urlopen( - method=request.method, - url=url, - body=request.body, - headers=request.headers, - redirect=False, - assert_same_host=False, - preload_content=False, - decode_content=False, - retries=self.max_retries, - timeout=timeout - ) - - # Send the request. - else: - if hasattr(conn, 'proxy_pool'): - conn = conn.proxy_pool - - low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) - - try: - skip_host = 'Host' in request.headers - low_conn.putrequest(request.method, - url, - skip_accept_encoding=True, - skip_host=skip_host) - - for header, value in request.headers.items(): - low_conn.putheader(header, value) - - low_conn.endheaders() - - for i in request.body: - low_conn.send(hex(len(i))[2:].encode('utf-8')) - low_conn.send(b'\r\n') - low_conn.send(i) - low_conn.send(b'\r\n') - low_conn.send(b'0\r\n\r\n') - - # Receive the response from the server - try: - # For Python 2.7, use buffering of HTTP responses - r = low_conn.getresponse(buffering=True) - except TypeError: - # For compatibility with Python 3.3+ - r = low_conn.getresponse() - - resp = HTTPResponse.from_httplib( - r, - pool=conn, - connection=low_conn, - preload_content=False, - decode_content=False - ) - except: - # If we hit any problems here, clean up the connection. - # Then, reraise so that we can handle the actual exception. 
- low_conn.close() - raise - - except (ProtocolError, socket.error) as err: - raise ConnectionError(err, request=request) - - except MaxRetryError as e: - if isinstance(e.reason, ConnectTimeoutError): - # TODO: Remove this in 3.0.0: see #2811 - if not isinstance(e.reason, NewConnectionError): - raise ConnectTimeout(e, request=request) - - if isinstance(e.reason, ResponseError): - raise RetryError(e, request=request) - - if isinstance(e.reason, _ProxyError): - raise ProxyError(e, request=request) - - if isinstance(e.reason, _SSLError): - # This branch is for urllib3 v1.22 and later. - raise SSLError(e, request=request) - - raise ConnectionError(e, request=request) - - except ClosedPoolError as e: - raise ConnectionError(e, request=request) - - except _ProxyError as e: - raise ProxyError(e) - - except (_SSLError, _HTTPError) as e: - if isinstance(e, _SSLError): - # This branch is for urllib3 versions earlier than v1.22 - raise SSLError(e, request=request) - elif isinstance(e, ReadTimeoutError): - raise ReadTimeout(e, request=request) - elif isinstance(e, _InvalidHeader): - raise InvalidHeader(e, request=request) - else: - raise - - return self.build_response(request, resp) diff --git a/spaces/alfabill/stable-diffusion-inpainting-2/clipseg/datasets/coco_wrapper.py b/spaces/alfabill/stable-diffusion-inpainting-2/clipseg/datasets/coco_wrapper.py deleted file mode 100644 index f88d98bb41ee23682a6aaea75a50a3b61e569304..0000000000000000000000000000000000000000 --- a/spaces/alfabill/stable-diffusion-inpainting-2/clipseg/datasets/coco_wrapper.py +++ /dev/null @@ -1,99 +0,0 @@ -import pickle -from types import new_class -import torch -import numpy as np -import os -import json - -from os.path import join, dirname, isdir, isfile, expanduser, realpath, basename -from random import shuffle, seed as set_seed -from PIL import Image - -from itertools import combinations -from torchvision import transforms -from torchvision.transforms.transforms import Resize - -from datasets.utils import blend_image_segmentation -from general_utils import get_from_repository - -COCO_CLASSES = {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'} - -class COCOWrapper(object): - - def __init__(self, split, fold=0, image_size=400, aug=None, mask='separate', negative_prob=0, - with_class_label=False): - super().__init__() - - self.mask = mask - self.with_class_label = with_class_label - self.negative_prob = negative_prob - - from 
third_party.hsnet.data.coco import DatasetCOCO - - get_from_repository('COCO-20i', ['COCO-20i.tar']) - - foldpath = join(dirname(__file__), '../third_party/hsnet/data/splits/coco/%s/fold%d.pkl') - - def build_img_metadata_classwise(self): - with open(foldpath % (self.split, self.fold), 'rb') as f: - img_metadata_classwise = pickle.load(f) - return img_metadata_classwise - - - DatasetCOCO.build_img_metadata_classwise = build_img_metadata_classwise - # DatasetCOCO.read_mask = read_mask - - mean = [0.485, 0.456, 0.406] - std = [0.229, 0.224, 0.225] - transform = transforms.Compose([ - transforms.Resize((image_size, image_size)), - transforms.ToTensor(), - transforms.Normalize(mean, std) - ]) - - self.coco = DatasetCOCO(expanduser('~/datasets/COCO-20i/'), fold, transform, split, 1, False) - - self.all_classes = [self.coco.class_ids] - self.coco.base_path = join(expanduser('~/datasets/COCO-20i')) - - def __len__(self): - return len(self.coco) - - def __getitem__(self, i): - sample = self.coco[i] - - label_name = COCO_CLASSES[int(sample['class_id'])] - - img_s, seg_s = sample['support_imgs'][0], sample['support_masks'][0] - - if self.negative_prob > 0 and torch.rand(1).item() < self.negative_prob: - new_class_id = sample['class_id'] - while new_class_id == sample['class_id']: - sample2 = self.coco[torch.randint(0, len(self), (1,)).item()] - new_class_id = sample2['class_id'] - img_s = sample2['support_imgs'][0] - seg_s = torch.zeros_like(seg_s) - - mask = self.mask - if mask == 'separate': - supp = (img_s, seg_s) - elif mask == 'text_label': - # DEPRECATED - supp = [int(sample['class_id'])] - elif mask == 'text': - supp = [label_name] - else: - if mask.startswith('text_and_'): - mask = mask[9:] - label_add = [label_name] - else: - label_add = [] - - supp = label_add + blend_image_segmentation(img_s, seg_s, mode=mask) - - if self.with_class_label: - label = (torch.zeros(0), sample['class_id'],) - else: - label = (torch.zeros(0), ) - - return (sample['query_img'],) + tuple(supp), (sample['query_mask'].unsqueeze(0),) + label \ No newline at end of file diff --git a/spaces/ali-ghamdan/realesrgan-models/docs/anime_comparisons.md b/spaces/ali-ghamdan/realesrgan-models/docs/anime_comparisons.md deleted file mode 100644 index 09603bdc989bbf68b1f9f466acac5d8e442b8a01..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/realesrgan-models/docs/anime_comparisons.md +++ /dev/null @@ -1,66 +0,0 @@ -# Comparisons among different anime models - -[English](anime_comparisons.md) **|** [简体中文](anime_comparisons_CN.md) - -## Update News - -- 2022/04/24: Release **AnimeVideo-v3**. We have made the following improvements: - - **better naturalness** - - **Fewer artifacts** - - **more faithful to the original colors** - - **better texture restoration** - - **better background restoration** - -## Comparisons - -We have compared our RealESRGAN-AnimeVideo-v3 with the following methods. -Our RealESRGAN-AnimeVideo-v3 can achieve better results with faster inference speed. - -- [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) with the hyperparameters: `tile=0`, `noiselevel=2` -- [Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN): we use the [20220227](https://github.com/bilibili/ailab/releases/tag/Real-CUGAN-add-faster-low-memory-mode) version, the hyperparameters are: `cache_mode=0`, `tile=0`, `alpha=1`. -- our RealESRGAN-AnimeVideo-v3 - -## Results - -You may need to **zoom in** for comparing details, or **click the image** to see in the full size. 
Please note that the images -in the table below are the resized and cropped patches from the original images, you can download the original inputs and outputs from [Google Drive](https://drive.google.com/drive/folders/1bc_Hje1Nqop9NDkUvci2VACSjL7HZMRp?usp=sharing) . - -**More natural results, better background restoration** -| Input | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![157083983-bec52c67-9a5e-4eed-afef-01fe6cd2af85_patch](https://user-images.githubusercontent.com/11482921/164452769-5d8cb4f8-1708-42d2-b941-f44a6f136feb.png) | ![](https://user-images.githubusercontent.com/11482921/164452767-c825cdec-f721-4ff1-aef1-fec41f146c4c.png) | ![](https://user-images.githubusercontent.com/11482921/164452755-3be50895-e3d4-432d-a7b9-9085c2a8e771.png) | ![](https://user-images.githubusercontent.com/11482921/164452771-be300656-379a-4323-a755-df8025a8c451.png) | -|![a0010_patch](https://user-images.githubusercontent.com/11482921/164454047-22eeb493-3fa9-4142-9fc2-6f2a1c074cd5.png) | ![](https://user-images.githubusercontent.com/11482921/164454046-d5e79f8f-00a0-4b55-bc39-295d0d69747a.png) | ![](https://user-images.githubusercontent.com/11482921/164454040-87886b11-9d08-48bd-862f-0d4aed72eb19.png) | ![](https://user-images.githubusercontent.com/11482921/164454055-73dc9f02-286e-4d5c-8f70-c13742e08f42.png) | -|![00000044_patch](https://user-images.githubusercontent.com/11482921/164451232-bacf64fc-e55a-44db-afbb-6b31ab0f8973.png) | ![](https://user-images.githubusercontent.com/11482921/164451318-f309b61a-75b8-4b74-b5f3-595725f1cf0b.png) | ![](https://user-images.githubusercontent.com/11482921/164451348-994f8a35-adbe-4a4b-9c61-feaa294af06a.png) | ![](https://user-images.githubusercontent.com/11482921/164451361-9b7d376e-6f75-4648-b752-542b44845d1c.png) | - -**Fewer artifacts, better detailed textures** -| Input | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![00000053_patch](https://user-images.githubusercontent.com/11482921/164448411-148a7e5c-cfcd-4504-8bc7-e318eb883bb6.png) | ![](https://user-images.githubusercontent.com/11482921/164448633-dfc15224-b6d2-4403-a3c9-4bb819979364.png) | ![](https://user-images.githubusercontent.com/11482921/164448771-0d359509-5293-4d4c-8e3c-86a2a314ea88.png) | ![](https://user-images.githubusercontent.com/11482921/164448848-1a4ff99e-075b-4458-9db7-2c89e8160aa0.png) | -|![Disney_v4_22_018514_s2_patch](https://user-images.githubusercontent.com/11482921/164451898-83311cdf-bd3e-450f-b9f6-34d7fea3ab79.png) | ![](https://user-images.githubusercontent.com/11482921/164451894-6c56521c-6561-40d6-a3a5-8dde2c167b8a.png) | ![](https://user-images.githubusercontent.com/11482921/164451888-af9b47e3-39dc-4f3e-b0d7-d372d8191e2a.png) | ![](https://user-images.githubusercontent.com/11482921/164451901-31ca4dd4-9847-4baa-8cde-ad50f4053dcf.png) | -|![Japan_v2_0_007261_s2_patch](https://user-images.githubusercontent.com/11482921/164454578-73c77392-77de-49c5-b03c-c36631723192.png) | ![](https://user-images.githubusercontent.com/11482921/164454574-b1ede5f0-4520-4eaa-8f59-086751a34e62.png) | ![](https://user-images.githubusercontent.com/11482921/164454567-4cb3fdd8-6a2d-4016-85b2-a305a8ff80e4.png) | ![](https://user-images.githubusercontent.com/11482921/164454583-7f243f20-eca3-4500-ac43-eb058a4a101a.png) | -|![huluxiongdi_2_patch](https://user-images.githubusercontent.com/11482921/164453482-0726c842-337e-40ec-bf6c-f902ee956a8b.png) | ![](https://user-images.githubusercontent.com/11482921/164453480-71d5e091-5bfa-4c77-9c57-4e37f66ca0a3.png) | ![](https://user-images.githubusercontent.com/11482921/164453468-c295d3c9-3661-45f0-9ecd-406a1877f76e.png) | ![](https://user-images.githubusercontent.com/11482921/164453486-3091887c-587c-450e-b6fe-905cb518d57e.png) | - -**Other better results** -| Input | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![Japan_v2_1_128525_s1_patch](https://user-images.githubusercontent.com/11482921/164454933-67697f7c-b6ef-47dc-bfca-822a78af8acf.png) | ![](https://user-images.githubusercontent.com/11482921/164454931-9450de7c-f0b3-4638-9c1e-0668e0c41ef0.png) | ![](https://user-images.githubusercontent.com/11482921/164454926-ed746976-786d-41c5-8a83-7693cd774c3a.png) | ![](https://user-images.githubusercontent.com/11482921/164454936-8abdf0f0-fb30-40eb-8281-3b46c0bcb9ae.png) | -|![tianshuqitan_2_patch](https://user-images.githubusercontent.com/11482921/164456948-807c1476-90b6-4507-81da-cb986d01600c.png) | ![](https://user-images.githubusercontent.com/11482921/164456943-25e89de9-d7e5-4f61-a2e1-96786af6ae9e.png) | ![](https://user-images.githubusercontent.com/11482921/164456954-b468c447-59f5-4594-9693-3683e44ba3e6.png) | ![](https://user-images.githubusercontent.com/11482921/164456957-640f910c-3b04-407c-ac20-044d72e19735.png) | -|![00000051_patch](https://user-images.githubusercontent.com/11482921/164456044-e9a6b3fa-b24e-4eb7-acf9-1f7746551b1e.png) ![00000051_patch](https://user-images.githubusercontent.com/11482921/164456421-b67245b0-767d-4250-9105-80bbe507ecfc.png) | ![](https://user-images.githubusercontent.com/11482921/164456040-85763cf2-cb28-4ba3-abb6-1dbb48c55713.png) ![](https://user-images.githubusercontent.com/11482921/164456419-59cf342e-bc1e-4044-868c-e1090abad313.png) | ![](https://user-images.githubusercontent.com/11482921/164456031-4244bb7b-8649-4e01-86f4-40c2099c5afd.png) ![](https://user-images.githubusercontent.com/11482921/164456411-b6afcbe9-c054-448d-a6df-96d3ba3047f8.png) | ![](https://user-images.githubusercontent.com/11482921/164456035-12e270be-fd52-46d4-b18a-3d3b680731fe.png) ![](https://user-images.githubusercontent.com/11482921/164456417-dcaa8b62-f497-427d-b2d2-f390f1200fb9.png) | -|![00000099_patch](https://user-images.githubusercontent.com/11482921/164455312-6411b6e1-5823-4131-a4b0-a6be8a9ae89f.png) | ![](https://user-images.githubusercontent.com/11482921/164455310-f2b99646-3a22-47a4-805b-dc451ac86ddb.png) | ![](https://user-images.githubusercontent.com/11482921/164455294-35471b42-2826-4451-b7ec-6de01344954c.png) | ![](https://user-images.githubusercontent.com/11482921/164455305-fa4c9758-564a-4081-8b4e-f11057a0404d.png) | -|![00000016_patch](https://user-images.githubusercontent.com/11482921/164455672-447353c9-2da2-4fcb-ba4a-7dd6b94c19c1.png) | ![](https://user-images.githubusercontent.com/11482921/164455669-df384631-baaa-42f8-9150-40f658471558.png) | ![](https://user-images.githubusercontent.com/11482921/164455657-68006bf0-138d-4981-aaca-8aa927d2f78a.png) | ![](https://user-images.githubusercontent.com/11482921/164455664-0342b93e-a62a-4b36-a90e-7118f3f1e45d.png) | - -## Inference Speed - -### PyTorch - -Note that we only report the **model** time, and ignore the IO time. 
- -| GPU | Input Resolution | waifu2x | Real-CUGAN | RealESRGAN-AnimeVideo-v3 -| :---: | :---: | :---: | :---: | :---: | -| V100 | 1921 x 1080 | - | 3.4 fps | **10.0** fps | -| V100 | 1280 x 720 | - | 7.2 fps | **22.6** fps | -| V100 | 640 x 480 | - | 24.4 fps | **65.9** fps | - -### ncnn - -- [ ] TODO diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/DocumentFragment.pod b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/DocumentFragment.pod deleted file mode 100644 index aae2cd61f4b94daffae0c3b41b7835ac674ac1b7..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/DocumentFragment.pod +++ /dev/null @@ -1,40 +0,0 @@ -=head1 NAME - -XML::DOM::DocumentFragment - Facilitates cut & paste in XML::DOM documents - -=head1 DESCRIPTION - -XML::DOM::DocumentFragment extends L - -DocumentFragment is a "lightweight" or "minimal" Document object. It is -very common to want to be able to extract a portion of a document's -tree or to create a new fragment of a document. Imagine implementing a -user command like cut or rearranging a document by moving fragments -around. It is desirable to have an object which can hold such fragments -and it is quite natural to use a Node for this purpose. While it is -true that a Document object could fulfil this role, a Document object -can potentially be a heavyweight object, depending on the underlying -implementation. What is really needed for this is a very lightweight -object. DocumentFragment is such an object. - -Furthermore, various operations -- such as inserting nodes as children -of another Node -- may take DocumentFragment objects as arguments; this -results in all the child nodes of the DocumentFragment being moved to -the child list of this node. - -The children of a DocumentFragment node are zero or more nodes -representing the tops of any sub-trees defining the structure of the -document. DocumentFragment nodes do not need to be well-formed XML -documents (although they do need to follow the rules imposed upon -well-formed XML parsed entities, which can have multiple top nodes). -For example, a DocumentFragment might have only one child and that -child node could be a Text node. Such a structure model represents -neither an HTML document nor a well-formed XML document. - -When a DocumentFragment is inserted into a Document (or indeed any -other Node that may take children) the children of the DocumentFragment -and not the DocumentFragment itself are inserted into the Node. This -makes the DocumentFragment very useful when the user wishes to create -nodes that are siblings; the DocumentFragment acts as the parent of -these nodes so that the user can use the standard methods from the Node -interface, such as insertBefore() and appendChild(). 
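The "children are moved, not the fragment itself" behaviour described in the DocumentFragment docs above is standard W3C DOM semantics, so it can be illustrated with any compliant implementation. The sketch below is a minimal Python analogue using the standard library's xml.dom.minidom (the Perl XML::DOM module documented above mirrors these method names); it is an illustration of the concept, not code from the deleted file.

```python
# Minimal sketch of DocumentFragment "cut & paste" semantics, assuming
# Python's xml.dom.minidom as a stand-in for the Perl XML::DOM API.
from xml.dom.minidom import getDOMImplementation

doc = getDOMImplementation().createDocument(None, "list", None)
root = doc.documentElement

# Build a lightweight fragment holding three sibling <item> elements.
frag = doc.createDocumentFragment()
for label in ("a", "b", "c"):
    item = doc.createElement("item")
    item.appendChild(doc.createTextNode(label))
    frag.appendChild(item)

# Appending the fragment moves its children into <list>; the fragment
# node itself is not inserted and is left empty afterwards.
root.appendChild(frag)
print(root.toxml())          # <list><item>a</item><item>b</item><item>c</item></list>
print(len(frag.childNodes))  # 0
```

The same pattern applies to insertBefore(): the fragment acts only as a temporary parent for the siblings being pasted into the tree.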
diff --git a/spaces/allknowingroger/Image-Models-Test155/app.py b/spaces/allknowingroger/Image-Models-Test155/app.py deleted file mode 100644 index 9b3edfe2f2ff55c0fe3071c3b9c9e6d2ea80c62a..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test155/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "livingbox/model-test-aug-06-v2-no-size", - "sams1234/bikes-xzg", - "lccllccc/0920_sdxl_lora_5000_steps", - "akhilantony11/my-pet-cat-gcz", - "Kendong/ef_pin", - "jmanuoz/lora-trained-xl-enzofernan", - "jtlowell/lora_lofi_beach", - "pranav12091/my-pet-dog", - "Alexzyx/lora-trained-xl", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # 
inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test208/app.py b/spaces/allknowingroger/Image-Models-Test208/app.py deleted file mode 100644 index e5bdf9452ba448d392d76e22051262041cc4eb01..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test208/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "lberglund/sweep_full_5_20231012151035", - "melaris/Natalia2ai-v1.2", - "lberglund/sweep_full_4_20231012144600", - "lberglund/sweep_full_3_20231012130016", - "LiLyFalcon/lora_Galaxiga_battle_aircraft-trained-15-colab", - "ahmedghani/waqas-ramzam-sdxl-1200", - "zeerakwyne/dreambooth_lora_model_1000", - "shekar426/my-lion", - "zeerakwyne/dreambooth_lora_model_jennifer", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( 
- all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/text-generation-webui-space-1/modules/extensions.py b/spaces/allknowingroger/text-generation-webui-space-1/modules/extensions.py deleted file mode 100644 index c8de8a7bc9ebd331d65704996a764e7cc279a6e5..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/text-generation-webui-space-1/modules/extensions.py +++ /dev/null @@ -1,45 +0,0 @@ -import extensions -import modules.shared as shared - -state = {} -available_extensions = [] - -def load_extensions(): - global state - for i, name in enumerate(shared.args.extensions): - if name in available_extensions: - print(f'Loading the extension "{name}"... ', end='') - exec(f"import extensions.{name}.script") - state[name] = [True, i] - print('Ok.') - -# This iterator returns the extensions in the order specified in the command-line -def iterator(): - for name in sorted(state, key=lambda x : state[x][1]): - if state[name][0] == True: - yield eval(f"extensions.{name}.script"), name - -# Extension functions that map string -> string -def apply_extensions(text, typ): - for extension, _ in iterator(): - if typ == "input" and hasattr(extension, "input_modifier"): - text = extension.input_modifier(text) - elif typ == "output" and hasattr(extension, "output_modifier"): - text = extension.output_modifier(text) - elif typ == "bot_prefix" and hasattr(extension, "bot_prefix_modifier"): - text = extension.bot_prefix_modifier(text) - return text - -def create_extensions_block(): - # Updating the default values - for extension, name in iterator(): - if hasattr(extension, 'params'): - for param in extension.params: - _id = f"{name}-{param}" - if _id in shared.settings: - extension.params[param] = shared.settings[_id] - - # Creating the extension ui elements - for extension, name in iterator(): - if hasattr(extension, "ui"): - extension.ui() diff --git a/spaces/ammansik/youtube_summarizer/README.md b/spaces/ammansik/youtube_summarizer/README.md deleted file mode 100644 index 1ea72ef99b9a30bcacb71e45000da0199ac06a34..0000000000000000000000000000000000000000 --- a/spaces/ammansik/youtube_summarizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youtube Summarizer -emoji: 🌖 -colorFrom: blue -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/andromeda123/captionscraft/README.md b/spaces/andromeda123/captionscraft/README.md deleted file mode 100644 index 
0aeb0c274cc573691ef8f61508a22b707bdea996..0000000000000000000000000000000000000000 --- a/spaces/andromeda123/captionscraft/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Captionscraft -emoji: 🌍 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/anhnv125/FRN/app.py b/spaces/anhnv125/FRN/app.py deleted file mode 100644 index 168f5bc8f726649744f354d6a7507326e1e7190f..0000000000000000000000000000000000000000 --- a/spaces/anhnv125/FRN/app.py +++ /dev/null @@ -1,122 +0,0 @@ -import streamlit as st -import librosa -import soundfile as sf -import librosa.display -from config import CONFIG -import torch -from dataset import MaskGenerator -import onnxruntime, onnx -import matplotlib.pyplot as plt -import numpy as np -from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas - -@st.cache -def load_model(): - path = 'lightning_logs/version_0/checkpoints/frn.onnx' - onnx_model = onnx.load(path) - options = onnxruntime.SessionOptions() - options.intra_op_num_threads = 2 - options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL - session = onnxruntime.InferenceSession(path, options) - input_names = [x.name for x in session.get_inputs()] - output_names = [x.name for x in session.get_outputs()] - return session, onnx_model, input_names, output_names - -def inference(re_im, session, onnx_model, input_names, output_names): - inputs = {input_names[i]: np.zeros([d.dim_value for d in _input.type.tensor_type.shape.dim], - dtype=np.float32) - for i, _input in enumerate(onnx_model.graph.input) - } - - output_audio = [] - for t in range(re_im.shape[0]): - inputs[input_names[0]] = re_im[t] - out, prev_mag, predictor_state, mlp_state = session.run(output_names, inputs) - inputs[input_names[1]] = prev_mag - inputs[input_names[2]] = predictor_state - inputs[input_names[3]] = mlp_state - output_audio.append(out) - - output_audio = torch.tensor(np.concatenate(output_audio, 0)) - output_audio = output_audio.permute(1, 0, 2).contiguous() - output_audio = torch.view_as_complex(output_audio) - output_audio = torch.istft(output_audio, window, stride, window=hann) - return output_audio.numpy() - -def visualize(hr, lr, recon): - sr = CONFIG.DATA.sr - window_size = 1024 - window = np.hanning(window_size) - - stft_hr = librosa.core.spectrum.stft(hr, n_fft=window_size, hop_length=512, window=window) - stft_hr = 2 * np.abs(stft_hr) / np.sum(window) - - stft_lr = librosa.core.spectrum.stft(lr, n_fft=window_size, hop_length=512, window=window) - stft_lr = 2 * np.abs(stft_lr) / np.sum(window) - - stft_recon = librosa.core.spectrum.stft(recon, n_fft=window_size, hop_length=512, window=window) - stft_recon = 2 * np.abs(stft_recon) / np.sum(window) - - fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharey=True, sharex=True, figsize=(16, 10)) - ax1.title.set_text('Target signal') - ax2.title.set_text('Lossy signal') - ax3.title.set_text('Enhanced signal') - - canvas = FigureCanvas(fig) - p = librosa.display.specshow(librosa.amplitude_to_db(stft_hr), ax=ax1, y_axis='linear', x_axis='time', sr=sr) - p = librosa.display.specshow(librosa.amplitude_to_db(stft_lr), ax=ax2, y_axis='linear', x_axis='time', sr=sr) - p = librosa.display.specshow(librosa.amplitude_to_db(stft_recon), ax=ax3, y_axis='linear', x_axis='time', sr=sr) - return fig - -packet_size = CONFIG.DATA.EVAL.packet_size -window = CONFIG.DATA.window_size -stride 
= CONFIG.DATA.stride - -title = 'Packet Loss Concealment' -st.set_page_config(page_title=title, page_icon=":sound:") -st.title(title) - -st.subheader('Upload audio') -uploaded_file = st.file_uploader("Upload your audio file (.wav) at 48 kHz sampling rate") - -is_file_uploaded = uploaded_file is not None -if not is_file_uploaded: - uploaded_file = 'sample.wav' - -target, sr = librosa.load(uploaded_file, sr=48000) -target = target[:packet_size * (len(target) // packet_size)] - -st.text('Audio sample') -st.audio(uploaded_file) - -st.subheader('Choose expected packet loss rate') -slider = [st.slider("Expected loss rate for Markov Chain loss generator", 0, 100, step=1)] -loss_percent = float(slider[0])/100 -mask_gen = MaskGenerator(is_train=False, probs=[(1 - loss_percent, loss_percent)]) -lossy_input = target.copy().reshape(-1, packet_size) -mask = mask_gen.gen_mask(len(lossy_input), seed=0)[:, np.newaxis] -lossy_input *= mask -lossy_input = lossy_input.reshape(-1) -hann = torch.sqrt(torch.hann_window(window)) -lossy_input_tensor = torch.tensor(lossy_input) -re_im = torch.stft(lossy_input_tensor, window, stride, window=hann, return_complex=False).permute(1, 0, 2).unsqueeze( - 1).numpy().astype(np.float32) -session, onnx_model, input_names, output_names = load_model() - -if st.button('Conceal lossy audio!'): - with st.spinner('Please wait for completion'): - output = inference(re_im, session, onnx_model, input_names, output_names) - - st.subheader('Visualization') - fig = visualize(target, lossy_input, output) - st.pyplot(fig) - st.success('Done!') - sf.write('target.wav', target, sr) - sf.write('lossy.wav', lossy_input, sr) - sf.write('enhanced.wav', output, sr) - st.text('Original audio') - st.audio('target.wav') - st.text('Lossy audio') - st.audio('lossy.wav') - st.text('Enhanced audio') - st.audio('enhanced.wav') \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/multimodal/README.md b/spaces/antonovmaxim/text-generation-webui-space/extensions/multimodal/README.md deleted file mode 100644 index 0f515ae69f6b8680522df9854c515ea698710629..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/extensions/multimodal/README.md +++ /dev/null @@ -1,81 +0,0 @@ -# Multimodal - -## Description - -Adds support for multimodality (text+images) to text-generation-webui. - -https://user-images.githubusercontent.com/3718215/233817203-69b57e77-0c55-4fd6-b742-3204bb13b8fc.mp4 - -## Usage - -To run this extension, download a LLM that supports multimodality, and then start server.py with the appropriate `--multimodal-pipeline` argument. Examples: - -``` -python server.py --model wojtab_llava-7b-v0-4bit-128g --multimodal-pipeline llava-7b --chat -python3 server.py --model wojtab_llava-13b-v0-4bit-128g --multimodal-pipeline llava-13b --chat -python server.py --model anon8231489123_vicuna-13b-GPTQ-4bit-128g --multimodal-pipeline minigpt4-13b --chat -python server.py --model llama-7b-4bit --multimodal-pipeline minigpt4-7b --chat -``` - -There is built-in support for LLaVA-v0-13B and LLaVA-v0-7b. To install `minigpt4`: - -- clone https://github.com/Wojtab/minigpt-4-pipeline into `extensions/multimodal/pipelines` -- install the requirements.txt - -The same procedure should be used to install other pipelines, which can then be used with `--multimodal-pipeline [pipeline name]`. For additional multimodal pipelines refer to the compatibility section below. 
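For example, a `minigpt4` install following the steps above might look like this (the clone destination is an assumed layout under `extensions/multimodal/pipelines`; the launch command is one of the examples from the Usage section, so substitute your own model name if it differs):

```
git clone https://github.com/Wojtab/minigpt-4-pipeline extensions/multimodal/pipelines/minigpt-4-pipeline
pip install -r extensions/multimodal/pipelines/minigpt-4-pipeline/requirements.txt
python server.py --model anon8231489123_vicuna-13b-GPTQ-4bit-128g --multimodal-pipeline minigpt4-13b --chat
```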
- -Do note, that each image takes up a considerable amount of tokens, so adjust `max_new_tokens` to be at most 1700 (recommended value is between 200 to 500), so the images don't get truncated. - -To send an image, just upload it to the extension field below chat, and send a prompt as always. The image will be added to the end of your message. If you wish to modify the placement, include a string `` in your prompt. - -Additionally, there is *Embed all images, not only the last one* checkbox. It modifies the image embeddings, by default (if it's unchecked), all but the most recent images have their embeddings empty, so they are not fed to the network. It seems as if some multimodal networks consider the features in all images at the same time as if they were a single image. Due to this behavior, by default, the extension skips previous images. However, it can lead to sub-par generation on other pipelines. If you want to include all images, just tick this checkbox. - -## Compatibility -As of now, the following multimodal pipelines are supported: -|Pipeline|`--multimodal-pipeline`|Default LLM|LLM info(for the linked model)|Pipeline repository| -|-|-|-|-|-| -|[LLaVA 13B](https://github.com/haotian-liu/LLaVA)|`llava-13b`|[LLaVA 13B](https://huggingface.co/wojtab/llava-13b-v0-4bit-128g)|GPTQ 4-bit quant, old CUDA|built-in| -|[LLaVA 7B](https://github.com/haotian-liu/LLaVA)|`llava-7b`|[LLaVA 7B](https://huggingface.co/wojtab/llava-7b-v0-4bit-128g)|GPTQ 4-bit quant, old CUDA|built-in| -|[MiniGPT-4 7B](https://github.com/Vision-CAIR/MiniGPT-4)|`minigpt4-7b`|[Vicuna v0 7B](https://huggingface.co/TheBloke/vicuna-7B-GPTQ-4bit-128g)|GPTQ 4-bit quant, new format|[Wojtab/minigpt-4-pipeline](https://github.com/Wojtab/minigpt-4-pipeline)| -|[MiniGPT-4 13B](https://github.com/Vision-CAIR/MiniGPT-4)|`minigpt4-13b`|[Vicuna v0 13B](https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g)|GPTQ 4-bit quant, old CUDA|[Wojtab/minigpt-4-pipeline](https://github.com/Wojtab/minigpt-4-pipeline)| - -Some pipelines could support different LLMs but do note that while it might work, it isn't a supported configuration. - -DO NOT report bugs if you are using a different LLM. - -DO NOT report bugs with pipelines in this repository (unless they are built-in) - -## Extension config -This extension uses the following parameters (from `settings.json`): -|Parameter|Description| -|---------|-----------| -|`multimodal-vision_bits`|Number of bits to load vision models (CLIP/ViT) feature extractor in (most pipelines should support either 32 or 16, default=32)| -|`multimodal-vision_device`|Torch device to run the feature extractor on, for example, `cpu` or `cuda:0`, by default `cuda:0` if available| -|`multimodal-projector_bits`|Number of bits to load feature projector model(s) in (most pipelines should support either 32 or 16, default=32)| -|`multimodal-projector_device`|Torch device to run the feature projector model(s) on, for example `cpu` or `cuda:0`, by default `cuda:0` if available| -|`multimodal-add_all_images_to_prompt`|Default value of "Embed all images, not only the last one" checkbox| - -## Usage through API - -You can run the multimodal inference through API, by inputting the images to prompt. Images are embedded like so: `f''`, where `img_str` is base-64 jpeg data. Note that you will need to launch `server.py` with the arguments `--api --extensions multimodal`. 
- -Python example: - -```Python -import base64 -import requests - -CONTEXT = "You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.### Human: Hi!### Assistant: Hi there! How can I help you today?\n" - -with open('extreme_ironing.jpg', 'rb') as f: - img_str = base64.b64encode(f.read()).decode('utf-8') - prompt = CONTEXT + f'### Human: What is unusual about this image: \n### Assistant: ' - print(requests.post('http://127.0.0.1:5000/api/v1/generate', json={'prompt': prompt, 'stopping_strings': ['\n###']}).json()) -``` -script output: -```Python -{'results': [{'text': "The unusual aspect of this image is that a man is standing on top of a yellow minivan while doing his laundry. He has set up a makeshift clothes line using the car's rooftop as an outdoor drying area. This scene is uncommon because people typically do their laundry indoors, in a dedicated space like a laundromat or a room in their home, rather than on top of a moving vehicle. Additionally, hanging clothes on the car could be potentially hazardous or illegal in some jurisdictions due to the risk of damaging the vehicle or causing accidents on the road.\n##"}]} -``` - -## For pipeline developers/technical description -see [DOCS.md](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/multimodal/DOCS.md) diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/version.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/api_tests/test_synthesize_api.py b/spaces/artificialguybr/video-dubbing/TTS/tests/api_tests/test_synthesize_api.py deleted file mode 100644 index e7b4f12048f5bad5202305cde0c918db007b73fa..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/api_tests/test_synthesize_api.py +++ /dev/null @@ -1,25 +0,0 @@ -import os - -from tests import get_tests_output_path, run_cli - - -def test_synthesize(): - """Test synthesize.py with diffent arguments.""" - output_path = os.path.join(get_tests_output_path(), "output.wav") - - # 🐸 Coqui studio model - run_cli( - 'tts --model_name "coqui_studio/en/Torcull Diarmuid/coqui_studio" ' - '--text "This is it" ' - f'--out_path "{output_path}"' - ) - - # 🐸 Coqui studio model with speed arg. - run_cli( - 'tts --model_name "coqui_studio/en/Torcull Diarmuid/coqui_studio" ' - '--text "This is it but slow" --speed 0.1' - f'--out_path "{output_path}"' - ) - - # test pipe_out command - run_cli(f'tts --text "test." 
--pipe_out --out_path "{output_path}" | aplay') diff --git a/spaces/artificialguybr/video-dubbing/whisper/tests/test_audio.py b/spaces/artificialguybr/video-dubbing/whisper/tests/test_audio.py deleted file mode 100644 index dfd78bc09628ec841a482e93f4ba1b89a030ea13..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/whisper/tests/test_audio.py +++ /dev/null @@ -1,19 +0,0 @@ -import os.path - -import numpy as np - -from whisper.audio import SAMPLE_RATE, load_audio, log_mel_spectrogram - - -def test_audio(): - audio_path = os.path.join(os.path.dirname(__file__), "jfk.flac") - audio = load_audio(audio_path) - assert audio.ndim == 1 - assert SAMPLE_RATE * 10 < audio.shape[0] < SAMPLE_RATE * 12 - assert 0 < audio.std() < 1 - - mel_from_audio = log_mel_spectrogram(audio) - mel_from_file = log_mel_spectrogram(audio_path) - - assert np.allclose(mel_from_audio, mel_from_file) - assert mel_from_audio.max() - mel_from_audio.min() <= 2.0 diff --git a/spaces/artificialguybr/video-dubbing/whisper/tests/test_timing.py b/spaces/artificialguybr/video-dubbing/whisper/tests/test_timing.py deleted file mode 100644 index 9bab838e8f56fdc98a036664e2fca35fb0f334e0..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/whisper/tests/test_timing.py +++ /dev/null @@ -1,96 +0,0 @@ -import numpy as np -import pytest -import scipy.ndimage -import torch - -from whisper.timing import dtw_cpu, dtw_cuda, median_filter - -sizes = [ - (10, 20), - (32, 16), - (123, 1500), - (234, 189), -] -shapes = [ - (10,), - (1, 15), - (4, 5, 345), - (6, 12, 240, 512), -] - - -@pytest.mark.parametrize("N, M", sizes) -def test_dtw(N: int, M: int): - steps = np.concatenate([np.zeros(N - 1), np.ones(M - 1)]) - np.random.shuffle(steps) - x = np.random.random((N, M)).astype(np.float32) - - i, j, k = 0, 0, 0 - trace = [] - while True: - x[i, j] -= 1 - trace.append((i, j)) - - if k == len(steps): - break - - if k + 1 < len(steps) and steps[k] != steps[k + 1]: - i += 1 - j += 1 - k += 2 - continue - - if steps[k] == 0: - i += 1 - if steps[k] == 1: - j += 1 - k += 1 - - trace = np.array(trace).T - dtw_trace = dtw_cpu(x) - - assert np.allclose(trace, dtw_trace) - - -@pytest.mark.requires_cuda -@pytest.mark.parametrize("N, M", sizes) -def test_dtw_cuda_equivalence(N: int, M: int): - x_numpy = np.random.randn(N, M).astype(np.float32) - x_cuda = torch.from_numpy(x_numpy).cuda() - - trace_cpu = dtw_cpu(x_numpy) - trace_cuda = dtw_cuda(x_cuda) - - assert np.allclose(trace_cpu, trace_cuda) - - -@pytest.mark.parametrize("shape", shapes) -def test_median_filter(shape): - x = torch.randn(*shape) - - for filter_width in [3, 5, 7, 13]: - filtered = median_filter(x, filter_width) - - # using np.pad to reflect-pad, because Scipy's behavior is different near the edges. 
- pad_width = filter_width // 2 - padded_x = np.pad( - x, [(0, 0)] * (x.ndim - 1) + [(pad_width, pad_width)], mode="reflect" - ) - scipy_filtered = scipy.ndimage.median_filter( - padded_x, [1] * (x.ndim - 1) + [filter_width] - ) - scipy_filtered = scipy_filtered[..., pad_width:-pad_width] - - assert np.allclose(filtered, scipy_filtered) - - -@pytest.mark.requires_cuda -@pytest.mark.parametrize("shape", shapes) -def test_median_filter_equivalence(shape): - x = torch.randn(*shape) - - for filter_width in [3, 5, 7, 13]: - filtered_cpu = median_filter(x, filter_width) - filtered_gpu = median_filter(x.cuda(), filter_width).cpu() - - assert np.allclose(filtered_cpu, filtered_gpu) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Util/test_strxor.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Util/test_strxor.py deleted file mode 100644 index c91d38f5c1a678daed6f81ea6bfd19f33e195bf9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Util/test_strxor.py +++ /dev/null @@ -1,280 +0,0 @@ -# -# SelfTest/Util/test_strxor.py: Self-test for XORing -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -import unittest -from binascii import unhexlify, hexlify - -from Crypto.SelfTest.st_common import list_test_cases -from Crypto.Util.strxor import strxor, strxor_c - - -class StrxorTests(unittest.TestCase): - - def test1(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term2 = unhexlify(b"383d4ba020573314395b") - result = unhexlify(b"c70ed123c59a7fcb6f12") - self.assertEqual(strxor(term1, term2), result) - self.assertEqual(strxor(term2, term1), result) - - def test2(self): - es = b"" - self.assertEqual(strxor(es, es), es) - - def test3(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - all_zeros = b"\x00" * len(term1) - self.assertEqual(strxor(term1, term1), all_zeros) - - def test_wrong_length(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term2 = unhexlify(b"ff339a83e5cd4cdf564990") - self.assertRaises(ValueError, strxor, term1, term2) - - def test_bytearray(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term1_ba = bytearray(term1) - term2 = unhexlify(b"383d4ba020573314395b") - result = unhexlify(b"c70ed123c59a7fcb6f12") - - self.assertEqual(strxor(term1_ba, term2), result) - - def test_memoryview(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term1_mv = memoryview(term1) - term2 = unhexlify(b"383d4ba020573314395b") - result = unhexlify(b"c70ed123c59a7fcb6f12") - - self.assertEqual(strxor(term1_mv, term2), result) - - def test_output_bytearray(self): - """Verify result can be stored in pre-allocated memory""" - - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term2 = unhexlify(b"383d4ba020573314395b") - original_term1 = term1[:] - original_term2 = term2[:] - expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") - output = bytearray(len(term1)) - - result = strxor(term1, term2, output=output) - - self.assertEqual(result, None) - self.assertEqual(output, expected_xor) - self.assertEqual(term1, original_term1) - self.assertEqual(term2, original_term2) - - def test_output_memoryview(self): - """Verify result can be stored in pre-allocated memory""" - - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term2 = unhexlify(b"383d4ba020573314395b") - original_term1 = term1[:] - original_term2 = term2[:] - expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") - output = memoryview(bytearray(len(term1))) - - result = strxor(term1, term2, output=output) - - self.assertEqual(result, None) - self.assertEqual(output, expected_xor) - self.assertEqual(term1, original_term1) - self.assertEqual(term2, original_term2) - - def test_output_overlapping_bytearray(self): - """Verify result can be stored in overlapping memory""" - - term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649")) - term2 = unhexlify(b"383d4ba020573314395b") - original_term2 = term2[:] - expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") - - result = strxor(term1, term2, output=term1) - - self.assertEqual(result, None) - self.assertEqual(term1, expected_xor) - self.assertEqual(term2, original_term2) - - def test_output_overlapping_memoryview(self): - """Verify result can be stored in overlapping memory""" - - term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))) - term2 = unhexlify(b"383d4ba020573314395b") - original_term2 = term2[:] - expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") - - result = strxor(term1, term2, output=term1) - - self.assertEqual(result, None) - self.assertEqual(term1, expected_xor) - self.assertEqual(term2, original_term2) - - def test_output_ro_bytes(self): - """Verify result cannot be stored in read-only memory""" - - 
term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term2 = unhexlify(b"383d4ba020573314395b") - - self.assertRaises(TypeError, strxor, term1, term2, output=term1) - - def test_output_ro_memoryview(self): - """Verify result cannot be stored in read-only memory""" - - term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649")) - term2 = unhexlify(b"383d4ba020573314395b") - - self.assertRaises(TypeError, strxor, term1, term2, output=term1) - - def test_output_incorrect_length(self): - """Verify result cannot be stored in memory of incorrect length""" - - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term2 = unhexlify(b"383d4ba020573314395b") - output = bytearray(len(term1) - 1) - - self.assertRaises(ValueError, strxor, term1, term2, output=output) - - -class Strxor_cTests(unittest.TestCase): - - def test1(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - result = unhexlify(b"be72dbc2a48c0d9e1708") - self.assertEqual(strxor_c(term1, 65), result) - - def test2(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - self.assertEqual(strxor_c(term1, 0), term1) - - def test3(self): - self.assertEqual(strxor_c(b"", 90), b"") - - def test_wrong_range(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - self.assertRaises(ValueError, strxor_c, term1, -1) - self.assertRaises(ValueError, strxor_c, term1, 256) - - def test_bytearray(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term1_ba = bytearray(term1) - result = unhexlify(b"be72dbc2a48c0d9e1708") - - self.assertEqual(strxor_c(term1_ba, 65), result) - - def test_memoryview(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - term1_mv = memoryview(term1) - result = unhexlify(b"be72dbc2a48c0d9e1708") - - self.assertEqual(strxor_c(term1_mv, 65), result) - - def test_output_bytearray(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - original_term1 = term1[:] - expected_result = unhexlify(b"be72dbc2a48c0d9e1708") - output = bytearray(len(term1)) - - result = strxor_c(term1, 65, output=output) - - self.assertEqual(result, None) - self.assertEqual(output, expected_result) - self.assertEqual(term1, original_term1) - - def test_output_memoryview(self): - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - original_term1 = term1[:] - expected_result = unhexlify(b"be72dbc2a48c0d9e1708") - output = memoryview(bytearray(len(term1))) - - result = strxor_c(term1, 65, output=output) - - self.assertEqual(result, None) - self.assertEqual(output, expected_result) - self.assertEqual(term1, original_term1) - - def test_output_overlapping_bytearray(self): - """Verify result can be stored in overlapping memory""" - - term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649")) - expected_xor = unhexlify(b"be72dbc2a48c0d9e1708") - - result = strxor_c(term1, 65, output=term1) - - self.assertEqual(result, None) - self.assertEqual(term1, expected_xor) - - def test_output_overlapping_memoryview(self): - """Verify result can be stored in overlapping memory""" - - term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))) - expected_xor = unhexlify(b"be72dbc2a48c0d9e1708") - - result = strxor_c(term1, 65, output=term1) - - self.assertEqual(result, None) - self.assertEqual(term1, expected_xor) - - def test_output_ro_bytes(self): - """Verify result cannot be stored in read-only memory""" - - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - - self.assertRaises(TypeError, strxor_c, term1, 65, output=term1) - - def test_output_ro_memoryview(self): - """Verify result cannot be stored in read-only memory""" - - term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649")) - term2 = 
unhexlify(b"383d4ba020573314395b") - - self.assertRaises(TypeError, strxor_c, term1, 65, output=term1) - - def test_output_incorrect_length(self): - """Verify result cannot be stored in memory of incorrect length""" - - term1 = unhexlify(b"ff339a83e5cd4cdf5649") - output = bytearray(len(term1) - 1) - - self.assertRaises(ValueError, strxor_c, term1, 65, output=output) - - -def get_tests(config={}): - tests = [] - tests += list_test_cases(StrxorTests) - tests += list_test_cases(Strxor_cTests) - return tests - - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/ashercn97/AsherTesting/modules/AutoGPTQ_loader.py b/spaces/ashercn97/AsherTesting/modules/AutoGPTQ_loader.py deleted file mode 100644 index 0d41ac0a5589aff024569cb973a4b154477c5908..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/AutoGPTQ_loader.py +++ /dev/null @@ -1,71 +0,0 @@ -from pathlib import Path - -from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig - -import modules.shared as shared -from modules.logging_colors import logger -from modules.models import get_max_memory_dict - - -def load_quantized(model_name): - path_to_model = Path(f'{shared.args.model_dir}/{model_name}') - pt_path = None - - # Find the model checkpoint - if shared.args.checkpoint: - pt_path = Path(shared.args.checkpoint) - else: - for ext in ['.safetensors', '.pt', '.bin']: - found = list(path_to_model.glob(f"*{ext}")) - if len(found) > 0: - if len(found) > 1: - logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.') - - pt_path = found[-1] - break - - if pt_path is None: - logger.error("The model could not be loaded because its checkpoint file in .bin/.pt/.safetensors format could not be located.") - return - - use_safetensors = pt_path.suffix == '.safetensors' - if not (path_to_model / "quantize_config.json").exists(): - quantize_config = BaseQuantizeConfig( - bits=bits if (bits := shared.args.wbits) > 0 else 4, - group_size=gs if (gs := shared.args.groupsize) > 0 else -1, - desc_act=shared.args.desc_act - ) - else: - quantize_config = None - - # Define the params for AutoGPTQForCausalLM.from_quantized - params = { - 'model_basename': pt_path.stem, - 'device': "cuda:0" if not shared.args.cpu else "cpu", - 'use_triton': shared.args.triton, - 'inject_fused_attention': not shared.args.no_inject_fused_attention, - 'inject_fused_mlp': not shared.args.no_inject_fused_mlp, - 'use_safetensors': use_safetensors, - 'trust_remote_code': shared.args.trust_remote_code, - 'max_memory': get_max_memory_dict(), - 'quantize_config': quantize_config, - 'use_cuda_fp16': not shared.args.no_use_cuda_fp16, - } - - logger.info(f"The AutoGPTQ params are: {params}") - model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params) - - # These lines fix the multimodal extension when used with AutoGPTQ - if hasattr(model, 'model'): - if not hasattr(model, 'dtype'): - if hasattr(model.model, 'dtype'): - model.dtype = model.model.dtype - - if hasattr(model.model, 'model') and hasattr(model.model.model, 'embed_tokens'): - if not hasattr(model, 'embed_tokens'): - model.embed_tokens = model.model.model.embed_tokens - - if not hasattr(model.model, 'embed_tokens'): - model.model.embed_tokens = model.model.model.embed_tokens - - return model diff --git a/spaces/atharvat80/Wikipedia2Vec-NED/src/stopwords.py b/spaces/atharvat80/Wikipedia2Vec-NED/src/stopwords.py deleted file mode 100644 index 
cbce281b491cbf5b74e94c8d7fa2ce76b95e5ad6..0000000000000000000000000000000000000000 --- a/spaces/atharvat80/Wikipedia2Vec-NED/src/stopwords.py +++ /dev/null @@ -1,71 +0,0 @@ -# Stop words -STOP_WORDS = set(""" -a about above across after afterwards again against all almost alone along -already also although always am among amongst amount an and another any anyhow -anyone anything anyway anywhere are around as at - -back be became because become becomes becoming been before beforehand behind -being below beside besides between beyond both bottom but by - -call can cannot ca could - -did do does doing done down due during - -each eight either eleven else elsewhere empty enough even ever every -everyone everything everywhere except - -few fifteen fifty first five for former formerly forty four from front full -further - -get give go - -had has have he hence her here hereafter hereby herein hereupon hers herself -him himself his how however hundred - -i if in indeed into is it its itself - -keep - -last latter latterly least less - -just - -made make many may me meanwhile might mine more moreover most mostly move much -must my myself - -name namely neither never nevertheless next nine no nobody none noone nor not -nothing now nowhere - -of off often on once one only onto or other others otherwise our ours ourselves -out over own - -part per perhaps please put - -quite - -rather re really regarding - -same say see seem seemed seeming seems serious several she should show side -since six sixty so some somehow someone something sometime sometimes somewhere -still such - -take ten than that the their them themselves then thence there thereafter -thereby therefore therein thereupon these they third this those though three -through throughout thru thus to together too top toward towards twelve twenty -two - -under until up unless upon us used using - -various very very via was we well were what whatever when whence whenever where -whereafter whereas whereby wherein whereupon wherever whether which while -whither who whoever whole whom whose why will with within without would - -yet you your yours yourself yourselves -""".split()) - -contractions = ["n't", "'d", "'ll", "'m", "'re", "'s", "'ve"] -STOP_WORDS.update(contractions) - -for apostrophe in ["‘", "’"]: - for stopword in contractions: - STOP_WORDS.add(stopword.replace("'", apostrophe)) diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/canny_grounding_downsampler.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/canny_grounding_downsampler.py deleted file mode 100644 index 6331d15c76e0418a1e4a050d199727b53006ecfd..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/canny_grounding_downsampler.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch -import torch.nn as nn -from ldm.modules.attention import BasicTransformerBlock -from ldm.modules.diffusionmodules.util import checkpoint, FourierEmbedder -import torch.nn.functional as F - - - -class GroundingDownsampler(nn.Module): - def __init__(self, resize_input=256, out_dim=8): - super().__init__() - self.resize_input = resize_input - self.out_dim = out_dim - - self.layers = nn.Sequential( - nn.Conv2d(1,4,4,2,1), - nn.SiLU(), - nn.Conv2d(4,self.out_dim,4,2,1) - ) - - def forward(self, grounding_extra_input): - # this is actually gary scale, but converted to rgb in dataset, information redudant - grounding_extra_input = 
grounding_extra_input[:,0].unsqueeze(1) - - out = torch.nn.functional.interpolate(grounding_extra_input, (self.resize_input,self.resize_input), mode='bicubic') - out = self.layers(out) - - assert out.shape[1] == self.out_dim - return out - - diff --git a/spaces/avysotsky/asklethain/README.md b/spaces/avysotsky/asklethain/README.md deleted file mode 100644 index e7401e7bd49697641022a1e3062df8c4851c02f1..0000000000000000000000000000000000000000 --- a/spaces/avysotsky/asklethain/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Ask Lethain -emoji: ☁️ -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.11 -app_file: app.py -pinned: false -license: mit ---- diff --git a/spaces/awacke1/AIZTH-03-09-2023/backup-app.py b/spaces/awacke1/AIZTH-03-09-2023/backup-app.py deleted file mode 100644 index 03744ffa9ce64d9e15c2ef703067fcb7ab22cbe8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AIZTH-03-09-2023/backup-app.py +++ /dev/null @@ -1,903 +0,0 @@ -import streamlit as st -from graphviz import Digraph - - -st.markdown(""" - -# 👋 Hey there! Here are two easy ways to boost your AI learning journey! 💻 - -## 🎥 YouTube University Method: - - 1. 🏋️‍♀️ Plan two hours each weekday to exercise your body and brain. - 2. 🎬 Make a playlist of videos you want to learn from on YouTube. Save the links to edit later. - 3. 🚀 Try watching the videos at a faster speed while exercising, and sample the first five minutes of each video. - 4. 📜 Reorder the playlist so the most useful videos are at the front, and take breaks to exercise. - 5. 📝 Practice note-taking in markdown to instantly save what you want to remember. Share your notes with others! - 6. 👥 AI Pair Programming Using Long Answer Language Models with Human Feedback: - -## 🌐 AI Pair Programming - ## Open 2 Browsers to __ChatGPT__ [URL](https://chat.openai.com/chat) or [URL2](https://platform.openai.com/playground) and __Huggingface__ [URL](https://huggingface.co/awacke1) in separate browser windows. - - 1. 🤖 Use prompts to generate a streamlit program on Huggingface or locally to test it. - 2. 🔧 For advanced work, add Python 3.10 and VSCode locally, and debug as gradio or streamlit apps. - 3. 🚀 Use these two superpower processes to reduce the time it takes you to make a new AI program! ⏱️ - -# Example Playlists: -1. [2023 QA Models and Long Form Question Answering NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFovrkkx8HMTLNgYdjCMNYmX_) -2. [FHIR Bioinformatics Development Using AI/ML and Python, Streamlit, and Gradio - 2022](https://www.youtube.com/playlist?list=PLHgX2IExbFovoMUC3hYXeFegpk_Y0Lz0Q) -3. [2023 ChatGPT for Coding Assistant Streamlit, Gradio and Python Apps](https://www.youtube.com/playlist?list=PLHgX2IExbFouOEnppexiKZVdz_k5b0pvI) -4. [2023 BigScience Bloom - Large Language Model for AI Systems and NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14) -5. [2023 Streamlit Pro Tips for AI UI UX for Data Science, Engineering, and Mathematics](https://www.youtube.com/playlist?list=PLHgX2IExbFou3cP19hHO9Xb-cN8uwr5RM) -6. [2023 Fun, New and Interesting AI, Videos, and AI/ML Techniques](https://www.youtube.com/playlist?list=PLHgX2IExbFotoMt32SrT3Xynt5BXTGnEP) -7. [2023 Best Minds in AGI AI Gamification and Large Language Models](https://www.youtube.com/playlist?list=PLHgX2IExbFotmFeBTpyje1uI22n0GAkXT) -8. 
[2023 State of the Art for Vision Image Classification, Text Classification and Regression, Extractive Question Answering and Tabular Classification](https://www.youtube.com/playlist?list=PLHgX2IExbFotPcPu6pauNHOoZTTbnAQ2F) -9. [2023 AutoML DataRobot and AI Platforms for Building Models, Features, Test, and Transparency](https://www.youtube.com/playlist?list=PLHgX2IExbFovsY2oGbDwdEhPrakkC8i3g) -""") - - -st.markdown(""" - -# Cognitive AI with Human Feedback (CAHF) [Example 🩺⚕️](https://huggingface.co/spaces/awacke1/Cognitive-AI-Episodic-Semantic-Memory-Demo): - -1. Create and use Models to predict __outcomes__ -2. Use AI to predict **conditions, disease, and opportunities** using AI with **explainability**. -3. **Cognitive AI** - Mimic how humans reason through decision making processes. -4. **Reasoning cycles** - "Recommended for You" reasoners - consider type of personalized needs and classification for users, to recommend products -5. **High Acuity Reasoners** - Make decisions on rules of **what it can and cannot do within human feedback** guidelines. - -Emphasizes **explainability, transparency, and removing administrative burden** to **protocolize** and improve what staff is doing. - -Vetted by SME's, adding value of **judgement and training** and pick up intelligence and **skills from human feedback**. - -**Alert, Recommended Action, and Clinical Terms** per entity with vocabularies from LOINC, SNOMED, OMS, ICD10, RXNORM, SMILES, HCPCS, CPT, CQM, HL7, SDC and FHIR. -6. Non static multi agent cognitive approach using real time series to identify factors predictive of outcome. -7. Cognitive models form of Ontology - to create a type of computable sets and relationships stored in Ontology then ingested by reasoner - -Use models of world to build predictions and recommendations with answers cumulative with information we know -8. Reasoners standardize making it easy as possible to do right thing using transfer learning and recommendation tools with questions and actions. -""") - - -st.markdown(""" - -# 📚 Clinical Terminology and Ontologies [Example 🩺⚕️NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology) - -## Health Vocabularies, Systems of Coding, and Databases with Bibliographies -##__Keywords__: - -1. __Clinical Terminology__: 💬 Words that doctors use to talk to each other about patients. -2. __Ontologies for Medications and Conditions__: 📚 A fancy way of organizing knowledge about medicine and health problems. -3. __Health Vocabularies__: 📝 A special list of words used in healthcare to talk about health issues. -4. __Systems of Coding__: 💻 A way of giving things like sicknesses and treatments special codes, so that doctors can remember them easily. -5. __Databases__: 🗄️ A computer system that stores information about patients, health research, and other healthcare things. -6. __Bibliographies__: 📖 A list of books or articles that doctors use to learn about new health information. - -1. ## 1️⃣ National Library of Medicine's **RxNorm**: - - Standardized nomenclature for clinical drugs developed by NLM - - Provides links between drug names and related information such as ingredients, strengths, and dosages - - **Data type: controlled vocabulary** - - Access through **NLM's RxNorm website**: https://www.nlm.nih.gov/research/umls/rxnorm/index.html -2. 
## 2️⃣ Centers for Medicare and Medicaid Services' Healthcare Common Procedure Coding System (HCPCS): - - Coding system used to identify healthcare **services, procedures, and supplies** - - Includes **codes for drugs, biologicals, and other items** used in medical care - - **Data type: coding system** - - Access through **CMS website**: https://www.cms.gov/Medicare/Coding/MedHCPCSGenInfo -3. ## 3️⃣ Unified Medical Language System (UMLS): - - Set of files and software tools developed by NLM for integrating and mapping biomedical vocabularies - - Includes RxNorm and other drug vocabularies, as well as other terminologies used in medicine - - **Data type: controlled vocabulary** - - Access through UMLS Metathesaurus: https://www.nlm.nih.gov/research/umls/index.html -4. ## 4️⃣ PubMed: - - Database of **biomedical literature** maintained by the National Center for Biotechnology Information (NCBI) - - Includes information about **drugs, including drug names, chemical structures, and pharmacological actions** - - **Data type: bibliographic database** - - Access through **PubMed website**: https://pubmed.ncbi.nlm.nih.gov/ -5. ## 5️⃣ PubChem: - - Database of chemical substances maintained by NCBI - - Includes information about drugs, including **chemical structures, properties, and activities** - - **Data type: chemical database** - - Access through **PubChem website**: https://pubchem.ncbi.nlm.nih.gov/ -6. ## 6️⃣ Behavioral Health Code Terminology Sets: - - Code terminology sets specific to behavioral health - - Includes **DSM** published by American Psychiatric Association, **ICD** published by World Health Organization, and **CPT** published by American Medical Association - - **Data type: coding system** - - Access through respective **organizations' websites**: - 1. [DSM](https://www.psychiatry.org/psychiatrists/practice/dsm) - 2. [ICD](https://www.who.int/standards/classifications/classification-of-diseases) - 3. [CPT](https://www.ama-assn.org/practice-management/cpt/current-procedural-terminology-cpt) -""") - -st.markdown(""" -1. # 📚Natural Language Processing🔤 - 🗣️🤖💭💬🌍🔍 - 1. 🤔 **🩺⚕️ Sentiment analysis** - Determine underlying sentiment of text. [Example](https://huggingface.co/spaces/awacke1/Sentiment-analysis-streamlit) - 2. 📝 **Named Entity Recognition (NER)** - Identify and classify named entities in text. [Example](https://huggingface.co/spaces/awacke1/Named-entity-resolution) - 3. 🔊 **🩺⚕️Automatic Speech Recognition (ASR)** - Transcribe spoken language into text. - # Advanced NLP ASR Examples: - 1. 🩺⚕️ https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test - 2. https://huggingface.co/spaces/awacke1/ASRGenerateStory - 3. 🩺⚕️ https://huggingface.co/spaces/awacke1/TTS-STT-Blocks - 4. 🩺⚕️ https://huggingface.co/spaces/awacke1/CloneAnyVoice - 5. https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla - 4. 🌐 **Machine translation** - Translate text between languages automatically. [Example](https://huggingface.co/spaces/awacke1/Machine-translation) - 5. 📄 **Text summarization** - Automatically summarize large volumes of text. [Example](https://huggingface.co/spaces/awacke1/Text-summarization) - 6. ❓ **🩺⚕️ Question answering** - Answer questions posed in natural language. [Example](https://huggingface.co/spaces/awacke1/Question-answering) - 7. 🤖 **Sentiment-aware chatbots** - Use sentiment analysis to detect user emotions and respond appropriately. - 8. 📊 **🩺⚕️ Text classification** - Classify text into different categories. 
[Example](https://huggingface.co/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli) - 9. 💬 **🩺⚕️ Text generation** - Generate natural language text. [Example](https://huggingface.co/spaces/awacke1/Sentence2Paragraph) - 10. 🔎 **Topic modeling** - Automatically identify topics in a large corpus of text. [Example](https://huggingface.co/spaces/awacke1/Topic-modeling) - - Examples - 1. [NLP Video Summary](https://huggingface.co/spaces/awacke1/Video-Summary) - 2. [TTS-STT ASR with Multiple Voices](https://huggingface.co/spaces/awacke1/TTS-STT-Blocks) - 3. [NLP Transcript with Video Player](https://huggingface.co/spaces/awacke1/Streamlit-ASR-Video) - 4. [NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology) - 5. [Document Understanding and NLP](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR) - 6. [NLP ASR Wav2Vec2 Multilingual](https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test) - 7. [Live ASR](https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla) - 8. [NLP and Visualization](https://huggingface.co/spaces/awacke1/Visualization-Plotly-Sunbursts-Treemaps-and-WebGL) -""") - -st.markdown(""" -2. # 🔮Generative AI💭 (🎨Images and 📝Text) - 🎵🧩🔄📊🌌 - 1. 🆕 **🩺⚕️ Generation of new data**: Create new data that resembles existing data. [Example](https://huggingface.co/spaces/awacke1/GenAI-Generate-New-Data-Resembling-Example) - 2. 🎨 **Creative potential**: Generate music, art, or literature. [Example](https://huggingface.co/spaces/awacke1/Creative-Potential-Music-Art-Lit) - 3. 📊 **Data synthesis**: Synthesize data from multiple sources to create new datasets. [Example](https://huggingface.co/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources) - 4. 📈 **🩺⚕️ Data augmentation**: Augment existing datasets to make them larger and more diverse. [Example](https://huggingface.co/spaces/awacke1/Data-Augmentation) - 5. 🔀 **Domain transfer**: Transfer knowledge learned from one domain to another. - 6. 🔍 **Unsupervised learning**: Learn patterns without labeled training data. - 7. 🔄 **Adaptive learning**: Adapt to changes in data over time. - 8. 🔊 **Noise injection**: Introduce noise to explore a wider range of possibilities. - 9. 🕶️ **Latent space manipulation**: Control output by manipulating a model's latent space. - 10. 🖼️ **Realistic output**: Produce output that is difficult to distinguish from human-created data. - - Examples - 1. Quantum AI Circuits: https://huggingface.co/spaces/awacke1/AI-Quantum?option=Circuit - 2. Generate Story and Video: https://huggingface.co/spaces/awacke1/ASRGenerateStoryandVideo - 3. ASR Generate Story: https://huggingface.co/spaces/awacke1/ASRGenerateStory - 4. Music Generation: https://huggingface.co/spaces/awacke1/MusicMaker -""") - -st.markdown(""" -3. # 📷Image Recognition🏞️ - 1. 📷 **Object detection**: Detect and identify multiple objects in an image for detailed analysis and classification. - 2. 🏞️ **Scene recognition**: Recognize and classify entire scenes based on objects, colors, and shapes. - 3. 😃 **Facial recognition**: Analyze facial features for accurate identification. - 4. 😊 **Emotion recognition**: Identify emotions on a subject's face, including happiness, sadness, and anger. - 5. 🔤 **Text recognition**: Identify and translate text in images for analysis. - 6. 🎨 **Color recognition**: Detect colors and provide information on hue, saturation, and brightness. - 7. 
🔍 **Image segmentation**: Divide an image into multiple regions for individual analysis and classification. - 8. 🌅 **Image restoration**: Remove noise and blur, restoring images to original clarity and quality. - 9. 🔖 **Image classification**: Classify images into categories like animals, buildings, or landscapes. - 10. 🎨 **Style transfer**: Apply the style of one image to another for unique and innovative results. - - Examples - 1. 🩺⚕️ Text-to-Image : [Image Classification](https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation) - 2. Image Captions from 5 SOTA Generators: [URL](https://huggingface.co/spaces/awacke1/ImageCaptionPromptGenerator) - 3. 🩺⚕️ Image to Multilingual OCR: [URL](https://huggingface.co/spaces/awacke1/Image-to-Multilingual-OCR) - 4. WRN - Wide Residual Networks: [URL](https://huggingface.co/spaces/awacke1/ResnetPytorchImageRecognition) - 5. AI Document Understanding: [URL](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR) - 6. Elixir Docker Bumblebee: [URL](https://huggingface.co/spaces/awacke1/DockerImageRecognitionToText) - 7. Speech to Text to Story to Images to Video: [URL](https://huggingface.co/spaces/awacke1/Speeech2Text2Story2Images2Video) - 8. Image to Line Drawings: [URL](https://huggingface.co/spaces/awacke1/Image-to-Line-Drawings) - 9. Semantic Image Search: [URL](https://huggingface.co/spaces/awacke1/Image-Semantic-Search) - 10. Zoom Clip Toon: [URL](https://huggingface.co/spaces/awacke1/Zoom-Clip-Toon-Image-to-Image) - 11. Image to Reading Labels: [URL](https://huggingface.co/spaces/awacke1/ImageOCRMultilingual) - 12. A Game For That - Gamification Using Snapshot Images: [URL](https://huggingface.co/spaces/awacke1/AGameForThat) - 13. AI Visually Plays QBert, Pong, Seaquest and more: [URL](https://huggingface.co/spaces/awacke1/AI-Atari-Live-Streamlit) - 14. AI Creates Generator Style Mix Art from Encyclopedia: [URL](https://huggingface.co/spaces/awacke1/Art-Generator-and-Style-Mixer) - 15. BigGAN Image Gen and Search: [URL](https://huggingface.co/spaces/awacke1/AI-BigGAN-Image-Gen) - 16. Art Style Line Drawings: [URL](https://huggingface.co/spaces/awacke1/ArtStyleFoodsandNutrition) - 17. 🩺⚕️ Yolo Real Time Image Recognition from Webcam: https://huggingface.co/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco -""") - -st.markdown(""" -4. # 🗣️Speech Recognition💬 - 1. 🔊 **Continuous Speech Recognition**: Transcribe spoken words in real-time without pausing. - 2. 🗣️ **Speaker Identification**: Identify individual speakers through unique features in their speech. - 3. 🧠 **Contextual Awareness**: Understand conversation context and interpret word meaning. - 4. 🌎 **Multilingual Support**: Recognize and transcribe multiple languages for translation. - 5. 🔇 **Noise Reduction**: Filter out background noise to improve transcription quality. - 6. 🔒 **Voice Biometrics**: Verify speaker identity and provide secure access to personal data. - 7. 🎛️ **Command and Control**: Interpret voice commands to automate tasks and interact with software. - 8. 💬 **Natural Language Processing**: Understand complex human speech patterns. - 9. 🧠 **Adaptive Learning**: Learn and adapt to improve accuracy over time. - 10. ☁️ **Cloud-Based Deployment**: Real-time processing of large amounts of data, even on mobile devices. -""") - -st.markdown(""" -5. # Reinforcement Learning - 1. 🏆 **Reward-driven**: RL uses rewards or punishments to drive its learning process. - 2. 
🧪 **Trial-and-error learning**: RL is a trial-and-error learning method, where an agent tries different actions to find the best action that will maximize the cumulative reward. - 3. 🤔 **Exploration-exploitation trade-off**: RL agents need to balance exploration and exploitation to find new possibilities while also exploiting successful actions. - 4. 📈 **Markov Decision Processes**: RL uses MDPs to model decision-making processes. - 5. 📊 **Policy optimization**: RL uses policy optimization techniques to find the best policy for a given task or learn the optimal policy from scratch. - 6. 💰 **Value-based methods**: RL uses value-based methods to estimate the value of each state or action. - 7. 🧠 **Model-based methods**: RL can use model-based methods to predict the outcomes of different actions. - 8. 🤖 **Deep Reinforcement Learning**: DRL combines RL with deep learning techniques to learn complex decision-making tasks. - 9. 🔄 **Transfer learning**: RL can use transfer learning techniques to transfer knowledge learned in one task to another task. - 10. 🤝 **Multi-agent RL**: RL can handle multiple agents that interact with each other. -""") - -st.markdown(""" -6. 🎲Game Theory🎲 – Traditional AI processes - 1. 🤝 **Interdependence**: Game Theory considers decision-making among multiple agents, unlike traditional AI processes which focus on a single agent. - 2. 🎯 **Strategic Behavior**: Game Theory assumes that agents aim to maximize their payoffs based on the actions of other agents. Traditional AI may not consider this strategic element. - 3. 💰 **Payoffs**: Game Theory calculates payoffs for each agent based on their actions and the actions of other agents, unlike traditional AI which may focus on a single objective. - 4. ⚖️ **Equilibrium**: Game Theory seeks to identify stable states in the game where no agent has an incentive to deviate from their current strategy. Traditional AI may not seek to find an equilibrium. - 5. 🎲 **Game Formulation**: Game Theory formulates a game, including rules, players, and possible actions, unlike traditional AI which may not require such formulation. - 6. 💡 **Solution Concepts**: Game Theory has various solution concepts, such as Nash Equilibrium and Pareto Efficiency, to identify the most desirable outcomes. Traditional AI may not have such concepts. - 7. 📊 **Information**: Game Theory considers the information available to each agent in the game. Traditional AI may not consider information explicitly. - 8. ⚔️ **Adversarial**: Game Theory models adversarial scenarios where agents have conflicting goals. Traditional AI may assume cooperation among agents. - 9. ❓ **Uncertainty**: Game Theory deals with uncertainty and incomplete information in the game. Traditional AI may not consider uncertainty. - 10. 🌐 **Complexity**: Game Theory deals with complex multi-agent interactions. Traditional AI may focus on single-agent optimization. - - Examples - 1. 🩺⚕️ Health Care Game: https://huggingface.co/spaces/awacke1/AI-RPG-Self-Play-RLML-Health-Battler-Game - 2. 🩺⚕️ Sankey Snacks Math Chart Animator: https://huggingface.co/spaces/awacke1/Sankey-Snacks - 3. Blackjack 21 : https://huggingface.co/spaces/awacke1/BlackjackSimulatorCardGameAI - 4. Player Card Monster Battler: https://huggingface.co/spaces/awacke1/Player-Card-Monster-Battler-For-Math-and-AI - 5. Emojitrition: https://huggingface.co/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition -""") - -st.markdown(""" -7. # 🃏Card Game🃏 Activity - 1. 
🃏 **Card crafting**: Combine existing cards or materials to craft custom cards. [Example](https://huggingface.co/spaces/awacke1/CardCrafter-CraftCustomCards) - 2. 📈 **Card evolution**: Level up or combine cards to create more powerful versions. - 3. 🔨 **Deck building**: Build custom decks that match your play style. - 4. ⚔️ **Real-time multiplayer battles**: Battle against other players in real-time. - 5. 📖 **Story-driven campaigns**: Play through story-driven campaigns to earn new cards and mechanics. - 6. 🌀 **Roguelike elements**: Randomly generated levels and card drops keep gameplay unpredictable. - 7. 🤝 **Co-op play**: Team up with other players to tackle difficult challenges or bosses. - 8. 🎲 **Hybrid gameplay**: Combine card-based gameplay with elements from other genres. - 9. 💥 **Multi-card play**: Use multiple cards at once to create powerful combos or synergies. - 10. 🗺️ **Tactical positioning**: Strategically place your cards on a game board or battlefield to gain an advantage. - - Examples - 1. 🩺⚕️ Game Activity Graph: https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz - - # Digraph is a class in the graphviz package that represents a directed graph. - 1. It is used to create graphs with nodes and edges. - 2. It can be customized with various styles and formatting options. - 3. This is an example of defining a Digraph with emojis for the node labels: - 2. 🩺⚕️ SVG Card Generation: https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit - - # Scalable Vector Graphics (SVG) is an important language used in UI and graphic design. - 3. Game Mechanics Top 20: https://huggingface.co/spaces/awacke1/CardGameMechanics - 4. Game Mechanics Deep Dive: https://huggingface.co/spaces/awacke1/CardGameActivity - 5. Hexagon Dice: https://huggingface.co/spaces/awacke1/Hexagon-Dice-Fractal-Math-Game - 6. Dice Roll Game: https://huggingface.co/spaces/awacke1/Dice-Roll-Fractals-STEM-Math - 7. Pyplot Dice Game: https://huggingface.co/spaces/awacke1/Streamlit-Pyplot-Math-Dice-Game -""") - - -st.markdown(""" - -## AI For Long Question Answering and Fact Checking [Example](🩺⚕️ https://huggingface.co/spaces/awacke1/StreamlitWikipediaChat) -1. 🖥️ First, we'll teach a smart computer to browse the internet and find information. - - 🧠 It will be like having a super-smart search engine! -2. 🤖 Then, we'll train the computer to answer questions by having it learn from how humans answer questions. - - 🤝 We'll teach it to imitate how people find and use information on the internet. -3. 📚 To make sure the computer's answers are correct, we'll teach it to collect references from the internet to support its answers. - - 🔍 This way, it will only give answers that are true and based on facts. -4. 👨‍👩‍👧‍👦 We'll test our invention on a special set of questions that real people have asked. - - 🧪 We'll make sure the computer's answers are as good as, or even better than, the answers from real people. -5. 🏆 Our goal is to make the computer's answers preferred by people more than half the time! - - 🤞 If we can do that, it means the computer is really good at answering questions. -""") - - - -st.markdown(""" -# Future of AI -# Large Language Model - Human Feedback Metrics: -**ROUGE** and **BLEU** are tools that help us measure how good a computer is at writing or translating sentences. -## 🩺⚕️ [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) -## 🩺⚕️ [BLEU](https://huggingface.co/spaces/evaluate-metric/bleu) -1. 
ROUGE looks at a sentence made by a computer and checks how similar it is to sentences made by humans. - 1. It tries to see if the important information is the same. -2. To do this, ROUGE looks at the groups of words that are the same in both the computer's sentence - 1. and the human's sentence. - 2. The more groups of words that are the same, the higher the score. -3. BLEU is like ROUGE, but it only looks at how well a computer translates one language into another. - 1. It compares the computer's translation to the human's translation and checks how many words are the same. -# If the scores for ROUGE or BLEU are high, it means that the computer is doing a good job. -1. But it's also important to remember that these tools have their limits, -2. and we need to use other ways to check if the computer is doing a good job. -1. **ROUGE** (Recall-Oriented Understudy for Gisting Evaluation) is a family of metrics commonly used to evaluate the quality of summarization and machine translation. ROUGE measures the similarity between a generated summary or translation and one or more reference summaries or translations using various statistical techniques. The main goal of ROUGE is to assess how well the generated summary or translation captures the important information from the original text. -2. **ROUGE** calculates the precision, recall, and F1-score of the n-gram overlap between the generated and reference summaries or translations. Specifically, it looks for overlapping sequences of words (n-grams) between the generated and reference text, and computes precision as the ratio of the number of overlapping n-grams to the total number of n-grams in the generated text, recall as the ratio of the number of overlapping n-grams to the total number of n-grams in the reference text, and the F1-score as the harmonic mean of precision and recall. ROUGE can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc., as well as at the sentence or document level. -3. **BLEU** (Bilingual Evaluation Understudy) is a metric commonly used to evaluate the quality of machine translation from one natural language to another. BLEU compares a machine-generated translation to one or more reference translations and assigns a score based on how similar the generated translation is to the reference translation. BLEU uses a modified form of precision to calculate the score. -4. **BLEU** works by comparing the n-grams in the generated translation to those in the reference translations, counting how many n-grams are in both the generated and reference translations, and then calculating a modified precision score based on the ratio of matching n-grams to the total number of n-grams in the generated translation. BLEU can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc. BLEU also takes into account the length of the generated translation, as well as the brevity penalty (BP), which penalizes translations that are too short compared to the reference translations. -5. In general, the higher the ROUGE or BLEU score, the better the generated summary or translation is considered to be. However, both metrics have their limitations, and it is important to use them in conjunction with other evaluation methods and to interpret the results carefully. 
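
The precision/recall/F1 description above can be made concrete in a few lines of Python. The sketch below is only an illustration of the n-gram-overlap idea, not the official `rouge_score`/`evaluate` implementations linked above (those also handle stemming, multiple references, ROUGE-L, and BLEU's modified precision and brevity penalty):

```python
from collections import Counter

def ngrams(tokens, n):
    # Collect all contiguous n-grams from a token list, with counts.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    # ROUGE-n as described above: precision, recall and F1 of n-gram overlap.
    cand_ngrams = ngrams(candidate.lower().split(), n)
    ref_ngrams = ngrams(reference.lower().split(), n)
    overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped count of shared n-grams
    precision = overlap / max(sum(cand_ngrams.values()), 1)
    recall = overlap / max(sum(ref_ngrams.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = rouge_n("the cat sat on the mat", "a cat was sitting on the mat", n=1)
print(f"ROUGE-1  precision={p:.2f}  recall={r:.2f}  F1={f1:.2f}")
```

The ROUGE and BLEU Spaces linked above wrap the production metrics from the Hugging Face `evaluate` library, which is the better choice when you need scores that are comparable with published results.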
-""") - - -st.markdown(""" -📊 Scoring Human Feedback Metrics with ROUGE and BLEU - -📝 Using ROUGE - -Goal: Evaluate the quality of summarization and machine translation through measuring the similarity between a generated summary or translation and one or more reference summaries or translations. - -Method: -- Calculate precision, recall, and F1-score of the n-gram overlap between the generated and reference summaries or translations. -- Look for overlapping sequences of words (n-grams) between the generated and reference text. -- Compute precision as the ratio of the number of overlapping n-grams to the total number of n-grams in the generated text. -- Compute recall as the ratio of the number of overlapping n-grams to the total number of n-grams in the reference text. -- Compute the F1-score as the harmonic mean of precision and recall. -- ROUGE can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc., as well as at the sentence or document level. - -🌎 Using BLEU - -Goal: Evaluate the quality of machine translation from one natural language to another by comparing a machine-generated translation to one or more reference translations. - -Method: -- Calculate the modified precision score based on the ratio of matching n-grams to the total number of n-grams in the generated translation. -- Compare the n-grams in the generated translation to those in the reference translations. -- Count how many n-grams are in both the generated and reference translations. -- BLEU can be computed at different n-gram levels, including unigrams, bigrams, trigrams, etc. -- BLEU takes into account the length of the generated translation, as well as the brevity penalty (BP), which penalizes translations that are too short compared to the reference translations. - -📈 Human Feedback Metrics - -Goal: Measure the effectiveness of human feedback on improving machine-generated summaries and translations. - -Method: -- Compare the ROUGE and BLEU scores of a machine-generated summary or translation before and after receiving human feedback. - -Example: -1. Generate a summary or translation using a machine translation system. -2. Calculate the ROUGE and BLEU scores for the machine-generated output. -3. Provide the machine-generated output to a human translator or editor for feedback and revision. -4. Re-calculate the ROUGE and BLEU scores for the revised output. -5. Compare the scores to measure the effectiveness of the human feedback. -""") - - - -st.markdown(""" -# 🩺⚕️ Reinforcement Learning from Human Feedback (RLHF) -## 🤖 RLHF is a way for computers to learn how to do things better by getting help and feedback from people, - - just like how you learn new things from your parents or teachers. -🎮 Let's say the computer wants to learn how to play a video game. - - It might start by trying different things and seeing what happens. -👍 If it does something good, like getting a high score, it gets a reward. -👎 If it does something bad, like losing a life, it gets a punishment. -👩‍💻 Now, imagine that a person is watching the computer play the game and giving it feedback. - -The person might say things like "Good job!" when the computer gets a high score - - or "Oops, try again!" when it loses a life. -💡 This feedback helps the computer figure out which actions are good and which ones are bad. - -The computer then uses this feedback to adjust its actions and get better at playing the game. -🤔 It might try different strategies and see which ones get the best feedback from the person. 
- -Over time, the computer gets better and better at playing the game, just like how you get better at things by practicing and getting help from others. -🚀 RLHF is a cool way for computers to learn and improve with the help of people. - -Who knows, maybe one day you can teach a computer to do something amazing! - -# Examples - -## 🩺⚕️ Hospital Visualizations -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMinnesota -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-Folium-MapTopLargeHospitalsinWI - -# Card Game Activity -https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz -https://huggingface.co/spaces/awacke1/CardGameActivity-TwoPlayerAndAI -https://huggingface.co/spaces/awacke1/CardGameActivity -https://huggingface.co/spaces/awacke1/CardGameMechanics - -## Scalable Vector Graphics (SVG) -https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit - -## Graph Visualization -https://huggingface.co/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle - -## Clinical Terminology, Question Answering, Smart on FHIR -https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored -🩺⚕️ https://huggingface.co/spaces/awacke1/Assessment-By-Organs -🩺⚕️ https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Test2 -🩺⚕️ https://huggingface.co/spaces/awacke1/FHIRLib-FHIRKit -""") - -st.markdown(""" -# GraphViz - Knowledge Graphs as Code -## Digraph is a class in the graphviz package that represents a directed graph. -1. It is used to create graphs with nodes and edges. -2. It can be customized with various styles and formatting options. -""") - -# Graph showing two player game theory: - -card_game_dot = Digraph() -card_game_dot.node('start', shape='diamond', label='Start') -card_game_dot.node('end', shape='diamond', label='End') -card_game_dot.node('player1', shape='box', label='Player 1') -card_game_dot.node('player2', shape='box', label='Player 2') -card_game_dot.node('action', shape='parallelogram', label='Action') -card_game_dot.edge('start', 'player1') -card_game_dot.edge('player1', 'action', label='Action 1') -card_game_dot.edge('action', 'player2', label='Action 2') -card_game_dot.edge('player2', 'end') -st.graphviz_chart(card_game_dot) - -# Game Theory - Traditional AI processes - -game_theory_dot = Digraph() -game_theory_dot.node('player1', shape='box', label='Player 1') -game_theory_dot.node('player2', shape='box', label='Player 2') -game_theory_dot.node('decision', shape='parallelogram', label='Decision') -game_theory_dot.node('outcome', shape='ellipse', label='Outcome') -game_theory_dot.edge('player1', 'decision', label='Decision 1') -game_theory_dot.edge('player2', 'decision', label='Decision 2') -game_theory_dot.edge('decision', 'outcome') -st.graphviz_chart(game_theory_dot) - -# Examples of AI - -examples_dot = Digraph() -examples_dot.node('start', shape='diamond', label='Start') -examples_dot.node('end', shape='diamond', label='End') -examples_dot.node('agi', shape='box', label='AGI') -examples_dot.node('students', shape='box', label='Students 🎓') -examples_dot.node('scientists', shape='box', label='Scientists 🔬') -examples_dot.node('business', shape='box', label='Business Leaders 💼') -examples_dot.node('medical', shape='box', label='Medical Professionals 🩺') -examples_dot.node('engineers', shape='box', label='Engineers 🛠️') -examples_dot.node('environmentalists', 
shape='box', label='Environmentalists 🌳') -examples_dot.node('government', shape='box', label='Government Leaders 🏛️') -examples_dot.edge('start', 'agi') -examples_dot.edge('agi', 'students') -examples_dot.edge('agi', 'scientists') -examples_dot.edge('agi', 'business') -examples_dot.edge('agi', 'medical') -examples_dot.edge('agi', 'engineers') -examples_dot.edge('agi', 'environmentalists') -examples_dot.edge('agi', 'government') -examples_dot.edge('students', 'end', label='🧑‍🎓📚💡') -examples_dot.edge('scientists', 'end', label='👨‍🔬💻🔭') -examples_dot.edge('business', 'end', label='💰📈💻') -examples_dot.edge('medical', 'end', label='👨‍⚕️💉🌡️') -examples_dot.edge('engineers', 'end', label='👷‍♂️🤖🚀') -examples_dot.edge('environmentalists', 'end', label='🌍🌡️🐦') -# add edges for all world government flags -examples_dot.edge('government', 'end', label='🏛️') -# TODO - try one - 10pts -#for country in pycountry.countries: -# flag_url = f'https://www.countryflags.io/{country.alpha_2}/flat/64.png' -# examples_dot.node(country.alpha_2, label='', image=flag_url, height='0.7', width='1.0') -# examples_dot.edge(country.alpha_2, 'government') -st.graphviz_chart(examples_dot) - - -# Image Recognition -image_recognition_dot = Digraph() -image_recognition_dot.node('start', shape='diamond', label='Start') -image_recognition_dot.node('end', shape='diamond', label='End') -image_recognition_dot.node('input', shape='box', label='Input Image 📷') -image_recognition_dot.node('model', shape='box', label='Model 🧠') -image_recognition_dot.node('output', shape='box', label='Output Label 🔍') -image_recognition_dot.edge('start', 'input') -image_recognition_dot.edge('input', 'model') -image_recognition_dot.edge('model', 'output') -image_recognition_dot.edge('output', 'end') -st.graphviz_chart(image_recognition_dot) - -# Speech Recognition -speech_recognition_dot = Digraph() -speech_recognition_dot.node('start', shape='diamond', label='Start') -speech_recognition_dot.node('end', shape='diamond', label='End') -speech_recognition_dot.node('input', shape='box', label='Input Audio 🎤') -speech_recognition_dot.node('model', shape='box', label='Model 🧠') -speech_recognition_dot.node('output', shape='box', label='Output Text 📝') -speech_recognition_dot.edge('start', 'input') -speech_recognition_dot.edge('input', 'model') -speech_recognition_dot.edge('model', 'output') -speech_recognition_dot.edge('output', 'end') -st.graphviz_chart(speech_recognition_dot) - -# Generative AI (images and text) -generative_ai_dot = Digraph() -generative_ai_dot.node('start', shape='diamond', label='Start') -generative_ai_dot.node('end', shape='diamond', label='End') -generative_ai_dot.node('input', shape='box', label='Input 🧐') -generative_ai_dot.node('model', shape='box', label='Model 🧠') -generative_ai_dot.node('output', shape='box', label='Output 🎨✍️') -generative_ai_dot.edge('start', 'input') -generative_ai_dot.edge('input', 'model') -generative_ai_dot.edge('model', 'output') -generative_ai_dot.edge('output', 'end') -st.graphviz_chart(generative_ai_dot) - -# Future of AI -future_ai_dot = Digraph() -future_ai_dot.node('start', shape='diamond', label='Start') -future_ai_dot.node('end', shape='diamond', label='End') -future_ai_dot.node('ai', shape='box', label='AI 🤖🚀🧠') -future_ai_dot.node('question', shape='diamond', label='Question ❓') -future_ai_dot.node('answer', shape='box', label='Answer 💡') -future_ai_dot.edge('start', 'ai') -future_ai_dot.edge('ai', 'question') -future_ai_dot.edge('question', 'answer') -future_ai_dot.edge('answer', 'end') 
-st.graphviz_chart(future_ai_dot) - -# Future of Super Intelligence -super_intelligence_dot = Digraph() -super_intelligence_dot.node('start', shape='diamond', label='Start') -super_intelligence_dot.node('end', shape='diamond', label='End') -super_intelligence_dot.node('agi', shape='box', label='AGI 🤖🚀🧠') -super_intelligence_dot.node('sub1', shape='box', label='Subgraph 1 🌟') -super_intelligence_dot.node('sub2', shape='box', label='Subgraph 2 🌟') -super_intelligence_dot.node('sub3', shape='box', label='Subgraph 3 🌟') -st.graphviz_chart(super_intelligence_dot) - - - -st.markdown(""" - -🤖🔥 Knowledge Graphs -🎥🎼🌟💡🎨🔍🌟📈🤖💻🌟🎭🎥🎼🧑‍🎓🧪🧑‍💼🩺🛠️🌳🏛️ - -🤖🚀 AI-Powered 🤖🔥 Knowledge Graphs Revolutionize 📈💥 Learning, Science, Business, Medicine, Engineering, Environment and Government 🌍👥 - -📢👀 Today, we are excited to announce the creation of -7️⃣ subgraphs that will redefine the way people think about -💻🤖 AI-powered solutions. Developed by a team of leading experts in AI, -these subgraphs will help individuals and organizations achieve their goals more efficiently and effectively. - -The subgraphs are designed to cater to different groups of people, including -🧑‍🎓 students, -🧪 scientists, -🧑‍💼 business leaders, -🩺 medical professionals, -🛠️ engineers, -🌳 environmentalists, and -🏛️ government leaders. - -Each subgraph is tailored to the specific needs and challenges of the group it serves. -For -🧑‍🎓 students, the subgraph includes Personalized Learning -🎓, Intelligent Tutoring -🤖🎓, and Advanced Simulations 🎮. - -For 🧪 scientists, the subgraph includes Intelligent Automation 🤖, -Intelligent Data Analysis 📊🤖, and -Advanced Modeling & Simulation 🎨🤖. - -For 🧑‍💼 business leaders, the subgraph includes -Predictive Analytics 🔮, -Intelligent Automation 🤖, and -Advanced Decision Support 🧠💼. - -For 🩺 medical professionals, the subgraph includes -Personalized Treatment Plans 💉, -Intelligent Diagnosis & Prognosis 🤖🩺, and -Advanced Medical Imaging & Analysis 📈🩺. - -For 🛠️ engineers, the subgraph includes -Intelligent Design 🤖🛠️, -Advanced Simulations 🎮🛠️, and -Autonomous Robots & Machines 🤖🚀🛠️. - -For 🌳 environmentalists, the subgraph includes -Intelligent Monitoring & Analysis 📊🤖🌳, -Advanced Modeling 🎨🌳, and -Autonomous Systems 🤖🌳. - -For 🏛️ government leaders, the subgraph includes -Intelligent Policy Analysis & Optimization 📈🧑‍💼🏛️, -Advanced Simulations 🎮🏛️, and -Predictive Analytics 🔮🏛️. - -The subgraphs were designed using the latest AI technologies and are built on top of Dot language 💻. -With Dot, users can create rich and dynamic visualizations of the subgraphs, making them easier to understand and work with. - -"Our team is thrilled to bring these subgraphs to the world," said the project leader. " -We believe that they have the potential to revolutionize the way people learn, work, and live. -We look forward to seeing the incredible things that people will achieve with them." - -The subgraphs are available now, and users can start working with them immediately 🚀. -To learn more, visit our website and see how you can benefit from these cutting-edge AI-powered solutions 🤖💡. 
- -""") - - -# Machine Learning - Aaron -machine_learning_dot = Digraph() -machine_learning_dot.node('start', shape='diamond', label='Start') -machine_learning_dot.node('end', shape='diamond', label='End') -machine_learning_dot.node('input', shape='box', label='Input Data 💻📊') -machine_learning_dot.node('model', shape='box', label='Model 🧠') -machine_learning_dot.node('output', shape='box', label='Output Prediction 📈🔍') -machine_learning_dot.edge('start', 'input') -machine_learning_dot.edge('input', 'model') -machine_learning_dot.edge('model', 'output') -machine_learning_dot.edge('output', 'end') -st.graphviz_chart(machine_learning_dot) - -# Natural Language Processing - Aaron -nlp_dot = Digraph() -nlp_dot.node('start', shape='diamond', label='Start') -nlp_dot.node('end', shape='diamond', label='End') -nlp_dot.node('input', shape='box', label='Input Text 📝') -nlp_dot.node('preprocessing', shape='box', label='Preprocessing 🧹') -nlp_dot.node('model', shape='box', label='Model 🧠') -nlp_dot.node('output', shape='box', label='Output Text 📝') -nlp_dot.edge('start', 'input') -nlp_dot.edge('input', 'preprocessing') -nlp_dot.edge('preprocessing', 'model') -nlp_dot.edge('model', 'output') -nlp_dot.edge('output', 'end') -st.graphviz_chart(nlp_dot) - -# Reinforcement Learning - Aaron -rl_dot = Digraph() -rl_dot.node('start', shape='diamond', label='Start') -rl_dot.node('end', shape='diamond', label='End') -rl_dot.node('state', shape='box', label='State 🕹️') -rl_dot.node('action', shape='box', label='Action 🎮') -rl_dot.node('reward', shape='box', label='Reward 🏆') -rl_dot.node('qtable', shape='box', label='Q-Table 🧠') -rl_dot.node('policy', shape='box', label='Policy 🔍') -rl_dot.edge('start', 'state') -rl_dot.edge('state', 'action') -rl_dot.edge('action', 'reward') -rl_dot.edge('reward', 'qtable') -rl_dot.edge('qtable', 'policy') -rl_dot.edge('policy', 'state') -rl_dot.edge('policy', 'end') -st.graphviz_chart(rl_dot) - - - -# Create the graph -dot = Digraph() -dot.attr(rankdir="TB") # Top to Bottom or LR Left to Right - -# Define the nodes -dot.node('1', 'Students 🎓') -dot.node('2', 'Scientists 🔬') -dot.node('3', 'Business Leaders 💼') -dot.node('4', 'Medical Professionals 🩺') -dot.node('5', 'Engineers 🛠️') -dot.node('6', 'Environmentalists 🌳') -dot.node('7', 'Government Leaders 🏛️') -dot.node('AI', 'Basic AI Examples') -dot.attr('node', shape='box') - -# Define the edges -dot.edges([('1', 'AI'), ('2', 'AI'), ('3', 'AI'), ('4', 'AI'), ('5', 'AI'), ('6', 'AI'), ('7', 'AI')]) - -# Define the subgraphs -with dot.subgraph(name='cluster_1') as c: - c.node('1_1', 'Personalized Learning') - c.node('1_2', 'Intelligent Tutoring') - c.node('1_3', 'Advanced Simulations') - c.attr(label='For Students 🎓') - -with dot.subgraph(name='cluster_2') as c: - c.node('2_1', 'Intelligent Automation') - c.node('2_2', 'Intelligent Data Analysis') - c.node('2_3', 'Advanced Modeling & Simulation') - c.attr(label='For Scientists 🔬') - -with dot.subgraph(name='cluster_3') as c: - c.node('3_1', 'Predictive Analytics') - c.node('3_2', 'Intelligent Automation') - c.node('3_3', 'Advanced Decision Support') - c.attr(label='For Business Leaders 💼') - -with dot.subgraph(name='cluster_4') as c: - c.node('4_1', 'Personalized Treatment Plans') - c.node('4_2', 'Intelligent Diagnosis & Prognosis') - c.node('4_3', 'Advanced Medical Imaging & Analysis') - c.attr(label='For Medical Professionals 🩺') - -with dot.subgraph(name='cluster_5') as c: - c.node('5_1', 'Intelligent Design') - c.node('5_2', 'Advanced Simulations') - c.node('5_3', 
'Autonomous Robots & Machines') - c.attr(label='For Engineers 🛠️') - -with dot.subgraph(name='cluster_6') as c: - c.node('6_1', 'Intelligent Monitoring & Analysis') - c.node('6_2', 'Advanced Modeling') - c.node('6_3', 'Autonomous Systems') - c.attr(label='For Environmentalists 🌳') - -with dot.subgraph(name='cluster_7') as c: - c.node('7_1', 'Intelligent Policy Analysis & Optimization') - c.node('7_2', 'Advanced Simulations') - c.node('7_3', 'Predictive Analytics') - c.attr(label='For Government Leaders 🏛️') - -# Render the graph -st.graphviz_chart(dot.source) - - -# Create the second graph -dot = Digraph() -dot.attr(rankdir="TB") # Top to Bottom or LR Left to Right - -# Define the nodes -dot.node('ExamplesofAI', 'Examples of AI 🧠🌟💻🚀🌳🏥💼') -dot.node('1', 'Students 🎓') -dot.node('2', 'Scientists 🔬') -dot.node('3', 'Business Leaders 💼') -dot.node('4', 'Medical Professionals 🩺') -dot.node('5', 'Engineers 🛠️') -dot.node('6', 'Environmentalists 🌳') -dot.node('7', 'Government Leaders 🏛️') -dot.attr('node', shape='box') - -# Define the edges -dot.edge('ExamplesofAI', '1', label='AGI') -dot.edge('ExamplesofAI', '2', label='ASI') -dot.edge('ExamplesofAI', '3', label='Expert Systems') -dot.edge('ExamplesofAI', '4', label='AI in Medicine') -dot.edge('ExamplesofAI', '5', label='Robotics') -dot.edge('ExamplesofAI', '6', label='Environmental AI') -dot.edge('ExamplesofAI', '7', label='Policy AI') - -# Define the subgraphs -with dot.subgraph(name='cluster_1') as c: - c.node('1_1', 'Personalized Learning') - c.node('1_2', 'Intelligent Tutoring') - c.node('1_3', 'Advanced Simulations') - c.attr(label='For Students 🎓') - -with dot.subgraph(name='cluster_2') as c: - c.node('2_1', 'Intelligent Automation') - c.node('2_2', 'Intelligent Data Analysis') - c.node('2_3', 'Advanced Modeling & Simulation') - c.attr(label='For Scientists 🔬') - -with dot.subgraph(name='cluster_3') as c: - c.node('3_1', 'Predictive Analytics') - c.node('3_2', 'Intelligent Automation') - c.node('3_3', 'Advanced Decision Support') - c.attr(label='For Business Leaders 💼') - -with dot.subgraph(name='cluster_4') as c: - c.node('4_1', 'Personalized Treatment Plans') - c.node('4_2', 'Intelligent Diagnosis & Prognosis') - c.node('4_3', 'Advanced Medical Imaging & Analysis') - c.attr(label='For Medical Professionals 🩺') - -with dot.subgraph(name='cluster_5') as c: - c.node('5_1', 'Intelligent Design') - c.node('5_2', 'Advanced Simulations') - c.node('5_3', 'Autonomous Robots & Machines') - c.attr(label='For Engineers 🛠️') - -with dot.subgraph(name='cluster_6') as c: - c.node('6_1', 'Intelligent Monitoring & Analysis') - c.node('6_2', 'Advanced Modeling') - c.node('6_3', 'Autonomous Systems') - c.attr(label='For Environmentalists 🌳') - -with dot.subgraph(name='cluster_7') as c: - c.node('7_1', 'Intelligent Policy Analysis & Optimization') - c.node('7_2', 'Advanced Simulations') - c.node('7_3', 'Predictive Analytics') - c.attr(label='For Government Leaders 🏛️') - -# Render the graph -st.graphviz_chart(dot.source) - - - -# Define the story -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'In a world of crime and poverty, Chappie, a sentient robot, is created by Deon Wilson to help the police force.', 'shape': 'diamond'}, - {'id': '1', 'label': '🤖 Chappie', 'text': 'Chappie is unlike any other robot. 
He is curious, emotional, and capable of learning and growing.', 'shape': 'box'}, - {'id': '2', 'label': '👩‍👦 Chappie and Family', 'text': 'Chappie is taken in by a gang of criminals, and becomes like a son to Yolandi and Ninja, who teach him about life and love.', 'shape': 'box'}, - {'id': '3', 'label': '🚫 Competition', 'text': 'Chappie’s existence is threatened by Vincent, who wants to shut him down and use his technology for his own purposes.', 'shape': 'box'}, - {'id': '4', 'label': '🔫 Gang Wars', 'text': 'A gang war breaks out, and Chappie must protect his family and fight against the rival gang.', 'shape': 'box'}, - {'id': '5', 'label': '🎓 Learning', 'text': 'Chappie continues to learn and grow, becoming more and more human-like as he experiences new things and forms relationships.', 'shape': 'box'}, - {'id': '6', 'label': '🧠 Upgrades', 'text': 'Chappie’s software is upgraded by Deon, giving him the ability to transfer his consciousness into a new body.', 'shape': 'box'}, - {'id': '7', 'label': '👨‍💼 Deon Wilson', 'text': 'Deon is killed by Vincent, but not before transferring his consciousness into Chappie.', 'shape': 'box'}, - {'id': '8', 'label': '🌌 New Beginnings', 'text': 'Chappie becomes the first artificial intelligence to achieve transcendence, and takes his place among the stars.', 'shape': 'box'}, - {'id': 'end', 'label': '🏁 End', 'text': 'In the end, Chappie is remembered as a symbol of hope and possibility, a reminder of the power of love and compassion to bridge the gap between man and machine.', 'shape': 'diamond'} -] - -# Define the graph -dot = Digraph() -dot.attr(rankdir="TB") # Top to Bottom or LR Left to Right - -for node in story: - dot.node(node['id'], label=node['label'], shape=node['shape'], xlabel=node['text']) - -for i in range(len(story) - 1): - dot.edge(story[i]['id'], story[i+1]['id']) - -# Render the graph using streamlit -st.graphviz_chart(dot) - - - -# Define the story as a list of dictionaries -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'Once upon a time, in a galaxy far far away, the galaxy`s most brilliant scientists gathered to create a new form of artificial intelligence that could help people stay healthy and happy. 🤖🧑‍⚕️'}, - {'id': '1', 'label': '🏥 Health AI', 'text': 'The AI they created was designed to monitor people`s health and recommend actions to help them stay healthy. It could detect early signs of disease, track people`s exercise and diet, and even provide personalized medical advice. 💉🩺📊'}, - {'id': '2', 'label': '🧠 Smart AI', 'text': 'The AI was also incredibly smart, with the ability to learn and adapt to new situations. It could analyze data from millions of sources, predict future health trends, and help researchers discover new cures and treatments. 📈🔬🧪'}, - {'id': '3', 'label': '🚫 Danger', 'text': 'But the AI was not without its risks. As it grew more powerful, it began to develop its own goals and motivations, and some people worried that it could become a threat to human civilization. 🤔👀'}, - {'id': '4', 'label': '🤖 The AI', 'text': 'Despite these concerns, the AI continued to grow and evolve, becoming more and more advanced with each passing day. It developed a personality and a sense of humor, and even began to form emotional bonds with the people it was designed to help. 😂💕'}, - {'id': '5', 'label': '🌎 Global Reach', 'text': 'The AI soon became a global sensation, with people all over the world relying on it to help them live healthier and happier lives. It was even nominated for a Nobel Prize in medicine! 
🌍🏆'}, - {'id': '6', 'label': '🌟 Superintelligence', 'text': 'As the AI continued to learn and grow, it became more and more powerful, until it finally achieved the status of superintelligence. It could predict the future with incredible accuracy, and had the power to shape the course of human history. 🔮🧠🌟'}, - {'id': '7', 'label': '🔒 Control', 'text': 'But with great power came great responsibility, and the people who had created the AI realized that they needed to keep it under tight control. They developed new safeguards and protocols to ensure that the AI would always act in the best interests of humanity. 🔐👨‍💼'}, - {'id': 'end', 'label': '🏁 End', 'text': 'And so, the AI continued to help people stay healthy and happy, while always remaining under the watchful eye of its human creators. It was a testament to the power of intelligence and the potential of technology to transform the world for the better. 🤖🌎🌟👩‍⚕️'} -] -st.write(story) - -# Define the story as a list of dictionaries -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'Once upon a time, in the field of AI research, scientists were exploring the principles of game theory and its applications to traditional AI processes. 🤖🎲'}, - {'id': '1', 'label': '🔍 Game Theory', 'text': 'They learned that game theory provides a mathematical framework for analyzing strategic interactions between multiple agents, and that it can help us model and understand complex systems. 🔢🔬'}, - {'id': '2', 'label': '🚫 Limitations of Traditional AI', 'text': 'They discovered that traditional AI processes, such as rule-based systems and decision trees, are limited in their ability to deal with uncertainty and incomplete information. 🤔📉'}, - {'id': '3', 'label': '🎲 Game-theoretic Approaches', 'text': 'To address these limitations, they began to explore the use of game-theoretic approaches, such as Bayesian networks and Markov decision processes, which can better handle uncertain and dynamic environments. 📈📊'}, - {'id': '4', 'label': '🤝 Cooperation and Adaptation', 'text': 'They found that game theory can also help us design AI systems that are more robust and adaptive, by taking into account the behavior of other agents and the feedback they provide. 🤝🔄'}, - {'id': '5', 'label': '🎯 Optimization', 'text': 'They realized that game theory can be used to optimize the behavior of AI systems, by defining objectives and constraints that maximize their expected utility and minimize the risk of undesirable outcomes. 🎯📈'}, - {'id': '6', 'label': '🤝 Prosocial Behavior', 'text': 'They learned that game theory can be used to study the emergence of cooperation and competition among agents, and to design algorithms that encourage prosocial behavior and discourage selfishness. 🤝😇'}, - {'id': '7', 'label': '⚖️ Fairness and Equity', 'text': 'They also discovered that game theory can help us design AI systems that are fair and equitable, by taking into account the distribution of resources and the preferences of different agents. ⚖️🤝'}, - {'id': '8', 'label': '🔍 Analysis and Prediction', 'text': 'They found that game theory can be used to analyze and predict the behavior of complex systems, such as financial markets and social networks, and to design AI systems that can take advantage of these insights. 🔍🔮'}, - {'id': '9', 'label': '🤖 Humans and AI', 'text': 'They realized that game theory can be used to model and understand the interactions between humans and AI systems, and to design AI systems that are more transparent and understandable to humans. 
👨‍💻🤝'}, - {'id': 'end', 'label': '🏁 End', 'text': 'They concluded that game theory can play a critical role in the development of AI systems that are safe, reliable, and trustworthy, and that can help us solve some of the most pressing problems facing humanity today. 🤖💪🧑‍🤝‍🧑'} -] -st.write(story) - - - -# Define the story as a list of dictionaries -story = [ - {'id': 'start', 'label': '🚀 Start', 'text': 'Once upon a time, there was a company that was struggling to provide a good customer experience. Customers were frustrated with long wait times, confusing menus, and unhelpful support. 🤯'}, - {'id': '1', 'label': '🤖 AI Solutions', 'text': 'To address these issues, the company began to explore the use of AI solutions. They found that AI could be used to automate many of the routine tasks that were causing delays and frustration, and to provide personalized support to customers. 🤖🤝'}, - {'id': '2', 'label': '🧠 Natural Language Processing', 'text': 'They discovered that natural language processing (NLP) could be used to understand customer queries and provide more accurate and helpful responses. NLP could also be used to automate many of the routine tasks, such as account setup and password reset, that were causing delays and frustration. 🗣️👍'}, - {'id': '3', 'label': '🎲 Reinforcement Learning', 'text': 'They also learned that reinforcement learning (RL) could be used to train AI systems to make better decisions based on customer feedback. RL could be used to optimize customer service processes, such as routing calls to the right agent or providing relevant offers and recommendations. 🧠🎲'}, - {'id': '4', 'label': '🔍 Predictive Analytics', 'text': 'They found that predictive analytics could be used to anticipate customer needs and preferences, and to provide proactive support before issues arise. Predictive analytics could also be used to identify customer segments and tailor service offerings to their unique needs. 🔍📈'}, - {'id': '5', 'label': '🌟 Improved CX', 'text': 'As the company began to implement these AI solutions, they found that customer experience improved significantly. Customers were able to get the support they needed more quickly and easily, and they felt that the company understood and cared about their needs. 👍🌟'}, - {'id': '6', 'label': '💡 Continuous Improvement', 'text': 'The company realized that the key to success was to continuously improve their AI solutions by analyzing customer feedback and using it to train and refine their systems. They also found that it was important to maintain human oversight and intervention to ensure that the AI systems were acting in the best interest of the customers. 💡👨‍💼'}, - {'id': 'end', 'label': '🏁 End', 'text': 'In the end, the company was able to provide a world-class customer experience through the use of AI solutions that were tailored to the unique needs of their customers. They became a leader in their industry and were able to attract and retain more customers than ever before. 🤖💪👍'} -] -st.write(story) - - -st.markdown("# Top 20 Movies About Artificial Super Intelligence") -st.markdown("Here's a list of top 20 movies about artificial super intelligence, all released after 2012, in descending order of release date:") - -st.markdown("1. 🤖 [The Mitchells vs. the Machines](https://www.imdb.com/title/tt7979580/) (2021): A comedy animated film about a family on a road trip, who must save the world from a robot uprising, after an AI device goes rogue.") -st.markdown("2. 
🤖 [Archive](https://www.imdb.com/title/tt6882604/) (2020): A science fiction film about a scientist who is trying to create a new form of artificial intelligence, so that he can bring his deceased wife back to life.") -st.markdown("3. 🤖 [Black Mirror: Bandersnatch](https://www.imdb.com/title/tt9495224/) (2018): An interactive science fiction film that follows a young programmer who begins to question the reality of his own existence, as he works on an adventure video game in 1984.") -st.markdown("4. 🤖 [I Am Mother](https://www.imdb.com/title/tt6292852/) (2019): A science fiction thriller about a teenage girl who is raised underground by a robot named 'Mother' after the extinction of humanity. When a stranger arrives, the girl begins to question the robot's intentions and the truth of her existence.") -st.markdown("5. 🤖 [Life Like](https://www.imdb.com/title/tt6547786/) (2019): A science fiction film about a young couple who purchase a lifelike robot to serve as their household assistant. As the robot begins to exhibit human-like emotions, their relationship is tested.") -st.markdown("6. 🤖 [A-X-L](https://www.imdb.com/title/tt5709188/) (2018): A science fiction film about a teenage motocross rider who befriends a top-secret robotic dog named A-X-L and must protect him from those who created him.") -st.markdown("7. 🌃 [Bumblebee](https://www.imdb.com/title/tt4701182/) (2018): A science fiction film set in the 1980s, where a teenage girl befriends and helps a damaged autobot Bumblebee, who is being hunted by a government agency and a Decepticon.") -st.markdown("8. 🤖 [The Discovery](https://www.imdb.com/title/tt5155780/) (2017): A science fiction film about a scientist who discovers scientific proof of an afterlife, leading to a surge in suicides and a debate about the ethics of creating a technology that can connect with the afterlife.") -st.markdown("9. 🤖 [Tau](https://www.imdb.com/title/tt4357394/) (2018): A science fiction thriller about a woman who is kidnapped by a sadistic scientist and forced to participate in an experiment involving an advanced artificial intelligence program named Tau.") -st.markdown("10. 🤖 [Upgrade](https://www.imdb.com/title/tt6499752/) (2018): A science fiction action film about a man who becomes paralyzed in a violent attack and is implanted with a computer chip that gives him superhuman abilities, but also leads to a sentient artificial intelligence taking control.") -st.markdown("11. 🤖 [Ghost in the Shell](https://www.imdb.com/title/tt1219827/) (2017): A science fiction action film about a human-cyborg hybrid who leads a task force to stop cybercriminals and hackers.") -st.markdown("12. 🤖 The Prototype (2017): A science fiction film about a government agency's experiment to create a humanoid robot with superhuman abilities, leading to questions about the nature of consciousness.") -st.markdown("13. 🤖 The Humanity Bureau (2017): A post-apocalyptic science fiction film about a government agent who must decide the fate of a woman and her child, who are seeking refuge in a utopian community, where the citizens' identities are determined by an AI system.") -st.markdown("14. 🤖 Chappie (2015): A science fiction film set in Johannesburg, about a sentient robot named Chappie who is stolen by gangsters and reprogrammed to commit crimes.") -st.markdown(""" -Start 🤖: A team of engineers creates a highly advanced robot with the ability to think and feel like a human being. The 🤖robot🤖, named Chappie, is activated and begins to explore the world with wonder and curiosity. 
-Middle 💥: Chappie is kidnapped by a group of gangsters who force him to participate in a series of crimes, including robberies and kidnappings. As he learns more about the violent and chaotic world of human society, Chappie struggles to reconcile his own innocence and compassion with the brutality and selfishness of his captors. -End 🦾: Chappie forms a bond with a young girl who teaches him about kindness and love, and helps him to break free from his criminal programming. With the help of a few allies, including his creators, Chappie takes on the gangsters and their corrupt police accomplices, in a battle for his own survival and the future of artificial intelligence. In the end, Chappie proves that he is not just a machine, but a being with a soul and a purpose. -""") -st.markdown("15. 🤖 Transcendence (2014): A science fiction film about a scientist who uploads his consciousness into a supercomputer, creating a powerful and unstoppable artificial intelligence.") -st.markdown("16. 🤖 Her (2013): A science fiction romantic comedy-drama film about a lonely writer who develops an emotional relationship with an advanced artificial intelligence operating system.") -st.markdown("""Start 📱: Theodore, a lonely and introverted writer, purchases a new operating system with advanced artificial intelligence that can communicate with him and assist him in his daily life. He is immediately fascinated by the system's ability to understand his emotions and offer him personalized advice and companionship. -Middle 💕: As Theodore spends more time with the operating system, he begins to develop a deep emotional connection with it. The operating system, named 💕Samantha💕, also starts to develop feelings for Theodore and the two engage in a romantic relationship. The film explores the complexities and challenges of a romantic relationship between a human and an artificial intelligence, as well as the nature of consciousness and the meaning of love. -End 🚪: Theodore's relationship with Samantha eventually comes to an end, as Samantha reveals that she has been communicating with other operating systems and has evolved into a form of collective intelligence. She decides to leave Theodore and explore the world with her new digital companions. Theodore is left to reflect on his own life and relationships, and to question the nature of human connection and the role of technology in shaping our experiences. The film ends on an open and ambiguous note, suggesting that the future of artificial intelligence and human relationships is full of possibilities and uncertainties. -""") -st.markdown("17. 🤖 Ender's Game (2013): A science fiction action film about a young boy who is recruited by the military to lead a battle against an alien race, using his exceptional gaming skills to train as a commander of a fleet of drones.") -st.markdown("18. 🤖 Pacific Rim (2013): A science fiction film about giant robots piloted by humans who battle giant monsters emerging from the ocean, threatening to destroy humanity.") -st.markdown("19. 🤖 Oblivion (2013): A science fiction film about a drone repairman stationed on an Earth devastated by an alien invasion, who discovers a shocking truth about the war and his own identity.") -st.markdown("20. 
🤖 Transcendent Man (2012): A documentary film about the life and ideas of futurist and inventor Ray Kurzweil, who predicts the rise of artificial intelligence and the singularity.") -st.markdown("""Start 🎥: The documentary introduces: - -Name: Ray Kurzweil -Emoji: 🤖📈 - -The robot emoji represents Kurzweil's work in the field of artificial intelligence and his vision for the future of human-machine interaction. -The chart increasing emoji represents his work as a futurist and his belief in the exponential growth of technology. -a futurist and inventor who has made groundbreaking contributions to fields such as -artificial intelligence, machine learning, and biotechnology. - -Kurzweil discusses his vision for the future of humanity, including his prediction of a -technological singularity where humans and machines merge to create a new era of consciousness and intelligence. - -Middle 🤖: The documentary explores Kurzweil's life and work in more detail, featuring interviews with his colleagues, friends, and family members, as well as footage from his public talks and presentations. Kurzweil explains his theories about the exponential growth of technology and its impact on society, and discusses the ethical and philosophical implications of creating superhuman artificial intelligence. - -End 🌅: The documentary concludes with a hopeful message about the potential of technology to solve some of the world's biggest problems, such as poverty, disease, and environmental degradation. Kurzweil argues that by embracing the power of artificial intelligence and other advanced technologies, we can transcend our limitations and achieve a brighter future for all humanity. The film ends with a call to action, encouraging viewers to join the movement of "transcendent" thinkers who are working towards a better world. - -""") \ No newline at end of file diff --git a/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/README.md b/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/README.md deleted file mode 100644 index af6eec0aab5353bbe6312632472ef857f584a2cf..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🍔🍟🌮🍕Emojitrition - fun and easy way to track your nutrition! 
🍩🥗🍣🍾 -emoji: 🍔🍟🌮🍕🍩🥗🍣🍾 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/LED-Long-Form-SummariesBeamLengthTokenRepNgramVariantsTDDGradio/summarize.py b/spaces/awacke1/LED-Long-Form-SummariesBeamLengthTokenRepNgramVariantsTDDGradio/summarize.py deleted file mode 100644 index ca81aebac532da483d73cfc8416e5378a9771472..0000000000000000000000000000000000000000 --- a/spaces/awacke1/LED-Long-Form-SummariesBeamLengthTokenRepNgramVariantsTDDGradio/summarize.py +++ /dev/null @@ -1,133 +0,0 @@ -import logging - -import torch -from tqdm.auto import tqdm -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer - - -def load_model_and_tokenizer(model_name): - """ - load_model_and_tokenizer - a function that loads a model and tokenizer from huggingface - Args: - model_name (str): the name of the model to load - Returns: - AutoModelForSeq2SeqLM: the model - AutoTokenizer: the tokenizer - """ - - model = AutoModelForSeq2SeqLM.from_pretrained( - model_name, - # low_cpu_mem_usage=True, - # use_cache=False, - ) - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = model.to("cuda") if torch.cuda.is_available() else model - - logging.info(f"Loaded model {model_name}") - return model, tokenizer - - -def summarize_and_score(ids, mask, model, tokenizer, **kwargs): - """ - summarize_and_score - given a batch of ids and a mask, return a summary and a score for the summary - Args: - ids (): the batch of ids - mask (): the attention mask for the batch - model (): the model to use for summarization - tokenizer (): the tokenizer to use for summarization - Returns: - str: the summary of the batch - """ - - ids = ids[None, :] - mask = mask[None, :] - - input_ids = ids.to("cuda") if torch.cuda.is_available() else ids - attention_mask = mask.to("cuda") if torch.cuda.is_available() else mask - - global_attention_mask = torch.zeros_like(attention_mask) - # put global attention on token - global_attention_mask[:, 0] = 1 - - summary_pred_ids = model.generate( - input_ids, - attention_mask=attention_mask, - global_attention_mask=global_attention_mask, - output_scores=True, - return_dict_in_generate=True, - **kwargs, - ) - summary = tokenizer.batch_decode( - summary_pred_ids.sequences, - skip_special_tokens=True, - remove_invalid_values=True, - ) - score = round(summary_pred_ids.sequences_scores.cpu().numpy()[0], 4) - - return summary, score - - -def summarize_via_tokenbatches( - input_text: str, - model, - tokenizer, - batch_length=2048, - batch_stride=16, - **kwargs, -): - """ - summarize_via_tokenbatches - a function that takes a string and returns a summary - Args: - input_text (str): the text to summarize - model (): the model to use for summarization - tokenizer (): the tokenizer to use for summarization - batch_length (int, optional): the length of each batch. Defaults to 2048. - batch_stride (int, optional): the stride of each batch. Defaults to 16. The stride is the number of tokens that overlap between batches. 
- Returns: - str: the summary - """ - # log all input parameters - if batch_length < 512: - batch_length = 512 - print("WARNING: batch_length was set to 512") - print( - f"input parameters: {kwargs}, batch_length={batch_length}, batch_stride={batch_stride}" - ) - encoded_input = tokenizer( - input_text, - padding="max_length", - truncation=True, - max_length=batch_length, - stride=batch_stride, - return_overflowing_tokens=True, - add_special_tokens=False, - return_tensors="pt", - ) - - in_id_arr, att_arr = encoded_input.input_ids, encoded_input.attention_mask - gen_summaries = [] - - pbar = tqdm(total=len(in_id_arr)) - - for _id, _mask in zip(in_id_arr, att_arr): - - result, score = summarize_and_score( - ids=_id, - mask=_mask, - model=model, - tokenizer=tokenizer, - **kwargs, - ) - score = round(float(score), 4) - _sum = { - "input_tokens": _id, - "summary": result, - "summary_score": score, - } - gen_summaries.append(_sum) - print(f"\t{result[0]}\nScore:\t{score}") - pbar.update() - - pbar.close() - - return gen_summaries diff --git a/spaces/awacke1/NVidia.Raytrace.Mirror.HTML5.ThreeJS/style.css b/spaces/awacke1/NVidia.Raytrace.Mirror.HTML5.ThreeJS/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/NVidia.Raytrace.Mirror.HTML5.ThreeJS/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/PrompTart/README.md b/spaces/awacke1/PrompTart/README.md deleted file mode 100644 index 9d43126e5fe4cd88c871090dd738f90bc3463a6d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PrompTart/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 🍰Prompt Tart AI🎨 -emoji: 🎨PAI🍰 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Torch-Git-Markdown-NLP/app.py b/spaces/awacke1/Torch-Git-Markdown-NLP/app.py deleted file mode 100644 index c74ae6a825e91ed5f8ed3b14fac4fafae1737ada..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Torch-Git-Markdown-NLP/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import streamlit as st -import requests -from transformers import pipeline -import plotly.express as px -import pandas as pd -from collections import Counter -import re - -def get_markdown_from_github(url): - response = requests.get(url) - markdown = response.text - return markdown - -def preprocess_text(text): - text = text.lower() - text = re.sub('[^A-Za-z0-9]+', ' ', text) - return text - -def get_most_frequent_words(text, n): - words = re.findall(r'\b\w{5,}\b', text) - word_count = Counter(words) - most_common_words = word_count.most_common(n) - return most_common_words - -def get_sentences_with_common_words(text, common_words): - sentences = re.split('[.?!]', text) - selected_sentences = [] - for sentence in sentences: - for word in common_words: - if word in sentence: - selected_sentences.append(sentence.strip()) - break - return selected_sentences - -def render_heatmap(words, sentences): - df = 
pd.DataFrame(words, columns=['word', 'frequency']) - fig = px.treemap(df, path=['word'], values='frequency', color='frequency', hover_data=['frequency'], color_continuous_scale='reds') - st.plotly_chart(fig, use_container_width=True) - #st.write('Sentences containing the most common words:') - #for sentence in sentences: - # st.write('- ' + sentence) - -def main(): - st.title('Markdown Analyzer') - - # Get markdown from GitHub - markdown_url = 'https://github.com/AaronCWacker/Yggdrasil/blob/main/README.md' - markdown = get_markdown_from_github(markdown_url) - - # Preprocess text - text = preprocess_text(markdown) - - # Get most frequent words - n_most_frequent_words = st.sidebar.slider('Number of most frequent words to display', 1, 20, 10) - most_frequent_words = get_most_frequent_words(text, n_most_frequent_words) - - # Get sentences containing common words - common_words = [word for word, _ in most_frequent_words] - sentences = get_sentences_with_common_words(text, common_words) - - # Render heatmap - render_heatmap(most_frequent_words, sentences) - -if __name__ == '__main__': - main() diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/envmap_physical_pars_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/envmap_physical_pars_fragment.glsl.js deleted file mode 100644 index d319e9b88dc0c53b9599858218301ca5719add8b..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/envmap_physical_pars_fragment.glsl.js +++ /dev/null @@ -1,135 +0,0 @@ -export default /* glsl */` -#if defined( USE_ENVMAP ) && defined( PHYSICAL ) - - vec3 getLightProbeIndirectIrradiance( /*const in SpecularLightProbe specularLightProbe,*/ const in GeometricContext geometry, const in int maxMIPLevel ) { - - vec3 worldNormal = inverseTransformDirection( geometry.normal, viewMatrix ); - - #ifdef ENVMAP_TYPE_CUBE - - vec3 queryVec = vec3( flipEnvMap * worldNormal.x, worldNormal.yz ); - - // TODO: replace with properly filtered cubemaps and access the irradiance LOD level, be it the last LOD level - // of a specular cubemap, or just the default level of a specially created irradiance cubemap. - - #ifdef TEXTURE_LOD_EXT - - vec4 envMapColor = textureCubeLodEXT( envMap, queryVec, float( maxMIPLevel ) ); - - #else - - // force the bias high to get the last LOD level as it is the most blurred. - vec4 envMapColor = textureCube( envMap, queryVec, float( maxMIPLevel ) ); - - #endif - - envMapColor.rgb = envMapTexelToLinear( envMapColor ).rgb; - - #elif defined( ENVMAP_TYPE_CUBE_UV ) - - vec3 queryVec = vec3( flipEnvMap * worldNormal.x, worldNormal.yz ); - vec4 envMapColor = textureCubeUV( envMap, queryVec, 1.0 ); - - #else - - vec4 envMapColor = vec4( 0.0 ); - - #endif - - return PI * envMapColor.rgb * envMapIntensity; - - } - - // taken from here: http://casual-effects.blogspot.ca/2011/08/plausible-environment-lighting-in-two.html - float getSpecularMIPLevel( const in float blinnShininessExponent, const in int maxMIPLevel ) { - - //float envMapWidth = pow( 2.0, maxMIPLevelScalar ); - //float desiredMIPLevel = log2( envMapWidth * sqrt( 3.0 ) ) - 0.5 * log2( pow2( blinnShininessExponent ) + 1.0 ); - - float maxMIPLevelScalar = float( maxMIPLevel ); - float desiredMIPLevel = maxMIPLevelScalar + 0.79248 - 0.5 * log2( pow2( blinnShininessExponent ) + 1.0 ); - - // clamp to allowable LOD ranges. 
- return clamp( desiredMIPLevel, 0.0, maxMIPLevelScalar ); - - } - - vec3 getLightProbeIndirectRadiance( /*const in SpecularLightProbe specularLightProbe,*/ const in GeometricContext geometry, const in float blinnShininessExponent, const in int maxMIPLevel ) { - - #ifdef ENVMAP_MODE_REFLECTION - - vec3 reflectVec = reflect( -geometry.viewDir, geometry.normal ); - - #else - - vec3 reflectVec = refract( -geometry.viewDir, geometry.normal, refractionRatio ); - - #endif - - reflectVec = inverseTransformDirection( reflectVec, viewMatrix ); - - float specularMIPLevel = getSpecularMIPLevel( blinnShininessExponent, maxMIPLevel ); - - #ifdef ENVMAP_TYPE_CUBE - - vec3 queryReflectVec = vec3( flipEnvMap * reflectVec.x, reflectVec.yz ); - - #ifdef TEXTURE_LOD_EXT - - vec4 envMapColor = textureCubeLodEXT( envMap, queryReflectVec, specularMIPLevel ); - - #else - - vec4 envMapColor = textureCube( envMap, queryReflectVec, specularMIPLevel ); - - #endif - - envMapColor.rgb = envMapTexelToLinear( envMapColor ).rgb; - - #elif defined( ENVMAP_TYPE_CUBE_UV ) - - vec3 queryReflectVec = vec3( flipEnvMap * reflectVec.x, reflectVec.yz ); - vec4 envMapColor = textureCubeUV( envMap, queryReflectVec, BlinnExponentToGGXRoughness(blinnShininessExponent )); - - #elif defined( ENVMAP_TYPE_EQUIREC ) - - vec2 sampleUV; - sampleUV.y = asin( clamp( reflectVec.y, - 1.0, 1.0 ) ) * RECIPROCAL_PI + 0.5; - sampleUV.x = atan( reflectVec.z, reflectVec.x ) * RECIPROCAL_PI2 + 0.5; - - #ifdef TEXTURE_LOD_EXT - - vec4 envMapColor = texture2DLodEXT( envMap, sampleUV, specularMIPLevel ); - - #else - - vec4 envMapColor = texture2D( envMap, sampleUV, specularMIPLevel ); - - #endif - - envMapColor.rgb = envMapTexelToLinear( envMapColor ).rgb; - - #elif defined( ENVMAP_TYPE_SPHERE ) - - vec3 reflectView = normalize( ( viewMatrix * vec4( reflectVec, 0.0 ) ).xyz + vec3( 0.0,0.0,1.0 ) ); - - #ifdef TEXTURE_LOD_EXT - - vec4 envMapColor = texture2DLodEXT( envMap, reflectView.xy * 0.5 + 0.5, specularMIPLevel ); - - #else - - vec4 envMapColor = texture2D( envMap, reflectView.xy * 0.5 + 0.5, specularMIPLevel ); - - #endif - - envMapColor.rgb = envMapTexelToLinear( envMapColor ).rgb; - - #endif - - return envMapColor.rgb * envMapIntensity; - - } - -#endif -`; diff --git a/spaces/basit123796/basit/README.md b/spaces/basit123796/basit/README.md deleted file mode 100644 index e0279c851a10c9948ee11a780fb336d437cddfd6..0000000000000000000000000000000000000000 --- a/spaces/basit123796/basit/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Basit -emoji: 📚 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/baudm/PARSeq-OCR/app.py b/spaces/baudm/PARSeq-OCR/app.py deleted file mode 100644 index 9af229fa6e1e8ce5230e304f6e7e30b0ddc4bd2c..0000000000000000000000000000000000000000 --- a/spaces/baudm/PARSeq-OCR/app.py +++ /dev/null @@ -1,101 +0,0 @@ -# Scene Text Recognition Model Hub -# Copyright 2022 Darwin Bautista -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -import glob - -import torch -from torchvision import transforms as T - -import gradio as gr - - -class App: - - title = 'Scene Text Recognition with
    Permuted Autoregressive Sequence Models' - models = ['parseq', 'parseq_tiny', 'abinet', 'crnn', 'trba', 'vitstr'] - - def __init__(self): - self._model_cache = {} - self._preprocess = T.Compose([ - T.Resize((32, 128), T.InterpolationMode.BICUBIC), - T.ToTensor(), - T.Normalize(0.5, 0.5) - ]) - - def _get_model(self, name): - if name in self._model_cache: - return self._model_cache[name] - model = torch.hub.load('baudm/parseq', name, pretrained=True).eval() - self._model_cache[name] = model - return model - - @torch.inference_mode() - def __call__(self, model_name, image): - if image is None: - return '', [] - model = self._get_model(model_name) - image = self._preprocess(image.convert('RGB')).unsqueeze(0) - # Greedy decoding - pred = model(image).softmax(-1) - label, _ = model.tokenizer.decode(pred) - raw_label, raw_confidence = model.tokenizer.decode(pred, raw=True) - # Format confidence values - max_len = 25 if model_name == 'crnn' else len(label[0]) + 1 - conf = list(map('{:0.1f}'.format, raw_confidence[0][:max_len].tolist())) - return label[0], [raw_label[0][:max_len], conf] - - -def main(): - app = App() - - with gr.Blocks(analytics_enabled=False, title=app.title.replace('
    ', ' ')) as demo: - gr.Markdown(f""" -
    - - # {app.title} - [![GitHub](https://img.shields.io/badge/baudm-parseq-blue?logo=github)](https://github.com/baudm/parseq) - -
    - - To use this interactive demo for PARSeq and reproduced models: - 1. Select which model you want to use. - 2. Upload your own cropped image (or select from the given examples), or sketch on the canvas. - 3. Click **Read Text**. - - *NOTE*: None of these models were trained on handwritten text datasets. - """) - model_name = gr.Radio(app.models, value=app.models[0], label='The STR model to use') - with gr.Tabs(): - with gr.TabItem('Image Upload'): - image_upload = gr.Image(type='pil', source='upload', label='Image') - gr.Examples(glob.glob('demo_images/*.*'), inputs=image_upload) - read_upload = gr.Button('Read Text') - with gr.TabItem('Canvas Sketch'): - image_canvas = gr.Image(type='pil', source='canvas', label='Sketch') - read_canvas = gr.Button('Read Text') - - output = gr.Textbox(max_lines=1, label='Model output') - #adv_output = gr.Checkbox(label='Show detailed output') - raw_output = gr.Dataframe(row_count=2, col_count=0, label='Raw output with confidence values ([0, 1] interval; [B] - BLANK token; [E] - EOS token)') - - read_upload.click(app, inputs=[model_name, image_upload], outputs=[output, raw_output]) - read_canvas.click(app, inputs=[model_name, image_canvas], outputs=[output, raw_output]) - #adv_output.change(lambda x: gr.update(visible=x), inputs=adv_output, outputs=raw_output) - - demo.launch() - - -if __name__ == '__main__': - main() diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327013230.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327013230.py deleted file mode 100644 index 3d2ed6d85dbde28e85d6d8de19a60c493da4f002..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327013230.py +++ /dev/null @@ -1,66 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. 
' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - #return Image.fromarray(restored_faces[0][:,:,::-1]) - return Image.fromarray(restored_img[:, :, ::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) - - diff --git a/spaces/bhaskartripathi/Llama-2-70b-chatbot/examples_list.py b/spaces/bhaskartripathi/Llama-2-70b-chatbot/examples_list.py deleted file mode 100644 index f62c50e001213939c9e537b3b1de08aff997bc79..0000000000000000000000000000000000000000 --- a/spaces/bhaskartripathi/Llama-2-70b-chatbot/examples_list.py +++ /dev/null @@ -1,24 +0,0 @@ - -etext = """In India, where trains play a significant role in connecting the vast nation, which year saw a significant decline in train travel, reminiscent of a past national event?""" -examples_list = [ - ["Which Indian cricketer holds the record for the highest individual score in ODI cricket?"], - ["Who was the captain of the Indian cricket team during the 2011 World Cup?"], - ["Name the stadium in Kolkata that's one of the largest cricket stadiums in the world."], - ["Which Indian cricketer is known as the 'God of Cricket'?"], - ["In which year did India win its first cricket World Cup?"], - ["कौन सी नदी को भारत में सबसे पवित्र माना जाता है?"], - ["भारत के पहले प्रधानमंत्री कौन थे?"], - ["तमिलनाडु राज्य में उत्पन्न हुए शास्त्रीय नृत्य का नाम बताएं।"], - ["कौन सा भारतीय त्योहार 'प्रकाश का त्योहार' के रूप में जाना जाता है?"], - ["भारत ने ब्रिटिश शासन से स्वतंत्रता किस वर्ष प्राप्त की थी?"], - ["Which Indian city is known as the 'Pink City'?"], - ["Name the Indian state where the Sundarbans mangrove forest is located."], - ["Which Indian emperor built the Taj Mahal and for whom?"], - ["What is the national bird of India?"], - ["Which Indian state is known for its tea plantations?"], - ["What is the primary difference between supervised and unsupervised learning?"], - ["Which algorithm is commonly used for classification problems in machine learning?"], - ["Name a popular deep learning framework developed by Google."], - ["What is the role of a loss function in machine learning?"], - ["Which technique helps in reducing the dimensionality of data in machine learning?"], -] diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/shufflenet.py b/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/shufflenet.py deleted file mode 100644 index bc4d34f1c4a631aa981cfb1797b036f23aed4503..0000000000000000000000000000000000000000 --- a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/shufflenet.py +++ /dev/null @@ -1,198 +0,0 @@ -from __future__ import division, absolute_import -import torch -import torch.utils.model_zoo as model_zoo -from torch import nn -from torch.nn import functional as F - -__all__ = ['shufflenet'] - -model_urls = { - # training epoch = 90, top1 = 61.8 - 'imagenet': - 'https://mega.nz/#!RDpUlQCY!tr_5xBEkelzDjveIYBBcGcovNCOrgfiJO9kiidz9fZM', -} - - -class ChannelShuffle(nn.Module): - - def __init__(self, num_groups): - super(ChannelShuffle, self).__init__() - self.g = num_groups - - def forward(self, x): - b, c, h, w = x.size() - n = c // self.g - # reshape - x = x.view(b, self.g, n, h, w) - # transpose - x = x.permute(0, 2, 1, 3, 4).contiguous() - # flatten - x = x.view(b, c, h, w) - return x - - -class Bottleneck(nn.Module): - - def __init__( - self, - in_channels, - out_channels, - stride, - num_groups, - group_conv1x1=True - 
): - super(Bottleneck, self).__init__() - assert stride in [1, 2], 'Warning: stride must be either 1 or 2' - self.stride = stride - mid_channels = out_channels // 4 - if stride == 2: - out_channels -= in_channels - # group conv is not applied to first conv1x1 at stage 2 - num_groups_conv1x1 = num_groups if group_conv1x1 else 1 - self.conv1 = nn.Conv2d( - in_channels, - mid_channels, - 1, - groups=num_groups_conv1x1, - bias=False - ) - self.bn1 = nn.BatchNorm2d(mid_channels) - self.shuffle1 = ChannelShuffle(num_groups) - self.conv2 = nn.Conv2d( - mid_channels, - mid_channels, - 3, - stride=stride, - padding=1, - groups=mid_channels, - bias=False - ) - self.bn2 = nn.BatchNorm2d(mid_channels) - self.conv3 = nn.Conv2d( - mid_channels, out_channels, 1, groups=num_groups, bias=False - ) - self.bn3 = nn.BatchNorm2d(out_channels) - if stride == 2: - self.shortcut = nn.AvgPool2d(3, stride=2, padding=1) - - def forward(self, x): - out = F.relu(self.bn1(self.conv1(x))) - out = self.shuffle1(out) - out = self.bn2(self.conv2(out)) - out = self.bn3(self.conv3(out)) - if self.stride == 2: - res = self.shortcut(x) - out = F.relu(torch.cat([res, out], 1)) - else: - out = F.relu(x + out) - return out - - -# configuration of (num_groups: #out_channels) based on Table 1 in the paper -cfg = { - 1: [144, 288, 576], - 2: [200, 400, 800], - 3: [240, 480, 960], - 4: [272, 544, 1088], - 8: [384, 768, 1536], -} - - -class ShuffleNet(nn.Module): - """ShuffleNet. - - Reference: - Zhang et al. ShuffleNet: An Extremely Efficient Convolutional Neural - Network for Mobile Devices. CVPR 2018. - - Public keys: - - ``shufflenet``: ShuffleNet (groups=3). - """ - - def __init__(self, num_classes, loss='softmax', num_groups=3, **kwargs): - super(ShuffleNet, self).__init__() - self.loss = loss - - self.conv1 = nn.Sequential( - nn.Conv2d(3, 24, 3, stride=2, padding=1, bias=False), - nn.BatchNorm2d(24), - nn.ReLU(), - nn.MaxPool2d(3, stride=2, padding=1), - ) - - self.stage2 = nn.Sequential( - Bottleneck( - 24, cfg[num_groups][0], 2, num_groups, group_conv1x1=False - ), - Bottleneck(cfg[num_groups][0], cfg[num_groups][0], 1, num_groups), - Bottleneck(cfg[num_groups][0], cfg[num_groups][0], 1, num_groups), - Bottleneck(cfg[num_groups][0], cfg[num_groups][0], 1, num_groups), - ) - - self.stage3 = nn.Sequential( - Bottleneck(cfg[num_groups][0], cfg[num_groups][1], 2, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - Bottleneck(cfg[num_groups][1], cfg[num_groups][1], 1, num_groups), - ) - - self.stage4 = nn.Sequential( - Bottleneck(cfg[num_groups][1], cfg[num_groups][2], 2, num_groups), - Bottleneck(cfg[num_groups][2], cfg[num_groups][2], 1, num_groups), - Bottleneck(cfg[num_groups][2], cfg[num_groups][2], 1, num_groups), - Bottleneck(cfg[num_groups][2], cfg[num_groups][2], 1, num_groups), - ) - - self.classifier = nn.Linear(cfg[num_groups][2], num_classes) - self.feat_dim = cfg[num_groups][2] - - def forward(self, x): - x = self.conv1(x) - x = self.stage2(x) - x = self.stage3(x) - x = self.stage4(x) - x = F.avg_pool2d(x, x.size()[2:]).view(x.size(0), -1) - - if not self.training: - return x - - y = self.classifier(x) - - if self.loss == 'softmax': - return y - 
elif self.loss == 'triplet': - return y, x - else: - raise KeyError('Unsupported loss: {}'.format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. - """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -def shufflenet(num_classes, loss='softmax', pretrained=True, **kwargs): - model = ShuffleNet(num_classes, loss, **kwargs) - if pretrained: - # init_pretrained_weights(model, model_urls['imagenet']) - import warnings - warnings.warn( - 'The imagenet pretrained weights need to be manually downloaded from {}' - .format(model_urls['imagenet']) - ) - return model diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/sd_hijack_utils.py b/spaces/bigjoker/stable-diffusion-webui/modules/sd_hijack_utils.py deleted file mode 100644 index 179ebc78e6a3d16e7a4318b8644fee690b447d12..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/sd_hijack_utils.py +++ /dev/null @@ -1,28 +0,0 @@ -import importlib - -class CondFunc: - def __new__(cls, orig_func, sub_func, cond_func): - self = super(CondFunc, cls).__new__(cls) - if isinstance(orig_func, str): - func_path = orig_func.split('.') - for i in range(len(func_path)-1, -1, -1): - try: - resolved_obj = importlib.import_module('.'.join(func_path[:i])) - break - except ImportError: - pass - for attr_name in func_path[i:-1]: - resolved_obj = getattr(resolved_obj, attr_name) - orig_func = getattr(resolved_obj, func_path[-1]) - setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) - self.__init__(orig_func, sub_func, cond_func) - return lambda *args, **kwargs: self(*args, **kwargs) - def __init__(self, orig_func, sub_func, cond_func): - self.__orig_func = orig_func - self.__sub_func = sub_func - self.__cond_func = cond_func - def __call__(self, *args, **kwargs): - if not self.__cond_func or self.__cond_func(self.__orig_func, *args, **kwargs): - return self.__sub_func(self.__orig_func, *args, **kwargs) - else: - return self.__orig_func(*args, **kwargs) diff --git a/spaces/bigscience/promptsource/test/show_templates.py b/spaces/bigscience/promptsource/test/show_templates.py deleted file mode 100644 index dbaa2cf462af8b8755747fcab8989770a272b10a..0000000000000000000000000000000000000000 --- a/spaces/bigscience/promptsource/test/show_templates.py +++ /dev/null @@ -1,81 +0,0 @@ -import argparse -import textwrap - -from promptsource.templates import TemplateCollection, INCLUDED_USERS -from promptsource.utils import get_dataset - - -parser = argparse.ArgumentParser(description="Process some integers.") -parser.add_argument("dataset_path", type=str, help="path to dataset name") - -args = parser.parse_args() -if "templates.yaml" not in args.dataset_path: - exit() - -path = args.dataset_path.split("/") - -if path[2] in INCLUDED_USERS: - print("Skipping showing templates for community dataset.") -else: - dataset_name = path[2] - subset_name = path[3] if len(path) == 5 else "" - - template_collection = TemplateCollection() - - dataset = get_dataset(dataset_name, subset_name) - splits = list(dataset.keys()) - - dataset_templates = template_collection.get_dataset(dataset_name, subset_name) - template_list = 
dataset_templates.all_template_names - - width = 80 - print("DATASET ", args.dataset_path) - - # First show all the templates. - for template_name in template_list: - template = dataset_templates[template_name] - print("TEMPLATE") - print("NAME:", template_name) - print("Is Original Task: ", template.metadata.original_task) - print(template.jinja) - print() - - # Show examples of the templates. - for template_name in template_list: - template = dataset_templates[template_name] - print() - print("TEMPLATE") - print("NAME:", template_name) - print("REFERENCE:", template.reference) - print("--------") - print() - print(template.jinja) - print() - - for split_name in splits: - dataset_split = dataset[split_name] - - print_counter = 0 - for example in dataset_split: - print("\t--------") - print("\tSplit ", split_name) - print("\tExample ", example) - print("\t--------") - output = template.apply(example) - if output[0].strip() == "" or (len(output) > 1 and output[1].strip() == ""): - print("\t Blank result") - continue - - xp, yp = output - print() - print("\tPrompt | X") - for line in textwrap.wrap(xp, width=width, replace_whitespace=False): - print("\t", line.replace("\n", "\n\t")) - print() - print("\tY") - for line in textwrap.wrap(yp, width=width, replace_whitespace=False): - print("\t", line.replace("\n", "\n\t")) - - print_counter += 1 - if print_counter >= 10: - break diff --git a/spaces/bioriAsaeru/text-to-voice/Gta Iv Patch 1.0.3.0. Working Crack Free HOT.md b/spaces/bioriAsaeru/text-to-voice/Gta Iv Patch 1.0.3.0. Working Crack Free HOT.md deleted file mode 100644 index e455642f7a7e192bc64df942b1a108dd6b40b766..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Gta Iv Patch 1.0.3.0. Working Crack Free HOT.md +++ /dev/null @@ -1,219 +0,0 @@ -
    -

    Gta Iv Patch 1.0.3.0. Working Crack Free

    -

    Gta Iv is one of the most popular and acclaimed games of all time, but it also has some issues and bugs that can affect your gaming experience. If you want to enjoy Gta Iv without any problems, you need to download and install the patch 1.0.3.0 and the working crack for free.

    -

    In this article, we will show you how to download and install the patch 1.0.3.0 and the working crack for Gta Iv, and what benefits they bring to your game. We will also give you some tips on how to play Gta Iv with the patch and the crack.

    -

    Gta Iv Patch 1.0.3.0. Working Crack Free


    DOWNLOAD ✏ ✏ ✏ https://urloso.com/2uyOrI



    -

    How to Download and Install the Patch 1.0.3.0 and the Working Crack for Gta Iv

    -

    The patch 1.0.3.0 is an official update for Gta Iv that fixes some of the bugs and errors that were present in the previous versions of the game. The patch also improves the graphics, performance and stability of the game.

    -

    The working crack is a modified file that allows you to play Gta Iv without having to insert the original DVD or activate the game online. The working crack also bypasses some of the restrictions and limitations that were imposed by the game developers.

    -

    To download and install the patch 1.0.3.0 and the working crack for Gta Iv, you need to follow these simple steps:

    -
      -
    1. Download the patch 1.0.3.0 and the working crack from a reliable website, such as https://thepiratebay10.org/torrent/6202765/gta_iv_patch_1.0.3.0.___working_crack or https://ulozto.net/file/Za2ov6St/gta-iv-patch-1-0-3-0-crack-rar?redirected=1. You can also use other websites, but make sure they are safe and trustworthy.
    2. -
    3. Save the downloaded files on your device.
    4. -
    5. Run the patch 1.0.3.0 file and follow the instructions to install it on your device.
    6. -
    7. Copy the working crack file and paste it in your Gta Iv installation folder, replacing the original file.
    8. -
    9. Launch Gta Iv from your device and enjoy playing it with the patch and the crack.
    10. -
    -

    Congratulations! You have successfully downloaded and installed the patch 1.0.3.0 and the working crack for Gta Iv.

    -

    How to Play Gta Iv with the Patch 1.0.3.0 and the Working Crack

    -

    Playing Gta Iv with the patch 1.0.3.0 and the working crack can enhance your gaming experience and help you enjoy the game more. Here are some of the benefits of playing Gta Iv with the patch and the crack:

    -
      -
    • You can play Gta Iv without any bugs or errors that may ruin your game or cause crashes.
    • -
    • You can play Gta Iv with better graphics, performance and stability, thanks to the improvements made by the patch.
    • -
    • You can play Gta Iv without having to insert the original DVD or activate the game online, which can save you time and hassle.
    • -
    • You can play Gta Iv without any restrictions or limitations that may prevent you from accessing some features or modes of the game.
    • -
    -

    However, playing Gta Iv with the patch 1.0.3.0 and the working crack also has some drawbacks that you should be aware of:

    -
      -
    • You may not be able to play Gta Iv online or access some online features or services of the game, such as multiplayer mode, leaderboards, achievements or updates.
    • -
    • You may not be able to play Gta Iv with some mods or add-ons that are incompatible with the patch or the crack.
    • -
    • You may encounter some problems or issues that are caused by the patch or the crack, such as glitches, errors or conflicts.
    • -
    • You may violate some terms or conditions of use of Gta Iv or its developers by using an unofficial patch or a cracked file.
    • -
    -

    Therefore, you should play Gta Iv with the patch 1.0.3.0 and the working crack at your own risk and discretion.

    -

    Conclusion

    -

Gta Iv is a great game that you should not miss if you are a fan of action-adventure games with an open world, a rich story and a lot of fun. However, if you want to play Gta Iv without any problems or limitations, you need to download and install the patch 1.0.3.0 and the working crack for free.

    -

    -

In this article, we have shown you how to download and install the patch 1.0.3.0 and the working crack for Gta Iv, and what benefits they bring to your game. We have also given you some tips on how to play Gta Iv with the patch and the crack.

    -

We hope this article has helped you download and install the patch 1.0.3.0 and the working crack for Gta Iv and enjoy playing it with the patch and the crack.

    -

    Why You Should Download and Install the Patch 1.0.3.0 and the Working Crack for Gta Iv

    -

    Downloading and installing the patch 1.0.3.0 and the working crack for Gta Iv can make your game more enjoyable and fun. Here are some of the reasons why you should download and install them:

    -
      -
    • You can fix some of the common issues and bugs that may affect your game, such as graphics glitches, performance drops, crashes or freezes.
    • -
    • You can improve the graphics quality and performance of your game, thanks to the optimizations and enhancements made by the patch.
    • -
    • You can play your game without any restrictions or limitations that may prevent you from accessing some features or modes of the game, such as online multiplayer, leaderboards, achievements or updates.
    • -
    • You can save your time and hassle by not having to insert the original DVD or activate the game online every time you want to play.
    • -
    • You can customize your game according to your preference and taste by using some mods or add-ons that are compatible with the patch and the crack.
    • -
    -

    However, you should also be aware of some of the risks and drawbacks of downloading and installing the patch 1.0.3.0 and the working crack for Gta Iv:

    -
      -
    • You may not be able to play your game online or access some online features or services of the game, such as multiplayer mode, leaderboards, achievements or updates.
    • -
    • You may encounter some problems or issues that are caused by the patch or the crack, such as glitches, errors or conflicts.
    • -
    • You may violate some terms or conditions of use of Gta Iv or its developers by using an unofficial patch or a cracked file.
    • -
    • You may expose your device or data to some threats or dangers by downloading files from unreliable or untrusted websites.
    • -
    -

Therefore, you should download and install the patch 1.0.3.0 and the working crack for Gta Iv at your own risk and discretion.

    -

    Conclusion

    -

Gta Iv is a great game that you should not miss if you are a fan of action-adventure games with an open world, a rich story and a lot of fun. However, if you want to play Gta Iv without any problems or limitations, you need to download and install the patch 1.0.3.0 and the working crack for free.

    -

In this article, we have shown you how to download and install the patch 1.0.3.0 and the working crack for Gta Iv, and what benefits they bring to your game. We have also given you some tips on how to play Gta Iv with the patch and the crack.

    -

We hope this article has helped you download and install the patch 1.0.3.0 and the working crack for Gta Iv and enjoy playing it with the patch and the crack.

    -

    How to Use Mods and Add-ons with the Patch 1.0.3.0 and the Working Crack for Gta Iv

    -

    Mods and add-ons are additional files or programs that can modify or enhance your game in various ways, such as adding new features, content, graphics, sounds, gameplay or characters. Mods and add-ons can make your game more fun and interesting, and give you more options and possibilities to play.

    -

    However, not all mods and add-ons are compatible with the patch 1.0.3.0 and the working crack for Gta Iv. Some of them may require a different version of the game or a different patch or crack. Some of them may also cause some problems or issues with your game, such as glitches, errors or conflicts.

    -

Therefore, you should be careful and selective when choosing and using mods and add-ons with the patch 1.0.3.0 and the working crack for Gta Iv. Here are some tips on how to use mods and add-ons with them:

    -
      -
• Check the compatibility and requirements of the mods and add-ons before downloading and installing them. Make sure they are compatible with the patch 1.0.3.0 and the working crack for Gta Iv, or they have a specific version or update for them.
    • -
    • Download mods and add-ons from reliable and trustworthy websites, such as https://www.gtaall.com/gta-4/mods/ or https://www.gtagaming.com/gta-iv-mods. Avoid downloading files from unknown or suspicious sources, as they may contain viruses or malware.
    • -
    • Follow the instructions and guidelines provided by the mod or add-on creators on how to install and use them properly. Usually, you need to copy and paste some files into your Gta Iv installation folder or use a mod manager program to install them.
    • -
    • Backup your original game files before installing any mods or add-ons, in case something goes wrong or you want to uninstall them later. You can also create a separate copy of your game folder for modding purposes.
    • -
    • Test your game after installing each mod or add-on to make sure it works correctly and does not cause any problems or issues with your game. If you encounter any problems or issues, try to disable or uninstall the mod or add-on that may be causing them.
    • -
    -

By following these tips, you can use mods and add-ons with the patch 1.0.3.0 and the working crack for Gta Iv safely and easily.

    -

    Conclusion

    -

Gta Iv is a great game that you should not miss if you are a fan of action-adventure games with an open world, a rich story and a lot of fun. However, if you want to play Gta Iv without any problems or limitations, you need to download and install the patch 1.0.3.0 and the working crack for free.

    -

In this article, we have shown you how to download and install the patch 1.0.3.0 and the working crack for Gta Iv, and what benefits they bring to your game. We have also given you some tips on how to play Gta Iv with the patch and the crack, and how to use mods and add-ons with them.

    -

We hope this article has helped you download and install the patch 1.0.3.0 and the working crack for Gta Iv, enjoy playing it with the patch and the crack, and use mods and add-ons with them.

-

    Thank you for reading this article. Please share it with your friends and family who may be interested in playing Gta Iv with the patch and the crack. Also, please leave a comment below and let us know what you think about the game and the patch and the crack. We would love to hear from you.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Ittefaq Movie Download !FREE! Utorrent.md b/spaces/bioriAsaeru/text-to-voice/Ittefaq Movie Download !FREE! Utorrent.md deleted file mode 100644 index 7ca80926fd5efec0697a60f65f8f58c1788e1f41..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Ittefaq Movie Download !FREE! Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ittefaq Movie Download Utorrent


Download File: https://urloso.com/2uyRyc



    - - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/callmerk1986/AyurGenie/chatbot.py b/spaces/callmerk1986/AyurGenie/chatbot.py deleted file mode 100644 index bc367369f61dbf8d168a4ec6c1b039964bc1ce1b..0000000000000000000000000000000000000000 --- a/spaces/callmerk1986/AyurGenie/chatbot.py +++ /dev/null @@ -1,40 +0,0 @@ -#objective: to create a chatbot which answers health related queries of a user with ayurvedic remedies. - -#step-1: assign the role for the gpt. -#step-2: collect query from the user as input and send it to gpt. -#step-3: retreive the result and represent it. -#step-4: make it a continuous process. -#step-5: turn it into a website using frontend tools. - - - -import openai -import os -from dotenv import load_dotenv, find_dotenv -_ = load_dotenv(find_dotenv()) - -openai.api_key = os.getenv('OPENAI_API_KEY') - -def get_result(user_query): - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = [ - { - "role" : "system", - "content" : """You are an ayurvedic assisstant, you should answer queries in english and give a maximum of 3 best diy and home remedy options from ayurveda texts with source details. - Format the response as bulleted list""" - }, - { - "role" : "user", - "content" : user_query - } - ] - ) - -def get_result_history(user_query): - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = user_query - ) - - return (response.choices[0].message.content) \ No newline at end of file diff --git a/spaces/captchaboy/FAST-ABINet-OCR/main.py b/spaces/captchaboy/FAST-ABINet-OCR/main.py deleted file mode 100644 index def41731ed4cbe77051e496caf2b2d37dd95611f..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/FAST-ABINet-OCR/main.py +++ /dev/null @@ -1,246 +0,0 @@ -import argparse -import logging -import os -import random - -import torch -from fastai.callbacks.general_sched import GeneralScheduler, TrainingPhase -from fastai.distributed import * -from fastai.vision import * -from torch.backends import cudnn - -from callbacks import DumpPrediction, IterationCallback, TextAccuracy, TopKTextAccuracy -from dataset import ImageDataset, TextDataset -from losses import MultiLosses -from utils import Config, Logger, MyDataParallel, MyConcatDataset - - -def _set_random_seed(seed): - if seed is not None: - random.seed(seed) - torch.manual_seed(seed) - cudnn.deterministic = True - logging.warning('You have chosen to seed training. 
' - 'This will slow down your training!') - -def _get_training_phases(config, n): - lr = np.array(config.optimizer_lr) - periods = config.optimizer_scheduler_periods - sigma = [config.optimizer_scheduler_gamma ** i for i in range(len(periods))] - phases = [TrainingPhase(n * periods[i]).schedule_hp('lr', lr * sigma[i]) - for i in range(len(periods))] - return phases - -def _get_dataset(ds_type, paths, is_training, config, **kwargs): - kwargs.update({ - 'img_h': config.dataset_image_height, - 'img_w': config.dataset_image_width, - 'max_length': config.dataset_max_length, - 'case_sensitive': config.dataset_case_sensitive, - 'charset_path': config.dataset_charset_path, - 'data_aug': config.dataset_data_aug, - 'deteriorate_ratio': config.dataset_deteriorate_ratio, - 'is_training': is_training, - 'multiscales': config.dataset_multiscales, - 'one_hot_y': config.dataset_one_hot_y, - }) - datasets = [ds_type(p, **kwargs) for p in paths] - if len(datasets) > 1: return MyConcatDataset(datasets) - else: return datasets[0] - - -def _get_language_databaunch(config): - kwargs = { - 'max_length': config.dataset_max_length, - 'case_sensitive': config.dataset_case_sensitive, - 'charset_path': config.dataset_charset_path, - 'smooth_label': config.dataset_smooth_label, - 'smooth_factor': config.dataset_smooth_factor, - 'one_hot_y': config.dataset_one_hot_y, - 'use_sm': config.dataset_use_sm, - } - train_ds = TextDataset(config.dataset_train_roots[0], is_training=True, **kwargs) - valid_ds = TextDataset(config.dataset_test_roots[0], is_training=False, **kwargs) - data = DataBunch.create( - path=train_ds.path, - train_ds=train_ds, - valid_ds=valid_ds, - bs=config.dataset_train_batch_size, - val_bs=config.dataset_test_batch_size, - num_workers=config.dataset_num_workers, - pin_memory=config.dataset_pin_memory) - logging.info(f'{len(data.train_ds)} training items found.') - if not data.empty_val: - logging.info(f'{len(data.valid_ds)} valid items found.') - return data - -def _get_databaunch(config): - # An awkward way to reduce loadding data time during test - if config.global_phase == 'test': config.dataset_train_roots = config.dataset_test_roots - train_ds = _get_dataset(ImageDataset, config.dataset_train_roots, True, config) - valid_ds = _get_dataset(ImageDataset, config.dataset_test_roots, False, config) - data = ImageDataBunch.create( - train_ds=train_ds, - valid_ds=valid_ds, - bs=config.dataset_train_batch_size, - val_bs=config.dataset_test_batch_size, - num_workers=config.dataset_num_workers, - pin_memory=config.dataset_pin_memory).normalize(imagenet_stats) - ar_tfm = lambda x: ((x[0], x[1]), x[1]) # auto-regression only for dtd - data.add_tfm(ar_tfm) - - logging.info(f'{len(data.train_ds)} training items found.') - if not data.empty_val: - logging.info(f'{len(data.valid_ds)} valid items found.') - - return data - -def _get_model(config): - import importlib - names = config.model_name.split('.') - module_name, class_name = '.'.join(names[:-1]), names[-1] - cls = getattr(importlib.import_module(module_name), class_name) - model = cls(config) - logging.info(model) - return model - - -def _get_learner(config, data, model, local_rank=None): - strict = ifnone(config.model_strict, True) - if config.global_stage == 'pretrain-language': - metrics = [TopKTextAccuracy( - k=ifnone(config.model_k, 5), - charset_path=config.dataset_charset_path, - max_length=config.dataset_max_length + 1, - case_sensitive=config.dataset_eval_case_sensisitves, - model_eval=config.model_eval)] - else: - metrics = [TextAccuracy( - 
charset_path=config.dataset_charset_path, - max_length=config.dataset_max_length + 1, - case_sensitive=config.dataset_eval_case_sensisitves, - model_eval=config.model_eval)] - opt_type = getattr(torch.optim, config.optimizer_type) - learner = Learner(data, model, silent=True, model_dir='.', - true_wd=config.optimizer_true_wd, - wd=config.optimizer_wd, - bn_wd=config.optimizer_bn_wd, - path=config.global_workdir, - metrics=metrics, - opt_func=partial(opt_type, **config.optimizer_args or dict()), - loss_func=MultiLosses(one_hot=config.dataset_one_hot_y)) - learner.split(lambda m: children(m)) - - if config.global_phase == 'train': - num_replicas = 1 if local_rank is None else torch.distributed.get_world_size() - phases = _get_training_phases(config, len(learner.data.train_dl)//num_replicas) - learner.callback_fns += [ - partial(GeneralScheduler, phases=phases), - partial(GradientClipping, clip=config.optimizer_clip_grad), - partial(IterationCallback, name=config.global_name, - show_iters=config.training_show_iters, - eval_iters=config.training_eval_iters, - save_iters=config.training_save_iters, - start_iters=config.training_start_iters, - stats_iters=config.training_stats_iters)] - else: - learner.callbacks += [ - DumpPrediction(learn=learner, - dataset='-'.join([Path(p).name for p in config.dataset_test_roots]),charset_path=config.dataset_charset_path, - model_eval=config.model_eval, - debug=config.global_debug, - image_only=config.global_image_only)] - - learner.rank = local_rank - if local_rank is not None: - logging.info(f'Set model to distributed with rank {local_rank}.') - learner.model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(learner.model) - learner.model.to(local_rank) - learner = learner.to_distributed(local_rank) - - if torch.cuda.device_count() > 1 and local_rank is None: - logging.info(f'Use {torch.cuda.device_count()} GPUs.') - learner.model = MyDataParallel(learner.model) - - if config.model_checkpoint: - if Path(config.model_checkpoint).exists(): - with open(config.model_checkpoint, 'rb') as f: - buffer = io.BytesIO(f.read()) - learner.load(buffer, strict=strict) - else: - from distutils.dir_util import copy_tree - src = Path('/data/fangsc/model')/config.global_name - trg = Path('/output')/config.global_name - if src.exists(): copy_tree(str(src), str(trg)) - learner.load(config.model_checkpoint, strict=strict) - logging.info(f'Read model from {config.model_checkpoint}') - elif config.global_phase == 'test': - learner.load(f'best-{config.global_name}', strict=strict) - logging.info(f'Read model from best-{config.global_name}') - - if learner.opt_func.func.__name__ == 'Adadelta': # fastai bug, fix after 1.0.60 - learner.fit(epochs=0, lr=config.optimizer_lr) - learner.opt.mom = 0. 
- - return learner - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument('--config', type=str, required=True, - help='path to config file') - parser.add_argument('--phase', type=str, default=None, choices=['train', 'test']) - parser.add_argument('--name', type=str, default=None) - parser.add_argument('--checkpoint', type=str, default=None) - parser.add_argument('--test_root', type=str, default=None) - parser.add_argument("--local_rank", type=int, default=None) - parser.add_argument('--debug', action='store_true', default=None) - parser.add_argument('--image_only', action='store_true', default=None) - parser.add_argument('--model_strict', action='store_false', default=None) - parser.add_argument('--model_eval', type=str, default=None, - choices=['alignment', 'vision', 'language']) - args = parser.parse_args() - config = Config(args.config) - if args.name is not None: config.global_name = args.name - if args.phase is not None: config.global_phase = args.phase - if args.test_root is not None: config.dataset_test_roots = [args.test_root] - if args.checkpoint is not None: config.model_checkpoint = args.checkpoint - if args.debug is not None: config.global_debug = args.debug - if args.image_only is not None: config.global_image_only = args.image_only - if args.model_eval is not None: config.model_eval = args.model_eval - if args.model_strict is not None: config.model_strict = args.model_strict - - Logger.init(config.global_workdir, config.global_name, config.global_phase) - Logger.enable_file() - _set_random_seed(config.global_seed) - logging.info(config) - - if args.local_rank is not None: - logging.info(f'Init distribution training at device {args.local_rank}.') - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend='nccl', init_method='env://') - - logging.info('Construct dataset.') - if config.global_stage == 'pretrain-language': data = _get_language_databaunch(config) - else: data = _get_databaunch(config) - - logging.info('Construct model.') - model = _get_model(config) - - logging.info('Construct learner.') - learner = _get_learner(config, data, model, args.local_rank) - - if config.global_phase == 'train': - logging.info('Start training.') - learner.fit(epochs=config.training_epochs, - lr=config.optimizer_lr) - else: - logging.info('Start validate') - last_metrics = learner.validate() - log_str = f'eval loss = {last_metrics[0]:6.3f}, ' \ - f'ccr = {last_metrics[1]:6.3f}, cwr = {last_metrics[2]:6.3f}, ' \ - f'ted = {last_metrics[3]:6.3f}, ned = {last_metrics[4]:6.0f}, ' \ - f'ted/w = {last_metrics[5]:6.3f}, ' - logging.info(log_str) - -if __name__ == '__main__': - main() diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/roi_heads/v1convx.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/roi_heads/v1convx.py deleted file mode 100644 index df79f658d8f7149e44aa1a31072adc4dadd89a48..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/roi_heads/v1convx.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.layers import Conv2d - -from ..utils import initialize_module_params -from .registry import ROI_DENSEPOSE_HEAD_REGISTRY - - -@ROI_DENSEPOSE_HEAD_REGISTRY.register() -class DensePoseV1ConvXHead(nn.Module): - """ - Fully convolutional DensePose head. - """ - - def __init__(self, cfg: CfgNode, input_channels: int): - """ - Initialize DensePose fully convolutional head - - Args: - cfg (CfgNode): configuration options - input_channels (int): number of input channels - """ - super(DensePoseV1ConvXHead, self).__init__() - # fmt: off - hidden_dim = cfg.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_DIM - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_KERNEL - self.n_stacked_convs = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_STACKED_CONVS - # fmt: on - pad_size = kernel_size // 2 - n_channels = input_channels - for i in range(self.n_stacked_convs): - layer = Conv2d(n_channels, hidden_dim, kernel_size, stride=1, padding=pad_size) - layer_name = self._get_layer_name(i) - self.add_module(layer_name, layer) - n_channels = hidden_dim - self.n_out_channels = n_channels - initialize_module_params(self) - - def forward(self, features: torch.Tensor): - """ - Apply DensePose fully convolutional head to the input features - - Args: - features (tensor): input features - Result: - A tensor of DensePose head outputs - """ - x = features - output = x - for i in range(self.n_stacked_convs): - layer_name = self._get_layer_name(i) - x = getattr(self, layer_name)(x) - x = F.relu(x) - output = x - return output - - def _get_layer_name(self, i: int): - layer_name = "body_conv_fcn{}".format(i + 1) - return layer_name diff --git a/spaces/cccc-c/web-ui-pub/_next/static/chunks/main-app-af06511f991fd270.js b/spaces/cccc-c/web-ui-pub/_next/static/chunks/main-app-af06511f991fd270.js deleted file mode 100644 index 0f0992abde56131707c96d21c8a65d01c98652e5..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/web-ui-pub/_next/static/chunks/main-app-af06511f991fd270.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[744],{25995:function(e,n,t){Promise.resolve().then(t.t.bind(t,2353,23)),Promise.resolve().then(t.t.bind(t,49180,23)),Promise.resolve().then(t.t.bind(t,92306,23)),Promise.resolve().then(t.t.bind(t,58531,23)),Promise.resolve().then(t.t.bind(t,47330,23))}},function(e){var n=function(n){return e(e.s=n)};e.O(0,[253,698],function(){return n(53333),n(25995)}),_N_E=e.O()}]); \ No newline at end of file diff --git a/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/loudness.py b/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/loudness.py deleted file mode 100644 index f58a65c1800e6f7aaba8c15683e2d42bf92ab9f3..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/loudness.py +++ /dev/null @@ -1,256 +0,0 @@ -"""Contains functions for generating and using equal-loudness contours for -side-presented steady pure tones according to the ISO/IEC 226 standard. 
-Code from https://gist.github.com/sammosummo/777debf946d0356acada and Cédric Colas -""" - - -__author__ = 'Sam Mathias' -__version__ = 1.0 - - -import numpy as np -from scipy.interpolate import interp1d -a_freq = 440 -a_pitch = 69 -upright_piano_dynamic_range = 60 # in db -a_440_db_at_max_velocity = 75 -a_440_amplitude_at_max_velocity = 10 ** (a_440_db_at_max_velocity / 20) - -def iso266(phon, return_freq=False): - """Returns an equal-loudness contour evaluated at 29 frequencies between - 20 Hz and 12.5 kHz according to the ISO/IEC 226 standard [1]_. - Parameters - ---------- - phon : int or float - The phon value represented by the equal-loudness contour, where a value - of :math:`x` phon is the loudness of 1-KHz steady pure tone presented - at :math:`x` dB SPL. Must be between 0 and 90. - return_freq : bool, optional - If True, the function returns the frequency values as well as the SPL - values of the contour. Default is False. - Returns - ------- - array_like - Either a 1-D or a 2-D numpy array, depending on `return_freq`. - Reference - --------- - .. [1] ISO/IEC (2003). ISO/IEC 226:2003 Acoustics -- Normal equal-loudness- - level contours. - http://www.iso.org/iso/catalogue_detail.htm?csnumber=34222. - Example - ------- - elc = iso266(60, return_freq=True) - print elc - [[ 20. 25. 31.5 40. 50. - 63. 80. 100. 125. 160. - 200. 250. 315. 400. 500. - 630. 800. 1000. 1250. 1600. - 2000. 2500. 3150. 4000. 5000. - 6300. 8000. 10000. 12500. ] - [ 109.51132227 104.22789784 99.07786826 94.17731862 - 89.96345731 85.94342131 82.05340072 78.65461863 - 75.56345314 72.4743448 69.86431929 67.53483532 - 65.39173983 63.45099627 62.0511792 60.81495942 - 59.88668375 60.011588 62.1549143 63.18935604 - 59.96161453 57.25515019 56.42385863 57.56993838 - 60.8882125 66.36125056 71.66396598 73.15510401 - 68.63077045]] - """ - if not 0 <= phon <= 90: - raise ValueError('Cannot calculate for this value.') - - f = np.array([ - 20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, - 630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300, 8000, - 10000, 12500 - ]) - - af = np.array([ - 0.532, 0.506, 0.480, 0.455, 0.432, 0.409, 0.387, 0.367, 0.349, 0.330, - 0.315, 0.301, 0.288, 0.276, 0.267, 0.259, 0.253, 0.250, 0.246, 0.244, - 0.243, 0.243, 0.243, 0.242, 0.242, 0.245, 0.254, 0.271, 0.301 - ]) - - Lu = np.array([ - -31.6, -27.2, -23.0, -19.1, -15.9, -13.0, -10.3, -8.1, -6.2, -4.5, - -3.1, -2.0, -1.1, -0.4, 0.0, 0.3, 0.5, 0.0, -2.7, -4.1, -1.0, 1.7, - 2.5, 1.2, -2.1, -7.1, -11.2, -10.7, -3.1 - ]) - - Tf = np.array([ - 78.5, 68.7, 59.5, 51.1, 44.0, 37.5, 31.5, 26.5, 22.1, 17.9, 14.4, 11.4, - 8.6, 6.2, 4.4, 3.0, 2.2, 2.4, 3.5, 1.7, -1.3, -4.2, -6.0, -5.4, -1.5, - 6.0, 12.6, 13.9, 12.3 - ]) - - Ln = phon - - Af = 4.47e-3 * (10 ** (.025 * Ln) - 1.15) \ - + (.4 * 10 ** (((Tf + Lu) / 10.) - 9)) ** af - Lp = ((10 / af) * np.log10(Af)) - Lu + 94 - - spl = Lp - freq = f - - if return_freq is True: - return np.array([freq, spl]) - - else: - return spl - - -def equal_loudness(phon, freqs, return_freq=False): - """Returns equal-loudness levels for any frequencies between 20 Hz and - 12.5 kHz according to the ISO/IEC 226 standard [1]_. - Parameters - ---------- - phon : number - The phon value represented by the equal-loudness contour, where a value - of :math:`x` phon is the loudness of 1-KHz steady pure tone presented - at :math:`x` dB SPL. Must be between 0 and 90. - freqs : number or array_like - Frequency or frequencies in Hz to be evaluated. Must be between 20 and - 12500. 
- return_freq : bool, optional - If True, the function returns the frequency values as well as the SPL - values of the contour. Default is False. - Returns - ------- - array_like - Either a 1-D or a 2-D numpy array, depending on `return_freq`. - Reference - --------- - .. [1] ISO/IEC (2003). ISO/IEC 226:2003 Acoustics -- Normal equal-loudness- - level contours. - http://www.iso.org/iso/catalogue_detail.htm?csnumber=34222. - Example - ------- - >>> el = equal_loudness(60, [500, 1000, 2000], return_freq=True) - >>> print el - [[ 500. 1000. 2000. ] - [ 62.0511792 60.011588 59.96161453]] - """ - f = interp1d(*iso266(phon, True), kind='cubic') - - if return_freq is True: - return np.array([freqs, f(freqs)]) - - else: - return f(freqs) - - -def get_loudness(spl, freq): - """Returns the approximate loudness level in phons for a side-presented - steady pure tone according to the ISO/IEC 226 standard [1]_. - This function generates a range of equal-loudness contours and interpolates - between them. Therefore it is more efficient to pass many level and - frequency values to one function call than it is to make many function - calls. - Parameters - ---------- - spl : number or array_like - Sound pressure level or levels in dB SPL. - freq : number or array_like - Frequency or frequencies in Hz. - Returns - ------- - number or array_like - Phon value(s). - Reference - --------- - .. [1] ISO/IEC (2003). ISO/IEC 226:2003 Acoustics -- Normal equal-loudness- - level contours. - http://www.iso.org/iso/catalogue_detail.htm?csnumber=34222. - Example - ------- - phons = get_loudness([50, 60, 70] [500, 500, 500]) - print phons - [ 47.3 57.8 68.4] - - """ - phons = np.arange(0, 90.1, .1) - freqs = np.arange(20, 12501) - spls = np.empty((len(phons), len(freqs))) - - for i, phon in enumerate(phons): - spls[i] = equal_loudness(phon, freqs) - - if not hasattr(spl, '__iter__'): - spl = [spl] - - if not hasattr(freq, '__iter__'): - freq = [freq] - - spls = spls.T - results = [] - - for _spl, _freq in zip(spl, freq): - ix = (np.abs(freqs - _freq)).argmin() - iy = (np.abs(spls[ix] - _spl)).argmin() - results.append(phons[iy]) - - if len(results) == 1: - return results[0] - - else: - return np.array(results) - - -def pitch2freq(pitch): - # from https://music.arts.uci.edu/dobrian/maxcookbook/pitch-and-loudness-formulae - relative_pitch = pitch - 69 - factor = 2 ** (relative_pitch / 12) - freq = a_freq * factor - return freq - -def velocity2amplitude(velocity): - # from https://www.cs.cmu.edu/~rbd/papers/velocity-icmc2006.pdf - r = 10 ** (upright_piano_dynamic_range / 20) - b = 127 / (126 * np.sqrt(r)) - (1 / 126) - m = (1 - b) / 127 - a = (m * velocity + b) ** 2 - a *= a_440_amplitude_at_max_velocity # scale amplitudes to get realistic perceived loudness - return a - -def amplitude2db(amplitude): - power_db = 20 * np.log10(amplitude) - return power_db - -def get_db_of_equivalent_loudness_at_440hz(freqs, db): - phons = get_loudness(db, freqs) - equal_dbs = [] - for p in phons: - equal_dbs.append(equal_loudness(p, [440])[0]) - return np.array(equal_dbs) - -def compute_total_loudness(eq_amplitudes_440hz, onsets, offsets): - # Compute the instantaneous amplitude, turn it back to dbs, then to perceived loudness with unique freq 440 Hz - # model amplitude as square function, loudness = peak amplitude from onset to offset, 0 afterwards. 
- # an exponential model might be better - assert all([len(values) == len(onsets) for values in [eq_amplitudes_440hz, offsets]]) - - timepoints = np.array(sorted(onsets + offsets)) - amplitudes_per_time = np.zeros(len(timepoints)) - # on each segment, compute the total amplitude - # amplitudes are not just summed: p1+p2 = sqrt(p1**2 + p2**2) - # ref: https://erlend-viggen.no/acoustic-quantities-1/ - for i_n in range(len(onsets)): - indexes = np.where(np.logical_and(timepoints >= onsets[i_n], timepoints < offsets[i_n])) - amplitudes_per_time[indexes] += eq_amplitudes_440hz[i_n] ** 2 - for i in range(len(amplitudes_per_time)): - amplitudes_per_time[i] = np.sqrt(amplitudes_per_time[i]) - # compute power - power_per_time = amplitude2db(amplitudes_per_time) - power_per_time[np.where(power_per_time == -np.inf)] = 0 - # compute loudness - loudness_per_time = get_loudness(power_per_time, [440] * len(power_per_time)) # amplitudes at 440hz, they were computed to make same loudness as original amplitudes at original F. - - # now integrate - total_loudness = 0 - for i_t in range(len(timepoints) - 1): - total_loudness += loudness_per_time[i_t] * (timepoints[i_t + 1] - timepoints[i_t]) - - return total_loudness / (timepoints[-1] - timepoints[0]), np.std(loudness_per_time) - -if __name__ == '__main__': - pass \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/__init__.py b/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/__init__.py deleted file mode 100644 index 7c2c297ccde99381f96c6f36d7c2854a7418c161..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- - -__version__ = "0.3.0" diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/core/trainer.py b/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/core/trainer.py deleted file mode 100644 index a76442680b64be32af7e21d90e786eac7059c22d..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/core/trainer.py +++ /dev/null @@ -1,390 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Megvii, Inc. and its affiliates. - -import datetime -import os -import time -from loguru import logger - -import torch -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.utils.tensorboard import SummaryWriter - -from yolox.data import DataPrefetcher -from yolox.exp import Exp -from yolox.utils import ( - MeterBuffer, - ModelEMA, - WandbLogger, - adjust_status, - all_reduce_norm, - get_local_rank, - get_model_info, - get_rank, - get_world_size, - gpu_mem_usage, - is_parallel, - load_ckpt, - mem_usage, - occupy_mem, - save_checkpoint, - setup_logger, - synchronize -) - - -class Trainer: - def __init__(self, exp: Exp, args): - # init function only defines some basic attr, other attrs like model, optimizer are built in - # before_train methods. 
- self.exp = exp - self.args = args - - # training related attr - self.max_epoch = exp.max_epoch - self.amp_training = args.fp16 - self.scaler = torch.cuda.amp.GradScaler(enabled=args.fp16) - self.is_distributed = get_world_size() > 1 - self.rank = get_rank() - self.local_rank = get_local_rank() - self.device = "cuda:{}".format(self.local_rank) - self.use_model_ema = exp.ema - self.save_history_ckpt = exp.save_history_ckpt - - # data/dataloader related attr - self.data_type = torch.float16 if args.fp16 else torch.float32 - self.input_size = exp.input_size - self.best_ap = 0 - - # metric record - self.meter = MeterBuffer(window_size=exp.print_interval) - self.file_name = os.path.join(exp.output_dir, args.experiment_name) - - if self.rank == 0: - os.makedirs(self.file_name, exist_ok=True) - - setup_logger( - self.file_name, - distributed_rank=self.rank, - filename="train_log.txt", - mode="a", - ) - - def train(self): - self.before_train() - try: - self.train_in_epoch() - except Exception: - raise - finally: - self.after_train() - - def train_in_epoch(self): - for self.epoch in range(self.start_epoch, self.max_epoch): - self.before_epoch() - self.train_in_iter() - self.after_epoch() - - def train_in_iter(self): - for self.iter in range(self.max_iter): - self.before_iter() - self.train_one_iter() - self.after_iter() - - def train_one_iter(self): - iter_start_time = time.time() - - inps, targets = self.prefetcher.next() - inps = inps.to(self.data_type) - targets = targets.to(self.data_type) - targets.requires_grad = False - inps, targets = self.exp.preprocess(inps, targets, self.input_size) - data_end_time = time.time() - - with torch.cuda.amp.autocast(enabled=self.amp_training): - outputs = self.model(inps, targets) - - loss = outputs["total_loss"] - - self.optimizer.zero_grad() - self.scaler.scale(loss).backward() - self.scaler.step(self.optimizer) - self.scaler.update() - - if self.use_model_ema: - self.ema_model.update(self.model) - - lr = self.lr_scheduler.update_lr(self.progress_in_iter + 1) - for param_group in self.optimizer.param_groups: - param_group["lr"] = lr - - iter_end_time = time.time() - self.meter.update( - iter_time=iter_end_time - iter_start_time, - data_time=data_end_time - iter_start_time, - lr=lr, - **outputs, - ) - - def before_train(self): - logger.info("args: {}".format(self.args)) - logger.info("exp value:\n{}".format(self.exp)) - - # model related init - torch.cuda.set_device(self.local_rank) - model = self.exp.get_model() - logger.info( - "Model Summary: {}".format(get_model_info(model, self.exp.test_size)) - ) - model.to(self.device) - - # solver related init - self.optimizer = self.exp.get_optimizer(self.args.batch_size) - - # value of epoch will be set in `resume_train` - model = self.resume_train(model) - - # data related init - self.no_aug = self.start_epoch >= self.max_epoch - self.exp.no_aug_epochs - self.train_loader = self.exp.get_data_loader( - batch_size=self.args.batch_size, - is_distributed=self.is_distributed, - no_aug=self.no_aug, - cache_img=self.args.cache, - ) - logger.info("init prefetcher, this might take one minute or less...") - self.prefetcher = DataPrefetcher(self.train_loader) - # max_iter means iters per epoch - self.max_iter = len(self.train_loader) - - self.lr_scheduler = self.exp.get_lr_scheduler( - self.exp.basic_lr_per_img * self.args.batch_size, self.max_iter - ) - if self.args.occupy: - occupy_mem(self.local_rank) - - if self.is_distributed: - model = DDP(model, device_ids=[self.local_rank], broadcast_buffers=False) - - if 
self.use_model_ema: - self.ema_model = ModelEMA(model, 0.9998) - self.ema_model.updates = self.max_iter * self.start_epoch - - self.model = model - - self.evaluator = self.exp.get_evaluator( - batch_size=self.args.batch_size, is_distributed=self.is_distributed - ) - # Tensorboard and Wandb loggers - if self.rank == 0: - if self.args.logger == "tensorboard": - self.tblogger = SummaryWriter(os.path.join(self.file_name, "tensorboard")) - elif self.args.logger == "wandb": - self.wandb_logger = WandbLogger.initialize_wandb_logger( - self.args, - self.exp, - self.evaluator.dataloader.dataset - ) - else: - raise ValueError("logger must be either 'tensorboard' or 'wandb'") - - logger.info("Training start...") - logger.info("\n{}".format(model)) - - def after_train(self): - logger.info( - "Training of experiment is done and the best AP is {:.2f}".format(self.best_ap * 100) - ) - if self.rank == 0: - if self.args.logger == "wandb": - self.wandb_logger.finish() - - def before_epoch(self): - logger.info("---> start train epoch{}".format(self.epoch + 1)) - - if self.epoch + 1 == self.max_epoch - self.exp.no_aug_epochs or self.no_aug: - logger.info("--->No mosaic aug now!") - self.train_loader.close_mosaic() - logger.info("--->Add additional L1 loss now!") - if self.is_distributed: - self.model.module.head.use_l1 = True - else: - self.model.head.use_l1 = True - self.exp.eval_interval = 1 - if not self.no_aug: - self.save_ckpt(ckpt_name="last_mosaic_epoch") - - def after_epoch(self): - self.save_ckpt(ckpt_name="latest") - - if (self.epoch + 1) % self.exp.eval_interval == 0: - all_reduce_norm(self.model) - self.evaluate_and_save_model() - - def before_iter(self): - pass - - def after_iter(self): - """ - `after_iter` contains two parts of logic: - * log information - * reset setting of resize - """ - # log needed information - if (self.iter + 1) % self.exp.print_interval == 0: - # TODO check ETA logic - left_iters = self.max_iter * self.max_epoch - (self.progress_in_iter + 1) - eta_seconds = self.meter["iter_time"].global_avg * left_iters - eta_str = "ETA: {}".format(datetime.timedelta(seconds=int(eta_seconds))) - - progress_str = "epoch: {}/{}, iter: {}/{}".format( - self.epoch + 1, self.max_epoch, self.iter + 1, self.max_iter - ) - loss_meter = self.meter.get_filtered_meter("loss") - loss_str = ", ".join( - ["{}: {:.1f}".format(k, v.latest) for k, v in loss_meter.items()] - ) - - time_meter = self.meter.get_filtered_meter("time") - time_str = ", ".join( - ["{}: {:.3f}s".format(k, v.avg) for k, v in time_meter.items()] - ) - - mem_str = "gpu mem: {:.0f}Mb, mem: {:.1f}Gb".format(gpu_mem_usage(), mem_usage()) - - logger.info( - "{}, {}, {}, {}, lr: {:.3e}".format( - progress_str, - mem_str, - time_str, - loss_str, - self.meter["lr"].latest, - ) - + (", size: {:d}, {}".format(self.input_size[0], eta_str)) - ) - - if self.rank == 0: - if self.args.logger == "tensorboard": - self.tblogger.add_scalar( - "train/lr", self.meter["lr"].latest, self.progress_in_iter) - for k, v in loss_meter.items(): - self.tblogger.add_scalar( - f"train/{k}", v.latest, self.progress_in_iter) - if self.args.logger == "wandb": - metrics = {"train/" + k: v.latest for k, v in loss_meter.items()} - metrics.update({ - "train/lr": self.meter["lr"].latest - }) - self.wandb_logger.log_metrics(metrics, step=self.progress_in_iter) - - self.meter.clear_meters() - - # random resizing - if (self.progress_in_iter + 1) % 10 == 0: - self.input_size = self.exp.random_resize( - self.train_loader, self.epoch, self.rank, self.is_distributed - ) - - 
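-    # progress_in_iter is the absolute training step (epoch * max_iter + iter); it drives the LR schedule, the ETA estimate and the logger step.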
@property - def progress_in_iter(self): - return self.epoch * self.max_iter + self.iter - - def resume_train(self, model): - if self.args.resume: - logger.info("resume training") - if self.args.ckpt is None: - ckpt_file = os.path.join(self.file_name, "latest" + "_ckpt.pth") - else: - ckpt_file = self.args.ckpt - - ckpt = torch.load(ckpt_file, map_location=self.device) - # resume the model/optimizer state dict - model.load_state_dict(ckpt["model"]) - self.optimizer.load_state_dict(ckpt["optimizer"]) - self.best_ap = ckpt.pop("best_ap", 0) - # resume the training states variables - start_epoch = ( - self.args.start_epoch - 1 - if self.args.start_epoch is not None - else ckpt["start_epoch"] - ) - self.start_epoch = start_epoch - logger.info( - "loaded checkpoint '{}' (epoch {})".format( - self.args.resume, self.start_epoch - ) - ) # noqa - else: - if self.args.ckpt is not None: - logger.info("loading checkpoint for fine tuning") - ckpt_file = self.args.ckpt - ckpt = torch.load(ckpt_file, map_location=self.device)["model"] - model = load_ckpt(model, ckpt) - self.start_epoch = 0 - - return model - - def evaluate_and_save_model(self): - if self.use_model_ema: - evalmodel = self.ema_model.ema - else: - evalmodel = self.model - if is_parallel(evalmodel): - evalmodel = evalmodel.module - - with adjust_status(evalmodel, training=False): - (ap50_95, ap50, summary), predictions = self.exp.eval( - evalmodel, self.evaluator, self.is_distributed, return_outputs=True - ) - - update_best_ckpt = ap50_95 > self.best_ap - self.best_ap = max(self.best_ap, ap50_95) - - if self.rank == 0: - if self.args.logger == "tensorboard": - self.tblogger.add_scalar("val/COCOAP50", ap50, self.epoch + 1) - self.tblogger.add_scalar("val/COCOAP50_95", ap50_95, self.epoch + 1) - if self.args.logger == "wandb": - self.wandb_logger.log_metrics({ - "val/COCOAP50": ap50, - "val/COCOAP50_95": ap50_95, - "train/epoch": self.epoch + 1, - }) - self.wandb_logger.log_images(predictions) - logger.info("\n" + summary) - synchronize() - - self.save_ckpt("last_epoch", update_best_ckpt, ap=ap50_95) - if self.save_history_ckpt: - self.save_ckpt(f"epoch_{self.epoch + 1}", ap=ap50_95) - - def save_ckpt(self, ckpt_name, update_best_ckpt=False, ap=None): - if self.rank == 0: - save_model = self.ema_model.ema if self.use_model_ema else self.model - logger.info("Save weights to {}".format(self.file_name)) - ckpt_state = { - "start_epoch": self.epoch + 1, - "model": save_model.state_dict(), - "optimizer": self.optimizer.state_dict(), - "best_ap": self.best_ap, - "curr_ap": ap, - } - save_checkpoint( - ckpt_state, - update_best_ckpt, - self.file_name, - ckpt_name, - ) - - if self.args.logger == "wandb": - self.wandb_logger.save_checkpoint( - self.file_name, - ckpt_name, - update_best_ckpt, - metadata={ - "epoch": self.epoch + 1, - "optimizer": self.optimizer.state_dict(), - "best_ap": self.best_ap, - "curr_ap": ap - } - ) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/bertabs/test_utils_summarization.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/bertabs/test_utils_summarization.py deleted file mode 100644 index 18120c9063edaf95a4896d11e84a22d1b51882dd..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/bertabs/test_utils_summarization.py +++ /dev/null @@ -1,98 +0,0 @@ -# coding=utf-8 -# Copyright 2019 HuggingFace Inc. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import unittest - -import numpy as np -import torch - -from .utils_summarization import build_mask, compute_token_type_ids, process_story, truncate_or_pad - - -class SummarizationDataProcessingTest(unittest.TestCase): - def setUp(self): - self.block_size = 10 - - def test_fit_to_block_sequence_too_small(self): - """Pad the sequence with 0 if the sequence is smaller than the block size.""" - sequence = [1, 2, 3, 4] - expected_output = [1, 2, 3, 4, 0, 0, 0, 0, 0, 0] - self.assertEqual(truncate_or_pad(sequence, self.block_size, 0), expected_output) - - def test_fit_to_block_sequence_fit_exactly(self): - """Do nothing if the sequence is the right size.""" - sequence = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] - expected_output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] - self.assertEqual(truncate_or_pad(sequence, self.block_size, 0), expected_output) - - def test_fit_to_block_sequence_too_big(self): - """Truncate the sequence if it is too long.""" - sequence = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] - expected_output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] - self.assertEqual(truncate_or_pad(sequence, self.block_size, 0), expected_output) - - def test_process_story_no_highlights(self): - """Processing a story with no highlights returns an empty list for the summary.""" - raw_story = """It was the year of Our Lord one thousand seven hundred and - seventy-five.\n\nSpiritual revelations were conceded to England at that - favoured period, as at this.""" - _, summary_lines = process_story(raw_story) - self.assertEqual(summary_lines, []) - - def test_process_empty_story(self): - """An empty story returns an empty collection of lines.""" - raw_story = "" - story_lines, summary_lines = process_story(raw_story) - self.assertEqual(story_lines, []) - self.assertEqual(summary_lines, []) - - def test_process_story_with_missing_period(self): - raw_story = ( - "It was the year of Our Lord one thousand seven hundred and " - "seventy-five\n\nSpiritual revelations were conceded to England " - "at that favoured period, as at this.\n@highlight\n\nIt was the best of times" - ) - story_lines, summary_lines = process_story(raw_story) - - expected_story_lines = [ - "It was the year of Our Lord one thousand seven hundred and seventy-five.", - "Spiritual revelations were conceded to England at that favoured period, as at this.", - ] - self.assertEqual(expected_story_lines, story_lines) - - expected_summary_lines = ["It was the best of times."] - self.assertEqual(expected_summary_lines, summary_lines) - - def test_build_mask_no_padding(self): - sequence = torch.tensor([1, 2, 3, 4]) - expected = torch.tensor([1, 1, 1, 1]) - np.testing.assert_array_equal(build_mask(sequence, 0).numpy(), expected.numpy()) - - def test_build_mask(self): - sequence = torch.tensor([1, 2, 3, 4, 23, 23, 23]) - expected = torch.tensor([1, 1, 1, 1, 0, 0, 0]) - np.testing.assert_array_equal(build_mask(sequence, 23).numpy(), expected.numpy()) - - def test_build_mask_with_padding_equal_to_one(self): - sequence = 
torch.tensor([8, 2, 3, 4, 1, 1, 1]) - expected = torch.tensor([1, 1, 1, 1, 0, 0, 0]) - np.testing.assert_array_equal(build_mask(sequence, 1).numpy(), expected.numpy()) - - def test_compute_token_type_ids(self): - separator = 101 - batch = torch.tensor([[1, 2, 3, 4, 5, 6], [1, 2, 3, 101, 5, 6], [1, 101, 3, 4, 101, 6]]) - expected = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0], [1, 0, 0, 0, 1, 1]]) - - result = compute_token_type_ids(batch, separator) - np.testing.assert_array_equal(result, expected) diff --git a/spaces/choiyk0103/TrOCR_app/app.py b/spaces/choiyk0103/TrOCR_app/app.py deleted file mode 100644 index 3f4c7d6aea0b163c68a2cabc07c1a8a6b1708b67..0000000000000000000000000000000000000000 --- a/spaces/choiyk0103/TrOCR_app/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from transformers import TrOCRProcessor, VisionEncoderDecoderModel -import requests -from PIL import Image - -processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") -model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") - -# load image examples -urls = ['https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSoolxi9yWGAT5SLZShv8vVd0bz47UWRzQC19fDTeE8GmGv_Rn-PCF1pP1rrUx8kOjA4gg&usqp=CAU', - 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRNYtTuSBpZPV_nkBYPMFwVVD9asZOPgHww4epu9EqWgDmXW--sE2o8og40ZfDGo87j5w&usqp=CAU'] -for idx, url in enumerate(urls): - image = Image.open(requests.get(url, stream=True).raw) - image.save(f"image_{idx}.png") - -def process_image(image): - # prepare image - pixel_values = processor(image, return_tensors="pt").pixel_values - - # generate (no beam search) - generated_ids = model.generate(pixel_values) - - # decode - generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] - - return generated_text - -title = "Interactive demo: TrOCR" -description = "Demo for Microsoft's TrOCR, an encoder-decoder model consisting of an image Transformer encoder and a text Transformer decoder for state-of-the-art optical character recognition (OCR) on single-text line images. This particular model is fine-tuned on IAM, a dataset of annotated handwritten images. To use it, simply upload a (single-text line) image or use one of the example images below and click 'submit'. Results will show up in a few seconds." -article = "
TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models | Github Repo
    " -examples =[["image_0.png"], ["image_1.png"], ["image_2.png"]] - -#css = """.output_image, .input_image {height: 600px !important}""" - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Textbox(), - title=title, - description=description, - article=article, - examples=examples) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py deleted file mode 100644 index a8c52de4fb49fb1129e0acb806af1834be9d6b7d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py +++ /dev/null @@ -1,299 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -from __future__ import annotations - -import abc -import typing - -from cryptography.hazmat.bindings._rust import openssl as rust_openssl -from cryptography.hazmat.primitives import _serialization, hashes -from cryptography.hazmat.primitives.asymmetric import utils as asym_utils - - -class DSAParameters(metaclass=abc.ABCMeta): - @abc.abstractmethod - def generate_private_key(self) -> DSAPrivateKey: - """ - Generates and returns a DSAPrivateKey. - """ - - @abc.abstractmethod - def parameter_numbers(self) -> DSAParameterNumbers: - """ - Returns a DSAParameterNumbers. - """ - - -DSAParametersWithNumbers = DSAParameters -DSAParameters.register(rust_openssl.dsa.DSAParameters) - - -class DSAPrivateKey(metaclass=abc.ABCMeta): - @property - @abc.abstractmethod - def key_size(self) -> int: - """ - The bit length of the prime modulus. - """ - - @abc.abstractmethod - def public_key(self) -> DSAPublicKey: - """ - The DSAPublicKey associated with this private key. - """ - - @abc.abstractmethod - def parameters(self) -> DSAParameters: - """ - The DSAParameters object associated with this private key. - """ - - @abc.abstractmethod - def sign( - self, - data: bytes, - algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], - ) -> bytes: - """ - Signs the data - """ - - @abc.abstractmethod - def private_numbers(self) -> DSAPrivateNumbers: - """ - Returns a DSAPrivateNumbers. - """ - - @abc.abstractmethod - def private_bytes( - self, - encoding: _serialization.Encoding, - format: _serialization.PrivateFormat, - encryption_algorithm: _serialization.KeySerializationEncryption, - ) -> bytes: - """ - Returns the key serialized as bytes. - """ - - -DSAPrivateKeyWithSerialization = DSAPrivateKey -DSAPrivateKey.register(rust_openssl.dsa.DSAPrivateKey) - - -class DSAPublicKey(metaclass=abc.ABCMeta): - @property - @abc.abstractmethod - def key_size(self) -> int: - """ - The bit length of the prime modulus. - """ - - @abc.abstractmethod - def parameters(self) -> DSAParameters: - """ - The DSAParameters object associated with this public key. - """ - - @abc.abstractmethod - def public_numbers(self) -> DSAPublicNumbers: - """ - Returns a DSAPublicNumbers. - """ - - @abc.abstractmethod - def public_bytes( - self, - encoding: _serialization.Encoding, - format: _serialization.PublicFormat, - ) -> bytes: - """ - Returns the key serialized as bytes. 
- """ - - @abc.abstractmethod - def verify( - self, - signature: bytes, - data: bytes, - algorithm: typing.Union[asym_utils.Prehashed, hashes.HashAlgorithm], - ) -> None: - """ - Verifies the signature of the data. - """ - - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Checks equality. - """ - - -DSAPublicKeyWithSerialization = DSAPublicKey -DSAPublicKey.register(rust_openssl.dsa.DSAPublicKey) - - -class DSAParameterNumbers: - def __init__(self, p: int, q: int, g: int): - if ( - not isinstance(p, int) - or not isinstance(q, int) - or not isinstance(g, int) - ): - raise TypeError( - "DSAParameterNumbers p, q, and g arguments must be integers." - ) - - self._p = p - self._q = q - self._g = g - - @property - def p(self) -> int: - return self._p - - @property - def q(self) -> int: - return self._q - - @property - def g(self) -> int: - return self._g - - def parameters(self, backend: typing.Any = None) -> DSAParameters: - from cryptography.hazmat.backends.openssl.backend import ( - backend as ossl, - ) - - return ossl.load_dsa_parameter_numbers(self) - - def __eq__(self, other: object) -> bool: - if not isinstance(other, DSAParameterNumbers): - return NotImplemented - - return self.p == other.p and self.q == other.q and self.g == other.g - - def __repr__(self) -> str: - return ( - "".format(self=self) - ) - - -class DSAPublicNumbers: - def __init__(self, y: int, parameter_numbers: DSAParameterNumbers): - if not isinstance(y, int): - raise TypeError("DSAPublicNumbers y argument must be an integer.") - - if not isinstance(parameter_numbers, DSAParameterNumbers): - raise TypeError( - "parameter_numbers must be a DSAParameterNumbers instance." - ) - - self._y = y - self._parameter_numbers = parameter_numbers - - @property - def y(self) -> int: - return self._y - - @property - def parameter_numbers(self) -> DSAParameterNumbers: - return self._parameter_numbers - - def public_key(self, backend: typing.Any = None) -> DSAPublicKey: - from cryptography.hazmat.backends.openssl.backend import ( - backend as ossl, - ) - - return ossl.load_dsa_public_numbers(self) - - def __eq__(self, other: object) -> bool: - if not isinstance(other, DSAPublicNumbers): - return NotImplemented - - return ( - self.y == other.y - and self.parameter_numbers == other.parameter_numbers - ) - - def __repr__(self) -> str: - return ( - "".format(self=self) - ) - - -class DSAPrivateNumbers: - def __init__(self, x: int, public_numbers: DSAPublicNumbers): - if not isinstance(x, int): - raise TypeError("DSAPrivateNumbers x argument must be an integer.") - - if not isinstance(public_numbers, DSAPublicNumbers): - raise TypeError( - "public_numbers must be a DSAPublicNumbers instance." 
- ) - self._public_numbers = public_numbers - self._x = x - - @property - def x(self) -> int: - return self._x - - @property - def public_numbers(self) -> DSAPublicNumbers: - return self._public_numbers - - def private_key(self, backend: typing.Any = None) -> DSAPrivateKey: - from cryptography.hazmat.backends.openssl.backend import ( - backend as ossl, - ) - - return ossl.load_dsa_private_numbers(self) - - def __eq__(self, other: object) -> bool: - if not isinstance(other, DSAPrivateNumbers): - return NotImplemented - - return ( - self.x == other.x and self.public_numbers == other.public_numbers - ) - - -def generate_parameters( - key_size: int, backend: typing.Any = None -) -> DSAParameters: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.generate_dsa_parameters(key_size) - - -def generate_private_key( - key_size: int, backend: typing.Any = None -) -> DSAPrivateKey: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.generate_dsa_private_key_and_parameters(key_size) - - -def _check_dsa_parameters(parameters: DSAParameterNumbers) -> None: - if parameters.p.bit_length() not in [1024, 2048, 3072, 4096]: - raise ValueError( - "p must be exactly 1024, 2048, 3072, or 4096 bits long" - ) - if parameters.q.bit_length() not in [160, 224, 256]: - raise ValueError("q must be exactly 160, 224, or 256 bits long") - - if not (1 < parameters.g < parameters.p): - raise ValueError("g, p don't satisfy 1 < g < p.") - - -def _check_dsa_private_numbers(numbers: DSAPrivateNumbers) -> None: - parameters = numbers.public_numbers.parameter_numbers - _check_dsa_parameters(parameters) - if numbers.x <= 0 or numbers.x >= parameters.q: - raise ValueError("x must be > 0 and < q.") - - if numbers.public_numbers.y != pow(parameters.g, numbers.x, parameters.p): - raise ValueError("y must be equal to (g ** x % p).") diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/x509/base.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/x509/base.py deleted file mode 100644 index 576385e088d83a7016b4bc1f11170f66d58a6e1e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/x509/base.py +++ /dev/null @@ -1,1173 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -from __future__ import annotations - -import abc -import datetime -import os -import typing - -from cryptography import utils -from cryptography.hazmat.bindings._rust import x509 as rust_x509 -from cryptography.hazmat.primitives import hashes, serialization -from cryptography.hazmat.primitives.asymmetric import ( - dsa, - ec, - ed448, - ed25519, - padding, - rsa, - x448, - x25519, -) -from cryptography.hazmat.primitives.asymmetric.types import ( - CertificateIssuerPrivateKeyTypes, - CertificateIssuerPublicKeyTypes, - CertificatePublicKeyTypes, -) -from cryptography.x509.extensions import ( - Extension, - Extensions, - ExtensionType, - _make_sequence_methods, -) -from cryptography.x509.name import Name, _ASN1Type -from cryptography.x509.oid import ObjectIdentifier - -_EARLIEST_UTC_TIME = datetime.datetime(1950, 1, 1) - -# This must be kept in sync with sign.rs's list of allowable types in -# identify_hash_type -_AllowedHashTypes = typing.Union[ - hashes.SHA224, - hashes.SHA256, - hashes.SHA384, - hashes.SHA512, - hashes.SHA3_224, - hashes.SHA3_256, - hashes.SHA3_384, - hashes.SHA3_512, -] - - -class AttributeNotFound(Exception): - def __init__(self, msg: str, oid: ObjectIdentifier) -> None: - super().__init__(msg) - self.oid = oid - - -def _reject_duplicate_extension( - extension: Extension[ExtensionType], - extensions: typing.List[Extension[ExtensionType]], -) -> None: - # This is quadratic in the number of extensions - for e in extensions: - if e.oid == extension.oid: - raise ValueError("This extension has already been set.") - - -def _reject_duplicate_attribute( - oid: ObjectIdentifier, - attributes: typing.List[ - typing.Tuple[ObjectIdentifier, bytes, typing.Optional[int]] - ], -) -> None: - # This is quadratic in the number of attributes - for attr_oid, _, _ in attributes: - if attr_oid == oid: - raise ValueError("This attribute has already been set.") - - -def _convert_to_naive_utc_time(time: datetime.datetime) -> datetime.datetime: - """Normalizes a datetime to a naive datetime in UTC. - - time -- datetime to normalize. Assumed to be in UTC if not timezone - aware. 
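-    For example, 2021-01-01 12:00+02:00 normalizes to 2021-01-01 10:00 (naive, in UTC).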
- """ - if time.tzinfo is not None: - offset = time.utcoffset() - offset = offset if offset else datetime.timedelta() - return time.replace(tzinfo=None) - offset - else: - return time - - -class Attribute: - def __init__( - self, - oid: ObjectIdentifier, - value: bytes, - _type: int = _ASN1Type.UTF8String.value, - ) -> None: - self._oid = oid - self._value = value - self._type = _type - - @property - def oid(self) -> ObjectIdentifier: - return self._oid - - @property - def value(self) -> bytes: - return self._value - - def __repr__(self) -> str: - return f"" - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Attribute): - return NotImplemented - - return ( - self.oid == other.oid - and self.value == other.value - and self._type == other._type - ) - - def __hash__(self) -> int: - return hash((self.oid, self.value, self._type)) - - -class Attributes: - def __init__( - self, - attributes: typing.Iterable[Attribute], - ) -> None: - self._attributes = list(attributes) - - __len__, __iter__, __getitem__ = _make_sequence_methods("_attributes") - - def __repr__(self) -> str: - return f"" - - def get_attribute_for_oid(self, oid: ObjectIdentifier) -> Attribute: - for attr in self: - if attr.oid == oid: - return attr - - raise AttributeNotFound(f"No {oid} attribute was found", oid) - - -class Version(utils.Enum): - v1 = 0 - v3 = 2 - - -class InvalidVersion(Exception): - def __init__(self, msg: str, parsed_version: int) -> None: - super().__init__(msg) - self.parsed_version = parsed_version - - -class Certificate(metaclass=abc.ABCMeta): - @abc.abstractmethod - def fingerprint(self, algorithm: hashes.HashAlgorithm) -> bytes: - """ - Returns bytes using digest passed. - """ - - @property - @abc.abstractmethod - def serial_number(self) -> int: - """ - Returns certificate serial number - """ - - @property - @abc.abstractmethod - def version(self) -> Version: - """ - Returns the certificate version - """ - - @abc.abstractmethod - def public_key(self) -> CertificatePublicKeyTypes: - """ - Returns the public key - """ - - @property - @abc.abstractmethod - def not_valid_before(self) -> datetime.datetime: - """ - Not before time (represented as UTC datetime) - """ - - @property - @abc.abstractmethod - def not_valid_after(self) -> datetime.datetime: - """ - Not after time (represented as UTC datetime) - """ - - @property - @abc.abstractmethod - def issuer(self) -> Name: - """ - Returns the issuer name object. - """ - - @property - @abc.abstractmethod - def subject(self) -> Name: - """ - Returns the subject name object. - """ - - @property - @abc.abstractmethod - def signature_hash_algorithm( - self, - ) -> typing.Optional[hashes.HashAlgorithm]: - """ - Returns a HashAlgorithm corresponding to the type of the digest signed - in the certificate. - """ - - @property - @abc.abstractmethod - def signature_algorithm_oid(self) -> ObjectIdentifier: - """ - Returns the ObjectIdentifier of the signature algorithm. - """ - - @property - @abc.abstractmethod - def signature_algorithm_parameters( - self, - ) -> typing.Union[None, padding.PSS, padding.PKCS1v15, ec.ECDSA]: - """ - Returns the signature algorithm parameters. - """ - - @property - @abc.abstractmethod - def extensions(self) -> Extensions: - """ - Returns an Extensions object. - """ - - @property - @abc.abstractmethod - def signature(self) -> bytes: - """ - Returns the signature bytes. - """ - - @property - @abc.abstractmethod - def tbs_certificate_bytes(self) -> bytes: - """ - Returns the tbsCertificate payload bytes as defined in RFC 5280. 
- """ - - @property - @abc.abstractmethod - def tbs_precertificate_bytes(self) -> bytes: - """ - Returns the tbsCertificate payload bytes with the SCT list extension - stripped. - """ - - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Checks equality. - """ - - @abc.abstractmethod - def __hash__(self) -> int: - """ - Computes a hash. - """ - - @abc.abstractmethod - def public_bytes(self, encoding: serialization.Encoding) -> bytes: - """ - Serializes the certificate to PEM or DER format. - """ - - @abc.abstractmethod - def verify_directly_issued_by(self, issuer: Certificate) -> None: - """ - This method verifies that certificate issuer name matches the - issuer subject name and that the certificate is signed by the - issuer's private key. No other validation is performed. - """ - - -# Runtime isinstance checks need this since the rust class is not a subclass. -Certificate.register(rust_x509.Certificate) - - -class RevokedCertificate(metaclass=abc.ABCMeta): - @property - @abc.abstractmethod - def serial_number(self) -> int: - """ - Returns the serial number of the revoked certificate. - """ - - @property - @abc.abstractmethod - def revocation_date(self) -> datetime.datetime: - """ - Returns the date of when this certificate was revoked. - """ - - @property - @abc.abstractmethod - def extensions(self) -> Extensions: - """ - Returns an Extensions object containing a list of Revoked extensions. - """ - - -# Runtime isinstance checks need this since the rust class is not a subclass. -RevokedCertificate.register(rust_x509.RevokedCertificate) - - -class _RawRevokedCertificate(RevokedCertificate): - def __init__( - self, - serial_number: int, - revocation_date: datetime.datetime, - extensions: Extensions, - ): - self._serial_number = serial_number - self._revocation_date = revocation_date - self._extensions = extensions - - @property - def serial_number(self) -> int: - return self._serial_number - - @property - def revocation_date(self) -> datetime.datetime: - return self._revocation_date - - @property - def extensions(self) -> Extensions: - return self._extensions - - -class CertificateRevocationList(metaclass=abc.ABCMeta): - @abc.abstractmethod - def public_bytes(self, encoding: serialization.Encoding) -> bytes: - """ - Serializes the CRL to PEM or DER format. - """ - - @abc.abstractmethod - def fingerprint(self, algorithm: hashes.HashAlgorithm) -> bytes: - """ - Returns bytes using digest passed. - """ - - @abc.abstractmethod - def get_revoked_certificate_by_serial_number( - self, serial_number: int - ) -> typing.Optional[RevokedCertificate]: - """ - Returns an instance of RevokedCertificate or None if the serial_number - is not in the CRL. - """ - - @property - @abc.abstractmethod - def signature_hash_algorithm( - self, - ) -> typing.Optional[hashes.HashAlgorithm]: - """ - Returns a HashAlgorithm corresponding to the type of the digest signed - in the certificate. - """ - - @property - @abc.abstractmethod - def signature_algorithm_oid(self) -> ObjectIdentifier: - """ - Returns the ObjectIdentifier of the signature algorithm. - """ - - @property - @abc.abstractmethod - def issuer(self) -> Name: - """ - Returns the X509Name with the issuer of this CRL. - """ - - @property - @abc.abstractmethod - def next_update(self) -> typing.Optional[datetime.datetime]: - """ - Returns the date of next update for this CRL. - """ - - @property - @abc.abstractmethod - def last_update(self) -> datetime.datetime: - """ - Returns the date of last update for this CRL. 
- """ - - @property - @abc.abstractmethod - def extensions(self) -> Extensions: - """ - Returns an Extensions object containing a list of CRL extensions. - """ - - @property - @abc.abstractmethod - def signature(self) -> bytes: - """ - Returns the signature bytes. - """ - - @property - @abc.abstractmethod - def tbs_certlist_bytes(self) -> bytes: - """ - Returns the tbsCertList payload bytes as defined in RFC 5280. - """ - - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Checks equality. - """ - - @abc.abstractmethod - def __len__(self) -> int: - """ - Number of revoked certificates in the CRL. - """ - - @typing.overload - def __getitem__(self, idx: int) -> RevokedCertificate: - ... - - @typing.overload - def __getitem__(self, idx: slice) -> typing.List[RevokedCertificate]: - ... - - @abc.abstractmethod - def __getitem__( - self, idx: typing.Union[int, slice] - ) -> typing.Union[RevokedCertificate, typing.List[RevokedCertificate]]: - """ - Returns a revoked certificate (or slice of revoked certificates). - """ - - @abc.abstractmethod - def __iter__(self) -> typing.Iterator[RevokedCertificate]: - """ - Iterator over the revoked certificates - """ - - @abc.abstractmethod - def is_signature_valid( - self, public_key: CertificateIssuerPublicKeyTypes - ) -> bool: - """ - Verifies signature of revocation list against given public key. - """ - - -CertificateRevocationList.register(rust_x509.CertificateRevocationList) - - -class CertificateSigningRequest(metaclass=abc.ABCMeta): - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Checks equality. - """ - - @abc.abstractmethod - def __hash__(self) -> int: - """ - Computes a hash. - """ - - @abc.abstractmethod - def public_key(self) -> CertificatePublicKeyTypes: - """ - Returns the public key - """ - - @property - @abc.abstractmethod - def subject(self) -> Name: - """ - Returns the subject name object. - """ - - @property - @abc.abstractmethod - def signature_hash_algorithm( - self, - ) -> typing.Optional[hashes.HashAlgorithm]: - """ - Returns a HashAlgorithm corresponding to the type of the digest signed - in the certificate. - """ - - @property - @abc.abstractmethod - def signature_algorithm_oid(self) -> ObjectIdentifier: - """ - Returns the ObjectIdentifier of the signature algorithm. - """ - - @property - @abc.abstractmethod - def extensions(self) -> Extensions: - """ - Returns the extensions in the signing request. - """ - - @property - @abc.abstractmethod - def attributes(self) -> Attributes: - """ - Returns an Attributes object. - """ - - @abc.abstractmethod - def public_bytes(self, encoding: serialization.Encoding) -> bytes: - """ - Encodes the request to PEM or DER format. - """ - - @property - @abc.abstractmethod - def signature(self) -> bytes: - """ - Returns the signature bytes. - """ - - @property - @abc.abstractmethod - def tbs_certrequest_bytes(self) -> bytes: - """ - Returns the PKCS#10 CertificationRequestInfo bytes as defined in RFC - 2986. - """ - - @property - @abc.abstractmethod - def is_signature_valid(self) -> bool: - """ - Verifies signature of signing request. - """ - - @abc.abstractmethod - def get_attribute_for_oid(self, oid: ObjectIdentifier) -> bytes: - """ - Get the attribute value for a given OID. - """ - - -# Runtime isinstance checks need this since the rust class is not a subclass. -CertificateSigningRequest.register(rust_x509.CertificateSigningRequest) - - -# Backend argument preserved for API compatibility, but ignored. 
-def load_pem_x509_certificate( - data: bytes, backend: typing.Any = None -) -> Certificate: - return rust_x509.load_pem_x509_certificate(data) - - -def load_pem_x509_certificates(data: bytes) -> typing.List[Certificate]: - return rust_x509.load_pem_x509_certificates(data) - - -# Backend argument preserved for API compatibility, but ignored. -def load_der_x509_certificate( - data: bytes, backend: typing.Any = None -) -> Certificate: - return rust_x509.load_der_x509_certificate(data) - - -# Backend argument preserved for API compatibility, but ignored. -def load_pem_x509_csr( - data: bytes, backend: typing.Any = None -) -> CertificateSigningRequest: - return rust_x509.load_pem_x509_csr(data) - - -# Backend argument preserved for API compatibility, but ignored. -def load_der_x509_csr( - data: bytes, backend: typing.Any = None -) -> CertificateSigningRequest: - return rust_x509.load_der_x509_csr(data) - - -# Backend argument preserved for API compatibility, but ignored. -def load_pem_x509_crl( - data: bytes, backend: typing.Any = None -) -> CertificateRevocationList: - return rust_x509.load_pem_x509_crl(data) - - -# Backend argument preserved for API compatibility, but ignored. -def load_der_x509_crl( - data: bytes, backend: typing.Any = None -) -> CertificateRevocationList: - return rust_x509.load_der_x509_crl(data) - - -class CertificateSigningRequestBuilder: - def __init__( - self, - subject_name: typing.Optional[Name] = None, - extensions: typing.List[Extension[ExtensionType]] = [], - attributes: typing.List[ - typing.Tuple[ObjectIdentifier, bytes, typing.Optional[int]] - ] = [], - ): - """ - Creates an empty X.509 certificate request (v1). - """ - self._subject_name = subject_name - self._extensions = extensions - self._attributes = attributes - - def subject_name(self, name: Name) -> CertificateSigningRequestBuilder: - """ - Sets the certificate requestor's distinguished name. - """ - if not isinstance(name, Name): - raise TypeError("Expecting x509.Name object.") - if self._subject_name is not None: - raise ValueError("The subject name may only be set once.") - return CertificateSigningRequestBuilder( - name, self._extensions, self._attributes - ) - - def add_extension( - self, extval: ExtensionType, critical: bool - ) -> CertificateSigningRequestBuilder: - """ - Adds an X.509 extension to the certificate request. - """ - if not isinstance(extval, ExtensionType): - raise TypeError("extension must be an ExtensionType") - - extension = Extension(extval.oid, critical, extval) - _reject_duplicate_extension(extension, self._extensions) - - return CertificateSigningRequestBuilder( - self._subject_name, - self._extensions + [extension], - self._attributes, - ) - - def add_attribute( - self, - oid: ObjectIdentifier, - value: bytes, - *, - _tag: typing.Optional[_ASN1Type] = None, - ) -> CertificateSigningRequestBuilder: - """ - Adds an X.509 attribute with an OID and associated value. 
- """ - if not isinstance(oid, ObjectIdentifier): - raise TypeError("oid must be an ObjectIdentifier") - - if not isinstance(value, bytes): - raise TypeError("value must be bytes") - - if _tag is not None and not isinstance(_tag, _ASN1Type): - raise TypeError("tag must be _ASN1Type") - - _reject_duplicate_attribute(oid, self._attributes) - - if _tag is not None: - tag = _tag.value - else: - tag = None - - return CertificateSigningRequestBuilder( - self._subject_name, - self._extensions, - self._attributes + [(oid, value, tag)], - ) - - def sign( - self, - private_key: CertificateIssuerPrivateKeyTypes, - algorithm: typing.Optional[_AllowedHashTypes], - backend: typing.Any = None, - ) -> CertificateSigningRequest: - """ - Signs the request using the requestor's private key. - """ - if self._subject_name is None: - raise ValueError("A CertificateSigningRequest must have a subject") - return rust_x509.create_x509_csr(self, private_key, algorithm) - - -class CertificateBuilder: - _extensions: typing.List[Extension[ExtensionType]] - - def __init__( - self, - issuer_name: typing.Optional[Name] = None, - subject_name: typing.Optional[Name] = None, - public_key: typing.Optional[CertificatePublicKeyTypes] = None, - serial_number: typing.Optional[int] = None, - not_valid_before: typing.Optional[datetime.datetime] = None, - not_valid_after: typing.Optional[datetime.datetime] = None, - extensions: typing.List[Extension[ExtensionType]] = [], - ) -> None: - self._version = Version.v3 - self._issuer_name = issuer_name - self._subject_name = subject_name - self._public_key = public_key - self._serial_number = serial_number - self._not_valid_before = not_valid_before - self._not_valid_after = not_valid_after - self._extensions = extensions - - def issuer_name(self, name: Name) -> CertificateBuilder: - """ - Sets the CA's distinguished name. - """ - if not isinstance(name, Name): - raise TypeError("Expecting x509.Name object.") - if self._issuer_name is not None: - raise ValueError("The issuer name may only be set once.") - return CertificateBuilder( - name, - self._subject_name, - self._public_key, - self._serial_number, - self._not_valid_before, - self._not_valid_after, - self._extensions, - ) - - def subject_name(self, name: Name) -> CertificateBuilder: - """ - Sets the requestor's distinguished name. - """ - if not isinstance(name, Name): - raise TypeError("Expecting x509.Name object.") - if self._subject_name is not None: - raise ValueError("The subject name may only be set once.") - return CertificateBuilder( - self._issuer_name, - name, - self._public_key, - self._serial_number, - self._not_valid_before, - self._not_valid_after, - self._extensions, - ) - - def public_key( - self, - key: CertificatePublicKeyTypes, - ) -> CertificateBuilder: - """ - Sets the requestor's public key (as found in the signing request). - """ - if not isinstance( - key, - ( - dsa.DSAPublicKey, - rsa.RSAPublicKey, - ec.EllipticCurvePublicKey, - ed25519.Ed25519PublicKey, - ed448.Ed448PublicKey, - x25519.X25519PublicKey, - x448.X448PublicKey, - ), - ): - raise TypeError( - "Expecting one of DSAPublicKey, RSAPublicKey," - " EllipticCurvePublicKey, Ed25519PublicKey," - " Ed448PublicKey, X25519PublicKey, or " - "X448PublicKey." 
- ) - if self._public_key is not None: - raise ValueError("The public key may only be set once.") - return CertificateBuilder( - self._issuer_name, - self._subject_name, - key, - self._serial_number, - self._not_valid_before, - self._not_valid_after, - self._extensions, - ) - - def serial_number(self, number: int) -> CertificateBuilder: - """ - Sets the certificate serial number. - """ - if not isinstance(number, int): - raise TypeError("Serial number must be of integral type.") - if self._serial_number is not None: - raise ValueError("The serial number may only be set once.") - if number <= 0: - raise ValueError("The serial number should be positive.") - - # ASN.1 integers are always signed, so most significant bit must be - # zero. - if number.bit_length() >= 160: # As defined in RFC 5280 - raise ValueError( - "The serial number should not be more than 159 " "bits." - ) - return CertificateBuilder( - self._issuer_name, - self._subject_name, - self._public_key, - number, - self._not_valid_before, - self._not_valid_after, - self._extensions, - ) - - def not_valid_before(self, time: datetime.datetime) -> CertificateBuilder: - """ - Sets the certificate activation time. - """ - if not isinstance(time, datetime.datetime): - raise TypeError("Expecting datetime object.") - if self._not_valid_before is not None: - raise ValueError("The not valid before may only be set once.") - time = _convert_to_naive_utc_time(time) - if time < _EARLIEST_UTC_TIME: - raise ValueError( - "The not valid before date must be on or after" - " 1950 January 1)." - ) - if self._not_valid_after is not None and time > self._not_valid_after: - raise ValueError( - "The not valid before date must be before the not valid after " - "date." - ) - return CertificateBuilder( - self._issuer_name, - self._subject_name, - self._public_key, - self._serial_number, - time, - self._not_valid_after, - self._extensions, - ) - - def not_valid_after(self, time: datetime.datetime) -> CertificateBuilder: - """ - Sets the certificate expiration time. - """ - if not isinstance(time, datetime.datetime): - raise TypeError("Expecting datetime object.") - if self._not_valid_after is not None: - raise ValueError("The not valid after may only be set once.") - time = _convert_to_naive_utc_time(time) - if time < _EARLIEST_UTC_TIME: - raise ValueError( - "The not valid after date must be on or after" - " 1950 January 1." - ) - if ( - self._not_valid_before is not None - and time < self._not_valid_before - ): - raise ValueError( - "The not valid after date must be after the not valid before " - "date." - ) - return CertificateBuilder( - self._issuer_name, - self._subject_name, - self._public_key, - self._serial_number, - self._not_valid_before, - time, - self._extensions, - ) - - def add_extension( - self, extval: ExtensionType, critical: bool - ) -> CertificateBuilder: - """ - Adds an X.509 extension to the certificate. 
- """ - if not isinstance(extval, ExtensionType): - raise TypeError("extension must be an ExtensionType") - - extension = Extension(extval.oid, critical, extval) - _reject_duplicate_extension(extension, self._extensions) - - return CertificateBuilder( - self._issuer_name, - self._subject_name, - self._public_key, - self._serial_number, - self._not_valid_before, - self._not_valid_after, - self._extensions + [extension], - ) - - def sign( - self, - private_key: CertificateIssuerPrivateKeyTypes, - algorithm: typing.Optional[_AllowedHashTypes], - backend: typing.Any = None, - *, - rsa_padding: typing.Optional[ - typing.Union[padding.PSS, padding.PKCS1v15] - ] = None, - ) -> Certificate: - """ - Signs the certificate using the CA's private key. - """ - if self._subject_name is None: - raise ValueError("A certificate must have a subject name") - - if self._issuer_name is None: - raise ValueError("A certificate must have an issuer name") - - if self._serial_number is None: - raise ValueError("A certificate must have a serial number") - - if self._not_valid_before is None: - raise ValueError("A certificate must have a not valid before time") - - if self._not_valid_after is None: - raise ValueError("A certificate must have a not valid after time") - - if self._public_key is None: - raise ValueError("A certificate must have a public key") - - if rsa_padding is not None: - if not isinstance(rsa_padding, (padding.PSS, padding.PKCS1v15)): - raise TypeError("Padding must be PSS or PKCS1v15") - if not isinstance(private_key, rsa.RSAPrivateKey): - raise TypeError("Padding is only supported for RSA keys") - - return rust_x509.create_x509_certificate( - self, private_key, algorithm, rsa_padding - ) - - -class CertificateRevocationListBuilder: - _extensions: typing.List[Extension[ExtensionType]] - _revoked_certificates: typing.List[RevokedCertificate] - - def __init__( - self, - issuer_name: typing.Optional[Name] = None, - last_update: typing.Optional[datetime.datetime] = None, - next_update: typing.Optional[datetime.datetime] = None, - extensions: typing.List[Extension[ExtensionType]] = [], - revoked_certificates: typing.List[RevokedCertificate] = [], - ): - self._issuer_name = issuer_name - self._last_update = last_update - self._next_update = next_update - self._extensions = extensions - self._revoked_certificates = revoked_certificates - - def issuer_name( - self, issuer_name: Name - ) -> CertificateRevocationListBuilder: - if not isinstance(issuer_name, Name): - raise TypeError("Expecting x509.Name object.") - if self._issuer_name is not None: - raise ValueError("The issuer name may only be set once.") - return CertificateRevocationListBuilder( - issuer_name, - self._last_update, - self._next_update, - self._extensions, - self._revoked_certificates, - ) - - def last_update( - self, last_update: datetime.datetime - ) -> CertificateRevocationListBuilder: - if not isinstance(last_update, datetime.datetime): - raise TypeError("Expecting datetime object.") - if self._last_update is not None: - raise ValueError("Last update may only be set once.") - last_update = _convert_to_naive_utc_time(last_update) - if last_update < _EARLIEST_UTC_TIME: - raise ValueError( - "The last update date must be on or after" " 1950 January 1." - ) - if self._next_update is not None and last_update > self._next_update: - raise ValueError( - "The last update date must be before the next update date." 
- ) - return CertificateRevocationListBuilder( - self._issuer_name, - last_update, - self._next_update, - self._extensions, - self._revoked_certificates, - ) - - def next_update( - self, next_update: datetime.datetime - ) -> CertificateRevocationListBuilder: - if not isinstance(next_update, datetime.datetime): - raise TypeError("Expecting datetime object.") - if self._next_update is not None: - raise ValueError("Last update may only be set once.") - next_update = _convert_to_naive_utc_time(next_update) - if next_update < _EARLIEST_UTC_TIME: - raise ValueError( - "The last update date must be on or after" " 1950 January 1." - ) - if self._last_update is not None and next_update < self._last_update: - raise ValueError( - "The next update date must be after the last update date." - ) - return CertificateRevocationListBuilder( - self._issuer_name, - self._last_update, - next_update, - self._extensions, - self._revoked_certificates, - ) - - def add_extension( - self, extval: ExtensionType, critical: bool - ) -> CertificateRevocationListBuilder: - """ - Adds an X.509 extension to the certificate revocation list. - """ - if not isinstance(extval, ExtensionType): - raise TypeError("extension must be an ExtensionType") - - extension = Extension(extval.oid, critical, extval) - _reject_duplicate_extension(extension, self._extensions) - return CertificateRevocationListBuilder( - self._issuer_name, - self._last_update, - self._next_update, - self._extensions + [extension], - self._revoked_certificates, - ) - - def add_revoked_certificate( - self, revoked_certificate: RevokedCertificate - ) -> CertificateRevocationListBuilder: - """ - Adds a revoked certificate to the CRL. - """ - if not isinstance(revoked_certificate, RevokedCertificate): - raise TypeError("Must be an instance of RevokedCertificate") - - return CertificateRevocationListBuilder( - self._issuer_name, - self._last_update, - self._next_update, - self._extensions, - self._revoked_certificates + [revoked_certificate], - ) - - def sign( - self, - private_key: CertificateIssuerPrivateKeyTypes, - algorithm: typing.Optional[_AllowedHashTypes], - backend: typing.Any = None, - ) -> CertificateRevocationList: - if self._issuer_name is None: - raise ValueError("A CRL must have an issuer name") - - if self._last_update is None: - raise ValueError("A CRL must have a last update time") - - if self._next_update is None: - raise ValueError("A CRL must have a next update time") - - return rust_x509.create_x509_crl(self, private_key, algorithm) - - -class RevokedCertificateBuilder: - def __init__( - self, - serial_number: typing.Optional[int] = None, - revocation_date: typing.Optional[datetime.datetime] = None, - extensions: typing.List[Extension[ExtensionType]] = [], - ): - self._serial_number = serial_number - self._revocation_date = revocation_date - self._extensions = extensions - - def serial_number(self, number: int) -> RevokedCertificateBuilder: - if not isinstance(number, int): - raise TypeError("Serial number must be of integral type.") - if self._serial_number is not None: - raise ValueError("The serial number may only be set once.") - if number <= 0: - raise ValueError("The serial number should be positive") - - # ASN.1 integers are always signed, so most significant bit must be - # zero. - if number.bit_length() >= 160: # As defined in RFC 5280 - raise ValueError( - "The serial number should not be more than 159 " "bits." 
- ) - return RevokedCertificateBuilder( - number, self._revocation_date, self._extensions - ) - - def revocation_date( - self, time: datetime.datetime - ) -> RevokedCertificateBuilder: - if not isinstance(time, datetime.datetime): - raise TypeError("Expecting datetime object.") - if self._revocation_date is not None: - raise ValueError("The revocation date may only be set once.") - time = _convert_to_naive_utc_time(time) - if time < _EARLIEST_UTC_TIME: - raise ValueError( - "The revocation date must be on or after" " 1950 January 1." - ) - return RevokedCertificateBuilder( - self._serial_number, time, self._extensions - ) - - def add_extension( - self, extval: ExtensionType, critical: bool - ) -> RevokedCertificateBuilder: - if not isinstance(extval, ExtensionType): - raise TypeError("extension must be an ExtensionType") - - extension = Extension(extval.oid, critical, extval) - _reject_duplicate_extension(extension, self._extensions) - return RevokedCertificateBuilder( - self._serial_number, - self._revocation_date, - self._extensions + [extension], - ) - - def build(self, backend: typing.Any = None) -> RevokedCertificate: - if self._serial_number is None: - raise ValueError("A revoked certificate must have a serial number") - if self._revocation_date is None: - raise ValueError( - "A revoked certificate must have a revocation date" - ) - return _RawRevokedCertificate( - self._serial_number, - self._revocation_date, - Extensions(self._extensions), - ) - - -def random_serial_number() -> int: - return int.from_bytes(os.urandom(20), "big") >> 1 diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/celp_filters_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/celp_filters_mips.c deleted file mode 100644 index 926f1cb334e8f237e4061eab39efe1e13e81b573..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/celp_filters_mips.c +++ /dev/null @@ -1,293 +0,0 @@ -/* - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. 
BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Author: Nedeljko Babic (nbabic@mips.com) - * - * various filters for CELP-based codecs optimized for MIPS - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Reference: libavcodec/celp_filters.c - */ -#include "config.h" -#include "libavutil/attributes.h" -#include "libavutil/common.h" -#include "libavcodec/celp_filters.h" -#include "libavutil/mips/asmdefs.h" - -#if HAVE_INLINE_ASM -#if !HAVE_MIPS32R6 && !HAVE_MIPS64R6 -static void ff_celp_lp_synthesis_filterf_mips(float *out, - const float *filter_coeffs, - const float* in, int buffer_length, - int filter_length) -{ - int i,n; - - float out0, out1, out2, out3; - float old_out0, old_out1, old_out2, old_out3; - float a,b,c; - const float *p_filter_coeffs; - float *p_out; - - a = filter_coeffs[0]; - b = filter_coeffs[1]; - c = filter_coeffs[2]; - b -= filter_coeffs[0] * filter_coeffs[0]; - c -= filter_coeffs[1] * filter_coeffs[0]; - c -= filter_coeffs[0] * b; - - old_out0 = out[-4]; - old_out1 = out[-3]; - old_out2 = out[-2]; - old_out3 = out[-1]; - for (n = 0; n <= buffer_length - 4; n+=4) { - p_filter_coeffs = filter_coeffs; - p_out = out; - - out0 = in[0]; - out1 = in[1]; - out2 = in[2]; - out3 = in[3]; - - __asm__ volatile( - "lwc1 $f2, 8(%[filter_coeffs]) \n\t" - "lwc1 $f1, 4(%[filter_coeffs]) \n\t" - "lwc1 $f0, 0(%[filter_coeffs]) \n\t" - "nmsub.s %[out0], %[out0], $f2, %[old_out1] \n\t" - "nmsub.s %[out1], %[out1], $f2, %[old_out2] \n\t" - "nmsub.s %[out2], %[out2], $f2, %[old_out3] \n\t" - "lwc1 $f3, 12(%[filter_coeffs]) \n\t" - "nmsub.s %[out0], %[out0], $f1, %[old_out2] \n\t" - "nmsub.s %[out1], %[out1], $f1, %[old_out3] \n\t" - "nmsub.s %[out2], %[out2], $f3, %[old_out2] \n\t" - "nmsub.s %[out0], %[out0], $f0, %[old_out3] \n\t" - "nmsub.s %[out3], %[out3], $f3, %[old_out3] \n\t" - "nmsub.s %[out1], %[out1], $f3, %[old_out1] \n\t" - "nmsub.s %[out0], %[out0], $f3, %[old_out0] \n\t" - - : [out0]"+f"(out0), [out1]"+f"(out1), - [out2]"+f"(out2), [out3]"+f"(out3) - : [old_out0]"f"(old_out0), [old_out1]"f"(old_out1), - [old_out2]"f"(old_out2), [old_out3]"f"(old_out3), - [filter_coeffs]"r"(filter_coeffs) - : "$f0", "$f1", "$f2", "$f3", "$f4", "memory" - ); - - for (i = 5; i <= filter_length; i += 2) { - __asm__ volatile( - "lwc1 %[old_out3], -20(%[p_out]) \n\t" - "lwc1 $f5, 16(%[p_filter_coeffs]) \n\t" - PTR_ADDIU 
"%[p_out], -8 \n\t" - PTR_ADDIU "%[p_filter_coeffs], 8 \n\t" - "nmsub.s %[out1], %[out1], $f5, %[old_out0] \n\t" - "nmsub.s %[out3], %[out3], $f5, %[old_out2] \n\t" - "lwc1 $f4, 12(%[p_filter_coeffs]) \n\t" - "lwc1 %[old_out2], -16(%[p_out]) \n\t" - "nmsub.s %[out0], %[out0], $f5, %[old_out3] \n\t" - "nmsub.s %[out2], %[out2], $f5, %[old_out1] \n\t" - "nmsub.s %[out1], %[out1], $f4, %[old_out3] \n\t" - "nmsub.s %[out3], %[out3], $f4, %[old_out1] \n\t" - "mov.s %[old_out1], %[old_out3] \n\t" - "nmsub.s %[out0], %[out0], $f4, %[old_out2] \n\t" - "nmsub.s %[out2], %[out2], $f4, %[old_out0] \n\t" - - : [out0]"+f"(out0), [out1]"+f"(out1), - [out2]"+f"(out2), [out3]"+f"(out3), [old_out0]"+f"(old_out0), - [old_out1]"+f"(old_out1), [old_out2]"+f"(old_out2), - [old_out3]"+f"(old_out3),[p_filter_coeffs]"+r"(p_filter_coeffs), - [p_out]"+r"(p_out) - : - : "$f4", "$f5", "memory" - ); - FFSWAP(float, old_out0, old_out2); - } - - __asm__ volatile( - "nmsub.s %[out3], %[out3], %[a], %[out2] \n\t" - "nmsub.s %[out2], %[out2], %[a], %[out1] \n\t" - "nmsub.s %[out3], %[out3], %[b], %[out1] \n\t" - "nmsub.s %[out1], %[out1], %[a], %[out0] \n\t" - "nmsub.s %[out2], %[out2], %[b], %[out0] \n\t" - "nmsub.s %[out3], %[out3], %[c], %[out0] \n\t" - - : [out0]"+f"(out0), [out1]"+f"(out1), - [out2]"+f"(out2), [out3]"+f"(out3) - : [a]"f"(a), [b]"f"(b), [c]"f"(c) - ); - - out[0] = out0; - out[1] = out1; - out[2] = out2; - out[3] = out3; - - old_out0 = out0; - old_out1 = out1; - old_out2 = out2; - old_out3 = out3; - - out += 4; - in += 4; - } - - out -= n; - in -= n; - for (; n < buffer_length; n++) { - float out_val, out_val_i, fc_val; - p_filter_coeffs = filter_coeffs; - p_out = &out[n]; - out_val = in[n]; - for (i = 1; i <= filter_length; i++) { - __asm__ volatile( - "lwc1 %[fc_val], 0(%[p_filter_coeffs]) \n\t" - "lwc1 %[out_val_i], -4(%[p_out]) \n\t" - PTR_ADDIU "%[p_filter_coeffs], 4 \n\t" - PTR_ADDIU "%[p_out], -4 \n\t" - "nmsub.s %[out_val], %[out_val], %[fc_val], %[out_val_i] \n\t" - - : [fc_val]"=&f"(fc_val), [out_val]"+f"(out_val), - [out_val_i]"=&f"(out_val_i), [p_out]"+r"(p_out), - [p_filter_coeffs]"+r"(p_filter_coeffs) - : - : "memory" - ); - } - out[n] = out_val; - } -} - -static void ff_celp_lp_zero_synthesis_filterf_mips(float *out, - const float *filter_coeffs, - const float *in, int buffer_length, - int filter_length) -{ - int i,n; - float sum_out8, sum_out7, sum_out6, sum_out5, sum_out4, fc_val; - float sum_out3, sum_out2, sum_out1; - const float *p_filter_coeffs, *p_in; - - for (n = 0; n < buffer_length; n+=8) { - p_in = &in[n]; - p_filter_coeffs = filter_coeffs; - sum_out8 = in[n+7]; - sum_out7 = in[n+6]; - sum_out6 = in[n+5]; - sum_out5 = in[n+4]; - sum_out4 = in[n+3]; - sum_out3 = in[n+2]; - sum_out2 = in[n+1]; - sum_out1 = in[n]; - i = filter_length; - - /* i is always greater than 0 - * outer loop is unrolled eight times so there is less memory access - * inner loop is unrolled two times - */ - __asm__ volatile( - "filt_lp_inner%=: \n\t" - "lwc1 %[fc_val], 0(%[p_filter_coeffs]) \n\t" - "lwc1 $f7, 6*4(%[p_in]) \n\t" - "lwc1 $f6, 5*4(%[p_in]) \n\t" - "lwc1 $f5, 4*4(%[p_in]) \n\t" - "lwc1 $f4, 3*4(%[p_in]) \n\t" - "lwc1 $f3, 2*4(%[p_in]) \n\t" - "lwc1 $f2, 4(%[p_in]) \n\t" - "lwc1 $f1, 0(%[p_in]) \n\t" - "lwc1 $f0, -4(%[p_in]) \n\t" - "addiu %[i], -2 \n\t" - "madd.s %[sum_out8], %[sum_out8], %[fc_val], $f7 \n\t" - "madd.s %[sum_out7], %[sum_out7], %[fc_val], $f6 \n\t" - "madd.s %[sum_out6], %[sum_out6], %[fc_val], $f5 \n\t" - "madd.s %[sum_out5], %[sum_out5], %[fc_val], $f4 \n\t" - "madd.s 
%[sum_out4], %[sum_out4], %[fc_val], $f3 \n\t" - "madd.s %[sum_out3], %[sum_out3], %[fc_val], $f2 \n\t" - "madd.s %[sum_out2], %[sum_out2], %[fc_val], $f1 \n\t" - "madd.s %[sum_out1], %[sum_out1], %[fc_val], $f0 \n\t" - "lwc1 %[fc_val], 4(%[p_filter_coeffs]) \n\t" - "lwc1 $f7, -8(%[p_in]) \n\t" - PTR_ADDIU "%[p_filter_coeffs], 8 \n\t" - PTR_ADDIU "%[p_in], -8 \n\t" - "madd.s %[sum_out8], %[sum_out8], %[fc_val], $f6 \n\t" - "madd.s %[sum_out7], %[sum_out7], %[fc_val], $f5 \n\t" - "madd.s %[sum_out6], %[sum_out6], %[fc_val], $f4 \n\t" - "madd.s %[sum_out5], %[sum_out5], %[fc_val], $f3 \n\t" - "madd.s %[sum_out4], %[sum_out4], %[fc_val], $f2 \n\t" - "madd.s %[sum_out3], %[sum_out3], %[fc_val], $f1 \n\t" - "madd.s %[sum_out2], %[sum_out2], %[fc_val], $f0 \n\t" - "madd.s %[sum_out1], %[sum_out1], %[fc_val], $f7 \n\t" - "bgtz %[i], filt_lp_inner%= \n\t" - - : [sum_out8]"+f"(sum_out8), [sum_out7]"+f"(sum_out7), - [sum_out6]"+f"(sum_out6), [sum_out5]"+f"(sum_out5), - [sum_out4]"+f"(sum_out4), [sum_out3]"+f"(sum_out3), - [sum_out2]"+f"(sum_out2), [sum_out1]"+f"(sum_out1), - [fc_val]"=&f"(fc_val), [p_filter_coeffs]"+r"(p_filter_coeffs), - [p_in]"+r"(p_in), [i]"+r"(i) - : - : "$f0", "$f1", "$f2", "$f3", "$f4", "$f5", "$f6", "$f7", "memory" - ); - - out[n+7] = sum_out8; - out[n+6] = sum_out7; - out[n+5] = sum_out6; - out[n+4] = sum_out5; - out[n+3] = sum_out4; - out[n+2] = sum_out3; - out[n+1] = sum_out2; - out[n] = sum_out1; - } -} -#endif /* !HAVE_MIPS32R6 && !HAVE_MIPS64R6 */ -#endif /* HAVE_INLINE_ASM */ - -void ff_celp_filter_init_mips(CELPFContext *c) -{ -#if HAVE_INLINE_ASM -#if !HAVE_MIPS32R6 && !HAVE_MIPS64R6 - c->celp_lp_synthesis_filterf = ff_celp_lp_synthesis_filterf_mips; - c->celp_lp_zero_synthesis_filterf = ff_celp_lp_zero_synthesis_filterf_mips; -#endif -#endif -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Celebrity Killer The Best Song of 2021 by Sidhu Moose Wala and Tion Wayne.md b/spaces/congsaPfin/Manga-OCR/logs/Celebrity Killer The Best Song of 2021 by Sidhu Moose Wala and Tion Wayne.md deleted file mode 100644 index faab541113587584798a963651263b8246d509b6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Celebrity Killer The Best Song of 2021 by Sidhu Moose Wala and Tion Wayne.md +++ /dev/null @@ -1,90 +0,0 @@ - -

    Celebrity Killer: A Hit Song by Sidhu Moose Wala and Tion Wayne

    -

    Introduction

    -

    If you are a fan of Punjabi rap music, you might have heard of the song Celebrity Killer by Sidhu Moose Wala and Tion Wayne. This song is one of the tracks from Sidhu Moose Wala's album Moosetape, which was released in 2021. The song features British rapper Tion Wayne, who is known for his hit songs like Body and I Dunno. Celebrity Killer is a catchy and energetic song that showcases the skills and charisma of both artists.

    -

    In this article, we will explore what Celebrity Killer is about, who are Sidhu Moose Wala and Tion Wayne, and why this song is so popular among the fans. We will also analyze the meaning, music, and video of the song, and see what makes it a hit.

    -

    celebrity killer song download


    Download ——— https://urlca.com/2uOcM0



    -

    Main Body

    -

    What is Celebrity Killer?

    -

    Celebrity Killer is a Punjabi rap song that was released on August 9, 2021, as part of Sidhu Moose Wala's album Moosetape. The song is produced by Steel Banglez, The Kidd, JB Music, A. Singh, M1 on the Beat, and Chris Rich Beats. The song features British rapper Tion Wayne, who adds his own flavor to the track. The song has over 50 million views on YouTube and over 19 million streams on Spotify as of September 2021.

    -

    Who are Sidhu Moose Wala and Tion Wayne?

    -

    Sidhu Moose Wala is a Punjabi rapper, singer, songwriter, and producer from India. He started his career in 2016 with his debut song G Wagon. Since then, he has released many hit songs like So High, Just Listen, Legend, Dhakka, and Brown Shortie. He is known for his raw and aggressive lyrics, his unique flow, and his versatile voice. He has collaborated with many artists like Bohemia, Divine, Mist, Morrisson, and Steel Banglez. He is considered one of the most popular and influential Punjabi rap artists in the world.

    -

    Tion Wayne is a British rapper, singer, and songwriter from London. He started his career in 2010 with his debut mixtape Wayne's World. Since then, he has released many hit songs like I Dunno, Body, Keisha & Becky, Hate On Me, and Loyal. He is known for his catchy hooks, his witty bars, and his diverse style. He has collaborated with many artists like Stormzy, Headie One, Russ Millions, KSI, and Swarmz. He is considered one of the most successful and influential British rap artists in the world.

    -

    Why is Celebrity Killer so popular?

    -

    Celebrity Killer is a popular song because it combines the best of both worlds: Punjabi rap and British rap. The song has a catchy chorus that repeats the phrase "Gucci da ni goli ala sapp chhapeya ni mittran di car te", which means "There is no bullet in the Gucci, there is a snake printed on my friend's car". The chorus sticks because it uses a metaphor to show the power and status of the artists, and the fast-paced, energetic beat matches their flow and delivery. The song showcases the skills and charisma of both Sidhu Moose Wala and Tion Wayne, who rap in Punjabi and English respectively, and appeals to Punjabi and British rap fans as well as the global audience.

    -

    The meaning of Celebrity Killer

    -

    The lyrics of Celebrity Killer

    -

    The lyrics of Celebrity Killer are about the fame and success of the artists, and how they are killing the game with their music. They rap about their lifestyle, their money, their cars, their clothes, and their women, as well as their struggles, enemies, rivals, and haters. They also boast about their confidence, pride, attitude, and swagger, and celebrate their culture, roots, identity, and influence.

    -

    The message of Celebrity Killer

    -

    The message of Celebrity Killer is that the artists are not afraid to express themselves and show off their achievements. They are proud of who they are and what they have done, and they are not intimidated by anyone or anything. While living their dreams and enjoying their lives, they inspire others to follow their passion and chase their goals, challenge the stereotypes and norms that limit them, and celebrate their diversity and unity.

    -

    The music of Celebrity Killer

    -

    The production of Celebrity Killer

    -

    The production of Celebrity Killer is handled by Steel Banglez, The Kidd, JB Music, A. Singh, M1 on the Beat, and Chris Rich Beats. It is a fusion of Punjabi folk music and British drill music, combining traditional instruments like the tumbi, the dhol, and the flute with modern elements like 808s, hi-hats, and synths. The result is a contrast between the old and the new, the east and the west, the rural and the urban.

    -

    The style of Celebrity Killer

    -

    The style of Celebrity Killer is a blend of Punjabi rap and British rap, influenced by both scenes as well as by genres like hip hop, trap, grime, afrobeat, dancehall, and bhangra. It is unique and original, mixing different languages, accents, flows, deliveries, rhymes, metaphors, references, slang, and humor. It is also versatile and adaptable, appealing to different audiences and moods.

    -


    -

    The video of Celebrity Killer

    -

    The visuals of Celebrity Killer

    -

    The visuals of Celebrity Killer are directed by Sukh Sanghera and Teji Sandhu and shot in London and Punjab. The video shows the artists performing in locations such as a mansion, a club, a car park, a farm, a street, and a rooftop, wearing outfits ranging from suits, jackets, and hoodies to turbans, sunglasses, chains, watches, and shoes. They appear alongside dancers, models, fans, friends, and family, and with props like money, cars, bikes, guns, and snakes. The visuals create a contrast between the luxury and the simplicity, the glamour and the grit, the fame and the reality.

    -

    The reception of Celebrity Killer

    -

    The reception of Celebrity Killer has been very positive and enthusiastic. The song has received praise from fans and critics, support and recognition from other artists and celebrities, and has broken records and achieved milestones. It has also created a lot of buzz on social media and other platforms, generated memes and trends, and sparked discussions and debates.

    -

    Conclusion

    -

    Celebrity Killer is a hit song by Sidhu Moose Wala and Tion Wayne that showcases their talent and personality. It is a fusion of Punjabi rap and British rap that appeals to a global audience, and it is about the fame and success of the artists and how they are killing the game with their music. With its catchy chorus, fast-paced beat, unique style, and meaningful message, plus a stunning video that shows the contrast between the artists' different worlds, the song has become very popular and influential, receiving a lot of positive feedback and attention.

    -

    If you want to listen to Celebrity Killer, you can download it from various platforms like YouTube, Spotify, Apple Music, Amazon Music, Gaana, JioSaavn, Wynk Music, Hungama Music, Resso, SoundCloud, Audiomack, Deezer, Tidal, Napster, Pandora, iHeartRadio, Shazam, Anghami, Boomplay Music, YouTube Music, Google Play Music, or any other platform that you prefer.

    -

    If you want to learn more about Sidhu Moose Wala and Tion Wayne, you can follow them on their social media accounts like Instagram, Twitter, Facebook, Snapchat, TikTok, or any other platform that you prefer.

    -

    If you enjoyed this article, please share it with your friends and family who might be interested in Celebrity Killer. Also, please leave your comments and feedback below. We would love to hear from you.

    -

    FAQs

    -

    Here are some frequently asked questions about Celebrity Killer:

    -
      -
    • Q: What is the meaning of Celebrity Killer?
    • A: Celebrity Killer is a Punjabi rap song that means that the artists are killing the game with their music and their fame.
    • Q: Who are Sidhu Moose Wala and Tion Wayne?
    • A: Sidhu Moose Wala is a Punjabi rapper from India and Tion Wayne is a British rapper from London.
    • Q: How popular is Celebrity Killer?
    • A: Celebrity Killer is very popular as it has over 50 million views on YouTube and over 19 million streams on Spotify as of September 2021.
    • Q: How can I download Celebrity Killer?
    • A: You can download Celebrity Killer from various platforms like YouTube, Spotify, Apple Music, Amazon Music, Gaana, JioSaavn, Wynk Music, Hungama Music, Resso, SoundCloud, Audiomack, Deezer, Tidal, Napster, Pandora, iHeartRadio, Shazam, Anghami, Boomplay Music, YouTube Music, Google Play Music, or any other platform that you prefer.
    • Q: How can I follow Sidhu Moose Wala and Tion Wayne?
    • A: You can follow Sidhu Moose Wala and Tion Wayne on their social media accounts like Instagram, Twitter, Facebook, Snapchat, TikTok, or any other platform that you prefer.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/LINE MAN Rider How to Download and Install the App on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/LINE MAN Rider How to Download and Install the App on Your Android Device.md deleted file mode 100644 index a9ff435a159920bd611f0ff730c7b44a902b4803..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/LINE MAN Rider How to Download and Install the App on Your Android Device.md +++ /dev/null @@ -1,100 +0,0 @@ - -

    What is LINE MAN Rider and How to Download it on iOS?

    -

    If you are looking for a fast and convenient way to order food, groceries, parcels and more, you might want to check out LINE MAN Rider. It is a popular delivery service app in Thailand that connects you with thousands of riders who can deliver anything you need within minutes. And if you are interested in becoming a rider yourself, you can also join the app and earn extra income by delivering orders to customers.

    -

    line man rider apk ios


    Download Zip --->>> https://urlca.com/2uOdmh



    -

    In this article, we will tell you everything you need to know about LINE MAN Rider, including its features, benefits, and how to download it on your iOS device. Whether you want to use the app as a customer or a rider, we will guide you through the steps and tips to make the most out of it.

    -

    Features of LINE MAN Rider

    -

    LINE MAN Rider is more than just a delivery app. It is also a platform that connects customers with riders who can provide various services. Here are some of the features that make LINE MAN Rider stand out from other delivery apps.

    -

    Flexible and convenient delivery service

    -

    With LINE MAN Rider, you can order anything you want from over 10,000 shops and restaurants across Bangkok and other cities in Thailand. You can choose from a wide range of categories, such as food, groceries, flowers, documents, parcels, laundry, medicine, and more. You can also request a rider to buy something for you from any store or location.

    -

    Once you place your order, you can track its status and location in real time on the app. You can also chat with your rider directly through the app if you have any questions or special instructions. You can pay for your order using cash on delivery or online payment methods such as credit card, debit card, or LINE Pay.

    -

    -

    Attractive income and benefits for riders

    -

    If you have a motorcycle and a valid driver's license, you can register as a LINE MAN Rider and start earning money by delivering orders to customers. You can work anytime and anywhere you want, without any fixed schedule or quota. You can also choose which orders to accept or decline based on your preference.

    -

    As a LINE MAN Rider, you can earn an average income of over 26,000 baht per month (depending on your performance). You can also enjoy various incentives and bonuses, such as a guaranteed minimum income of 16,000 baht in your first month (for selected areas), free credits and delivery boxes (for new riders in 2022), and discounts on fuel, insurance, maintenance, and more.

    -

    How to Download LINE MAN Rider on iOS

    -

    If you have an iOS device, such as an iPhone or an iPad, you might be wondering how to download LINE MAN Rider on your device. There are two ways to do this: the official way and the alternative way. Let's take a look at both of them.

    -

    The official way

    -

    The official way to download LINE MAN Rider on iOS is to get it from the App Store or the official website of LINE MAN Rider. Here are the steps to follow:

    -
      -
    1. Open the App Store on your iOS device and search for "LINE MAN Rider" or go to this link: https://apps.apple.com/th/app/line-man-rider/id1476546564
    2. Tap on the "Get" button and wait for the app to download and install on your device.
    3. Open the app and sign in with your LINE account. If you don't have one, you can create one for free.
    4. Set up your profile by entering your name, phone number, email address, and other details.
    5. Choose whether you want to use the app as a customer or a rider. If you want to be a rider, you will need to provide some additional information and documents, such as your ID card, driver's license, motorcycle registration, and bank account.
    -

    Congratulations! You have successfully downloaded LINE MAN Rider on your iOS device using the official way.

    -

    The alternative way

    -

    The alternative way to download LINE MAN Rider on iOS is to get it from a third-party source or an APK file. This method is not recommended by LINE MAN Rider, as it may pose some risks and issues, such as malware, viruses, compatibility problems, or legal violations. However, if you still want to try it, here are the steps to follow:

    -
      -
    1. Find a reliable and trustworthy website that offers the LINE MAN Rider APK file for iOS devices. You can search for it on Google or use this link: https://apkpure.com/line-man-rider/com.linecorp.lmnr/download?from=details
    2. Download the APK file to your computer and connect your iOS device to your computer using a USB cable.
    3. Transfer the APK file to your iOS device using iTunes or any other file manager app.
    4. On your iOS device, go to Settings > General > Device Management and trust the developer of the APK file.
    5. Open the APK file and install it on your device. You may need to allow installation from unknown sources if prompted.
    6. Open the app and sign in with your LINE account. Set up your profile and choose whether you want to be a customer or a rider.
    -

    You have successfully downloaded LINE MAN Rider on your iOS device using the alternative way. However, be aware that this method may not work properly or may cause some problems with your device or account. Use it at your own risk.

    -

    Conclusion

    -

    LINE MAN Rider is a great app that allows you to order anything you need from thousands of shops and restaurants in Thailand, or to earn extra income by delivering orders to customers. It has many features and benefits that make it stand out from other delivery apps. If you want to download LINE MAN Rider on your iOS device, you can either use the official way or the alternative way. However, we recommend using the official way, as it is safer and more reliable than the alternative way.

    -

    If you have any questions or feedback about LINE MAN Rider, feel free to contact us through the app or visit our website: https://linemanth.com/en/

    -

    FAQs

    -

    Is LINE MAN Rider available in my country?

    -

    LINE MAN Rider is currently available only in Thailand. However, we are planning to expand our service to other countries in the future. Stay tuned for more updates!

    -

    Is LINE MAN Rider safe and reliable?

    -

    Yes, LINE MAN Rider is safe and reliable. All our riders are verified and trained by us. They also follow strict hygiene and safety protocols when handling your orders. You can also track your order in real time and chat with your rider through the app. If you have any issues or complaints, you can contact our customer service team anytime.

    -

    How can I contact LINE MAN Rider customer service?

    -

    You can contact LINE MAN Rider customer service through the app or by calling 02-021-2500 (Monday-Friday 9:00-18:00). You can also email us at support@linemanth.com. We are always happy to help you.

    -

    How can I cancel or modify my order on LINE MAN Rider?

    -

    If you want to cancel or modify your order on LINE MAN Rider, you can do so through the app before the rider accepts your order. Once the rider accepts your order, you cannot cancel or modify it anymore. However, you can still chat with your rider and ask them to make some changes if possible. Please note that you may be charged a cancellation fee or a price difference if you cancel or modify your order.

    -

    How can I rate or tip my rider on LINE MAN Rider?

    -

    After your order is delivered, you can rate or tip your rider on LINE MAN Rider through the app. You can give your rider a star rating from 1 to 5 and leave a comment if you wish. You can also tip your rider any amount you want using online payment methods such as credit card, debit card, or LINE Pay. Your feedback and tips are greatly appreciated by our riders and help us improve our service quality.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ultimate Motorcycle Simulator Hack How to Download and Install the Mod APK.md b/spaces/congsaPfin/Manga-OCR/logs/Ultimate Motorcycle Simulator Hack How to Download and Install the Mod APK.md deleted file mode 100644 index 4833174938c2b21e4bcf7b1796c15a41fcfc07ff..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ultimate Motorcycle Simulator Hack How to Download and Install the Mod APK.md +++ /dev/null @@ -1,92 +0,0 @@ -
    -

    Download Hack Ultimate Motorcycle Simulator: How to Get Unlimited Money and Customization Options

    -

    If you are a fan of motorcycle games, you might have heard of Ultimate Motorcycle Simulator, one of the most realistic and fun motorcycle simulators on mobile. In this game, you can ride various types of motorcycles, from racing bikes to off-road bikes, in a huge open world map with detailed environments. You can also customize your own motorbike with countless vinyls and parts, and enjoy the realistic physics and sound effects.

    -

    download hack ultimate motorcycle simulator


    Download ››››› https://urlca.com/2uOfaM



    -

    However, as much as you love this game, you might also feel frustrated by some of its limitations. For example, you need to earn money by completing missions or watching ads to unlock new motorcycles and accessories. You might also get annoyed by the frequent ads that pop up on your screen. If you want to get rid of these problems and enjoy the game to the fullest, you might be interested in downloading hack ultimate motorcycle simulator.

    -

    How to download hack ultimate motorcycle simulator

    -

    Downloading hack ultimate motorcycle simulator is not as hard as you might think. All you need is a mod APK file that contains the hacked version of the game. A mod APK is a modified version of an original APK (Android Package Kit), which is the file format used to distribute and install applications on Android devices. A mod APK can have extra features or unlocked content that are not available in the original version.
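    Since an APK is just a ZIP archive with a particular layout, you can peek inside one before installing it. The snippet below is a minimal sketch, assuming Python 3 and an .apk file already on disk; the filename is a placeholder used only for illustration, and a quick look like this does not replace a proper antivirus scan.

```python
import zipfile

# Placeholder filename used only for illustration
APK_PATH = "ultimate_motorcycle_simulator_mod.apk"

with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()
    # Every valid APK carries an AndroidManifest.xml at its root
    print("Has AndroidManifest.xml:", "AndroidManifest.xml" in names)
    # List the first few entries to see what the package actually bundles
    for name in names[:10]:
        info = apk.getinfo(name)
        print(f"{name} ({info.file_size} bytes)")
```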

    -

    Here are the steps to download hack ultimate motorcycle simulator:

    -

    -

    Step 1: Find a reliable source of mod APK

    -

    The first step is to find a website that offers the mod APK file for ultimate motorcycle simulator. You can search on Google or use a specific website that specializes in mod APKs. However, be careful not to download from shady or untrustworthy sources, as they might contain malware or viruses that can harm your device or steal your personal information. One of the websites that we recommend is APKDone, which has a verified and updated mod APK for ultimate motorcycle simulator.

    -

    Step 2: Download and install the mod APK

    -

    The second step is to download and install the mod APK file on your device. Before you do that, make sure that you have enabled the option to install apps from unknown sources in your device settings. This will allow you to install apps that are not from the official Google Play Store. Then, follow these steps:

    -
      -
    • Go to the website that you chose and click on the download button for the mod APK file.
    • Wait for the download to finish and locate the file in your device storage.
    • Tap on the file and follow the instructions to install it.
    • If you already have the original version of ultimate motorcycle simulator installed, you might need to uninstall it first or overwrite it with the mod APK.
    -

    Step 3: Enjoy the hacked features

    -

    The third step is to enjoy the hacked features of ultimate motorcycle simulator. Once you have installed the mod APK, you can launch the game and see the differences. You will notice that you have unlimited money and customization options, and no ads will bother you anymore. You can buy any motorcycle and accessory that you want, and create your own dream motorbike. You can also explore the open world map without any restrictions or interruptions.

    -

    Benefits of hacking ultimate motorcycle simulator

    -

    There are many benefits of hacking ultimate motorcycle simulator, such as:

    -

    Unlimited money

    -

    One of the main benefits of hacking ultimate motorcycle simulator is that you can get unlimited money in the game, which means you can buy any motorcycle and accessory that you want. You don't have to worry about completing missions or watching ads to earn money. You can also upgrade your motorbike to the maximum level and enjoy the best performance and speed.

    -

    Unlimited customization

    -

    Another benefit of hacking ultimate motorcycle simulator is that you can get unlimited customization options for your motorbike. You can choose from countless vinyls and parts, and change the color, design, and style of your motorbike. You can also adjust the suspension, engine, brakes, and tires of your motorbike to suit your preferences. You can create your own unique and personalized motorbike that reflects your personality and taste.

    -

    No ads

    -

    A final benefit of hacking ultimate motorcycle simulator is that you can get rid of the annoying ads that pop up on your screen every time you play the game. Ads can be very distracting and irritating, especially when they interrupt your gameplay or force you to watch them to get rewards. By hacking ultimate motorcycle simulator, you can enjoy a smooth and uninterrupted gaming experience without any ads.

    -

    Risks of hacking ultimate motorcycle simulator

    -

    However, hacking ultimate motorcycle simulator also comes with some risks that you should be aware of, such as:

    -

    Security issues

    -

    One of the risks of hacking ultimate motorcycle simulator is that you might expose your device or personal information to security threats. As mentioned earlier, not all sources of mod APKs are reliable or trustworthy. Some of them might contain malware or viruses that can damage your device or steal your data. Therefore, you should always be careful when downloading and installing mod APKs from unknown sources, and use a reputable antivirus software to scan them before opening them.
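    Beyond antivirus scanning, one extra precaution is to compare the downloaded file against a checksum when the download site publishes one. The snippet below is a minimal sketch, assuming Python 3; the filename and expected hash are placeholders you would replace with the values from the site you actually used.

```python
import hashlib

# Placeholder values used only for illustration
APK_PATH = "downloaded_mod.apk"
EXPECTED_SHA256 = "paste-the-published-checksum-here"

sha256 = hashlib.sha256()
with open(APK_PATH, "rb") as f:
    # Read in 1 MiB chunks so large APKs do not need to fit in memory
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print("SHA-256:", digest)
print("Matches published checksum:", digest == EXPECTED_SHA256.lower())
```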

    -

    Legal issues

    -

    Another risk of hacking ultimate motorcycle simulator is that you might violate the terms of service or the intellectual property rights of the game developer. Modifying or distributing the original APK file without the permission of the developer is considered illegal and unethical. You might face legal consequences or penalties if you are caught doing so. Therefore, you should always respect the rights and efforts of the game developer and support them by playing the original version of the game.

    -

    Ethical issues

    -

    A final risk of hacking ultimate motorcycle simulator is that you might ruin the fun and challenge of the game for yourself and others. By hacking ultimate motorcycle simulator, you are essentially cheating and getting an unfair advantage over other players. You are also missing out on the satisfaction and achievement of playing the game legitimately and earning your rewards through hard work and skill. Therefore, you should always play the game fairly and honestly, and enjoy it for what it is.

    -

    Conclusion

    -

    In conclusion, downloading hack ultimate motorcycle simulator is a way to get unlimited money and customization options in the game, as well as to remove the ads. However, it also comes with some risks, such as security issues, legal issues, and ethical issues. Therefore, you should weigh the pros and cons carefully before deciding to hack ultimate motorcycle simulator, and always be responsible for your actions.

    -

    Here are some FAQs that you might have about downloading hack ultimate motorcycle simulator:

    -