diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acrobat Distiller 9 Full Version Free Download The Ultimate Guide to PDF Creation and Conversion.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acrobat Distiller 9 Full Version Free Download The Ultimate Guide to PDF Creation and Conversion.md deleted file mode 100644 index 204482e7c595c3233974463e2ffbf6f4cd071185..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acrobat Distiller 9 Full Version Free Download The Ultimate Guide to PDF Creation and Conversion.md +++ /dev/null @@ -1,133 +0,0 @@ - -

Acrobat Distiller 9: What Is It and How to Download It for Free

-

If you are looking for a reliable and easy way to create high-quality PDF files from any application, you might want to consider using Acrobat Distiller 9. This software is part of Adobe Acrobat 9, which is a comprehensive solution for creating, editing, and sharing PDF documents. In this article, we will explain what Acrobat Distiller 9 is, what its main features and benefits are, and how you can download it for free. We will also show you how to use Acrobat Distiller 9 to convert PostScript files to PDFs, manage the conversion queue, and customize the Adobe PDF settings.

-

What Is Acrobat Distiller 9?

-

Acrobat Distiller 9 is a program that allows you to convert PostScript files (PS) or Encapsulated PostScript files (EPS) to Portable Document Format files (PDF). PostScript files are created by applications that can print, such as word processors, spreadsheets, or graphics programs. They contain instructions that tell printers how to render the document on paper. PDF files are universal files that can be viewed, printed, or shared on any device or platform. They preserve the layout, fonts, colors, and graphics of the original document.

-

acrobat distiller 9 full version free download


DOWNLOAD ———>>> https://byltly.com/2uKzqW



-

The Main Features of Acrobat Distiller 9

-

Acrobat Distiller 9 has several features that make it a powerful tool for creating PDF files. Some of these features are:

- -

The Benefits of Using Acrobat Distiller 9

-

Acrobat Distiller 9 offers many benefits for users who need to create PDF files from various applications. Some of these benefits are:

- -

How to Download Acrobat Distiller 9 for Free?

-

If you want to download Acrobat Distiller 9 for free, you have two options: the official way and the alternative way. Let's see how they work.

-

The Official Way to Download Acrobat Distiller 9

-

The official way to download Acrobat Distiller 9 is to download Adobe Acrobat 9 Pro Extended trial version from Adobe's website. This trial version includes Acrobat Distiller 9 as well as other features of Adobe Acrobat 9 Pro Extended, such as Adobe Presenter, Adobe LiveCycle Designer ES, Adobe 3D Reviewer, and more. You can use this trial version for free for up to 30 days.

-

To download Adobe Acrobat 9 Pro Extended trial version, follow these steps:

-
    -
  1. Go to https://www.adobe.com/downloads/other-downloads.html and scroll down to find Adobe Acrobat Pro Extended (Windows only).
  2. Click on Try Now button and sign in with your Adobe ID or create one if you don't have one.
  3. Select your language and click on Download Now button. A file named ADBEPHSPCS4_LS1.exe will be downloaded.
  4. Double-click on the downloaded file and follow the instructions on the screen to install Adobe Acrobat Pro Extended trial version on your computer.
  5. Launch Adobe Acrobat Pro Extended from your desktop or start menu and enjoy using it for free for up to 30 days.
-

The Alternative Way to Download Acrobat Distiller 9

-

The alternative way to download Acrobat Distiller 9 is to use a third-party website that offers free downloads of software. However, this method is not recommended because it may expose your computer to viruses, malware, or other security risks. Moreover, it may violate the terms and conditions of Adobe's license agreement. Therefore, we advise you to use this method at your own risk and discretion.

-

To download Acrobat Distiller 9 from a third-party website, follow these steps:

-
    -
  1. Go to https://en.softonic.com/download/adobe-acrobat-distiller/windows/post-download and click on Free Download button.
  2. A file named adobe-acrobat-distiller-4-0.exe will be downloaded. Double-click on it and follow the instructions on the screen to install Acrobat Distiller 9 on your computer.
  3. Launch Acrobat Distiller 9 from your desktop or start menu and use it as long as you want.
-

How to Use Acrobat Distiller 9 to Create PDFs?

-

Now that you have downloaded and installed Acrobat Distiller 9 on your computer, you can start using it to create PDFs from any application that can print. Here are some tips on how to use Acrobat Distiller 9 effectively:

-

How to get acrobat distiller 9 for free
-Acrobat distiller 9 crack download
-Acrobat distiller 9 serial key generator
-Acrobat distiller 9 license key free
-Acrobat distiller 9 activation code online
-Acrobat distiller 9 offline installer download
-Acrobat distiller 9 portable version download
-Acrobat distiller 9 full setup file download
-Acrobat distiller 9 latest update download
-Acrobat distiller 9 patch file download
-Acrobat distiller 9 torrent download link
-Acrobat distiller 9 direct download link
-Acrobat distiller 9 alternative software free
-Acrobat distiller 9 compatible windows versions
-Acrobat distiller 9 system requirements
-Acrobat distiller 9 features and benefits
-Acrobat distiller 9 user guide pdf download
-Acrobat distiller 9 tutorial videos online
-Acrobat distiller 9 tips and tricks
-Acrobat distiller 9 best practices and recommendations
-Acrobat distiller 9 reviews and ratings
-Acrobat distiller 9 customer testimonials and feedback
-Acrobat distiller 9 comparison with other pdf tools
-Acrobat distiller 9 pros and cons
-Acrobat distiller 9 advantages and disadvantages
-Acrobat distiller 9 price and discounts
-Acrobat distiller 9 coupon codes and offers
-Acrobat distiller 9 free trial period and duration
-Acrobat distiller 9 refund policy and guarantee
-Acrobat distiller 9 customer support and service
-Acrobat distiller 9 technical issues and solutions
-Acrobat distiller 9 error messages and fixes
-Acrobat distiller 9 troubleshooting steps and guides
-Acrobat distiller 9 frequently asked questions and answers
-Acrobat distiller 9 forum and community online
-Acrobat distiller 9 blog and news updates
-Acrobat distiller 9 webinar and training sessions online
-Acrobat distiller 9 case studies and success stories online
-Acrobat distiller 9 awards and recognition online
-Acrobat distiller 9 legal and ethical issues online
-How to uninstall acrobat distiller 9 from windows pc
-How to upgrade acrobat distiller 9 to latest version
-How to downgrade acrobat distiller 9 to previous version
-How to backup acrobat distiller 9 settings and files
-How to restore acrobat distiller 9 settings and files
-How to customize acrobat distiller 9 preferences and options
-How to optimize acrobat distiller 9 performance and speed
-How to secure acrobat distiller 9 from malware and viruses
-How to integrate acrobat distiller 9 with other applications
-How to convert pdf files using acrobat distiller 9

-

How to Convert PostScript Files to PDFs with Acrobat Distiller 9

-

To convert PostScript files (PS) or Encapsulated PostScript files (EPS) to PDFs with Acrobat Distiller 9, follow these steps:

-
    -
  1. In your application that can print, choose File > Print and select Adobe PDF as the printer name.
  2. In the Print dialog box, click on Properties button and select an Adobe PDF setting from the Default Settings drop-down menu. You can also click on Edit button to modify or create your own custom setting.
  3. In the same dialog box, click on OK button and then click on Print button. A Save As dialog box will appear where you can choose a name and location for your PostScript file.
  4. Open Acrobat Distiller 9 from your desktop or start menu and drag-and-drop your PostScript file into its window. Alternatively, you can choose File > Open in Acrobat Distiller 9 and browse for your PostScript file.
  5. The conversion process will start automatically and a progress bar will show you its status. When it is done, a new PDF file will be created in the same folder as your PostScript file.
  6. You can double-click on the new PDF file to open it in Adobe Reader or any other PDF viewer application.
-

How to Manage the Conversion Queue in Acrobat Distiller 9

-

How to Customize the Adobe PDF Settings in Acrobat Distiller 9

-

Acrobat Distiller 9 allows you to customize the Adobe PDF settings that control the quality and size of the output PDF file. You can edit the existing settings or create your own custom settings. To customize the Adobe PDF settings in Acrobat Distiller 9, follow these steps:

-
    -
  1. In Acrobat Distiller 9, choose Settings > Edit Adobe PDF Settings. A dialog box will appear where you can see and modify the settings for the selected Adobe PDF setting.
  2. In the General tab, you can change the description, compatibility, resolution, and other options for your PDF file.
  3. In the Images tab, you can change the compression, downsampling, and color conversion options for your images.
  4. In the Fonts tab, you can change the embedding and subsetting options for your fonts.
  5. In the Color tab, you can change the color management and conversion options for your colors.
  6. In the Advanced tab, you can change the transparency flattening, optimization, and security options for your PDF file.
  7. In the Standards tab, you can change the standards compliance and reporting options for your PDF file.
  8. When you are done with your changes, click on Save As button and give a name to your custom Adobe PDF setting. You can also click on OK button to overwrite the existing setting.
  9. You can now use your custom Adobe PDF setting to convert your PostScript files to PDFs with Acrobat Distiller 9.
-

Conclusion

-

Acrobat Distiller 9 is a useful software that lets you create high-quality PDF files from any application that can print. It has many features and benefits that make it a powerful tool for PDF creation. You can download it for free either from Adobe's website or from a third-party website. However, we recommend using the official way to avoid any security risks or license violations. You can also use Acrobat Distiller 9 to convert PostScript files to PDFs, manage the conversion queue, and customize the Adobe PDF settings. We hope this article has helped you understand what Acrobat Distiller 9 is and how to download it and use it for free.

-

FAQs

-

Here are some frequently asked questions about Acrobat Distiller 9:

- -

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clonedvd-7-0-0-10-ultimate-crack How to Backup Edit and Enjoy Your DVD Collection.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clonedvd-7-0-0-10-ultimate-crack How to Backup Edit and Enjoy Your DVD Collection.md deleted file mode 100644 index b9889b2c7494d6ee67a8dd50a9c1347b9e556e6a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clonedvd-7-0-0-10-ultimate-crack How to Backup Edit and Enjoy Your DVD Collection.md +++ /dev/null @@ -1,150 +0,0 @@ -
-

CloneDVD 7 Ultimate 7.0.0.10 Crack: A Complete Guide

-

If you are looking for a powerful and easy-to-use DVD copying and converting software, you may have heard of CloneDVD 7 Ultimate. This software is designed to meet your various DVD needs, such as cloning, ripping, creating, and converting DVDs. But what exactly is CloneDVD 7 Ultimate and how can you get it for free? In this article, we will give you a complete guide on CloneDVD 7 Ultimate 7.0.0.10 crack, including its features, system requirements, download and installation steps, and usage tips.

-

Clonedvd-7-0-0-10-ultimate-crack


DOWNLOAD ->>->>->> https://byltly.com/2uKyic



-

What is CloneDVD 7 Ultimate?

-

CloneDVD 7 Ultimate is a comprehensive DVD solution that allows you to clone, copy, backup, rip, create, and convert any DVD disc or video file. It supports all popular video formats and devices, such as AVI, MP4, MPG, WMV, MOV, iPhone, iPad, Android phones, etc. It also enables you to edit and customize your DVDs and videos with various effects and settings.

-

CloneDVD 7 Ultimate has four main modules: DVD Copy, DVD Ripper, DVD Creator, and Video Converter. Each module has its own functions and features that we will introduce in the next section.

-

Features of CloneDVD 7 Ultimate

-

DVD Copy

-

The DVD Copy module allows you to clone, copy, and backup any DVD disc or folder with high quality and fast speed. You can choose from multiple copy modes and languages according to your needs. You can also compress or split a DVD9 disc into two DVD5 discs.

-

DVD Ripper

-

The DVD Ripper module allows you to rip any DVD disc or folder into other video formats that can be played on various devices. You can choose from a wide range of output formats and profiles according to your device type and preference. You can also edit and adjust the video parameters such as resolution, bitrate, frame rate, aspect ratio, etc.

-

DVD Creator

-

The DVD Creator module allows you to create your own DVD masterpieces from your collected videos or movies. You can drag and drop any video file into the program and burn it to a blank DVD disc or save it as an ISO file or a DVD folder. You can also customize your DVD menu with different templates, backgrounds, music, etc.

-

Video Converter

-

The Video Converter module allows you to convert any video file between different formats with high quality and fast speed. You can choose from a large number of output formats and profiles according to your device type and preference. You can also edit and enhance your videos with various effects and settings.

-

System Requirements for CloneDVD 7 Ultimate

-

Before you download and install CloneDVD 7 Ultimate 7.0.0.10 crack, you need to make sure that your computer meets the following system requirements:

-

Clone DVD 7 Ultimate Multilingual Incl Crack
-CloneDVD 7 Ultimate Crack with License Code
-CloneDVD 7 Ultimate Serial Number
-CloneDVD 7 Ultimate Full Version Free Download
-CloneDVD 7 Ultimate Keygen
-CloneDVD 7 Ultimate Patch
-CloneDVD 7 Ultimate Activation Code
-CloneDVD 7 Ultimate Registration Code
-CloneDVD 7 Ultimate Portable
-CloneDVD 7 Ultimate Review
-CloneDVD 7 Ultimate Tutorial
-CloneDVD 7 Ultimate Features
-CloneDVD 7 Ultimate System Requirements
-CloneDVD 7 Ultimate DVD Copy Software
-CloneDVD 7 Ultimate DVD Ripper Software
-CloneDVD 7 Ultimate DVD Creator Software
-CloneDVD 7 Ultimate Video Converter Software
-CloneDVD 7 Ultimate DVD Cloner
-CloneDVD 7 Ultimate DVD Backup
-CloneDVD 7 Ultimate DVD Burner
-CloneDVD 7 Ultimate DVD Editor
-CloneDVD 7 Ultimate DVD Maker
-CloneDVD 7 Ultimate DVD Slideshow
-CloneDVD 7 Ultimate Video Editor
-CloneDVD 7 Ultimate Video Maker
-CloneDVD 7 Ultimate Video Slideshow
-CloneDVD 7 Ultimate Convert DVD to MP4
-CloneDVD 7 Ultimate Convert DVD to AVI
-CloneDVD 7 Ultimate Convert DVD to WMV
-CloneDVD 7 Ultimate Convert DVD to MOV
-CloneDVD 7 Ultimate Convert DVD to MKV
-CloneDVD 7 Ultimate Convert DVD to FLV
-CloneDVD 7 Ultimate Convert DVD to MP3
-CloneDVD 7 Ultimate Convert DVD to iPhone
-CloneDVD 7 Ultimate Convert DVD to iPad
-CloneDVD 7 Ultimate Convert DVD to Android
-CloneDVD 7 Ultimate Rip DVD to MP4
-CloneDVD 7 Ultimate Rip DVD to AVI
-CloneDVD 7 Ultimate Rip DVD to WMV
-CloneDVD 7 Ultimate Rip DVD to MOV
-CloneDVD 7 Ultimate Rip DVD to MKV
-CloneDVD 7 Ultimate Rip DVD to FLV
-CloneDVD 7 Ultimate Rip DVD to MP3
-CloneDVD 7 Ultimate Rip DVD to iPhone
-CloneDVD 7 Ultimate Rip DVD to iPad
-CloneDVD 7 Ultimate Rip DVD to Android

- -

How to Download and Install CloneDVD 7 Ultimate 7.0.0.10 Crack?

-

If you want to enjoy the full features of CloneDVD 7 Ultimate without paying for it, you need to download and install its cracked version from a reliable source. Here are the steps you need to follow:

-

Step 1: Download the Setup File and Crack File

-

The first step is to download the setup file and the crack file of CloneDVD 7 Ultimate from a trusted website. For example, you can download them from this link. The setup file is named CloneDVDSetup.exe and the crack file is named CloneDVDCrack.exe. Save them in a folder on your computer.

-

Step 2: Run the Setup File and Follow the Instructions

-

The second step is to run the setup file and follow the instructions on the screen to install CloneDVD 7 Ultimate on your computer. Choose a destination folder for the installation and agree to the terms and conditions.

-

Step 3: Copy and Paste the Crack File into the Installation Folder

-

The third step is to copy and paste the crack file into the installation folder of CloneDVD 7 Ultimate on your computer. The installation folder is usually located at C:\Program Files (x86)\CloneDVDCrack\CloneDVDCrack.exe. Replace the original file with the crack file.

-

Step 4: Enjoy the Full Version of CloneDVD 7 Ultimate

-

The fourth step is to enjoy the full version of CloneDVD 7 Ultimate without any limitations or restrictions. You can launch the program from your desktop shortcut or start menu.

-

How to Use CloneDVD 7 Ultimate 7.0.0.10 Crack?

-

Now that you have installed CloneDVD 7 Ultimate crack successfully on your computer, you may wonder how to use it effectively for your various DVD needs. Here are some tips on how to use each module of CloneDVD 7 Ultimate:

-

How to Copy a DVD with CloneDVD 7 Ultimate?

-
    -
  1. Launch CloneDVD 7 Ultimate and select "Clone DVD" from the main interface.
  2. Insert the source DVD disc into your DVD drive or choose a DVD folder from your computer.
  3. Select an output target from "Copy as" option: ISO Image File (to save as an ISO file), DVD Folder (to save as a folder), or Writer Device (to burn directly).
  4. Select a copy mode from "Copy Mode" option: Entire Disc (to copy all contents), Main Movie (to copy only main movie), Customize (to select specific titles), Split Disc (to split a large disc into two smaller ones).
  5. Select an output quality from "Output Quality" option: High Quality (to keep original quality), Standard Quality (to reduce size slightly), Compress Quality (to reduce size significantly).
  6. Select an output language from "Audio" option: Auto (to keep original language), English (to change audio language), Other Languages (to select other languages).
  7. Select an output subtitle from "Subtitle" option: Auto (to keep original subtitle), English (to change subtitle language), Other Languages (to select other languages), None (to remove subtitle).
  8. Click "Start" button to begin copying process.
-

How to Rip a DVD with CloneDVD 7 Ultimate?

-
    -
  1. Launch CloneDVD 7 Ultimate and select "Rip DVD" from the main interface.
  2. Insert the source DVD disc into your DVD drive or choose a DVD folder from your computer.
  3. Select an output format from "Output Format" option according to your device type or preference.
  4. Select an output folder from "Output Folder" option where you want to save the ripped files.
  5. Click "Start" button to begin ripping process.
-

How to Create a DVD with CloneDVD 7 Ultimate?

-
    -
  1. Launch CloneDVD 7 Ultimate and select "Create DVD" from the main interface.
  2. Drag and drop any video file into the program or click "Add File" button to browse and select video files from your computer.
  3. Select a DVD menu template from "Menu Template" option according to your preference. You can also customize the menu with different backgrounds, music, buttons, etc.
  4. Select a DVD disc type from "DVD Type" option: DVD-5 (4.7GB) or DVD-9 (8.5GB).
  5. Select an output target from "Output Target" option: ISO Image File (to save as an ISO file), DVD Folder (to save as a folder), or Writer Device (to burn directly).
  6. Click "Start" button to begin creating process.
-

How to Convert a Video with CloneDVD 7 Ultimate?

-
    -
  1. Launch CloneDVD 7 Ultimate and select "Video Converter" from the main interface.
  2. Drag and drop any video file into the program or click "Add File" button to browse and select video files from your computer.
  3. Select an output format from "Output Format" option according to your device type or preference.
  4. Select an output folder from "Output Folder" option where you want to save the converted files.
  5. Click "Start" button to begin converting process.
-

Conclusion

-

In conclusion, CloneDVD 7 Ultimate 7.0.0.10 crack is a comprehensive DVD solution that allows you to clone, copy, backup, rip, create, and convert any DVD disc or video file. It supports all popular video formats and devices, such as AVI, MP4, MPG, WMV, MOV, iPhone, iPad, Android phones, etc. It also enables you to edit and customize your DVDs and videos with various effects and settings. You can download and install CloneDVD 7 Ultimate crack for free from a reliable source and enjoy its full features without any limitations or restrictions. We hope this article has given you a complete guide on CloneDVD 7 Ultimate crack and how to use it effectively for your various DVD needs.

-

FAQs

- -

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Tagalog-Christian-Songs-Lyrics-And-Chords-Pdf-Download-HOT.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/Tagalog-Christian-Songs-Lyrics-And-Chords-Pdf-Download-HOT.md deleted file mode 100644 index 825581d194fd0f9e88d57fa735b45783ce3ea80d..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Tagalog-Christian-Songs-Lyrics-And-Chords-Pdf-Download-HOT.md +++ /dev/null @@ -1,108 +0,0 @@ -## Tagalog Christian Songs Lyrics And Chords Pdf Download - - - - - - - - - -**Click Here >>>>> [https://lodystiri.blogspot.com/?file=2txPB5](https://lodystiri.blogspot.com/?file=2txPB5)** - - - - - - - - - - - - - -# How to Download Tagalog Christian Songs Lyrics and Chords in PDF Format - - - -If you are looking for Tagalog Christian songs lyrics and chords in PDF format, you might have a hard time finding them online. Most of the websites that offer Tagalog worship songs only provide the lyrics or the chords, but not both. And if they do, they might not be in a printable or downloadable format. - - - -But don't worry, because we have a solution for you. In this article, we will show you how to download Tagalog Christian songs lyrics and chords in PDF format using a simple and free tool. You will be able to access hundreds of Tagalog worship songs with lyrics and chords that you can print or save on your device. - - - -## What is PDF Format? - - - -PDF stands for Portable Document Format, which is a file format that preserves the layout and formatting of a document across different platforms and devices. PDF files can be viewed, printed, or edited using various software applications, such as Adobe Acrobat Reader, Microsoft Word, or Google Docs. - - - -PDF files are ideal for sharing documents that contain text, images, graphics, or other elements that need to maintain their appearance and quality. For example, PDF files are commonly used for e-books, reports, flyers, resumes, contracts, and more. - - - -## Why Download Tagalog Christian Songs Lyrics and Chords in PDF Format? - - - -There are many benefits of downloading Tagalog Christian songs lyrics and chords in PDF format. Here are some of them: - - - -- You can easily print them out and use them for your personal or group worship sessions. - -- You can save them on your computer, tablet, smartphone, or other devices and access them anytime and anywhere. - -- You can share them with your friends, family, church members, or anyone who loves Tagalog worship songs. - -- You can edit them if you want to change the font size, color, style, or add notes. - -- You can enjoy high-quality lyrics and chords that are clear and accurate. - - - -## How to Download Tagalog Christian Songs Lyrics and Chords in PDF Format? - - - -The tool that we will use to download Tagalog Christian songs lyrics and chords in PDF format is called [Kaps Worship](https://www.kapsworship.com/tagalog-christian-song-lyrics/). Kaps Worship is a website that offers a huge collection of Tagalog worship songs with lyrics and chords. You can browse through their categories or search for your favorite songs by title or artist. - - - -Kaps Worship also provides a feature that allows you to download any song as a PDF file. Here are the steps to do it: - - - -1. Go to [Kaps Worship](https://www.kapsworship.com/tagalog-christian-song-lyrics/) website and find the song that you want to download. - -2. Click on the song title to open the song page. - -3. 
On the song page, you will see the lyrics and chords of the song. You will also see a button that says "Download as PDF". - -4. Click on the button and wait for a few seconds. A new tab will open with the PDF file of the song. - -5. You can now view, print, save, or share the PDF file as you wish. - - - -## Conclusion - - - -Downloading Tagalog Christian songs lyrics and chords in PDF format is easy and convenient with Kaps Worship. You can access hundreds of Tagalog worship songs with lyrics and chords that you can use for your personal or group worship sessions. You can also print them out or save them on your device for offline access. You can also share them with others who love Tagalog worship songs. - - - -We hope this article has helped you learn how to download Tagalog Christian songs lyrics and chords in PDF format using Kaps Worship. If you have any questions or feedback, please feel free to leave a comment below. God bless you! - - dfd1c89656 - - - - - diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Extreme Car Driving Simulator Mod APK Latest Version for Free.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Extreme Car Driving Simulator Mod APK Latest Version for Free.md deleted file mode 100644 index 361bb0e7d9bd469eeab2e1cfcce9b1c6e3ae59ea..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Extreme Car Driving Simulator Mod APK Latest Version for Free.md +++ /dev/null @@ -1,120 +0,0 @@ - -

Extreme Car Driving Simulator Latest Mod APK Download

-

Do you love driving fast cars and performing stunts in an open world environment? If yes, then you should try Extreme Car Driving Simulator, one of the most popular car games on Android. In this game, you can drive, drift, and feel a racing sports car without any limits. You can also customize your cars and play different game modes.

-

extreme car driving simulator latest mod apk download


DOWNLOAD: https://urlin.us/2uSU5v



-

But what if you want to enjoy the game without any restrictions or ads? Well, you can do that by downloading the latest mod apk version of Extreme Car Driving Simulator. In this article, we will tell you what Extreme Car Driving Simulator is, why you should download the mod apk version, and how to do it. We will also share some tips and tricks for playing the game.

-

What is Extreme Car Driving Simulator?

-

Extreme Car Driving Simulator is an open world car simulator game developed by AxesInMotion Racing. It was released in 2014 and has over 500 million downloads on Google Play Store. It is one of the best car games for Android thanks to its advanced real physics engine and realistic graphics.

-

Features of the game

-

Some of the features of Extreme Car Driving Simulator are:

- -

How to play the game

-

To play Extreme Car Driving Simulator, you need to choose your vehicle and enter a free-roaming 3D world with various driving tasks to complete. You can also drive freely around the city and perform illegal stunt actions without worrying about the police chasing you. You can drift fast and do burnouts on the asphalt of this open world city.

-

You can also switch between different game modes, such as traffic mode, checkpoint mode, free mode, or airport mode. Each mode has its own challenges and objectives. You can also complete achievements and unlock new cars with different features and performance.

-

Why download the mod apk version?

-

If you want to enjoy Extreme Car Driving Simulator without any limitations or interruptions, you should download the mod apk version of the game. The mod apk version is a modified version of the original game that gives you some extra benefits and features that are not available in the official version.

-

Benefits of the mod apk

-

Some of the benefits of downloading the mod apk version of Extreme Car Driving Simulator are:

- -

How to download and install the mod apk

-

To download and install the mod apk version of Extreme Car Driving Simulator, you need to follow these steps:

-
    -
  1. Go to [1](https://apkdone.com/extreme-car-driving-simulator/) and click on "Download APK".
  2. Wait for the download to finish and then open the file.
  3. Allow installation from unknown sources if prompted by your device.
  4. Follow the instructions on the screen to install the mod apk.
  5. Launch the game and enjoy the mod features.
-

Note: You may need to uninstall the original version of the game before installing the mod apk. Also, make sure you download the mod apk from a trusted source and scan it for viruses before installing it.

-

extreme car driving simulator mod apk unlimited money
-extreme car driving simulator hack apk download
-extreme car driving simulator mod apk android 1
-extreme car driving simulator mod apk revdl
-extreme car driving simulator mod apk happymod
-extreme car driving simulator mod apk rexdl
-extreme car driving simulator mod apk all cars unlocked
-extreme car driving simulator mod apk latest version 2023
-extreme car driving simulator mod apk free shopping
-extreme car driving simulator mod apk no ads
-extreme car driving simulator mod apk offline
-extreme car driving simulator mod apk 6.74.9
-extreme car driving simulator mod apk 6.75.0
-extreme car driving simulator mod apk 6.74.8
-extreme car driving simulator mod apk 6.74.7
-extreme car driving simulator mod apk 6.74.6
-extreme car driving simulator mod apk 6.74.5
-extreme car driving simulator mod apk 6.74.4
-extreme car driving simulator mod apk 6.74.3
-extreme car driving simulator mod apk 6.74.2
-extreme car driving simulator mod apk 6.74.1
-extreme car driving simulator mod apk 6.74.0
-extreme car driving simulator mod apk 6.73.9
-extreme car driving simulator mod apk 6.73.8
-extreme car driving simulator mod apk 6.73.7
-extreme car driving simulator mod apk unlimited nitro
-extreme car driving simulator mod apk unlimited coins and gems
-extreme car driving simulator mod apk unlimited everything
-extreme car driving simulator mod apk unlimited fuel and damage
-extreme car driving simulator mod apk unlimited gold and diamonds
-extreme car driving simulator mod apk unlimited keys and cash
-extreme car driving simulator mod apk unlimited stars and xp
-extreme car driving simulator mod apk unlimited tokens and credits
-extreme car driving simulator mod apk unlimited money and cars download for android
-extreme car driving simulator hack version download for android
-how to download extreme car driving simulator mod apk on android phone or tablet
-how to install and play extreme car driving simulator mod apk on pc or laptop using bluestacks emulator or other software
-how to update or upgrade extreme car driving simulator mod apk to the latest version available online or offline
-how to uninstall or remove extreme car driving simulator mod apk from your device without losing any data or progress
-how to fix or solve any errors or issues with extreme car driving simulator mod apk such as crashing, freezing, lagging, not working, not opening, etc.

-

Tips and tricks for playing Extreme Car Driving Simulator

-

Now that you have downloaded and installed the mod apk version of Extreme Car Driving Simulator, you may want to know some tips and tricks to play the game better. Here are some of them:

-

Use drift mode and nitro

-

One of the most fun aspects of Extreme Car Driving Simulator is drifting. You can drift by pressing the brake button while turning. This will make your car slide sideways and create smoke trails. Drifting is not only cool, but also useful for avoiding obstacles and taking sharp turns. You can also use nitro to boost your speed and perform longer drifts. Nitro is activated by pressing the N button on the screen. You can refill your nitro by driving fast or drifting.

-

Explore different game modes and environments

-

Extreme Car Driving Simulator has several game modes and environments to choose from. You can switch between them by tapping the map icon on the screen. Some of the game modes are:

- -

Some of the environments are:

- -

Collect rewards and unlock new cars

-

As you play Extreme Car Driving Simulator, you can collect rewards and unlock new cars. You can collect rewards by completing achievements, missions, or daily tasks. You can also find coins and gems scattered around the map. You can use these currencies to buy and upgrade new cars. There are over 20 cars to choose from, each with different features and performance. You can also customize your cars by changing their color, wheels, spoilers, etc.

-

Conclusion

-

Extreme Car Driving Simulator is a fun and realistic car simulator game that lets you drive fast cars and perform stunts in an open world environment. You can also download the mod apk version of the game to enjoy unlimited money, no ads, all cars unlocked, and all features unlocked. To download and install the mod apk version of Extreme Car Driving Simulator, follow the steps mentioned above. Also, don't forget to check out some tips and tricks for playing the game better.

-

FAQs

-

Here are some frequently asked questions about Extreme Car Driving Simulator:

-

Q: Is Extreme Car Driving Simulator free to play?

-

A: Yes, Extreme Car Driving Simulator is free to play on Android devices. However, it contains ads and in-app purchases that may affect your gaming experience. You can download the mod apk version of the game to remove ads and get unlimited money.

-

Q: Is Extreme Car Driving Simulator safe to download?

-

A: Yes, Extreme Car Driving Simulator is safe to download from Google Play Store or other trusted sources. However, be careful when downloading the mod apk version of the game from unknown sources as they may contain viruses or malware that may harm your device.

-

Q: How do I update Extreme Car Driving Simulator?

-

A: You can update Extreme Car Driving Simulator by visiting Google Play Store or other sources where you downloaded the game from. However, if you are using the mod apk version of the game, you may need to uninstall it and download the latest version from [1](https://apkdone.com/extreme-car-driving-simulator/).

-

Q: How do I contact the developers of Extreme Car Driving Simulator?

-

A: You can contact the developers of Extreme Car Driving Simulator by visiting their website [2](https://www.axesinmotion.com/) or their Facebook page [3](https://www.facebook.com/AxesInMotion/). You can also send them an email at support@axesinmotion.com.

-

Q: Can I play Extreme Car Driving Simulator offline?

-

A: Yes, you can play Extreme Car Driving Simulator offline without an internet connection. However, some features of the game may not work properly or may require an update. You can also play the game online with other players and compete in leaderboards and rankings.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Animal Kingdom MOD APK Everything You Need to Know About this Amazing Game.md b/spaces/1phancelerku/anime-remove-background/Animal Kingdom MOD APK Everything You Need to Know About this Amazing Game.md deleted file mode 100644 index d06ccbeb733d79c82762b373a70f144bf4a18ea0..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Animal Kingdom MOD APK Everything You Need to Know About this Amazing Game.md +++ /dev/null @@ -1,110 +0,0 @@ -
-

Animal Kingdom APK Mod: A Fun and Addictive Adventure Game

-

Do you love animals and adventure games? If yes, then you should try Animal Kingdom APK Mod, a game that lets you build your own animal kingdom, raid other players' islands, and collect treasure island coins. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, how to play it, and its pros and cons.

-

animal kingdom apk mod


Download File: https://jinyurl.com/2uNLzR



-

What is Animal Kingdom APK Mod?

-

Animal Kingdom APK Mod is a modified version of Animal Kingdom, a popular game developed by Playrix. It is an addictive animal adventure game that lets you play with millions of players around the globe. You can build islands and bridges, raid lands, and collect treasure island coins. You can also explore the animal island, steal coins, and build your kingdom to become the ultimate raid master.

-

Features of Animal Kingdom APK Mod

-

Animal Kingdom APK Mod has many features that make it more fun and enjoyable than the original game. Here are some of them:

-

- Unlocked islands and animals

-

With Animal Kingdom APK Mod, you can access all the islands and animals in the game without spending any money or waiting for hours. You can choose from a variety of animals, such as lions, tigers, bears, pandas, elephants, monkeys, and more. You can also customize your islands with different themes, such as jungle, desert, ice, candy, and more.

-

animal kingdom mod apk latest version
-animal kingdom mod apk unlimited coins
-animal kingdom mod apk download for android
-animal kingdom mod apk free shopping
-animal kingdom mod apk offline
-animal kingdom mod apk no ads
-animal kingdom mod apk hack
-animal kingdom mod apk revdl
-animal kingdom mod apk rexdl
-animal kingdom mod apk happymod
-animal kingdom mod apk 2023
-animal kingdom mod apk android 1
-animal kingdom mod apk 12.8.3
-animal kingdom mod apk 12.7.2
-animal kingdom mod apk 12.6.1
-animal kingdom adventure game mod apk
-animal kingdom battle simulator 3d mod apk
-animal kingdom online mod apk
-animal kingdom wild lands mod apk
-animal kingdom wildlife park mod apk
-animal kingdom zoo tycoon mod apk
-animal kingdom zoo simulator mod apk
-animal kingdom zoo craft mod apk
-animal kingdom zoo builder mod apk
-animal kingdom zoo world mod apk
-animal kingdom survival simulator mod apk
-animal kingdom safari craft mod apk
-animal kingdom safari hunting 3d mod apk
-animal kingdom safari shooter 3d mod apk
-animal kingdom safari sniper hunter 3d mod apk
-animal kingdom dinosaur hunter 3d mod apk
-animal kingdom dinosaur world 3d mod apk
-animal kingdom dinosaur simulator 3d mod apk
-animal kingdom dinosaur rampage 3d mod apk
-animal kingdom dinosaur attack 3d mod apk
-animal kingdom farm simulator 3d mod apk
-animal kingdom farm frenzy 3d mod apk
-animal kingdom farm story 3d mod apk
-animal kingdom farm village 3d mod apk
-animal kingdom farm rescue 3d mod apk

-

- Unlimited coins and gems

-

Coins and gems are the main currencies in Animal Kingdom. You need them to buy new animals, upgrade your islands, spin the wheel of fortune, and more. With Animal Kingdom APK Mod, you can get unlimited coins and gems for free. You can use them to buy anything you want in the game without worrying about running out.

-

- No ads and root required

-

Animal Kingdom APK Mod is free from annoying ads that interrupt your gameplay. You can enjoy the game without any distractions or interruptions. Moreover, you don't need to root your device to install Animal Kingdom APK Mod. You can simply download the APK file and install it on your device without any hassle.

-

How to download and install Animal Kingdom APK Mod?

-

If you want to download and install Animal Kingdom APK Mod on your device, you need to follow these simple steps:

-

Step 1: Download the APK file from a trusted source

-

You can download the APK file of Animal Kingdom APK Mod from a trusted source like [Moddroid](^1^). Make sure you download the latest version of the game that is compatible with your device.

-

Step 2: Enable unknown sources on your device

-

Before you install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

-

Step 3: Install the APK file and launch the game

-

After you download the APK file, locate it on your device and tap on it to install it. Follow the instructions on the screen to complete the installation. Once the installation is done, launch the game and enjoy Animal Kingdom APK Mod.

-

How to play Animal Kingdom APK Mod?

-

Playing Animal Kingdom APK Mod is easy and fun. Here are some tips on how to play the game:

-

Build your own animal kingdom

-

The main goal of Animal Kingdom APK Mod is to build your own animal kingdom. You can do this by buying new animals, upgrading your islands, and decorating them with various items. You can also unlock new islands and bridges as you progress in the game. To buy new animals, you need to spin the wheel of fortune, which costs coins. You can also get animals from treasure chests, which cost gems. To upgrade your islands, you need to spend coins and gems as well. You can also earn coins and gems by completing quests and achievements.

-

Raid other players' islands

-

Another fun aspect of Animal Kingdom APK Mod is raiding other players' islands. You can do this by tapping on the map icon and choosing an island to attack. You can also use the search function to find a specific player or a random one. Once you select an island, you can use your animals to raid it and steal coins from it. You can also destroy buildings and decorations to get more coins. However, be careful, as other players can also raid your island and take your coins. You can protect your island by using shields, which cost gems.

-

Collect treasure island coins and gems

-

Treasure island coins and gems are special currencies that you can use to buy exclusive items in the game. You can get treasure island coins and gems by playing the treasure island mode, which is unlocked after you reach level 10. In this mode, you can explore different islands and collect treasure chests that contain coins and gems. You can also find hidden items and secrets that give you more rewards. However, you need to be quick, as the treasure island mode has a time limit.

-

Pros and cons of Animal Kingdom APK Mod

-

Animal Kingdom APK Mod is a great game that offers many benefits, but it also has some drawbacks. Here are some of them:

-

Pros

-

- Fun and engaging gameplay

-

Animal Kingdom APK Mod is a fun and engaging game that will keep you entertained for hours. You can build your own animal kingdom, raid other players' islands, collect treasure island coins and gems, and more. You can also play with millions of players around the world and chat with them in the game.

-

- Beautiful graphics and sound effects

-

Animal Kingdom APK Mod has beautiful graphics and sound effects that make the game more realistic and immersive. You can enjoy the colorful and detailed graphics of the animals, islands, buildings, and items in the game. You can also listen to the soothing and cheerful sound effects of the animals, coins, chests, and more.

-

- Variety of animals and islands to explore

-

Animal Kingdom APK Mod has a variety of animals and islands to explore in the game. You can choose from a wide range of animals, such as lions, tigers, bears, pandas, elephants, monkeys, and more. You can also customize your islands with different themes, such as jungle, desert, ice, candy, and more. You can also unlock new islands and bridges as you progress in the game.

-

Cons

-

- Some bugs and glitches may occur

-

Animal Kingdom APK Mod is not a perfect game, and it may have some bugs and glitches that may affect your gameplay. For example, some users have reported that the game crashes or freezes sometimes, or that some features do not work properly. If you encounter any problems with the game, you can try to update it or reinstall it.

-

- Requires internet connection to play online mode

-

Animal Kingdom APK Mod requires an internet connection to play online mode, which is where you can play with other players and raid their islands. If you don't have a stable internet connection or if you want to play offline mode, you may not be able to enjoy all the features of the game.

-

Conclusion

-

Animal Kingdom APK Mod is a fun and addictive adventure game that lets you build your own animal kingdom, raid other players' islands, and collect treasure island coins. It has many features that make it more enjoyable than the original game, such as unlocked islands and animals, unlimited coins and gems, no ads and root required. However, it also has some drawbacks, such as some bugs and glitches, and the need for an internet connection to play online mode. If you are looking for a fun and addictive adventure game that lets you play with animals and islands, you should give Animal Kingdom APK Mod a try. You can download it from a trusted source like [Moddroid] and install it on your device easily. You can also check out the official website of Animal Kingdom for more information and updates.

-

Here are some FAQs that you may have about Animal Kingdom APK Mod:

-

Q: Is Animal Kingdom APK Mod safe to use?

-

A: Yes, Animal Kingdom APK Mod is safe to use, as long as you download it from a trusted source like [Moddroid]. However, you should always be careful when downloading and installing any modded apps, as they may contain viruses or malware that can harm your device. You should also scan the APK file with an antivirus app before installing it.

-

Q: Can I play Animal Kingdom APK Mod with my friends?

-

A: Yes, you can play Animal Kingdom APK Mod with your friends, as long as they also have the same version of the game installed on their devices. You can add them as friends in the game and chat with them. You can also raid their islands and steal their coins, or help them defend their islands from other players.

-

Q: How can I get more coins and gems in Animal Kingdom APK Mod?

-

A: There are many ways to get more coins and gems in Animal Kingdom APK Mod. You can get unlimited coins and gems for free by using the modded features of the game. You can also earn coins and gems by completing quests and achievements, spinning the wheel of fortune, opening treasure chests, raiding other players' islands, and playing the treasure island mode. You can also buy coins and gems with real money if you want to support the developers of the game.

-

Q: What are the minimum requirements to play Animal Kingdom APK Mod?

-

A: The minimum requirements to play Animal Kingdom APK Mod are as follows:

- - - - - -
Operating system: Android 5.0 or higher
RAM: 2 GB or higher
Storage space: 100 MB or higher
Internet connection: Required for online mode
-

Q: What are some alternatives to Animal Kingdom APK Mod?

-

A: If you like Animal Kingdom APK Mod, you may also like some other adventure games that involve animals and islands, such as:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Hill Climb Racing on Your PC for Free.md b/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Hill Climb Racing on Your PC for Free.md deleted file mode 100644 index 3b4aecbeb60a34e5feddd59387811a07fdebb5ba..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Hill Climb Racing on Your PC for Free.md +++ /dev/null @@ -1,108 +0,0 @@ - -

How to Download Hill Climb Racing for PC Free

-

Hill Climb Racing is one of the most addictive and entertaining physics-based driving games ever made. It features a variety of vehicles, stages, challenges, and upgrades that will keep you hooked for hours. You can race your way uphill in different environments, perform stunts, collect coins, and unlock new cars and parts. Hill Climb Racing is available for free on Android and iOS devices, but did you know that you can also play it on your PC? In this article, we will show you how to download hill climb racing for pc free using three different methods. Whether you want to use the Microsoft Store, direct download, or Steam, we have you covered. Follow these simple steps and enjoy this fun and exciting game on your computer.

-

download hill climb racing for pc free


DOWNLOAD ○○○ https://jinyurl.com/2uNKbk



-

Method 1: Microsoft Store

-

The Microsoft Store is a convenient way to get games for your PC. It offers a variety of free and paid games that you can download directly from the store. You don't need any additional software or accounts to use this method. Here's how to download hill climb racing for pc free from the Microsoft Store:

-
    -
  1. Open the Microsoft Store. You can find it on your Start menu or by pressing Windows Key + S and typing "Microsoft Store".
  2. Click Gaming in the sidebar. It has a video game controller icon.
  3. Select Hill Climb Racing from the list of games. You can also use the search bar to find it faster.
  4. Purchase the game (if needed). Hill Climb Racing is free to play, but it has some optional in-app purchases that you can buy if you want. Click the Get button on the game's info page to start the download. If you want to buy any in-app purchases, click the Buy button instead.
  5. Install the game. The download should start automatically after you click Get or Buy. You can check the progress on your Downloads & Updates page. Once it's done, you can launch the game from your Start menu or by clicking Play on the game's info page.
  6. Play the game. Enjoy racing uphill in this physics-based driving game. You can use your keyboard or mouse to control your car, or connect a controller if you prefer. You can also adjust the graphics settings, sound effects, music, and language from the options menu.
-

Method 2: Direct Download

-

If you don't want to use the Microsoft Store, you can also download hill climb racing for pc free directly from the official website of the game. This method requires you to have an internet browser and a file extractor program like WinRAR or 7-Zip. Here's how to do it:

-
    -
  1. Search for "hill climb racing official website" in Google or any other search engine. The first result should be https://fingersoft.com/games/hill-climb-racing/, which is the official website of Fingersoft, the developer of Hill Climb Racing.
  2. Click the Download for Windows button on the website. It will take you to another page where you can download the game as a ZIP file. Click the Download Now button and save the file to your preferred location.
  3. Extract the ZIP file. You will need a file extractor program like WinRAR or 7-Zip to do this. Right-click on the ZIP file and select Extract Here or Extract to Hill Climb Racing (depending on your program). It will create a folder with the same name as the ZIP file.
  4. Install the game. Open the folder and double-click on the Hill Climb Racing.exe file. It will launch the game installer. Follow the instructions on the screen to install the game on your PC. You can choose where to install it and create a desktop shortcut if you want.
  5. Play the game. Once the installation is complete, you can launch the game from your Start menu or desktop shortcut. You can also open the folder where you installed it and double-click on the Hill Climb Racing.exe file. Enjoy racing uphill in this physics-based driving game. You can use your keyboard or mouse to control your car, or connect a controller if you prefer. You can also adjust the graphics settings, sound effects, music, and language from the options menu.
-

Method 3: Steam

-

Steam is a popular platform for gaming on PC. It offers a huge library of games that you can buy, download, and play online. You can also access various features like achievements, leaderboards, chat, and more. To use this method, you will need to download and install Steam on your PC, create an account, and log in to Steam. Here's how to download hill climb racing for pc free from Steam:

-
    -
  1. Download and install Steam on your PC. You can get it from https://store.steampowered.com/about/, which is the official website of Steam. Click the Install Steam button and save the file to your preferred location. Run the file and follow the instructions on the screen to install Steam on your PC.
  2. Create an account and log in to Steam. You will need an email address and a password to create an account. You can also use your Facebook or Google account to sign up. Once you have an account, log in to Steam with your username and password.
  3. Find and purchase (if needed) Hill Climb Racing on Steam. You can use the search bar at the top of the Steam window to find it faster. Alternatively, you can browse through the categories and genres in the sidebar. Hill Climb Racing is under Casual, Indie, Racing, Simulation, and Sports. Click on the game's name or image to go to its info page.
  4. Download and install Hill Climb Racing from Steam. Hill Climb Racing is free to play, but it has some optional in-app purchases that you can buy if you want. Click the Play Game button on the game's info page to start the download. You can check the progress on your Library page. Once it's done, you can launch the game from your Library or by clicking Play Game on the game's info page.
  5. Play Hill Climb Racing from Steam. Enjoy racing uphill in this physics-based driving game. You can use your keyboard or mouse to control your car, or connect a controller if you prefer. You can also adjust the graphics settings, sound effects, music, and language from the options menu. You can also access various features like achievements, leaderboards, chat, and more from Steam.
-

Conclusion

-

Hill Climb Racing is a fun and addictive physics-based driving game that you can play on your PC for free using different methods. Whether you use the Microsoft Store, direct download, or Steam, you can enjoy this game on your computer with ease. Here are some tips and tricks for playing hill climb racing on PC:

-


- -

We hope this article helped you learn how to download Hill Climb Racing for PC for free using different methods. If you have any questions or feedback, please share them in the comments section below. We would love to hear from you and help with any issues you run into. Happy racing!

-

FAQs

-

Here are some frequently asked questions about Hill Climb Racing and how to download it for PC for free:

-

What are the minimum requirements for playing Hill Climb Racing on PC?

-

The minimum requirements for playing Hill Climb Racing on PC are as follows:

| OS | Processor | Memory | Graphics | Storage |
| --- | --- | --- | --- | --- |
| Windows 7 or higher | 1 GHz or faster | 1 GB RAM | DirectX 9 compatible | 100 MB available space |
-

Note that these are the minimum requirements and your performance may vary depending on your system configuration and settings.

-

Is Hill Climb Racing compatible with Windows 10?

-

Yes, Hill Climb Racing is compatible with Windows 10. You can install it from the Microsoft Store, via direct download, or through Steam without any issues. However, you may need to update your drivers and software to ensure optimal performance and compatibility.

-

Can I play Hill Climb Racing offline?

-

Yes, you can play Hill Climb Racing offline. You don't need an internet connection to play once you have downloaded and installed the game on your PC. However, you may need an internet connection to access some features, such as in-app purchases, leaderboards, achievements, and updates.

-

Can I use a controller or a steering wheel to play Hill Climb Racing on PC?

-

Yes, you can use a controller or a steering wheel to play Hill Climb Racing on PC. The game supports various input devices, and you can customize the controls from the options menu. You can also use your keyboard or mouse if you prefer.

-

Can I play Hill Climb Racing with my friends online?

-

No, Hill Climb Racing does not have an online multiplayer mode. You can only play solo and compete with yourself or other players on the leaderboards. However, you can share screenshots and videos of your gameplay with friends on social media or chat platforms.

-
-
\ No newline at end of file diff --git a/spaces/2ndelement/voicevox/voicevox_engine/acoustic_feature_extractor.py b/spaces/2ndelement/voicevox/voicevox_engine/acoustic_feature_extractor.py deleted file mode 100644 index 8fa37fbae8badda63d9fe7173cf407eb25343144..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/voicevox_engine/acoustic_feature_extractor.py +++ /dev/null @@ -1,332 +0,0 @@ -from abc import abstractmethod -from enum import Enum -from pathlib import Path -from typing import List, Sequence - -import numpy - - -class BasePhoneme(object): - """ - 音素の応用クラス群の抽象基底クラス - - Attributes - ---------- - phoneme_list : Sequence[str] - 音素のリスト - num_phoneme : int - 音素リストの要素数 - space_phoneme : str - 読点に値する音素 - """ - - phoneme_list: Sequence[str] - num_phoneme: int - space_phoneme: str - - def __init__( - self, - phoneme: str, - start: float, - end: float, - ): - self.phoneme = phoneme - self.start = numpy.round(start, decimals=2) - self.end = numpy.round(end, decimals=2) - - def __repr__(self): - return f"Phoneme(phoneme='{self.phoneme}', start={self.start}, end={self.end})" - - def __eq__(self, o: object): - return isinstance(o, BasePhoneme) and ( - self.phoneme == o.phoneme and self.start == o.start and self.end == o.end - ) - - def verify(self): - """ - 音素クラスとして、データが正しいかassertする - """ - assert self.phoneme in self.phoneme_list, f"{self.phoneme} is not defined." - - @property - def phoneme_id(self): - """ - phoneme_id (phoneme list内でのindex)を取得する - Returns - ------- - id : int - phoneme_idを返す - """ - return self.phoneme_list.index(self.phoneme) - - @property - def duration(self): - """ - 音素継続期間を取得する - Returns - ------- - duration : int - 音素継続期間を返す - """ - return self.end - self.start - - @property - def onehot(self): - """ - phoneme listの長さ分の0埋め配列のうち、phoneme id番目がTrue(1)の配列を返す - Returns - ------- - onehot : numpu.ndarray - 関数内で変更された配列を返す - """ - array = numpy.zeros(self.num_phoneme, dtype=bool) - array[self.phoneme_id] = True - return array - - @classmethod - def parse(cls, s: str): - """ - 文字列をパースして音素クラスを作る - Parameters - ---------- - s : str - パースしたい文字列 - - Returns - ------- - phoneme : BasePhoneme - パース結果を用いた音素クラスを返す - - Examples - -------- - >>> BasePhoneme.parse('1.7425000 1.9125000 o:') - Phoneme(phoneme='o:', start=1.74, end=1.91) - """ - words = s.split() - return cls( - start=float(words[0]), - end=float(words[1]), - phoneme=words[2], - ) - - @classmethod - @abstractmethod - def convert(cls, phonemes: List["BasePhoneme"]) -> List["BasePhoneme"]: - raise NotImplementedError - - @classmethod - def load_lab_list(cls, path: Path): - """ - labファイルを読み込む - Parameters - ---------- - path : Path - 読み込みたいlabファイルのパス - - Returns - ------- - phonemes : List[BasePhoneme] - パース結果を用いた音素クラスを返す - """ - phonemes = [cls.parse(s) for s in path.read_text().split("\n") if len(s) > 0] - phonemes = cls.convert(phonemes) - - for phoneme in phonemes: - phoneme.verify() - return phonemes - - @classmethod - def save_lab_list(cls, phonemes: List["BasePhoneme"], path: Path): - """ - 音素クラスのリストをlabファイル形式で保存する - Parameters - ---------- - phonemes : List[BasePhoneme] - 保存したい音素クラスのリスト - path : Path - labファイルの保存先パス - """ - text = "\n".join( - [ - f"{numpy.round(p.start, decimals=2):.2f}\t" - f"{numpy.round(p.end, decimals=2):.2f}\t" - f"{p.phoneme}" - for p in phonemes - ] - ) - path.write_text(text) - - -class JvsPhoneme(BasePhoneme): - """ - JVS(Japanese versatile speech)コーパスに含まれる音素群クラス - - Attributes - ---------- - phoneme_list : Sequence[str] - 音素のリスト - num_phoneme : int - 音素リストの要素数 - 
space_phoneme : str - 読点に値する音素 - """ - - phoneme_list = ( - "pau", - "I", - "N", - "U", - "a", - "b", - "by", - "ch", - "cl", - "d", - "dy", - "e", - "f", - "g", - "gy", - "h", - "hy", - "i", - "j", - "k", - "ky", - "m", - "my", - "n", - "ny", - "o", - "p", - "py", - "r", - "ry", - "s", - "sh", - "t", - "ts", - "u", - "v", - "w", - "y", - "z", - ) - num_phoneme = len(phoneme_list) - space_phoneme = "pau" - - @classmethod - def convert(cls, phonemes: List["JvsPhoneme"]) -> List["JvsPhoneme"]: - """ - 最初と最後のsil(silent)をspace_phoneme(pau)に置き換え(変換)する - Parameters - ---------- - phonemes : List[JvsPhoneme] - 変換したいphonemeのリスト - - Returns - ------- - phonemes : List[JvsPhoneme] - 変換されたphonemeのリスト - """ - if "sil" in phonemes[0].phoneme: - phonemes[0].phoneme = cls.space_phoneme - if "sil" in phonemes[-1].phoneme: - phonemes[-1].phoneme = cls.space_phoneme - return phonemes - - -class OjtPhoneme(BasePhoneme): - """ - OpenJTalkに含まれる音素群クラス - - Attributes - ---------- - phoneme_list : Sequence[str] - 音素のリスト - num_phoneme : int - 音素リストの要素数 - space_phoneme : str - 読点に値する音素 - """ - - phoneme_list = ( - "pau", - "A", - "E", - "I", - "N", - "O", - "U", - "a", - "b", - "by", - "ch", - "cl", - "d", - "dy", - "e", - "f", - "g", - "gw", - "gy", - "h", - "hy", - "i", - "j", - "k", - "kw", - "ky", - "m", - "my", - "n", - "ny", - "o", - "p", - "py", - "r", - "ry", - "s", - "sh", - "t", - "ts", - "ty", - "u", - "v", - "w", - "y", - "z", - ) - num_phoneme = len(phoneme_list) - space_phoneme = "pau" - - @classmethod - def convert(cls, phonemes: List["OjtPhoneme"]): - """ - 最初と最後のsil(silent)をspace_phoneme(pau)に置き換え(変換)する - Parameters - ---------- - phonemes : List[OjtPhoneme] - 変換したいphonemeのリスト - - Returns - ------- - phonemes : List[OjtPhoneme] - 変換されたphonemeのリスト - """ - if "sil" in phonemes[0].phoneme: - phonemes[0].phoneme = cls.space_phoneme - if "sil" in phonemes[-1].phoneme: - phonemes[-1].phoneme = cls.space_phoneme - return phonemes - - -class PhonemeType(str, Enum): - jvs = "jvs" - openjtalk = "openjtalk" - - -phoneme_type_to_class = { - PhonemeType.jvs: JvsPhoneme, - PhonemeType.openjtalk: OjtPhoneme, -} diff --git a/spaces/4Taps/SadTalker/src/face3d/models/bfm.py b/spaces/4Taps/SadTalker/src/face3d/models/bfm.py deleted file mode 100644 index a75db682f02dd1979d4a7de1d11dd3aa5cdf5279..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/bfm.py +++ /dev/null @@ -1,331 +0,0 @@ -"""This script defines the parametric 3d face model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -import torch.nn.functional as F -from scipy.io import loadmat -from src.face3d.util.load_mats import transferBFM09 -import os - -def perspective_projection(focal, center): - # return p.T (N, 3) @ (3, 3) - return np.array([ - focal, 0, center, - 0, focal, center, - 0, 0, 1 - ]).reshape([3, 3]).astype(np.float32).transpose() - -class SH: - def __init__(self): - self.a = [np.pi, 2 * np.pi / np.sqrt(3.), 2 * np.pi / np.sqrt(8.)] - self.c = [1/np.sqrt(4 * np.pi), np.sqrt(3.) / np.sqrt(4 * np.pi), 3 * np.sqrt(5.) / np.sqrt(12 * np.pi)] - - - -class ParametricFaceModel: - def __init__(self, - bfm_folder='./BFM', - recenter=True, - camera_distance=10., - init_lit=np.array([ - 0.8, 0, 0, 0, 0, 0, 0, 0, 0 - ]), - focal=1015., - center=112., - is_train=True, - default_name='BFM_model_front.mat'): - - if not os.path.isfile(os.path.join(bfm_folder, default_name)): - transferBFM09(bfm_folder) - - model = loadmat(os.path.join(bfm_folder, default_name)) - # mean face shape. 
[3*N,1] - self.mean_shape = model['meanshape'].astype(np.float32) - # identity basis. [3*N,80] - self.id_base = model['idBase'].astype(np.float32) - # expression basis. [3*N,64] - self.exp_base = model['exBase'].astype(np.float32) - # mean face texture. [3*N,1] (0-255) - self.mean_tex = model['meantex'].astype(np.float32) - # texture basis. [3*N,80] - self.tex_base = model['texBase'].astype(np.float32) - # face indices for each vertex that lies in. starts from 0. [N,8] - self.point_buf = model['point_buf'].astype(np.int64) - 1 - # vertex indices for each face. starts from 0. [F,3] - self.face_buf = model['tri'].astype(np.int64) - 1 - # vertex indices for 68 landmarks. starts from 0. [68,1] - self.keypoints = np.squeeze(model['keypoints']).astype(np.int64) - 1 - - if is_train: - # vertex indices for small face region to compute photometric error. starts from 0. - self.front_mask = np.squeeze(model['frontmask2_idx']).astype(np.int64) - 1 - # vertex indices for each face from small face region. starts from 0. [f,3] - self.front_face_buf = model['tri_mask2'].astype(np.int64) - 1 - # vertex indices for pre-defined skin region to compute reflectance loss - self.skin_mask = np.squeeze(model['skinmask']) - - if recenter: - mean_shape = self.mean_shape.reshape([-1, 3]) - mean_shape = mean_shape - np.mean(mean_shape, axis=0, keepdims=True) - self.mean_shape = mean_shape.reshape([-1, 1]) - - self.persc_proj = perspective_projection(focal, center) - self.device = 'cpu' - self.camera_distance = camera_distance - self.SH = SH() - self.init_lit = init_lit.reshape([1, 1, -1]).astype(np.float32) - - - def to(self, device): - self.device = device - for key, value in self.__dict__.items(): - if type(value).__module__ == np.__name__: - setattr(self, key, torch.tensor(value).to(device)) - - - def compute_shape(self, id_coeff, exp_coeff): - """ - Return: - face_shape -- torch.tensor, size (B, N, 3) - - Parameters: - id_coeff -- torch.tensor, size (B, 80), identity coeffs - exp_coeff -- torch.tensor, size (B, 64), expression coeffs - """ - batch_size = id_coeff.shape[0] - id_part = torch.einsum('ij,aj->ai', self.id_base, id_coeff) - exp_part = torch.einsum('ij,aj->ai', self.exp_base, exp_coeff) - face_shape = id_part + exp_part + self.mean_shape.reshape([1, -1]) - return face_shape.reshape([batch_size, -1, 3]) - - - def compute_texture(self, tex_coeff, normalize=True): - """ - Return: - face_texture -- torch.tensor, size (B, N, 3), in RGB order, range (0, 1.) - - Parameters: - tex_coeff -- torch.tensor, size (B, 80) - """ - batch_size = tex_coeff.shape[0] - face_texture = torch.einsum('ij,aj->ai', self.tex_base, tex_coeff) + self.mean_tex - if normalize: - face_texture = face_texture / 255. 
- return face_texture.reshape([batch_size, -1, 3]) - - - def compute_norm(self, face_shape): - """ - Return: - vertex_norm -- torch.tensor, size (B, N, 3) - - Parameters: - face_shape -- torch.tensor, size (B, N, 3) - """ - - v1 = face_shape[:, self.face_buf[:, 0]] - v2 = face_shape[:, self.face_buf[:, 1]] - v3 = face_shape[:, self.face_buf[:, 2]] - e1 = v1 - v2 - e2 = v2 - v3 - face_norm = torch.cross(e1, e2, dim=-1) - face_norm = F.normalize(face_norm, dim=-1, p=2) - face_norm = torch.cat([face_norm, torch.zeros(face_norm.shape[0], 1, 3).to(self.device)], dim=1) - - vertex_norm = torch.sum(face_norm[:, self.point_buf], dim=2) - vertex_norm = F.normalize(vertex_norm, dim=-1, p=2) - return vertex_norm - - - def compute_color(self, face_texture, face_norm, gamma): - """ - Return: - face_color -- torch.tensor, size (B, N, 3), range (0, 1.) - - Parameters: - face_texture -- torch.tensor, size (B, N, 3), from texture model, range (0, 1.) - face_norm -- torch.tensor, size (B, N, 3), rotated face normal - gamma -- torch.tensor, size (B, 27), SH coeffs - """ - batch_size = gamma.shape[0] - v_num = face_texture.shape[1] - a, c = self.SH.a, self.SH.c - gamma = gamma.reshape([batch_size, 3, 9]) - gamma = gamma + self.init_lit - gamma = gamma.permute(0, 2, 1) - Y = torch.cat([ - a[0] * c[0] * torch.ones_like(face_norm[..., :1]).to(self.device), - -a[1] * c[1] * face_norm[..., 1:2], - a[1] * c[1] * face_norm[..., 2:], - -a[1] * c[1] * face_norm[..., :1], - a[2] * c[2] * face_norm[..., :1] * face_norm[..., 1:2], - -a[2] * c[2] * face_norm[..., 1:2] * face_norm[..., 2:], - 0.5 * a[2] * c[2] / np.sqrt(3.) * (3 * face_norm[..., 2:] ** 2 - 1), - -a[2] * c[2] * face_norm[..., :1] * face_norm[..., 2:], - 0.5 * a[2] * c[2] * (face_norm[..., :1] ** 2 - face_norm[..., 1:2] ** 2) - ], dim=-1) - r = Y @ gamma[..., :1] - g = Y @ gamma[..., 1:2] - b = Y @ gamma[..., 2:] - face_color = torch.cat([r, g, b], dim=-1) * face_texture - return face_color - - - def compute_rotation(self, angles): - """ - Return: - rot -- torch.tensor, size (B, 3, 3) pts @ trans_mat - - Parameters: - angles -- torch.tensor, size (B, 3), radian - """ - - batch_size = angles.shape[0] - ones = torch.ones([batch_size, 1]).to(self.device) - zeros = torch.zeros([batch_size, 1]).to(self.device) - x, y, z = angles[:, :1], angles[:, 1:2], angles[:, 2:], - - rot_x = torch.cat([ - ones, zeros, zeros, - zeros, torch.cos(x), -torch.sin(x), - zeros, torch.sin(x), torch.cos(x) - ], dim=1).reshape([batch_size, 3, 3]) - - rot_y = torch.cat([ - torch.cos(y), zeros, torch.sin(y), - zeros, ones, zeros, - -torch.sin(y), zeros, torch.cos(y) - ], dim=1).reshape([batch_size, 3, 3]) - - rot_z = torch.cat([ - torch.cos(z), -torch.sin(z), zeros, - torch.sin(z), torch.cos(z), zeros, - zeros, zeros, ones - ], dim=1).reshape([batch_size, 3, 3]) - - rot = rot_z @ rot_y @ rot_x - return rot.permute(0, 2, 1) - - - def to_camera(self, face_shape): - face_shape[..., -1] = self.camera_distance - face_shape[..., -1] - return face_shape - - def to_image(self, face_shape): - """ - Return: - face_proj -- torch.tensor, size (B, N, 2), y direction is opposite to v direction - - Parameters: - face_shape -- torch.tensor, size (B, N, 3) - """ - # to image_plane - face_proj = face_shape @ self.persc_proj - face_proj = face_proj[..., :2] / face_proj[..., 2:] - - return face_proj - - - def transform(self, face_shape, rot, trans): - """ - Return: - face_shape -- torch.tensor, size (B, N, 3) pts @ rot + trans - - Parameters: - face_shape -- torch.tensor, size (B, N, 3) - rot -- 
torch.tensor, size (B, 3, 3) - trans -- torch.tensor, size (B, 3) - """ - return face_shape @ rot + trans.unsqueeze(1) - - - def get_landmarks(self, face_proj): - """ - Return: - face_lms -- torch.tensor, size (B, 68, 2) - - Parameters: - face_proj -- torch.tensor, size (B, N, 2) - """ - return face_proj[:, self.keypoints] - - def split_coeff(self, coeffs): - """ - Return: - coeffs_dict -- a dict of torch.tensors - - Parameters: - coeffs -- torch.tensor, size (B, 256) - """ - id_coeffs = coeffs[:, :80] - exp_coeffs = coeffs[:, 80: 144] - tex_coeffs = coeffs[:, 144: 224] - angles = coeffs[:, 224: 227] - gammas = coeffs[:, 227: 254] - translations = coeffs[:, 254:] - return { - 'id': id_coeffs, - 'exp': exp_coeffs, - 'tex': tex_coeffs, - 'angle': angles, - 'gamma': gammas, - 'trans': translations - } - def compute_for_render(self, coeffs): - """ - Return: - face_vertex -- torch.tensor, size (B, N, 3), in camera coordinate - face_color -- torch.tensor, size (B, N, 3), in RGB order - landmark -- torch.tensor, size (B, 68, 2), y direction is opposite to v direction - Parameters: - coeffs -- torch.tensor, size (B, 257) - """ - coef_dict = self.split_coeff(coeffs) - face_shape = self.compute_shape(coef_dict['id'], coef_dict['exp']) - rotation = self.compute_rotation(coef_dict['angle']) - - - face_shape_transformed = self.transform(face_shape, rotation, coef_dict['trans']) - face_vertex = self.to_camera(face_shape_transformed) - - face_proj = self.to_image(face_vertex) - landmark = self.get_landmarks(face_proj) - - face_texture = self.compute_texture(coef_dict['tex']) - face_norm = self.compute_norm(face_shape) - face_norm_roted = face_norm @ rotation - face_color = self.compute_color(face_texture, face_norm_roted, coef_dict['gamma']) - - return face_vertex, face_texture, face_color, landmark - - def compute_for_render_woRotation(self, coeffs): - """ - Return: - face_vertex -- torch.tensor, size (B, N, 3), in camera coordinate - face_color -- torch.tensor, size (B, N, 3), in RGB order - landmark -- torch.tensor, size (B, 68, 2), y direction is opposite to v direction - Parameters: - coeffs -- torch.tensor, size (B, 257) - """ - coef_dict = self.split_coeff(coeffs) - face_shape = self.compute_shape(coef_dict['id'], coef_dict['exp']) - #rotation = self.compute_rotation(coef_dict['angle']) - - - #face_shape_transformed = self.transform(face_shape, rotation, coef_dict['trans']) - face_vertex = self.to_camera(face_shape) - - face_proj = self.to_image(face_vertex) - landmark = self.get_landmarks(face_proj) - - face_texture = self.compute_texture(coef_dict['tex']) - face_norm = self.compute_norm(face_shape) - face_norm_roted = face_norm # @ rotation - face_color = self.compute_color(face_texture, face_norm_roted, coef_dict['gamma']) - - return face_vertex, face_texture, face_color, landmark - - -if __name__ == '__main__': - transferBFM09() \ No newline at end of file diff --git a/spaces/801artistry/RVC801/julius/bands.py b/spaces/801artistry/RVC801/julius/bands.py deleted file mode 100644 index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/julius/bands.py +++ /dev/null @@ -1,119 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -Decomposition of a signal over frequency bands in the waveform domain. 
-""" -from typing import Optional, Sequence -import torch - -from .core import mel_frequencies -from .lowpass import LowPassFilters -from .utils import simple_repr - - -class SplitBands(torch.nn.Module): - """ - Decomposes a signal over the given frequency bands in the waveform domain using - a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`. - You can either specify explicitely the frequency cutoffs, or just the number of bands, - in which case the frequency cutoffs will be spread out evenly in mel scale. - - Args: - sample_rate (float): Sample rate of the input signal in Hz. - n_bands (int or None): number of bands, when not giving them explictely with `cutoffs`. - In that case, the cutoff frequencies will be evenly spaced in mel-space. - cutoffs (list[float] or None): list of frequency cutoffs in Hz. - pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`, - the output will have the same length as the input. - zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more informations. - fft (bool or None): See `LowPassFilters` for more info. - - ..note:: - The sum of all the bands will always be the input signal. - - ..warning:: - Unlike `julius.lowpass.LowPassFilters`, the cutoffs frequencies must be provided in Hz along - with the sample rate. - - Shape: - - - Input: `[*, T]` - - Output: `[B, *, T']`, with `T'=T` if `pad` is True. - If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1` - - >>> bands = SplitBands(sample_rate=128, n_bands=10) - >>> x = torch.randn(6, 4, 1024) - >>> list(bands(x).shape) - [10, 6, 4, 1024] - """ - - def __init__(self, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - if (cutoffs is None) + (n_bands is None) != 1: - raise ValueError("You must provide either n_bands, or cutoffs, but not boths.") - - self.sample_rate = sample_rate - self.n_bands = n_bands - self._cutoffs = list(cutoffs) if cutoffs is not None else None - self.pad = pad - self.zeros = zeros - self.fft = fft - - if cutoffs is None: - if n_bands is None: - raise ValueError("You must provide one of n_bands or cutoffs.") - if not n_bands >= 1: - raise ValueError(f"n_bands must be greater than one (got {n_bands})") - cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1] - else: - if max(cutoffs) > 0.5 * sample_rate: - raise ValueError("A cutoff above sample_rate/2 does not make sense.") - if len(cutoffs) > 0: - self.lowpass = LowPassFilters( - [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft) - else: - # Here I cannot make both TorchScript and MyPy happy. - # I miss the good old times, before all this madness was created. 
- self.lowpass = None # type: ignore - - def forward(self, input): - if self.lowpass is None: - return input[None] - lows = self.lowpass(input) - low = lows[0] - bands = [low] - for low_and_band in lows[1:]: - # Get a bandpass filter by substracting lowpasses - band = low_and_band - low - bands.append(band) - low = low_and_band - # Last band is whatever is left in the signal - bands.append(input - low) - return torch.stack(bands) - - @property - def cutoffs(self): - if self._cutoffs is not None: - return self._cutoffs - elif self.lowpass is not None: - return [c * self.sample_rate for c in self.lowpass.cutoffs] - else: - return [] - - def __repr__(self): - return simple_repr(self, overrides={"cutoffs": self._cutoffs}) - - -def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `SplitBands`, refer to this class for more information. - - >>> x = torch.randn(6, 4, 1024) - >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape) - [3, 6, 4, 1024] - """ - return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal) diff --git a/spaces/A666sxr/Genshin_TTS/text/cleaners.py b/spaces/A666sxr/Genshin_TTS/text/cleaners.py deleted file mode 100644 index b81f9676fdb1c4d40adffedd554d06c49e7cff4b..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/text/cleaners.py +++ /dev/null @@ -1,188 +0,0 @@ -import re -#from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3 -from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2 -# from text.sanskrit import devanagari_to_ipa -from text.english import english_to_lazy_ipa, english_to_ipa2 -# from text.thai import num_to_thai, latin_to_thai -# from text.shanghainese import shanghainese_to_ipa -# from text.cantonese import cantonese_to_ipa -# from text.ngu_dialect import ngu_dialect_to_ipa - - -def japanese_cleaners(text): - text = japanese_to_romaji_with_accent(text) - if re.match('[A-Za-z]', text[-1]): - text += '.' - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - if re.match('[\u3131-\u3163]', text[-1]): - text += '.' - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - if re.match('[ˉˊˇˋ˙]', text[-1]): - text += '。' - return text - - -def zh_ja_mixture_cleaners(text): - chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text) - japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text) - for chinese_text in chinese_texts: - cleaned_text = chinese_to_romaji(chinese_text[4:-4]) - text = text.replace(chinese_text, cleaned_text+' ', 1) - for japanese_text in japanese_texts: - cleaned_text = japanese_to_romaji_with_accent( - japanese_text[4:-4]).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…') - text = text.replace(japanese_text, cleaned_text+' ', 1) - text = text[:-1] - if re.match('[A-Za-zɯɹəɥ→↓↑]', text[-1]): - text += '.' 
- return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if text[-1] != '।': - text += ' ।' - return text - -def zh_en_cleaners(text): - chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text) - english_texts = re.findall(r'\[EN\].*?\[EN\]', text) - for chinese_text in chinese_texts: - cleaned_text = chinese_to_lazy_ipa(chinese_text[4:-4]) - text = text.replace(chinese_text, cleaned_text+' ', 1) - for english_text in english_texts: - cleaned_text = english_to_lazy_ipa(english_text[4:-4]) - text = text.replace(english_text, cleaned_text+' ', 1) - text = text[:-1] - if re.match(r'[^\.,!\?\-…~]', text[-1]): - text += '.' - return text - -def cjks_cleaners(text): - chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text) - japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text) - korean_texts = re.findall(r'\[KO\].*?\[KO\]', text) - sanskrit_texts = re.findall(r'\[SA\].*?\[SA\]', text) - english_texts = re.findall(r'\[EN\].*?\[EN\]', text) - for chinese_text in chinese_texts: - cleaned_text = chinese_to_lazy_ipa(chinese_text[4:-4]) - text = text.replace(chinese_text, cleaned_text+' ', 1) - for japanese_text in japanese_texts: - cleaned_text = japanese_to_ipa(japanese_text[4:-4]) - text = text.replace(japanese_text, cleaned_text+' ', 1) - for korean_text in korean_texts: - cleaned_text = korean_to_lazy_ipa(korean_text[4:-4]) - text = text.replace(korean_text, cleaned_text+' ', 1) - for sanskrit_text in sanskrit_texts: - cleaned_text = devanagari_to_ipa(sanskrit_text[4:-4]) - text = text.replace(sanskrit_text, cleaned_text+' ', 1) - for english_text in english_texts: - cleaned_text = english_to_lazy_ipa(english_text[4:-4]) - text = text.replace(english_text, cleaned_text+' ', 1) - text = text[:-1] - if re.match(r'[^\.,!\?\-…~]', text[-1]): - text += '.' - return text - - -def cjke_cleaners(text): - chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text) - japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text) - korean_texts = re.findall(r'\[KO\].*?\[KO\]', text) - english_texts = re.findall(r'\[EN\].*?\[EN\]', text) - for chinese_text in chinese_texts: - cleaned_text = chinese_to_lazy_ipa(chinese_text[4:-4]) - cleaned_text = cleaned_text.replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn') - text = text.replace(chinese_text, cleaned_text+' ', 1) - for japanese_text in japanese_texts: - cleaned_text = japanese_to_ipa(japanese_text[4:-4]) - cleaned_text = cleaned_text.replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz') - text = text.replace(japanese_text, cleaned_text+' ', 1) - for korean_text in korean_texts: - cleaned_text = korean_to_ipa(korean_text[4:-4]) - text = text.replace(korean_text, cleaned_text+' ', 1) - for english_text in english_texts: - cleaned_text = english_to_ipa2(english_text[4:-4]) - cleaned_text = cleaned_text.replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u') - text = text.replace(english_text, cleaned_text+' ', 1) - text = text[:-1] - if re.match(r'[^\.,!\?\-…~]', text[-1]): - text += '.' 
- return text - - -def cjke_cleaners2(text): - chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text) - japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text) - korean_texts = re.findall(r'\[KO\].*?\[KO\]', text) - english_texts = re.findall(r'\[EN\].*?\[EN\]', text) - for chinese_text in chinese_texts: - cleaned_text = chinese_to_ipa(chinese_text[4:-4]) - text = text.replace(chinese_text, cleaned_text+' ', 1) - for japanese_text in japanese_texts: - cleaned_text = japanese_to_ipa2(japanese_text[4:-4]) - text = text.replace(japanese_text, cleaned_text+' ', 1) - for korean_text in korean_texts: - cleaned_text = korean_to_ipa(korean_text[4:-4]) - text = text.replace(korean_text, cleaned_text+' ', 1) - for english_text in english_texts: - cleaned_text = english_to_ipa2(english_text[4:-4]) - text = text.replace(english_text, cleaned_text+' ', 1) - text = text[:-1] - if re.match(r'[^\.,!\?\-…~]', text[-1]): - text += '.' - return text - - -def thai_cleaners(text): - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - text = shanghainese_to_ipa(text) - if re.match(r'[^\.,!\?\-…~]', text[-1]): - text += '.' - return text - - -def chinese_dialect_cleaners(text): - text = re.sub(r'\[MD\](.*?)\[MD\]', - lambda x: chinese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[TW\](.*?)\[TW\]', - lambda x: chinese_to_ipa2(x.group(1), True)+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/AIFILMS/ControlNet-Video/share_btn.py b/spaces/AIFILMS/ControlNet-Video/share_btn.py deleted file mode 100644 index 1e961c7a8b71f44a40c305aaab3d2d5068f44cba..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/ControlNet-Video/share_btn.py +++ /dev/null @@ -1,86 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `vid-pix2pix-${{videoId}}.wav`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const captionTxt = gradioEl.querySelector('#prompt-in textarea').value; - const controlTask = gradioEl.querySelector('#controltask-in select').value; - 
const seedValue = gradioEl.querySelector('#seed-in input').value; - const inputVidEl = gradioEl.querySelector('#input-vid video'); - const outputVideo = gradioEl.querySelector('#video-output video'); - const outputPrepVideo = gradioEl.querySelector('#prep-video-output video'); - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideo){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputFile = await getVideoBlobFile(inputVidEl); - const urlInputVid = await uploadFile(inputFile); - - const prepVideoOutFile = await getVideoBlobFile(outputPrepVideo); - const dataOutputPrepVid = await uploadFile(prepVideoOutFile); - - const videoOutFile = await getVideoBlobFile(outputVideo); - const dataOutputVid = await uploadFile(videoOutFile); - - const descriptionMd = ` -#### Settings -Prompt: ${captionTxt} -Control Task: ${controlTask} • Seed: ${seedValue} - -#### Video input: -${urlInputVid} - -#### Preprcessor output: -${dataOutputPrepVid} - -#### ControlNet result: -${dataOutputVid} -`; - const params = new URLSearchParams({ - title: captionTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/ControlNet-Video/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/AIGText/GlyphControl/ldm/models/autoencoder.py b/spaces/AIGText/GlyphControl/ldm/models/autoencoder.py deleted file mode 100644 index a16268dd5a9c4e1a6f17b8907c7828ffebd417e5..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/models/autoencoder.py +++ /dev/null @@ -1,278 +0,0 @@ -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from ldm.modules.diffusionmodules.model import Encoder, Decoder -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution - -from ldm.util import instantiate_from_config -from ldm.modules.ema import LitEma - - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ema_decay=None, - learn_logvar=False, - keep_keys = [], - ): - super().__init__() - self.learn_logvar = learn_logvar - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - - self.use_ema = ema_decay is not None - if self.use_ema: - self.ema_decay = ema_decay - assert 0. < ema_decay < 1. 
- self.model_ema = LitEma(self, decay=ema_decay) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, keep_keys=keep_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), keep_keys=list()): - # if path.endswith(".ckpt"): - # sd = torch.load(path, map_location="cpu")["state_dict"] - # elif path.endswith(".bin"): - # sd = torch.load(path, map_location="cpu") - # else: - # raise ValueError - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - if len(keep_keys): - sd_new = {} - for k in list(sd.keys()): - for kk in keep_keys: - if k.startswith(kk): - if kk == "first_stage_model": - k_new = k.split(kk + ".")[1] - else: - k_new = k - sd_new[k_new] = sd[k] - else: - sd_new = sd - # new_k = k - # if ".mid_block." in k: - # new_k = new_k.replace(".mid_block.", ".mid.") - # if "attentions.0." in k: - # new_k = new_k.replace("attentions.0.", "attn_1.") - # if ".resnets.0" in k: - # new_k = new_k.replace(".resnets.0", ".block_1") - # if ".resnets.1" in k: - # new_k = new_k.replace(".resnets.1", ".block_2") - # else: - # if ".up_blocks." in k: - # new_k = new_k.replace(".up_blocks.", ".up.") - # # sd[k.replace(".up_blocks.", ".up.")] = sd[k] - # # del sd[k] - # if ".down_blocks." in k: - # new_k = new_k.replace(".down_blocks.", ".down.") - # if ".resnets." in k: - # new_k = new_k.replace(".resnets.", ".block.") - # if "samplers.0." in k: - # new_k = new_k.replace("samplers.0.", "sample.") - # # sd[k.replace(".down_blocks.", ".down.")] = sd[k] - # # del sd[k] - - # # sd[k.replace(".mid_block.", ".mid.")] = sd[k] - # # del sd[k] - # if ".conv_norm_out." 
in k: - # new_k = new_k.replace(".conv_norm_out.", ".norm_out.") - # # sd[k.replace(".conv_norm_out.", ".norm_out.")] = sd[k] - # if new_k != k: - # sd[new_k] = sd[k] - # del sd[k] - - # self.load_state_dict(sd, strict=True) - missing, unexpected = self.load_state_dict(sd_new, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys:\n {missing}") - if len(unexpected) > 0: - print(f"\nUnexpected Keys:\n {unexpected}") - # print(f"Restored from {path}") - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.parameters()) - self.model_ema.copy_to(self) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self) - - def encode(self, x): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - log_dict = self._validation_step(batch, batch_idx) - with self.ema_scope(): - log_dict_ema = self._validation_step(batch, batch_idx, postfix="_ema") - return log_dict - - def _validation_step(self, batch, batch_idx, postfix=""): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val"+postfix) - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val"+postfix) - - self.log(f"val{postfix}/rec_loss", log_dict_ae[f"val{postfix}/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - 
ae_params_list = list(self.encoder.parameters()) + list(self.decoder.parameters()) + list( - self.quant_conv.parameters()) + list(self.post_quant_conv.parameters()) - if self.learn_logvar: - print(f"{self.__class__.__name__}: Learning logvar") - ae_params_list.append(self.loss.logvar) - opt_ae = torch.optim.Adam(ae_params_list, - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, log_ema=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - if log_ema or self.use_ema: - with self.ema_scope(): - xrec_ema, posterior_ema = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec_ema.shape[1] > 3 - xrec_ema = self.to_rgb(xrec_ema) - log["samples_ema"] = self.decode(torch.randn_like(posterior_ema.sample())) - log["reconstructions_ema"] = xrec_ema - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x - - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x - diff --git a/spaces/AchyuthGamer/ImMagician/style.css b/spaces/AchyuthGamer/ImMagician/style.css deleted file mode 100644 index 142c4b92e938cc8cd33cde5ab580b5fd6a2aac78..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/ImMagician/style.css +++ /dev/null @@ -1,97 +0,0 @@ -#col-container {color: white; - max-width: 1200px; - margin-left: auto; - margin-right: auto; -} -a { - color: inherit; - text-decoration: underline; -} -.gradio-container { - color: #ffaa66; - background-color: #005566; - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: #ffffff !important; - text-shadow: 1px 1px 0 rgba(0, 0, 0, 1) !important; - background-image: linear-gradient(#76635a, #d2a489) !important; - border-radius: 24px !important; - border: solid 1px !important; - border-top-color: #ffc99f !important; - border-right-color: #000000 !important; - border-bottom-color: #000000 !important; - border-left-color: #ffc99f !important; - padding: 6px 30px; -} -input[type='range'] { - accent-color: #9d66e5; -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} -.container { - color: #ffaa66; - max-width: 1200px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - color: #ffaa66; - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - color: #ffaa66; - min-height: 20rem; -} 
-.details:hover { - text-decoration: underline; -} -.gr-button:focus { - border-color: rgb(255 160 0 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(0 0 0 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -#advanced-options { - color: #ffaa66; - margin-bottom: 20px; -} -.footer { - color: #ffaa66; - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - color: #ffaa66; - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .logo{ filter: invert(1); } -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.acknowledgments h4{ - color: #ffaa66; - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - diff --git a/spaces/AgentVerse/agentVerse/ui/dist/assets/tilemaps/tiles/tileset.tsx b/spaces/AgentVerse/agentVerse/ui/dist/assets/tilemaps/tiles/tileset.tsx deleted file mode 100644 index 5897600ae219711f8a5c8da05cceb45b619b4e69..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/dist/assets/tilemaps/tiles/tileset.tsx +++ /dev/null @@ -1,4 +0,0 @@ - - - - diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner/Factory.js deleted file mode 100644 index 88b0621cd1ff3f8939eccec9dba8330ca499eea0..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Spinner from './Spinner.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('spinner', function (config) { - var gameObject = new Spinner(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.Spinner.Spinner', Spinner); - -export default Spinner; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/Methods.js deleted file mode 100644 index e8e753b87237a5c0ec7c8e5fc94999e8b9d223a7..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/Methods.js +++ /dev/null @@ -1,13 +0,0 @@ -import ConfigurationMethods from './ConfigurationMethods.js' -import OpenColorPicker from './OpenColorPicker.js'; - -var methods = { - openColorPicker: OpenColorPicker -} - -Object.assign( - methods, - ConfigurationMethods, -); - -export default methods; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON.pm b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON.pm deleted file mode 100644 index 8bac7eb5b90b530b828b25d41cec812d2dc2cf8f..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON.pm +++ /dev/null @@ -1,2317 +0,0 @@ -package JSON; - - 
-use strict; -use Carp (); -use base qw(Exporter); -@JSON::EXPORT = qw(from_json to_json jsonToObj objToJson encode_json decode_json); - -BEGIN { - $JSON::VERSION = '2.90'; - $JSON::DEBUG = 0 unless (defined $JSON::DEBUG); - $JSON::DEBUG = $ENV{ PERL_JSON_DEBUG } if exists $ENV{ PERL_JSON_DEBUG }; -} - -my $Module_XS = 'JSON::XS'; -my $Module_PP = 'JSON::PP'; -my $Module_bp = 'JSON::backportPP'; # included in JSON distribution -my $PP_Version = '2.27203'; -my $XS_Version = '2.34'; - - -# XS and PP common methods - -my @PublicMethods = qw/ - ascii latin1 utf8 pretty indent space_before space_after relaxed canonical allow_nonref - allow_blessed convert_blessed filter_json_object filter_json_single_key_object - shrink max_depth max_size encode decode decode_prefix allow_unknown -/; - -my @Properties = qw/ - ascii latin1 utf8 indent space_before space_after relaxed canonical allow_nonref - allow_blessed convert_blessed shrink max_depth max_size allow_unknown -/; - -my @XSOnlyMethods = qw/allow_tags/; # Currently nothing - -my @PPOnlyMethods = qw/ - indent_length sort_by - allow_singlequote allow_bignum loose allow_barekey escape_slash as_nonblessed -/; # JSON::PP specific - - -# used in _load_xs and _load_pp ($INSTALL_ONLY is not used currently) -my $_INSTALL_DONT_DIE = 1; # When _load_xs fails to load XS, don't die. -my $_INSTALL_ONLY = 2; # Don't call _set_methods() -my $_ALLOW_UNSUPPORTED = 0; -my $_UNIV_CONV_BLESSED = 0; -my $_USSING_bpPP = 0; - - -# Check the environment variable to decide worker module. - -unless ($JSON::Backend) { - $JSON::DEBUG and Carp::carp("Check used worker module..."); - - my $backend = exists $ENV{PERL_JSON_BACKEND} ? $ENV{PERL_JSON_BACKEND} : 1; - - if ($backend eq '1' or $backend =~ /JSON::XS\s*,\s*JSON::PP/) { - _load_xs($_INSTALL_DONT_DIE) or _load_pp(); - } - elsif ($backend eq '0' or $backend eq 'JSON::PP') { - _load_pp(); - } - elsif ($backend eq '2' or $backend eq 'JSON::XS') { - _load_xs(); - } - elsif ($backend eq 'JSON::backportPP') { - $_USSING_bpPP = 1; - _load_pp(); - } - else { - Carp::croak "The value of environmental variable 'PERL_JSON_BACKEND' is invalid."; - } -} - - -sub import { - my $pkg = shift; - my @what_to_export; - my $no_export; - - for my $tag (@_) { - if ($tag eq '-support_by_pp') { - if (!$_ALLOW_UNSUPPORTED++) { - JSON::Backend::XS - ->support_by_pp(@PPOnlyMethods) if ($JSON::Backend eq $Module_XS); - } - next; - } - elsif ($tag eq '-no_export') { - $no_export++, next; - } - elsif ( $tag eq '-convert_blessed_universally' ) { - eval q| - require B; - *UNIVERSAL::TO_JSON = sub { - my $b_obj = B::svref_2object( $_[0] ); - return $b_obj->isa('B::HV') ? { %{ $_[0] } } - : $b_obj->isa('B::AV') ? [ @{ $_[0] } ] - : undef - ; - } - | if ( !$_UNIV_CONV_BLESSED++ ); - next; - } - push @what_to_export, $tag; - } - - return if ($no_export); - - __PACKAGE__->export_to_level(1, $pkg, @what_to_export); -} - - -# OBSOLETED - -sub jsonToObj { - my $alternative = 'from_json'; - if (defined $_[0] and UNIVERSAL::isa($_[0], 'JSON')) { - shift @_; $alternative = 'decode'; - } - Carp::carp "'jsonToObj' will be obsoleted. Please use '$alternative' instead."; - return JSON::from_json(@_); -}; - -sub objToJson { - my $alternative = 'to_json'; - if (defined $_[0] and UNIVERSAL::isa($_[0], 'JSON')) { - shift @_; $alternative = 'encode'; - } - Carp::carp "'objToJson' will be obsoleted. 
Please use '$alternative' instead."; - JSON::to_json(@_); -}; - - -# INTERFACES - -sub to_json ($@) { - if ( - ref($_[0]) eq 'JSON' - or (@_ > 2 and $_[0] eq 'JSON') - ) { - Carp::croak "to_json should not be called as a method."; - } - my $json = JSON->new; - - if (@_ == 2 and ref $_[1] eq 'HASH') { - my $opt = $_[1]; - for my $method (keys %$opt) { - $json->$method( $opt->{$method} ); - } - } - - $json->encode($_[0]); -} - - -sub from_json ($@) { - if ( ref($_[0]) eq 'JSON' or $_[0] eq 'JSON' ) { - Carp::croak "from_json should not be called as a method."; - } - my $json = JSON->new; - - if (@_ == 2 and ref $_[1] eq 'HASH') { - my $opt = $_[1]; - for my $method (keys %$opt) { - $json->$method( $opt->{$method} ); - } - } - - return $json->decode( $_[0] ); -} - - - -sub true { $JSON::true } - -sub false { $JSON::false } - -sub null { undef; } - - -sub require_xs_version { $XS_Version; } - -sub backend { - my $proto = shift; - $JSON::Backend; -} - -#*module = *backend; - - -sub is_xs { - return $_[0]->backend eq $Module_XS; -} - - -sub is_pp { - return not $_[0]->is_xs; -} - - -sub pureperl_only_methods { @PPOnlyMethods; } - - -sub property { - my ($self, $name, $value) = @_; - - if (@_ == 1) { - my %props; - for $name (@Properties) { - my $method = 'get_' . $name; - if ($name eq 'max_size') { - my $value = $self->$method(); - $props{$name} = $value == 1 ? 0 : $value; - next; - } - $props{$name} = $self->$method(); - } - return \%props; - } - elsif (@_ > 3) { - Carp::croak('property() can take only the option within 2 arguments.'); - } - elsif (@_ == 2) { - if ( my $method = $self->can('get_' . $name) ) { - if ($name eq 'max_size') { - my $value = $self->$method(); - return $value == 1 ? 0 : $value; - } - $self->$method(); - } - } - else { - $self->$name($value); - } - -} - - - -# INTERNAL - -sub _load_xs { - my $opt = shift; - - $JSON::DEBUG and Carp::carp "Load $Module_XS."; - - # if called after install module, overload is disable.... why? - JSON::Boolean::_overrride_overload($Module_XS); - JSON::Boolean::_overrride_overload($Module_PP); - - eval qq| - use $Module_XS $XS_Version (); - |; - - if ($@) { - if (defined $opt and $opt & $_INSTALL_DONT_DIE) { - $JSON::DEBUG and Carp::carp "Can't load $Module_XS...($@)"; - return 0; - } - Carp::croak $@; - } - - unless (defined $opt and $opt & $_INSTALL_ONLY) { - _set_module( $JSON::Backend = $Module_XS ); - my $data = join("", ); # this code is from Jcode 2.xx. - close(DATA); - eval $data; - JSON::Backend::XS->init; - } - - return 1; -}; - - -sub _load_pp { - my $opt = shift; - my $backend = $_USSING_bpPP ? $Module_bp : $Module_PP; - - $JSON::DEBUG and Carp::carp "Load $backend."; - - # if called after install module, overload is disable.... why? - JSON::Boolean::_overrride_overload($Module_XS); - JSON::Boolean::_overrride_overload($backend); - - if ( $_USSING_bpPP ) { - eval qq| require $backend |; - } - else { - eval qq| use $backend $PP_Version () |; - } - - if ($@) { - if ( $backend eq $Module_PP ) { - $JSON::DEBUG and Carp::carp "Can't load $Module_PP ($@), so try to load $Module_bp"; - $_USSING_bpPP++; - $backend = $Module_bp; - JSON::Boolean::_overrride_overload($backend); - local $^W; # if PP installed but invalid version, backportPP redefines methods. 
- eval qq| require $Module_bp |; - } - Carp::croak $@ if $@; - } - - unless (defined $opt and $opt & $_INSTALL_ONLY) { - _set_module( $JSON::Backend = $Module_PP ); # even if backportPP, set $Backend with 'JSON::PP' - JSON::Backend::PP->init; - } -}; - - -sub _set_module { - return if defined $JSON::true; - - my $module = shift; - - local $^W; - no strict qw(refs); - - $JSON::true = ${"$module\::true"}; - $JSON::false = ${"$module\::false"}; - - push @JSON::ISA, $module; - if ( JSON->is_xs and JSON->backend->VERSION < 3 ) { - eval 'package JSON::PP::Boolean'; - push @{"$module\::Boolean::ISA"}, qw(JSON::PP::Boolean); - } - - *{"JSON::is_bool"} = \&{"$module\::is_bool"}; - - for my $method ($module eq $Module_XS ? @PPOnlyMethods : @XSOnlyMethods) { - *{"JSON::$method"} = sub { - Carp::carp("$method is not supported in $module."); - $_[0]; - }; - } - - return 1; -} - - - -# -# JSON Boolean -# - -package JSON::Boolean; - -my %Installed; - -sub _overrride_overload { - return; # this function is currently disable. - return if ($Installed{ $_[0] }++); - - my $boolean = $_[0] . '::Boolean'; - - eval sprintf(q| - package %s; - use overload ( - '""' => sub { ${$_[0]} == 1 ? 'true' : 'false' }, - 'eq' => sub { - my ($obj, $op) = ref ($_[0]) ? ($_[0], $_[1]) : ($_[1], $_[0]); - if ($op eq 'true' or $op eq 'false') { - return "$obj" eq 'true' ? 'true' eq $op : 'false' eq $op; - } - else { - return $obj ? 1 == $op : 0 == $op; - } - }, - ); - |, $boolean); - - if ($@) { Carp::croak $@; } - - if ( exists $INC{'JSON/XS.pm'} and $boolean eq 'JSON::XS::Boolean' ) { - local $^W; - my $true = do { bless \(my $dummy = 1), $boolean }; - my $false = do { bless \(my $dummy = 0), $boolean }; - *JSON::XS::true = sub () { $true }; - *JSON::XS::false = sub () { $false }; - } - elsif ( exists $INC{'JSON/PP.pm'} and $boolean eq 'JSON::PP::Boolean' ) { - local $^W; - my $true = do { bless \(my $dummy = 1), $boolean }; - my $false = do { bless \(my $dummy = 0), $boolean }; - *JSON::PP::true = sub { $true }; - *JSON::PP::false = sub { $false }; - } - - return 1; -} - - -# -# Helper classes for Backend Module (PP) -# - -package JSON::Backend::PP; - -sub init { - local $^W; - no strict qw(refs); # this routine may be called after JSON::Backend::XS init was called. - *{"JSON::decode_json"} = \&{"JSON::PP::decode_json"}; - *{"JSON::encode_json"} = \&{"JSON::PP::encode_json"}; - *{"JSON::PP::is_xs"} = sub { 0 }; - *{"JSON::PP::is_pp"} = sub { 1 }; - return 1; -} - -# -# To save memory, the below lines are read only when XS backend is used. 
-# - -package JSON; - -1; -__DATA__ - - -# -# Helper classes for Backend Module (XS) -# - -package JSON::Backend::XS; - -use constant INDENT_LENGTH_FLAG => 15 << 12; - -use constant UNSUPPORTED_ENCODE_FLAG => { - ESCAPE_SLASH => 0x00000010, - ALLOW_BIGNUM => 0x00000020, - AS_NONBLESSED => 0x00000040, - EXPANDED => 0x10000000, # for developer's -}; - -use constant UNSUPPORTED_DECODE_FLAG => { - LOOSE => 0x00000001, - ALLOW_BIGNUM => 0x00000002, - ALLOW_BAREKEY => 0x00000004, - ALLOW_SINGLEQUOTE => 0x00000008, - EXPANDED => 0x20000000, # for developer's -}; - - -sub init { - local $^W; - no strict qw(refs); - *{"JSON::decode_json"} = \&{"JSON::XS::decode_json"}; - *{"JSON::encode_json"} = \&{"JSON::XS::encode_json"}; - *{"JSON::XS::is_xs"} = sub { 1 }; - *{"JSON::XS::is_pp"} = sub { 0 }; - return 1; -} - - -sub support_by_pp { - my ($class, @methods) = @_; - - local $^W; - no strict qw(refs); - - my $JSON_XS_encode_orignal = \&JSON::XS::encode; - my $JSON_XS_decode_orignal = \&JSON::XS::decode; - my $JSON_XS_incr_parse_orignal = \&JSON::XS::incr_parse; - - *JSON::XS::decode = \&JSON::Backend::XS::Supportable::_decode; - *JSON::XS::encode = \&JSON::Backend::XS::Supportable::_encode; - *JSON::XS::incr_parse = \&JSON::Backend::XS::Supportable::_incr_parse; - - *{JSON::XS::_original_decode} = $JSON_XS_decode_orignal; - *{JSON::XS::_original_encode} = $JSON_XS_encode_orignal; - *{JSON::XS::_original_incr_parse} = $JSON_XS_incr_parse_orignal; - - push @JSON::Backend::XS::Supportable::ISA, 'JSON'; - - my $pkg = 'JSON::Backend::XS::Supportable'; - - *{JSON::new} = sub { - my $proto = JSON::XS->new; $$proto = 0; - bless $proto, $pkg; - }; - - - for my $method (@methods) { - my $flag = uc($method); - my $type |= (UNSUPPORTED_ENCODE_FLAG->{$flag} || 0); - $type |= (UNSUPPORTED_DECODE_FLAG->{$flag} || 0); - - next unless($type); - - $pkg->_make_unsupported_method($method => $type); - } - -# push @{"JSON::XS::Boolean::ISA"}, qw(JSON::PP::Boolean); -# push @{"JSON::PP::Boolean::ISA"}, qw(JSON::Boolean); - - $JSON::DEBUG and Carp::carp("set -support_by_pp mode."); - - return 1; -} - - - - -# -# Helper classes for XS -# - -package JSON::Backend::XS::Supportable; - -$Carp::Internal{'JSON::Backend::XS::Supportable'} = 1; - -sub _make_unsupported_method { - my ($pkg, $method, $type) = @_; - - local $^W; - no strict qw(refs); - - *{"$pkg\::$method"} = sub { - local $^W; - if (defined $_[1] ? $_[1] : 1) { - ${$_[0]} |= $type; - } - else { - ${$_[0]} &= ~$type; - } - $_[0]; - }; - - *{"$pkg\::get_$method"} = sub { - ${$_[0]} & $type ? 1 : ''; - }; - -} - - -sub _set_for_pp { - JSON::_load_pp( $_INSTALL_ONLY ); - - my $type = shift; - my $pp = JSON::PP->new; - my $prop = $_[0]->property; - - for my $name (keys %$prop) { - $pp->$name( $prop->{$name} ? $prop->{$name} : 0 ); - } - - my $unsupported = $type eq 'encode' ? JSON::Backend::XS::UNSUPPORTED_ENCODE_FLAG - : JSON::Backend::XS::UNSUPPORTED_DECODE_FLAG; - my $flags = ${$_[0]} || 0; - - for my $name (keys %$unsupported) { - next if ($name eq 'EXPANDED'); # for developer's - my $enable = ($flags & $unsupported->{$name}) ? 
1 : 0; - my $method = lc $name; - $pp->$method($enable); - } - - $pp->indent_length( $_[0]->get_indent_length ); - - return $pp; -} - -sub _encode { # using with PP encode - if (${$_[0]}) { - _set_for_pp('encode' => @_)->encode($_[1]); - } - else { - $_[0]->_original_encode( $_[1] ); - } -} - - -sub _decode { # if unsupported-flag is set, use PP - if (${$_[0]}) { - _set_for_pp('decode' => @_)->decode($_[1]); - } - else { - $_[0]->_original_decode( $_[1] ); - } -} - - -sub decode_prefix { # if unsupported-flag is set, use PP - _set_for_pp('decode' => @_)->decode_prefix($_[1]); -} - - -sub _incr_parse { - if (${$_[0]}) { - _set_for_pp('decode' => @_)->incr_parse($_[1]); - } - else { - $_[0]->_original_incr_parse( $_[1] ); - } -} - - -sub get_indent_length { - ${$_[0]} << 4 >> 16; -} - - -sub indent_length { - my $length = $_[1]; - - if (!defined $length or $length > 15 or $length < 0) { - Carp::carp "The acceptable range of indent_length() is 0 to 15."; - } - else { - local $^W; - $length <<= 12; - ${$_[0]} &= ~ JSON::Backend::XS::INDENT_LENGTH_FLAG; - ${$_[0]} |= $length; - *JSON::XS::encode = \&JSON::Backend::XS::Supportable::_encode; - } - - $_[0]; -} - - -1; -__END__ - -=head1 NAME - -JSON - JSON (JavaScript Object Notation) encoder/decoder - -=head1 SYNOPSIS - - use JSON; # imports encode_json, decode_json, to_json and from_json. - - # simple and fast interfaces (expect/generate UTF-8) - - $utf8_encoded_json_text = encode_json $perl_hash_or_arrayref; - $perl_hash_or_arrayref = decode_json $utf8_encoded_json_text; - - # OO-interface - - $json = JSON->new->allow_nonref; - - $json_text = $json->encode( $perl_scalar ); - $perl_scalar = $json->decode( $json_text ); - - $pretty_printed = $json->pretty->encode( $perl_scalar ); # pretty-printing - - # If you want to use PP only support features, call with '-support_by_pp' - # When XS unsupported feature is enable, using PP (de|en)code instead of XS ones. - - use JSON -support_by_pp; - - # option-acceptable interfaces (expect/generate UNICODE by default) - - $json_text = to_json( $perl_scalar, { ascii => 1, pretty => 1 } ); - $perl_scalar = from_json( $json_text, { utf8 => 1 } ); - - # Between (en|de)code_json and (to|from)_json, if you want to write - # a code which communicates to an outer world (encoded in UTF-8), - # recommend to use (en|de)code_json. - -=head1 VERSION - - 2.90 - -This version is compatible with JSON::XS B<2.34> and later. -(Not yet compatble to JSON::XS B<3.0x>.) - - -=head1 NOTE - -JSON::PP was earlier included in the C distribution, but -has since Perl 5.14 been a core module. For this reason, -L was removed from the JSON distribution and can now -be found also in the Perl5 repository at - -=over - -=item * L - -=back - -(The newest JSON::PP version still exists in CPAN.) - -Instead, the C distribution will include JSON::backportPP -for backwards computability. JSON.pm should thus work as it did -before. - -=head1 DESCRIPTION - - *************************** CAUTION ************************************** - * * - * INCOMPATIBLE CHANGE (JSON::XS version 2.90) * - * * - * JSON.pm had patched JSON::XS::Boolean and JSON::PP::Boolean internally * - * on loading time for making these modules inherit JSON::Boolean. * - * But since JSON::XS v3.0 it use Types::Serialiser as boolean class. * - * Then now JSON.pm breaks boolean classe overload features and * - * -support_by_pp if JSON::XS v3.0 or later is installed. * - * * - * JSON::true and JSON::false returned JSON::Boolean objects. 
* - * For workaround, they return JSON::PP::Boolean objects in this version. * - * * - * isa_ok(JSON::true, 'JSON::PP::Boolean'); * - * * - * And it discards a feature: * - * * - * ok(JSON::true eq 'true'); * - * * - * In other word, JSON::PP::Boolean overload numeric only. * - * * - * ok( JSON::true == 1 ); * - * * - ************************************************************************** - - ************************** CAUTION ******************************** - * This is 'JSON module version 2' and there are many differences * - * to version 1.xx * - * Please check your applications using old version. * - * See to 'INCOMPATIBLE CHANGES TO OLD VERSION' * - ******************************************************************* - -JSON (JavaScript Object Notation) is a simple data format. -See to L and C(L). - -This module converts Perl data structures to JSON and vice versa using either -L or L. - -JSON::XS is the fastest and most proper JSON module on CPAN which must be -compiled and installed in your environment. -JSON::PP is a pure-Perl module which is bundled in this distribution and -has a strong compatibility to JSON::XS. - -This module try to use JSON::XS by default and fail to it, use JSON::PP instead. -So its features completely depend on JSON::XS or JSON::PP. - -See to L. - -To distinguish the module name 'JSON' and the format type JSON, -the former is quoted by CEE (its results vary with your using media), -and the latter is left just as it is. - -Module name : C - -Format type : JSON - -=head2 FEATURES - -=over - -=item * correct unicode handling - -This module (i.e. backend modules) knows how to handle Unicode, documents -how and when it does so, and even documents what "correct" means. - -Even though there are limitations, this feature is available since Perl version 5.6. - -JSON::XS requires Perl 5.8.2 (but works correctly in 5.8.8 or later), so in older versions -C should call JSON::PP as the backend which can be used since Perl 5.005. - -With Perl 5.8.x JSON::PP works, but from 5.8.0 to 5.8.2, because of a Perl side problem, -JSON::PP works slower in the versions. And in 5.005, the Unicode handling is not available. -See to L for more information. - -See also to L -and L. - - -=item * round-trip integrity - -When you serialise a perl data structure using only data types supported -by JSON and Perl, the deserialised data structure is identical on the Perl -level. (e.g. the string "2.0" doesn't suddenly become "2" just because -it looks like a number). There I minor exceptions to this, read the -L section below to learn about those. - - -=item * strict checking of JSON correctness - -There is no guessing, no generating of illegal JSON texts by default, -and only JSON is accepted as input by default (the latter is a security -feature). - -See to L and L. - -=item * fast - -This module returns a JSON::XS object itself if available. -Compared to other JSON modules and other serialisers such as Storable, -JSON::XS usually compares favorably in terms of speed, too. - -If not available, C returns a JSON::PP object instead of JSON::XS and -it is very slow as pure-Perl. - -=item * simple to use - -This module has both a simple functional interface as well as an -object oriented interface interface. 
- -=item * reasonably versatile output formats - -You can choose between the most compact guaranteed-single-line format possible -(nice for simple line-based protocols), a pure-ASCII format (for when your transport -is not 8-bit clean, still supports the whole Unicode range), or a pretty-printed -format (for when you want to read that stuff). Or you can combine those features -in whatever way you like. - -=back - -=head1 FUNCTIONAL INTERFACE - -Some documents are copied and modified from L. -C and C are additional functions. - -=head2 encode_json - - $json_text = encode_json $perl_scalar - -Converts the given Perl data structure to a UTF-8 encoded, binary string. - -This function call is functionally identical to: - - $json_text = JSON->new->utf8->encode($perl_scalar) - -=head2 decode_json - - $perl_scalar = decode_json $json_text - -The opposite of C: expects an UTF-8 (binary) string and tries -to parse that as an UTF-8 encoded JSON text, returning the resulting -reference. - -This function call is functionally identical to: - - $perl_scalar = JSON->new->utf8->decode($json_text) - - -=head2 to_json - - $json_text = to_json($perl_scalar) - -Converts the given Perl data structure to a json string. - -This function call is functionally identical to: - - $json_text = JSON->new->encode($perl_scalar) - -Takes a hash reference as the second. - - $json_text = to_json($perl_scalar, $flag_hashref) - -So, - - $json_text = to_json($perl_scalar, {utf8 => 1, pretty => 1}) - -equivalent to: - - $json_text = JSON->new->utf8(1)->pretty(1)->encode($perl_scalar) - -If you want to write a modern perl code which communicates to outer world, -you should use C (supposed that JSON data are encoded in UTF-8). - -=head2 from_json - - $perl_scalar = from_json($json_text) - -The opposite of C: expects a json string and tries -to parse it, returning the resulting reference. - -This function call is functionally identical to: - - $perl_scalar = JSON->decode($json_text) - -Takes a hash reference as the second. - - $perl_scalar = from_json($json_text, $flag_hashref) - -So, - - $perl_scalar = from_json($json_text, {utf8 => 1}) - -equivalent to: - - $perl_scalar = JSON->new->utf8(1)->decode($json_text) - -If you want to write a modern perl code which communicates to outer world, -you should use C (supposed that JSON data are encoded in UTF-8). - -=head2 JSON::is_bool - - $is_boolean = JSON::is_bool($scalar) - -Returns true if the passed scalar represents either JSON::true or -JSON::false, two constants that act like C<1> and C<0> respectively -and are also used to represent JSON C and C in Perl strings. - -=head2 JSON::true - -Returns JSON true value which is blessed object. -It C JSON::Boolean object. - -=head2 JSON::false - -Returns JSON false value which is blessed object. -It C JSON::Boolean object. - -=head2 JSON::null - -Returns C. - -See L, below, for more information on how JSON values are mapped to -Perl. - -=head1 HOW DO I DECODE A DATA FROM OUTER AND ENCODE TO OUTER - -This section supposes that your perl version is 5.8 or later. - -If you know a JSON text from an outer world - a network, a file content, and so on, -is encoded in UTF-8, you should use C or C module object -with C enable. And the decoded result will contain UNICODE characters. 
- - # from network - my $json = JSON->new->utf8; - my $json_text = CGI->new->param( 'json_data' ); - my $perl_scalar = $json->decode( $json_text ); - - # from file content - local $/; - open( my $fh, '<', 'json.data' ); - $json_text = <$fh>; - $perl_scalar = decode_json( $json_text ); - -If an outer data is not encoded in UTF-8, firstly you should C it. - - use Encode; - local $/; - open( my $fh, '<', 'json.data' ); - my $encoding = 'cp932'; - my $unicode_json_text = decode( $encoding, <$fh> ); # UNICODE - - # or you can write the below code. - # - # open( my $fh, "<:encoding($encoding)", 'json.data' ); - # $unicode_json_text = <$fh>; - -In this case, C<$unicode_json_text> is of course UNICODE string. -So you B use C nor C module object with C enable. -Instead of them, you use C module object with C disable or C. - - $perl_scalar = $json->utf8(0)->decode( $unicode_json_text ); - # or - $perl_scalar = from_json( $unicode_json_text ); - -Or C and C: - - $perl_scalar = decode_json( encode( 'utf8', $unicode_json_text ) ); - # this way is not efficient. - -And now, you want to convert your C<$perl_scalar> into JSON data and -send it to an outer world - a network or a file content, and so on. - -Your data usually contains UNICODE strings and you want the converted data to be encoded -in UTF-8, you should use C or C module object with C enable. - - print encode_json( $perl_scalar ); # to a network? file? or display? - # or - print $json->utf8->encode( $perl_scalar ); - -If C<$perl_scalar> does not contain UNICODE but C<$encoding>-encoded strings -for some reason, then its characters are regarded as B for perl -(because it does not concern with your $encoding). -You B use C nor C module object with C enable. -Instead of them, you use C module object with C disable or C. -Note that the resulted text is a UNICODE string but no problem to print it. - - # $perl_scalar contains $encoding encoded string values - $unicode_json_text = $json->utf8(0)->encode( $perl_scalar ); - # or - $unicode_json_text = to_json( $perl_scalar ); - # $unicode_json_text consists of characters less than 0x100 - print $unicode_json_text; - -Or C all string values and C: - - $perl_scalar->{ foo } = decode( $encoding, $perl_scalar->{ foo } ); - # ... do it to each string values, then encode_json - $json_text = encode_json( $perl_scalar ); - -This method is a proper way but probably not efficient. - -See to L, L. - - -=head1 COMMON OBJECT-ORIENTED INTERFACE - -=head2 new - - $json = JSON->new - -Returns a new C object inherited from either JSON::XS or JSON::PP -that can be used to de/encode JSON strings. - -All boolean flags described below are by default I. - -The mutators for flags all return the JSON object again and thus calls can -be chained: - - my $json = JSON->new->utf8->space_after->encode({a => [1,2]}) - => {"a": [1, 2]} - -=head2 ascii - - $json = $json->ascii([$enable]) - - $enabled = $json->get_ascii - -If $enable is true (or missing), then the encode method will not generate characters outside -the code range 0..127. Any Unicode characters outside that range will be escaped using either -a single \uXXXX or a double \uHHHH\uLLLLL escape sequence, as per RFC4627. - -If $enable is false, then the encode method will not escape Unicode characters unless -required by the JSON syntax or other flags. This results in a faster and more compact format. - -This feature depends on the used Perl version and environment. - -See to L if the backend is PP. 
- - JSON->new->ascii(1)->encode([chr 0x10401]) - => ["\ud801\udc01"] - -=head2 latin1 - - $json = $json->latin1([$enable]) - - $enabled = $json->get_latin1 - -If $enable is true (or missing), then the encode method will encode the resulting JSON -text as latin1 (or iso-8859-1), escaping any characters outside the code range 0..255. - -If $enable is false, then the encode method will not escape Unicode characters -unless required by the JSON syntax or other flags. - - JSON->new->latin1->encode (["\x{89}\x{abc}"] - => ["\x{89}\\u0abc"] # (perl syntax, U+abc escaped, U+89 not) - -=head2 utf8 - - $json = $json->utf8([$enable]) - - $enabled = $json->get_utf8 - -If $enable is true (or missing), then the encode method will encode the JSON result -into UTF-8, as required by many protocols, while the decode method expects to be handled -an UTF-8-encoded string. Please note that UTF-8-encoded strings do not contain any -characters outside the range 0..255, they are thus useful for bytewise/binary I/O. - -In future versions, enabling this option might enable autodetection of the UTF-16 and UTF-32 -encoding families, as described in RFC4627. - -If $enable is false, then the encode method will return the JSON string as a (non-encoded) -Unicode string, while decode expects thus a Unicode string. Any decoding or encoding -(e.g. to UTF-8 or UTF-16) needs to be done yourself, e.g. using the Encode module. - - -Example, output UTF-16BE-encoded JSON: - - use Encode; - $jsontext = encode "UTF-16BE", JSON::XS->new->encode ($object); - -Example, decode UTF-32LE-encoded JSON: - - use Encode; - $object = JSON::XS->new->decode (decode "UTF-32LE", $jsontext); - -See to L if the backend is PP. - - -=head2 pretty - - $json = $json->pretty([$enable]) - -This enables (or disables) all of the C, C and -C (and in the future possibly more) flags in one call to -generate the most readable (or most compact) form possible. - -Equivalent to: - - $json->indent->space_before->space_after - -The indent space length is three and JSON::XS cannot change the indent -space length. - -=head2 indent - - $json = $json->indent([$enable]) - - $enabled = $json->get_indent - -If C<$enable> is true (or missing), then the C method will use a multiline -format as output, putting every array member or object/hash key-value pair -into its own line, identifying them properly. - -If C<$enable> is false, no newlines or indenting will be produced, and the -resulting JSON text is guaranteed not to contain any C. - -This setting has no effect when decoding JSON texts. - -The indent space length is three. -With JSON::PP, you can also access C to change indent space length. - - -=head2 space_before - - $json = $json->space_before([$enable]) - - $enabled = $json->get_space_before - -If C<$enable> is true (or missing), then the C method will add an extra -optional space before the C<:> separating keys from values in JSON objects. - -If C<$enable> is false, then the C method will not add any extra -space at those places. - -This setting has no effect when decoding JSON texts. - -Example, space_before enabled, space_after and indent disabled: - - {"key" :"value"} - - -=head2 space_after - - $json = $json->space_after([$enable]) - - $enabled = $json->get_space_after - -If C<$enable> is true (or missing), then the C method will add an extra -optional space after the C<:> separating keys from values in JSON objects -and extra whitespace after the C<,> separating key-value pairs and array -members. 
- -If C<$enable> is false, then the C method will not add any extra -space at those places. - -This setting has no effect when decoding JSON texts. - -Example, space_before and indent disabled, space_after enabled: - - {"key": "value"} - - -=head2 relaxed - - $json = $json->relaxed([$enable]) - - $enabled = $json->get_relaxed - -If C<$enable> is true (or missing), then C will accept some -extensions to normal JSON syntax (see below). C will not be -affected in anyway. I. I suggest only to use this option to -parse application-specific files written by humans (configuration files, -resource files etc.) - -If C<$enable> is false (the default), then C will only accept -valid JSON texts. - -Currently accepted extensions are: - -=over 4 - -=item * list items can have an end-comma - -JSON I array elements and key-value pairs with commas. This -can be annoying if you write JSON texts manually and want to be able to -quickly append elements, so this extension accepts comma at the end of -such items not just between them: - - [ - 1, - 2, <- this comma not normally allowed - ] - { - "k1": "v1", - "k2": "v2", <- this comma not normally allowed - } - -=item * shell-style '#'-comments - -Whenever JSON allows whitespace, shell-style comments are additionally -allowed. They are terminated by the first carriage-return or line-feed -character, after which more white-space and comments are allowed. - - [ - 1, # this comment not allowed in JSON - # neither this one... - ] - -=back - - -=head2 canonical - - $json = $json->canonical([$enable]) - - $enabled = $json->get_canonical - -If C<$enable> is true (or missing), then the C method will output JSON objects -by sorting their keys. This is adding a comparatively high overhead. - -If C<$enable> is false, then the C method will output key-value -pairs in the order Perl stores them (which will likely change between runs -of the same script). - -This option is useful if you want the same data structure to be encoded as -the same JSON text (given the same overall settings). If it is disabled, -the same hash might be encoded differently even if contains the same data, -as key-value pairs have no inherent ordering in Perl. - -This setting has no effect when decoding JSON texts. - -=head2 allow_nonref - - $json = $json->allow_nonref([$enable]) - - $enabled = $json->get_allow_nonref - -If C<$enable> is true (or missing), then the C method can convert a -non-reference into its corresponding string, number or null JSON value, -which is an extension to RFC4627. Likewise, C will accept those JSON -values instead of croaking. - -If C<$enable> is false, then the C method will croak if it isn't -passed an arrayref or hashref, as JSON texts must either be an object -or array. Likewise, C will croak if given something that is not a -JSON object or array. - - JSON->new->allow_nonref->encode ("Hello, World!") - => "Hello, World!" - -=head2 allow_unknown - - $json = $json->allow_unknown ([$enable]) - - $enabled = $json->get_allow_unknown - -If $enable is true (or missing), then "encode" will *not* throw an -exception when it encounters values it cannot represent in JSON (for -example, filehandles) but instead will encode a JSON "null" value. -Note that blessed objects are not included here and are handled -separately by c. - -If $enable is false (the default), then "encode" will throw an -exception when it encounters anything it cannot encode as JSON. 
- -This option does not affect "decode" in any way, and it is -recommended to leave it off unless you know your communications -partner. - -=head2 allow_blessed - - $json = $json->allow_blessed([$enable]) - - $enabled = $json->get_allow_blessed - -If C<$enable> is true (or missing), then the C method will not -barf when it encounters a blessed reference. Instead, the value of the -B option will decide whether C (C -disabled or no C method found) or a representation of the -object (C enabled and C method found) is being -encoded. Has no effect on C. - -If C<$enable> is false (the default), then C will throw an -exception when it encounters a blessed object. - - -=head2 convert_blessed - - $json = $json->convert_blessed([$enable]) - - $enabled = $json->get_convert_blessed - -If C<$enable> is true (or missing), then C, upon encountering a -blessed object, will check for the availability of the C method -on the object's class. If found, it will be called in scalar context -and the resulting scalar will be encoded instead of the object. If no -C method is found, the value of C will decide what -to do. - -The C method may safely call die if it wants. If C -returns other blessed objects, those will be handled in the same -way. C must take care of not causing an endless recursion cycle -(== crash) in this case. The name of C was chosen because other -methods called by the Perl core (== not by the user of the object) are -usually in upper case letters and to avoid collisions with the C -function or method. - -This setting does not yet influence C in any way. - -If C<$enable> is false, then the C setting will decide what -to do when a blessed object is found. - -=over - -=item convert_blessed_universally mode - -If use C with C<-convert_blessed_universally>, the C -subroutine is defined as the below code: - - *UNIVERSAL::TO_JSON = sub { - my $b_obj = B::svref_2object( $_[0] ); - return $b_obj->isa('B::HV') ? { %{ $_[0] } } - : $b_obj->isa('B::AV') ? [ @{ $_[0] } ] - : undef - ; - } - -This will cause that C method converts simple blessed objects into -JSON objects as non-blessed object. - - JSON -convert_blessed_universally; - $json->allow_blessed->convert_blessed->encode( $blessed_object ) - -This feature is experimental and may be removed in the future. - -=back - -=head2 filter_json_object - - $json = $json->filter_json_object([$coderef]) - -When C<$coderef> is specified, it will be called from C each -time it decodes a JSON object. The only argument passed to the coderef -is a reference to the newly-created hash. If the code references returns -a single scalar (which need not be a reference), this value -(i.e. a copy of that scalar to avoid aliasing) is inserted into the -deserialised data structure. If it returns an empty list -(NOTE: I C, which is a valid scalar), the original deserialised -hash will be inserted. This setting can slow down decoding considerably. - -When C<$coderef> is omitted or undefined, any existing callback will -be removed and C will not change the deserialised hash in any -way. - -Example, convert all JSON objects into the integer 5: - - my $js = JSON->new->filter_json_object (sub { 5 }); - # returns [5] - $js->decode ('[{}]'); # the given subroutine takes a hash reference. - # throw an exception because allow_nonref is not enabled - # so a lone 5 is not allowed. 
- $js->decode ('{"a":1, "b":2}'); - - -=head2 filter_json_single_key_object - - $json = $json->filter_json_single_key_object($key [=> $coderef]) - -Works remotely similar to C, but is only called for -JSON objects having a single key named C<$key>. - -This C<$coderef> is called before the one specified via -C, if any. It gets passed the single value in the JSON -object. If it returns a single value, it will be inserted into the data -structure. If it returns nothing (not even C but the empty list), -the callback from C will be called next, as if no -single-key callback were specified. - -If C<$coderef> is omitted or undefined, the corresponding callback will be -disabled. There can only ever be one callback for a given key. - -As this callback gets called less often then the C -one, decoding speed will not usually suffer as much. Therefore, single-key -objects make excellent targets to serialise Perl objects into, especially -as single-key JSON objects are as close to the type-tagged value concept -as JSON gets (it's basically an ID/VALUE tuple). Of course, JSON does not -support this in any way, so you need to make sure your data never looks -like a serialised Perl hash. - -Typical names for the single object key are C<__class_whatever__>, or -C<$__dollars_are_rarely_used__$> or C<}ugly_brace_placement>, or even -things like C<__class_md5sum(classname)__>, to reduce the risk of clashing -with real hashes. - -Example, decode JSON objects of the form C<< { "__widget__" => } >> -into the corresponding C<< $WIDGET{} >> object: - - # return whatever is in $WIDGET{5}: - JSON - ->new - ->filter_json_single_key_object (__widget__ => sub { - $WIDGET{ $_[0] } - }) - ->decode ('{"__widget__": 5') - - # this can be used with a TO_JSON method in some "widget" class - # for serialisation to json: - sub WidgetBase::TO_JSON { - my ($self) = @_; - - unless ($self->{id}) { - $self->{id} = ..get..some..id..; - $WIDGET{$self->{id}} = $self; - } - - { __widget__ => $self->{id} } - } - - -=head2 shrink - - $json = $json->shrink([$enable]) - - $enabled = $json->get_shrink - -With JSON::XS, this flag resizes strings generated by either -C or C to their minimum size possible. This can save -memory when your JSON texts are either very very long or you have many -short strings. It will also try to downgrade any strings to octet-form -if possible: perl stores strings internally either in an encoding called -UTF-X or in octet-form. The latter cannot store everything but uses less -space in general (and some buggy Perl or C code might even rely on that -internal representation being used). - -With JSON::PP, it is noop about resizing strings but tries -C to the returned string by C. See to L. - -See to L and L. - -=head2 max_depth - - $json = $json->max_depth([$maximum_nesting_depth]) - - $max_depth = $json->get_max_depth - -Sets the maximum nesting level (default C<512>) accepted while encoding -or decoding. If a higher nesting level is detected in JSON text or a Perl -data structure, then the encoder and decoder will stop and croak at that -point. - -Nesting level is defined by number of hash- or arrayrefs that the encoder -needs to traverse to reach a given point or the number of C<{> or C<[> -characters without their matching closing parenthesis crossed to reach a -given character in a string. - -If no argument is given, the highest possible setting will be used, which -is rarely useful. - -Note that nesting is implemented by recursion in C. 
The default value has -been chosen to be as large as typical operating systems allow without -crashing. (JSON::XS) - -With JSON::PP as the backend, when a large value (100 or more) was set and -it de/encodes a deep nested object/text, it may raise a warning -'Deep recursion on subroutine' at the perl runtime phase. - -See L for more info on why this is useful. - -=head2 max_size - - $json = $json->max_size([$maximum_string_size]) - - $max_size = $json->get_max_size - -Set the maximum length a JSON text may have (in bytes) where decoding is -being attempted. The default is C<0>, meaning no limit. When C -is called on a string that is longer then this many bytes, it will not -attempt to decode the string but throw an exception. This setting has no -effect on C (yet). - -If no argument is given, the limit check will be deactivated (same as when -C<0> is specified). - -See L, below, for more info on why this is useful. - -=head2 encode - - $json_text = $json->encode($perl_scalar) - -Converts the given Perl data structure (a simple scalar or a reference -to a hash or array) to its JSON representation. Simple scalars will be -converted into JSON string or number sequences, while references to arrays -become JSON arrays and references to hashes become JSON objects. Undefined -Perl values (e.g. C) become JSON C values. -References to the integers C<0> and C<1> are converted into C and C. - -=head2 decode - - $perl_scalar = $json->decode($json_text) - -The opposite of C: expects a JSON text and tries to parse it, -returning the resulting simple scalar or reference. Croaks on error. - -JSON numbers and strings become simple Perl scalars. JSON arrays become -Perl arrayrefs and JSON objects become Perl hashrefs. C becomes -C<1> (C), C becomes C<0> (C) and -C becomes C. - -=head2 decode_prefix - - ($perl_scalar, $characters) = $json->decode_prefix($json_text) - -This works like the C method, but instead of raising an exception -when there is trailing garbage after the first JSON object, it will -silently stop parsing there and return the number of characters consumed -so far. - - JSON->new->decode_prefix ("[1] the tail") - => ([], 3) - -See to L - -=head2 property - - $boolean = $json->property($property_name) - -Returns a boolean value about above some properties. - -The available properties are C, C, C, -C,C, C, C, C, -C, C, C, C, -C, C and C. - - $boolean = $json->property('utf8'); - => 0 - $json->utf8; - $boolean = $json->property('utf8'); - => 1 - -Sets the property with a given boolean value. - - $json = $json->property($property_name => $boolean); - -With no argument, it returns all the above properties as a hash reference. - - $flag_hashref = $json->property(); - -=head1 INCREMENTAL PARSING - -Most of this section are copied and modified from L. - -In some cases, there is the need for incremental parsing of JSON texts. -This module does allow you to parse a JSON stream incrementally. -It does so by accumulating text until it has a full JSON object, which -it then can decode. This process is similar to using C -to see if a full JSON object is available, but is much more efficient -(and can be implemented with a minimum of method calls). - -The backend module will only attempt to parse the JSON text once it is sure it -has enough text to get a decisive result, using a very simple but -truly incremental parser. This means that it sometimes won't stop as -early as the full parser, for example, it doesn't detect parenthesis -mismatches. 
The only thing it guarantees is that it starts decoding as -soon as a syntactically valid JSON text has been seen. This means you need -to set resource limits (e.g. C) to ensure the parser will stop -parsing in the presence if syntax errors. - -The following methods implement this incremental parser. - -=head2 incr_parse - - $json->incr_parse( [$string] ) # void context - - $obj_or_undef = $json->incr_parse( [$string] ) # scalar context - - @obj_or_empty = $json->incr_parse( [$string] ) # list context - -This is the central parsing function. It can both append new text and -extract objects from the stream accumulated so far (both of these -functions are optional). - -If C<$string> is given, then this string is appended to the already -existing JSON fragment stored in the C<$json> object. - -After that, if the function is called in void context, it will simply -return without doing anything further. This can be used to add more text -in as many chunks as you want. - -If the method is called in scalar context, then it will try to extract -exactly I JSON object. If that is successful, it will return this -object, otherwise it will return C. If there is a parse error, -this method will croak just as C would do (one can then use -C to skip the erroneous part). This is the most common way of -using the method. - -And finally, in list context, it will try to extract as many objects -from the stream as it can find and return them, or the empty list -otherwise. For this to work, there must be no separators between the JSON -objects or arrays, instead they must be concatenated back-to-back. If -an error occurs, an exception will be raised as in the scalar context -case. Note that in this case, any previously-parsed JSON texts will be -lost. - -Example: Parse some JSON arrays/objects in a given string and return them. - - my @objs = JSON->new->incr_parse ("[5][7][1,2]"); - -=head2 incr_text - - $lvalue_string = $json->incr_text - -This method returns the currently stored JSON fragment as an lvalue, that -is, you can manipulate it. This I works when a preceding call to -C in I successfully returned an object. Under -all other circumstances you must not call this function (I mean it. -although in simple tests it might actually work, it I fail under -real world conditions). As a special exception, you can also call this -method before having parsed anything. - -This function is useful in two cases: a) finding the trailing text after a -JSON object or b) parsing multiple JSON objects separated by non-JSON text -(such as commas). - - $json->incr_text =~ s/\s*,\s*//; - -In Perl 5.005, C attribute is not available. -You must write codes like the below: - - $string = $json->incr_text; - $string =~ s/\s*,\s*//; - $json->incr_text( $string ); - -=head2 incr_skip - - $json->incr_skip - -This will reset the state of the incremental parser and will remove the -parsed text from the input buffer. This is useful after C -died, in which case the input buffer and incremental parser state is left -unchanged, to skip the text parsed so far and to reset the parse state. - -=head2 incr_reset - - $json->incr_reset - -This completely resets the incremental parser, that is, after this call, -it will be as if the parser had never parsed anything. - -This is useful if you want to repeatedly parse JSON objects and want to -ignore any trailing data, which means you have to reset the parser after -each successful decode. - -See to L for examples. 
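
Putting the incremental methods together, stream-style parsing looks roughly like the following. This is only a minimal sketch: the C<@chunks> array and its sample data are invented here purely for illustration, and in practice the pieces would arrive from a socket or file handle.

    use JSON;

    my $json   = JSON->new->utf8;
    my @chunks = ( '{"a"', ':1}{"b":2}' );    # pretend these arrive one at a time

    for my $chunk (@chunks) {
        $json->incr_parse( $chunk );          # void context: only accumulate text
        while ( defined( my $obj = $json->incr_parse ) ) {
            # scalar context: extract one complete object, or undef if none is ready yet
            print encode_json( $obj ), "\n";
        }
    }
    # prints {"a":1} and then {"b":2}

Using C<defined> in the loop condition is a small safety measure, since a decoded value could itself be false.
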
- - -=head1 JSON::PP SUPPORT METHODS - -The below methods are JSON::PP own methods, so when C works -with JSON::PP (i.e. the created object is a JSON::PP object), available. -See to L in detail. - -If you use C with additional C<-support_by_pp>, some methods -are available even with JSON::XS. See to L. - - BEING { $ENV{PERL_JSON_BACKEND} = 'JSON::XS' } - - use JSON -support_by_pp; - - my $json = JSON->new; - $json->allow_nonref->escape_slash->encode("/"); - - # functional interfaces too. - print to_json(["/"], {escape_slash => 1}); - print from_json('["foo"]', {utf8 => 1}); - -If you do not want to all functions but C<-support_by_pp>, -use C<-no_export>. - - use JSON -support_by_pp, -no_export; - # functional interfaces are not exported. - -=head2 allow_singlequote - - $json = $json->allow_singlequote([$enable]) - -If C<$enable> is true (or missing), then C will accept -any JSON strings quoted by single quotations that are invalid JSON -format. - - $json->allow_singlequote->decode({"foo":'bar'}); - $json->allow_singlequote->decode({'foo':"bar"}); - $json->allow_singlequote->decode({'foo':'bar'}); - -As same as the C option, this option may be used to parse -application-specific files written by humans. - -=head2 allow_barekey - - $json = $json->allow_barekey([$enable]) - -If C<$enable> is true (or missing), then C will accept -bare keys of JSON object that are invalid JSON format. - -As same as the C option, this option may be used to parse -application-specific files written by humans. - - $json->allow_barekey->decode('{foo:"bar"}'); - -=head2 allow_bignum - - $json = $json->allow_bignum([$enable]) - -If C<$enable> is true (or missing), then C will convert -the big integer Perl cannot handle as integer into a L -object and convert a floating number (any) into a L. - -On the contrary, C converts C objects and C -objects into JSON numbers with C enable. - - $json->allow_nonref->allow_blessed->allow_bignum; - $bigfloat = $json->decode('2.000000000000000000000000001'); - print $json->encode($bigfloat); - # => 2.000000000000000000000000001 - -See to L about the conversion of JSON number. - -=head2 loose - - $json = $json->loose([$enable]) - -The unescaped [\x00-\x1f\x22\x2f\x5c] strings are invalid in JSON strings -and the module doesn't allow to C to these (except for \x2f). -If C<$enable> is true (or missing), then C will accept these -unescaped strings. - - $json->loose->decode(qq|["abc - def"]|); - -See to L. - -=head2 escape_slash - - $json = $json->escape_slash([$enable]) - -According to JSON Grammar, I (U+002F) is escaped. But by default -JSON backend modules encode strings without escaping slash. - -If C<$enable> is true (or missing), then C will escape slashes. - -=head2 indent_length - - $json = $json->indent_length($length) - -With JSON::XS, The indent space length is 3 and cannot be changed. -With JSON::PP, it sets the indent space length with the given $length. -The default is 3. The acceptable range is 0 to 15. - -=head2 sort_by - - $json = $json->sort_by($function_name) - $json = $json->sort_by($subroutine_ref) - -If $function_name or $subroutine_ref are set, its sort routine are used. 
- - $js = $pc->sort_by(sub { $JSON::PP::a cmp $JSON::PP::b })->encode($obj); - # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|); - - $js = $pc->sort_by('own_sort')->encode($obj); - # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|); - - sub JSON::PP::own_sort { $JSON::PP::a cmp $JSON::PP::b } - -As the sorting routine runs in the JSON::PP scope, the given -subroutine name and the special variables C<$a>, C<$b> will begin -with 'JSON::PP::'. - -If $integer is set, then the effect is same as C on. - -See to L. - -=head1 MAPPING - -This section is copied from JSON::XS and modified to C. -JSON::XS and JSON::PP mapping mechanisms are almost equivalent. - -See to L. - -=head2 JSON -> PERL - -=over 4 - -=item object - -A JSON object becomes a reference to a hash in Perl. No ordering of object -keys is preserved (JSON does not preserver object key ordering itself). - -=item array - -A JSON array becomes a reference to an array in Perl. - -=item string - -A JSON string becomes a string scalar in Perl - Unicode codepoints in JSON -are represented by the same codepoints in the Perl string, so no manual -decoding is necessary. - -=item number - -A JSON number becomes either an integer, numeric (floating point) or -string scalar in perl, depending on its range and any fractional parts. On -the Perl level, there is no difference between those as Perl handles all -the conversion details, but an integer may take slightly less memory and -might represent more values exactly than floating point numbers. - -If the number consists of digits only, C will try to represent -it as an integer value. If that fails, it will try to represent it as -a numeric (floating point) value if that is possible without loss of -precision. Otherwise it will preserve the number as a string value (in -which case you lose roundtripping ability, as the JSON number will be -re-encoded to a JSON string). - -Numbers containing a fractional or exponential part will always be -represented as numeric (floating point) values, possibly at a loss of -precision (in which case you might lose perfect roundtripping ability, but -the JSON number will still be re-encoded as a JSON number). - -Note that precision is not accuracy - binary floating point values cannot -represent most decimal fractions exactly, and when converting from and to -floating point, C only guarantees precision up to but not including -the least significant bit. - -If the backend is JSON::PP and C is enable, the big integers -and the numeric can be optionally converted into L and -L objects. - -=item true, false - -These JSON atoms become C and C, -respectively. They are overloaded to act almost exactly like the numbers -C<1> and C<0>. You can check whether a scalar is a JSON boolean by using -the C function. - - print JSON::true + 1; - => 1 - - ok(JSON::true eq '1'); - ok(JSON::true == 1); - -C will install these missing overloading features to the backend modules. - - -=item null - -A JSON null atom becomes C in Perl. - -C returns C. - -=back - - -=head2 PERL -> JSON - -The mapping from Perl to JSON is slightly more difficult, as Perl is a -truly typeless language, so we can only guess which JSON type is meant by -a Perl value. - -=over 4 - -=item hash references - -Perl hash references become JSON objects. 
As there is no inherent ordering -in hash keys (or JSON objects), they will usually be encoded in a -pseudo-random order that can change between runs of the same program but -stays generally the same within a single run of a program. C -optionally sort the hash keys (determined by the I flag), so -the same data structure will serialise to the same JSON text (given same -settings and version of JSON::XS), but this incurs a runtime overhead -and is only rarely useful, e.g. when you want to compare some JSON text -against another for equality. - -In future, the ordered object feature will be added to JSON::PP using C mechanism. - - -=item array references - -Perl array references become JSON arrays. - -=item other references - -Other unblessed references are generally not allowed and will cause an -exception to be thrown, except for references to the integers C<0> and -C<1>, which get turned into C and C atoms in JSON. You can -also use C and C to improve readability. - - to_json [\0,JSON::true] # yields [false,true] - -=item JSON::true, JSON::false, JSON::null - -These special values become JSON true and JSON false values, -respectively. You can also use C<\1> and C<\0> directly if you want. - -JSON::null returns C. - -=item blessed objects - -Blessed objects are not directly representable in JSON. See the -C and C methods on various options on -how to deal with this: basically, you can choose between throwing an -exception, encoding the reference as if it weren't blessed, or provide -your own serialiser method. - -With C mode, C converts blessed -hash references or blessed array references (contains other blessed references) -into JSON members and arrays. - - use JSON -convert_blessed_universally; - JSON->new->allow_blessed->convert_blessed->encode( $blessed_object ); - -See to L. - -=item simple scalars - -Simple Perl scalars (any scalar that is not a reference) are the most -difficult objects to encode: JSON::XS and JSON::PP will encode undefined scalars as -JSON C values, scalars that have last been used in a string context -before encoding as JSON strings, and anything else as number value: - - # dump as number - encode_json [2] # yields [2] - encode_json [-3.0e17] # yields [-3e+17] - my $value = 5; encode_json [$value] # yields [5] - - # used as string, so dump as string - print $value; - encode_json [$value] # yields ["5"] - - # undef becomes null - encode_json [undef] # yields [null] - -You can force the type to be a string by stringifying it: - - my $x = 3.1; # some variable containing a number - "$x"; # stringified - $x .= ""; # another, more awkward way to stringify - print $x; # perl does it for you, too, quite often - -You can force the type to be a number by numifying it: - - my $x = "3"; # some variable containing a string - $x += 0; # numify it, ensuring it will be dumped as a number - $x *= 1; # same thing, the choice is yours. - -You can not currently force the type in other, less obscure, ways. - -Note that numerical precision has the same meaning as under Perl (so -binary to decimal conversion follows the same rules as in Perl, which -can differ to other languages). Also, your perl interpreter might expose -extensions to the floating point numbers of your platform, such as -infinities or NaN's - these cannot be represented in JSON, and it is an -error to pass those in. - -=item Big Number - -If the backend is JSON::PP and C is enable, -C converts C objects and C -objects into JSON numbers. - - -=back - -=head1 JSON and ECMAscript - -See to L. 
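
The PERL -> JSON rules above are easiest to see in a small round trip. This is only an illustrative sketch; the number-versus-string choice in the output depends on how each scalar was last used, so the commented result is the typical one rather than a guarantee.

    use JSON;

    my $json  = JSON->new->canonical;   # sort keys so the output is stable
    my $count = 10;                     # only ever used as a number
    my $label = "10";                   # a string value

    print $json->encode( {
        count    => $count,             # -> 10
        label    => $label,             # -> "10"
        is_true  => \1,                 # reference to 1 -> true
        is_false => \0,                 # reference to 0 -> false
        missing  => undef,              # -> null
    } ), "\n";
    # {"count":10,"is_false":false,"is_true":true,"label":"10","missing":null}
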
- -=head1 JSON and YAML - -JSON is not a subset of YAML. -See to L. - - -=head1 BACKEND MODULE DECISION - -When you use C, C tries to C JSON::XS. If this call failed, it will -C JSON::PP. The required JSON::XS version is I<2.2> or later. - -The C constructor method returns an object inherited from the backend module, -and JSON::XS object is a blessed scalar reference while JSON::PP is a blessed hash -reference. - -So, your program should not depend on the backend module, especially -returned objects should not be modified. - - my $json = JSON->new; # XS or PP? - $json->{stash} = 'this is xs object'; # this code may raise an error! - -To check the backend module, there are some methods - C, C and C. - - JSON->backend; # 'JSON::XS' or 'JSON::PP' - - JSON->backend->is_pp: # 0 or 1 - - JSON->backend->is_xs: # 1 or 0 - - $json->is_xs; # 1 or 0 - - $json->is_pp; # 0 or 1 - - -If you set an environment variable C, the calling action will be changed. - -=over - -=item PERL_JSON_BACKEND = 0 or PERL_JSON_BACKEND = 'JSON::PP' - -Always use JSON::PP - -=item PERL_JSON_BACKEND == 1 or PERL_JSON_BACKEND = 'JSON::XS,JSON::PP' - -(The default) Use compiled JSON::XS if it is properly compiled & installed, -otherwise use JSON::PP. - -=item PERL_JSON_BACKEND == 2 or PERL_JSON_BACKEND = 'JSON::XS' - -Always use compiled JSON::XS, die if it isn't properly compiled & installed. - -=item PERL_JSON_BACKEND = 'JSON::backportPP' - -Always use JSON::backportPP. -JSON::backportPP is JSON::PP back port module. -C includes JSON::backportPP instead of JSON::PP. - -=back - -These ideas come from L mechanism. - -example: - - BEGIN { $ENV{PERL_JSON_BACKEND} = 'JSON::PP' } - use JSON; # always uses JSON::PP - -In future, it may be able to specify another module. - -=head1 USE PP FEATURES EVEN THOUGH XS BACKEND - -Many methods are available with either JSON::XS or JSON::PP and -when the backend module is JSON::XS, if any JSON::PP specific (i.e. JSON::XS unsupported) -method is called, it will C and be noop. - -But If you C C passing the optional string C<-support_by_pp>, -it makes a part of those unsupported methods available. -This feature is achieved by using JSON::PP in C. - - BEGIN { $ENV{PERL_JSON_BACKEND} = 2 } # with JSON::XS - use JSON -support_by_pp; - my $json = JSON->new; - $json->allow_nonref->escape_slash->encode("/"); - -At this time, the returned object is a C -object (re-blessed XS object), and by checking JSON::XS unsupported flags -in de/encoding, can support some unsupported methods - C, C, -C, C, C and C. - -When any unsupported methods are not enable, C will be -used as is. The switch is achieved by changing the symbolic tables. - -C<-support_by_pp> is effective only when the backend module is JSON::XS -and it makes the de/encoding speed down a bit. - -See to L. - -=head1 INCOMPATIBLE CHANGES TO OLD VERSION - -There are big incompatibility between new version (2.00) and old (1.xx). -If you use old C 1.xx in your code, please check it. - -See to L - -=over - -=item jsonToObj and objToJson are obsoleted. - -Non Perl-style name C and C are obsoleted -(but not yet deleted from the source). -If you use these functions in your code, please replace them -with C and C. - - -=item Global variables are no longer available. - -C class variables - C<$JSON::AUTOCONVERT>, C<$JSON::BareKey>, etc... -- are not available any longer. -Instead, various features can be used through object methods. - - -=item Package JSON::Converter and JSON::Parser are deleted. 
- -Now C bundles with JSON::PP which can handle JSON more properly than them. - -=item Package JSON::NotString is deleted. - -There was C class which represents JSON value C, C, C -and numbers. It was deleted and replaced by C. - -C represents C and C. - -C does not represent C. - -C returns C. - -C makes L and L is-a relation -to L. - -=item function JSON::Number is obsoleted. - -C is now needless because JSON::XS and JSON::PP have -round-trip integrity. - -=item JSONRPC modules are deleted. - -Perl implementation of JSON-RPC protocol - C, C -and C are deleted in this distribution. -Instead of them, there is L which supports JSON-RPC protocol version 1.1. - -=back - -=head2 Transition ways from 1.xx to 2.xx. - -You should set C mode firstly, because -it is always successful for the below codes even with JSON::XS. - - use JSON -support_by_pp; - -=over - -=item Exported jsonToObj (simple) - - from_json($json_text); - -=item Exported objToJson (simple) - - to_json($perl_scalar); - -=item Exported jsonToObj (advanced) - - $flags = {allow_barekey => 1, allow_singlequote => 1}; - from_json($json_text, $flags); - -equivalent to: - - $JSON::BareKey = 1; - $JSON::QuotApos = 1; - jsonToObj($json_text); - -=item Exported objToJson (advanced) - - $flags = {allow_blessed => 1, allow_barekey => 1}; - to_json($perl_scalar, $flags); - -equivalent to: - - $JSON::BareKey = 1; - objToJson($perl_scalar); - -=item jsonToObj as object method - - $json->decode($json_text); - -=item objToJson as object method - - $json->encode($perl_scalar); - -=item new method with parameters - -The C method in 2.x takes any parameters no longer. -You can set parameters instead; - - $json = JSON->new->pretty; - -=item $JSON::Pretty, $JSON::Indent, $JSON::Delimiter - -If C is enable, that means C<$JSON::Pretty> flag set. And -C<$JSON::Delimiter> was substituted by C and C. -In conclusion: - - $json->indent->space_before->space_after; - -Equivalent to: - - $json->pretty; - -To change indent length, use C. - -(Only with JSON::PP, if C<-support_by_pp> is not used.) - - $json->pretty->indent_length(2)->encode($perl_scalar); - -=item $JSON::BareKey - -(Only with JSON::PP, if C<-support_by_pp> is not used.) - - $json->allow_barekey->decode($json_text) - -=item $JSON::ConvBlessed - -use C<-convert_blessed_universally>. See to L. - -=item $JSON::QuotApos - -(Only with JSON::PP, if C<-support_by_pp> is not used.) - - $json->allow_singlequote->decode($json_text) - -=item $JSON::SingleQuote - -Disable. C does not make such a invalid JSON string any longer. - -=item $JSON::KeySort - - $json->canonical->encode($perl_scalar) - -This is the ascii sort. - -If you want to use with your own sort routine, check the C method. - -(Only with JSON::PP, even if C<-support_by_pp> is used currently.) - - $json->sort_by($sort_routine_ref)->encode($perl_scalar) - - $json->sort_by(sub { $JSON::PP::a <=> $JSON::PP::b })->encode($perl_scalar) - -Can't access C<$a> and C<$b> but C<$JSON::PP::a> and C<$JSON::PP::b>. - -=item $JSON::SkipInvalid - - $json->allow_unknown - -=item $JSON::AUTOCONVERT - -Needless. C backend modules have the round-trip integrity. - -=item $JSON::UTF8 - -Needless because C (JSON::XS/JSON::PP) sets -the UTF8 flag on properly. - - # With UTF8-flagged strings - - $json->allow_nonref; - $str = chr(1000); # UTF8-flagged - - $json_text = $json->utf8(0)->encode($str); - utf8::is_utf8($json_text); - # true - $json_text = $json->utf8(1)->encode($str); - utf8::is_utf8($json_text); - # false - - $str = '"' . chr(1000) . 
'"'; # UTF8-flagged - - $perl_scalar = $json->utf8(0)->decode($str); - utf8::is_utf8($perl_scalar); - # true - $perl_scalar = $json->utf8(1)->decode($str); - # died because of 'Wide character in subroutine' - -See to L. - -=item $JSON::UnMapping - -Disable. See to L. - -=item $JSON::SelfConvert - -This option was deleted. -Instead of it, if a given blessed object has the C method, -C will be executed with C. - - $json->convert_blessed->encode($blessed_hashref_or_arrayref) - # if need, call allow_blessed - -Note that it was C in old version, but now not C but C. - -=back - -=head1 TODO - -=over - -=item example programs - -=back - -=head1 THREADS - -No test with JSON::PP. If with JSON::XS, See to L. - - -=head1 BUGS - -Please report bugs relevant to C to Emakamaka[at]cpan.orgE. - - -=head1 SEE ALSO - -Most of the document is copied and modified from JSON::XS doc. - -L, L - -C(L) - -=head1 AUTHOR - -Makamaka Hannyaharamitu, Emakamaka[at]cpan.orgE - -JSON::XS was written by Marc Lehmann - -The release of this new version owes to the courtesy of Marc Lehmann. - - -=head1 COPYRIGHT AND LICENSE - -Copyright 2005-2013 by Makamaka Hannyaharamitu - -This library is free software; you can redistribute it and/or modify -it under the same terms as Perl itself. - -=cut - diff --git a/spaces/Alfaxad/BioGalacticModels/README.md b/spaces/Alfaxad/BioGalacticModels/README.md deleted file mode 100644 index 54096e6362451f7aacc4927293ad31d2117b752e..0000000000000000000000000000000000000000 --- a/spaces/Alfaxad/BioGalacticModels/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Explore Biology & Biochem Foundation Models -emoji: 🧬 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: true -duplicated_from: hf-ml4h/biomedical-language-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/audio2pose_models/discriminator.py b/spaces/Alpaca233/SadTalker/src/audio2pose_models/discriminator.py deleted file mode 100644 index 339c38e4812ff38a810f0f3a1c01812f6d5d78db..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/audio2pose_models/discriminator.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -class ConvNormRelu(nn.Module): - def __init__(self, conv_type='1d', in_channels=3, out_channels=64, downsample=False, - kernel_size=None, stride=None, padding=None, norm='BN', leaky=False): - super().__init__() - if kernel_size is None: - if downsample: - kernel_size, stride, padding = 4, 2, 1 - else: - kernel_size, stride, padding = 3, 1, 1 - - if conv_type == '2d': - self.conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size, - stride, - padding, - bias=False, - ) - if norm == 'BN': - self.norm = nn.BatchNorm2d(out_channels) - elif norm == 'IN': - self.norm = nn.InstanceNorm2d(out_channels) - else: - raise NotImplementedError - elif conv_type == '1d': - self.conv = nn.Conv1d( - in_channels, - out_channels, - kernel_size, - stride, - padding, - bias=False, - ) - if norm == 'BN': - self.norm = nn.BatchNorm1d(out_channels) - elif norm == 'IN': - self.norm = nn.InstanceNorm1d(out_channels) - else: - raise NotImplementedError - nn.init.kaiming_normal_(self.conv.weight) - - self.act = nn.LeakyReLU(negative_slope=0.2, inplace=False) if leaky else nn.ReLU(inplace=True) - - def forward(self, x): - x = self.conv(x) - if isinstance(self.norm, nn.InstanceNorm1d): - x = 
self.norm(x.permute((0, 2, 1))).permute((0, 2, 1)) # normalize on [C] - else: - x = self.norm(x) - x = self.act(x) - return x - - -class PoseSequenceDiscriminator(nn.Module): - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - leaky = self.cfg.MODEL.DISCRIMINATOR.LEAKY_RELU - - self.seq = nn.Sequential( - ConvNormRelu('1d', cfg.MODEL.DISCRIMINATOR.INPUT_CHANNELS, 256, downsample=True, leaky=leaky), # B, 256, 64 - ConvNormRelu('1d', 256, 512, downsample=True, leaky=leaky), # B, 512, 32 - ConvNormRelu('1d', 512, 1024, kernel_size=3, stride=1, padding=1, leaky=leaky), # B, 1024, 16 - nn.Conv1d(1024, 1, kernel_size=3, stride=1, padding=1, bias=True) # B, 1, 16 - ) - - def forward(self, x): - x = x.reshape(x.size(0), x.size(1), -1).transpose(1, 2) - x = self.seq(x) - x = x.squeeze(1) - return x \ No newline at end of file diff --git a/spaces/Ame42/UBTH/README.md b/spaces/Ame42/UBTH/README.md deleted file mode 100644 index 7741c0848f63cb511fed99f3c2ea78bdbb635cd6..0000000000000000000000000000000000000000 --- a/spaces/Ame42/UBTH/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: UBTH -emoji: 📚 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -python_version: 3.11 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpgd.h b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpgd.h deleted file mode 100644 index a1c0cac61839a6f66a42c341f50d5e36faad9a93..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpgd.h +++ /dev/null @@ -1,316 +0,0 @@ -// jpgd.h - C++ class for JPEG decompression. -// Public domain, Rich Geldreich -#ifndef JPEG_DECODER_H -#define JPEG_DECODER_H - -#include -#include -#include - -namespace jpgd -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef unsigned short uint16; - typedef unsigned int uint; - typedef signed int int32; - - // Loads a JPEG image from a memory buffer or a file. - // req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA). - // On return, width/height will be set to the image's dimensions, and actual_comps will be set to the either 1 (grayscale) or 3 (RGB). - // Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly. - // Requesting a 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp. -// BEGIN EPIC MOD -//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps); - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format); -// END EPIC MOD - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps); - - // Success/failure error codes. 
- enum jpgd_status - { - JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1, - JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE, - JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS, - JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH, - JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER, - JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS, - JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE, - JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR, - JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM - }; - - // Input stream interface. - // Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available. - // The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set. - // It the input stream contains data after the JPEG stream's EOI (end of image) marker it will probably be pulled into the internal buffer. - // Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding. - class jpeg_decoder_stream - { - public: - jpeg_decoder_stream() { } - virtual ~jpeg_decoder_stream() { } - - // The read() method is called when the internal input buffer is empty. - // Parameters: - // pBuf - input buffer - // max_bytes_to_read - maximum bytes that can be written to pBuf - // pEOF_flag - set this to true if at end of stream (no more bytes remaining) - // Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0). - // Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full. - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0; - }; - - // stdio FILE stream class. - class jpeg_decoder_file_stream : public jpeg_decoder_stream - { - jpeg_decoder_file_stream(const jpeg_decoder_file_stream &); - jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &); - - FILE *m_pFile; - bool m_eof_flag, m_error_flag; - - public: - jpeg_decoder_file_stream(); - virtual ~jpeg_decoder_file_stream(); - - bool open(const char *Pfilename); - void close(); - - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag); - }; - - // Memory stream class. - class jpeg_decoder_mem_stream : public jpeg_decoder_stream - { - const uint8 *m_pSrc_data; - uint m_ofs, m_size; - - public: - jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { } - jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { } - - virtual ~jpeg_decoder_mem_stream() { } - - bool open(const uint8 *pSrc_data, uint size); - void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; } - - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag); - }; - - // Loads JPEG file from a jpeg_decoder_stream. 
- unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps); - - enum - { - JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4, - JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384 - }; - - typedef int16 jpgd_quant_t; - typedef int16 jpgd_block_t; - - class jpeg_decoder - { - public: - // Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc. - // methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline. - jpeg_decoder(jpeg_decoder_stream *pStream); - - ~jpeg_decoder(); - - // Call this method after constructing the object to begin decompression. - // If JPGD_SUCCESS is returned you may then call decode() on each scanline. - int begin_decoding(); - - // Returns the next scan line. - // For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1). - // Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4). - // Returns JPGD_SUCCESS if a scan line has been returned. - // Returns JPGD_DONE if all scan lines have been returned. - // Returns JPGD_FAILED if an error occurred. Call get_error_code() for a more info. - int decode(const void** pScan_line, uint* pScan_line_len); - - inline jpgd_status get_error_code() const { return m_error_code; } - - inline int get_width() const { return m_image_x_size; } - inline int get_height() const { return m_image_y_size; } - - inline int get_num_components() const { return m_comps_in_frame; } - - inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; } - inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); } - - // Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file). 
- inline int get_total_bytes_read() const { return m_total_bytes_read; } - - private: - jpeg_decoder(const jpeg_decoder &); - jpeg_decoder &operator =(const jpeg_decoder &); - - typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int); - - struct huff_tables - { - bool ac_table; - uint look_up[256]; - uint look_up2[256]; - uint8 code_size[256]; - uint tree[512]; - }; - - struct coeff_buf - { - uint8 *pData; - int block_num_x, block_num_y; - int block_len_x, block_len_y; - int block_size; - }; - - struct mem_block - { - mem_block *m_pNext; - size_t m_used_count; - size_t m_size; - char m_data[1]; - }; - - jmp_buf m_jmp_state; - mem_block *m_pMem_blocks; - int m_image_x_size; - int m_image_y_size; - jpeg_decoder_stream *m_pStream; - int m_progressive_flag; - uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES]; - uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size - uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size - jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables - int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported) - int m_comps_in_frame; // # of components in frame - int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor - int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor - int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector - int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID - int m_comp_h_blocks[JPGD_MAX_COMPONENTS]; - int m_comp_v_blocks[JPGD_MAX_COMPONENTS]; - int m_comps_in_scan; // # of components in scan - int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan - int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector - int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector - int m_spectral_start; // spectral selection start - int m_spectral_end; // spectral selection end - int m_successive_low; // successive approximation low - int m_successive_high; // successive approximation high - int m_max_mcu_x_size; // MCU's max. X size in pixels - int m_max_mcu_y_size; // MCU's max. 
Y size in pixels - int m_blocks_per_mcu; - int m_max_blocks_per_row; - int m_mcus_per_row, m_mcus_per_col; - int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU]; - int m_total_lines_left; // total # lines left in image - int m_mcu_lines_left; // total # lines left in this MCU - int m_real_dest_bytes_per_scan_line; - int m_dest_bytes_per_scan_line; // rounded up - int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y) - huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES]; - coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS]; - coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS]; - int m_eob_run; - int m_block_y_mcu[JPGD_MAX_COMPONENTS]; - uint8* m_pIn_buf_ofs; - int m_in_buf_left; - int m_tem_flag; - bool m_eof_flag; - uint8 m_in_buf_pad_start[128]; - uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128]; - uint8 m_in_buf_pad_end[128]; - int m_bits_left; - uint m_bit_buf; - int m_restart_interval; - int m_restarts_left; - int m_next_restart_num; - int m_max_mcus_per_row; - int m_max_blocks_per_mcu; - int m_expanded_blocks_per_mcu; - int m_expanded_blocks_per_row; - int m_expanded_blocks_per_component; - bool m_freq_domain_chroma_upsample; - int m_max_mcus_per_col; - uint m_last_dc_val[JPGD_MAX_COMPONENTS]; - jpgd_block_t* m_pMCU_coefficients; - int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU]; - uint8* m_pSample_buf; - int m_crr[256]; - int m_cbb[256]; - int m_crg[256]; - int m_cbg[256]; - uint8* m_pScan_line_0; - uint8* m_pScan_line_1; - jpgd_status m_error_code; - bool m_ready_flag; - int m_total_bytes_read; - - void free_all_blocks(); - // BEGIN EPIC MOD - UE_NORETURN void stop_decoding(jpgd_status status); - // END EPIC MOD - void *alloc(size_t n, bool zero = false); - void word_clear(void *p, uint16 c, uint n); - void prep_in_buffer(); - void read_dht_marker(); - void read_dqt_marker(); - void read_sof_marker(); - void skip_variable_marker(); - void read_dri_marker(); - void read_sos_marker(); - int next_marker(); - int process_markers(); - void locate_soi_marker(); - void locate_sof_marker(); - int locate_sos_marker(); - void init(jpeg_decoder_stream * pStream); - void create_look_ups(); - void fix_in_buffer(); - void transform_mcu(int mcu_row); - void transform_mcu_expand(int mcu_row); - coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y); - inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y); - void load_next_row(); - void decode_next_row(); - void make_huff_table(int index, huff_tables *pH); - void check_quant_tables(); - void check_huff_tables(); - void calc_mcu_block_order(); - int init_scan(); - void init_frame(); - void process_restart(); - void decode_scan(pDecode_block_func decode_block_func); - void init_progressive(); - void init_sequential(); - void decode_start(); - void decode_init(jpeg_decoder_stream * pStream); - void H2V2Convert(); - void H2V1Convert(); - void H1V2Convert(); - void H1V1Convert(); - void gray_convert(); - void expanded_convert(); - void find_eoi(); - inline uint get_char(); - inline uint get_char(bool *pPadding_flag); - inline void stuff_char(uint8 q); - inline uint8 get_octet(); - inline uint get_bits(int num_bits); - inline uint get_bits_no_markers(int numbits); - inline int huff_decode(huff_tables *pH); - inline int huff_decode(huff_tables *pH, int& extrabits); - static inline uint8 clamp(int i); - static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void 
decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y); - }; - -} // namespace jpgd - -#endif // JPEG_DECODER_H diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/conv2d_resample.py b/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/conv2d_resample.py deleted file mode 100644 index cd4750744c83354bab78704d4ef51ad1070fcc4a..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/conv2d_resample.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""2D convolution with optional up/downsampling.""" - -import torch - -from .. import misc -from . import conv2d_gradfix -from . import upfirdn2d -from .upfirdn2d import _parse_padding -from .upfirdn2d import _get_filter_size - -#---------------------------------------------------------------------------- - -def _get_weight_shape(w): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - shape = [int(sz) for sz in w.shape] - misc.assert_shape(w, shape) - return shape - -#---------------------------------------------------------------------------- - -def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True): - """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations. - """ - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - - # Flip weight if requested. - if not flip_weight: # conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False). - w = w.flip([2, 3]) - - # Workaround performance pitfall in cuDNN 8.0.5, triggered when using - # 1x1 kernel + memory_format=channels_last + less than 64 channels. - if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose: - if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64: - if out_channels <= 4 and groups == 1: - in_shape = x.shape - x = w.squeeze(3).squeeze(2) @ x.reshape([in_shape[0], in_channels_per_group, -1]) - x = x.reshape([in_shape[0], out_channels, in_shape[2], in_shape[3]]) - else: - x = x.to(memory_format=torch.contiguous_format) - w = w.to(memory_format=torch.contiguous_format) - x = conv2d_gradfix.conv2d(x, w, groups=groups) - return x.to(memory_format=torch.channels_last) - - # Otherwise => execute using conv2d_gradfix. - op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d - return op(x, w, stride=stride, padding=padding, groups=groups) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False): - r"""2D convolution with optional up/downsampling. - - Padding is performed only once at the beginning, not between the operations. - - Args: - x: Input tensor of shape - `[batch_size, in_channels, in_height, in_width]`. - w: Weight tensor of shape - `[out_channels, in_channels//groups, kernel_height, kernel_width]`. 
- f: Low-pass filter for up/downsampling. Must be prepared beforehand by - calling upfirdn2d.setup_filter(). None = identity (default). - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - groups: Split input channels into N groups (default: 1). - flip_weight: False = convolution, True = correlation (default: True). - flip_filter: False = convolution, True = correlation (default: False). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and (x.ndim == 4) - assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype) - assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32) - assert isinstance(up, int) and (up >= 1) - assert isinstance(down, int) and (down >= 1) - assert isinstance(groups, int) and (groups >= 1) - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - fw, fh = _get_filter_size(f) - px0, px1, py0, py1 = _parse_padding(padding) - - # Adjust padding to account for up/downsampling. - if up > 1: - px0 += (fw + up - 1) // 2 - px1 += (fw - up) // 2 - py0 += (fh + up - 1) // 2 - py1 += (fh - up) // 2 - if down > 1: - px0 += (fw - down + 1) // 2 - px1 += (fw - down) // 2 - py0 += (fh - down + 1) // 2 - py1 += (fh - down) // 2 - - # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve. - if kw == 1 and kh == 1 and (down > 1 and up == 1): - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample. - if kw == 1 and kh == 1 and (up > 1 and down == 1): - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - return x - - # Fast path: downsampling only => use strided convolution. - if down > 1 and up == 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: upsampling with optional downsampling => use transpose strided convolution. - if up > 1: - if groups == 1: - w = w.transpose(0, 1) - else: - w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw) - w = w.transpose(1, 2) - w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw) - px0 -= kw - 1 - px1 -= kw - up - py0 -= kh - 1 - py1 -= kh - up - pxt = max(min(-px0, -px1), 0) - pyt = max(min(-py0, -py1), 0) - x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight)) - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - - # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d. 
- if up == 1 and down == 1: - if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0: - return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight) - - # Fallback: Generic reference implementation. - x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/lora.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/lora.md deleted file mode 100644 index fd88d74854b2cdb8a3de4d11bbec4648bd515982..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/lora.md +++ /dev/null @@ -1,405 +0,0 @@ - - -# Low-Rank Adaptation of Large Language Models (LoRA) - - - -This is an experimental feature. Its APIs can change in future. - - - -[Low-Rank Adaptation of Large Language Models (LoRA)](https://arxiv.org/abs/2106.09685) is a training method that accelerates the training of large models while consuming less memory. It adds pairs of rank-decomposition weight matrices (called **update matrices**) to existing weights, and **only** trains those newly added weights. This has a couple of advantages: - -- Previous pretrained weights are kept frozen so the model is not as prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114). -- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable. -- LoRA matrices are generally added to the attention layers of the original model. 🧨 Diffusers provides the [`~diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs`] method to load the LoRA weights into a model's attention layers. You can control the extent to which the model is adapted toward new training images via a `scale` parameter. -- The greater memory-efficiency allows you to run fine-tuning on consumer GPUs like the Tesla T4, RTX 3080 or even the RTX 2080 Ti! GPUs like the T4 are free and readily accessible in Kaggle or Google Colab notebooks. - - - -💡 LoRA is not only limited to attention layers. The authors found that amending -the attention layers of a language model is sufficient to obtain good downstream performance with great efficiency. This is why it's common to just add the LoRA weights to the attention layers of a model. Check out the [Using LoRA for efficient Stable Diffusion fine-tuning](https://huggingface.co/blog/lora) blog for more information about how LoRA works! - - - -[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository. 🧨 Diffusers now supports finetuning with LoRA for [text-to-image generation](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image#training-with-lora) and [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#training-with-low-rank-adaptation-of-large-language-models-lora). This guide will show you how to do both. 
- -If you'd like to store or share your model with the community, login to your Hugging Face account (create [one](hf.co/join) if you don't have one already): - -```bash -huggingface-cli login -``` - -## Text-to-image - -Finetuning a model like Stable Diffusion, which has billions of parameters, can be slow and difficult. With LoRA, it is much easier and faster to finetune a diffusion model. It can run on hardware with as little as 11GB of GPU RAM without resorting to tricks such as 8-bit optimizers. - -### Training[[text-to-image-training]] - -Let's finetune [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon. - -Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. You'll also need to set the `DATASET_NAME` environment variable to the name of the dataset you want to train on. To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide. - -The `OUTPUT_DIR` and `HUB_MODEL_ID` variables are optional and specify where to save the model to on the Hub: - -```bash -export MODEL_NAME="runwayml/stable-diffusion-v1-5" -export OUTPUT_DIR="/sddata/finetune/lora/pokemon" -export HUB_MODEL_ID="pokemon-lora" -export DATASET_NAME="lambdalabs/pokemon-blip-captions" -``` - -There are some flags to be aware of before you start training: - -* `--push_to_hub` stores the trained LoRA embeddings on the Hub. -* `--report_to=wandb` reports and logs the training results to your Weights & Biases dashboard (as an example, take a look at this [report](https://wandb.ai/pcuenq/text2image-fine-tune/runs/b4k1w0tn?workspace=user-pcuenq)). -* `--learning_rate=1e-04`, you can afford to use a higher learning rate than you normally would with LoRA. - -Now you're ready to launch the training (you can find the full training script [here](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py)). Training takes about 5 hours on a 2080 Ti GPU with 11GB of RAM, and it'll create and save model checkpoints and the `pytorch_lora_weights` in your repository. - -```bash -accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --dataset_name=$DATASET_NAME \ - --dataloader_num_workers=8 \ - --resolution=512 --center_crop --random_flip \ - --train_batch_size=1 \ - --gradient_accumulation_steps=4 \ - --max_train_steps=15000 \ - --learning_rate=1e-04 \ - --max_grad_norm=1 \ - --lr_scheduler="cosine" --lr_warmup_steps=0 \ - --output_dir=${OUTPUT_DIR} \ - --push_to_hub \ - --hub_model_id=${HUB_MODEL_ID} \ - --report_to=wandb \ - --checkpointing_steps=500 \ - --validation_prompt="A pokemon with blue eyes." 
\ - --seed=1337 -``` - -### Inference[[text-to-image-inference]] - -Now you can use the model for inference by loading the base model in the [`StableDiffusionPipeline`] and then the [`DPMSolverMultistepScheduler`]: - -```py ->>> import torch ->>> from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler - ->>> model_base = "runwayml/stable-diffusion-v1-5" - ->>> pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16) ->>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -``` - -Load the LoRA weights from your finetuned model *on top of the base model weights*, and then move the pipeline to a GPU for faster inference. When you merge the LoRA weights with the frozen pretrained model weights, you can optionally adjust how much of the weights to merge with the `scale` parameter: - - - -💡 A `scale` value of `0` is the same as not using your LoRA weights and you're only using the base model weights, and a `scale` value of `1` means you're only using the fully finetuned LoRA weights. Values between `0` and `1` interpolates between the two weights. - - - -```py ->>> pipe.unet.load_attn_procs(lora_model_path) ->>> pipe.to("cuda") -# use half the weights from the LoRA finetuned model and half the weights from the base model - ->>> image = pipe( -... "A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5, cross_attention_kwargs={"scale": 0.5} -... ).images[0] -# use the weights from the fully finetuned LoRA model - ->>> image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0] ->>> image.save("blue_pokemon.png") -``` - - - -If you are loading the LoRA parameters from the Hub and if the Hub repository has -a `base_model` tag (such as [this](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/README.md?code=true#L4)), then -you can do: - -```py -from huggingface_hub.repocard import RepoCard - -lora_model_id = "sayakpaul/sd-model-finetuned-lora-t4" -card = RepoCard.load(lora_model_id) -base_model_id = card.data.to_dict()["base_model"] - -pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16) -... -``` - - - - -## DreamBooth - -[DreamBooth](https://arxiv.org/abs/2208.12242) is a finetuning technique for personalizing a text-to-image model like Stable Diffusion to generate photorealistic images of a subject in different contexts, given a few images of the subject. However, DreamBooth is very sensitive to hyperparameters and it is easy to overfit. Some important hyperparameters to consider include those that affect the training time (learning rate, number of training steps), and inference time (number of steps, scheduler type). - - - -💡 Take a look at the [Training Stable Diffusion with DreamBooth using 🧨 Diffusers](https://huggingface.co/blog/dreambooth) blog for an in-depth analysis of DreamBooth experiments and recommended settings. - - - -### Training[[dreambooth-training]] - -Let's finetune [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) with DreamBooth and LoRA with some 🐶 [dog images](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ). Download and save these images to a directory. To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide. 
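
The directory just needs to hold the subject photos as plain image files. An optional sanity check like the one below (the folder name `dog` and the extension list are placeholders, not anything the training script requires) confirms that the path you are about to train on actually contains the images:

```py
from pathlib import Path

instance_dir = Path("dog")  # placeholder: wherever you saved the subject photos
images = sorted(
    p for p in instance_dir.iterdir()
    if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}
)
print(f"{len(images)} instance images in {instance_dir.resolve()}")
for p in images:
    print(" -", p.name)
```
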
- -To start, specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. You'll also need to set `INSTANCE_DIR` to the path of the directory containing the images. - -The `OUTPUT_DIR` variables is optional and specifies where to save the model to on the Hub: - -```bash -export MODEL_NAME="runwayml/stable-diffusion-v1-5" -export INSTANCE_DIR="path-to-instance-images" -export OUTPUT_DIR="path-to-save-model" -``` - -There are some flags to be aware of before you start training: - -* `--push_to_hub` stores the trained LoRA embeddings on the Hub. -* `--report_to=wandb` reports and logs the training results to your Weights & Biases dashboard (as an example, take a look at this [report](https://wandb.ai/pcuenq/text2image-fine-tune/runs/b4k1w0tn?workspace=user-pcuenq)). -* `--learning_rate=1e-04`, you can afford to use a higher learning rate than you normally would with LoRA. - -Now you're ready to launch the training (you can find the full training script [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)). The script creates and saves model checkpoints and the `pytorch_lora_weights.bin` file in your repository. - -It's also possible to additionally fine-tune the text encoder with LoRA. This, in most cases, leads -to better results with a slight increase in the compute. To allow fine-tuning the text encoder with LoRA, -specify the `--train_text_encoder` while launching the `train_dreambooth_lora.py` script. - -```bash -accelerate launch train_dreambooth_lora.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --instance_prompt="a photo of sks dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --checkpointing_steps=100 \ - --learning_rate=1e-4 \ - --report_to="wandb" \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --max_train_steps=500 \ - --validation_prompt="A photo of sks dog in a bucket" \ - --validation_epochs=50 \ - --seed="0" \ - --push_to_hub -``` - -### Inference[[dreambooth-inference]] - -Now you can use the model for inference by loading the base model in the [`StableDiffusionPipeline`]: - -```py ->>> import torch ->>> from diffusers import StableDiffusionPipeline - ->>> model_base = "runwayml/stable-diffusion-v1-5" - ->>> pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16) -``` - -Load the LoRA weights from your finetuned DreamBooth model *on top of the base model weights*, and then move the pipeline to a GPU for faster inference. When you merge the LoRA weights with the frozen pretrained model weights, you can optionally adjust how much of the weights to merge with the `scale` parameter: - - - -💡 A `scale` value of `0` is the same as not using your LoRA weights and you're only using the base model weights, and a `scale` value of `1` means you're only using the fully finetuned LoRA weights. Values between `0` and `1` interpolates between the two weights. - - - -```py ->>> pipe.unet.load_attn_procs(lora_model_path) ->>> pipe.to("cuda") -# use half the weights from the LoRA finetuned model and half the weights from the base model - ->>> image = pipe( -... "A picture of a sks dog in a bucket.", -... 
num_inference_steps=25, -... guidance_scale=7.5, -... cross_attention_kwargs={"scale": 0.5}, -... ).images[0] -# use the weights from the fully finetuned LoRA model - ->>> image = pipe("A picture of a sks dog in a bucket.", num_inference_steps=25, guidance_scale=7.5).images[0] ->>> image.save("bucket-dog.png") -``` - -If you used `--train_text_encoder` during training, then use `pipe.load_lora_weights()` to load the LoRA -weights. For example: - -```python -from huggingface_hub.repocard import RepoCard -from diffusers import StableDiffusionPipeline -import torch - -lora_model_id = "sayakpaul/dreambooth-text-encoder-test" -card = RepoCard.load(lora_model_id) -base_model_id = card.data.to_dict()["base_model"] - -pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16) -pipe = pipe.to("cuda") -pipe.load_lora_weights(lora_model_id) -image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0] -``` - - - -If your LoRA parameters involve the UNet as well as the Text Encoder, then passing -`cross_attention_kwargs={"scale": 0.5}` will apply the `scale` value to both the UNet -and the Text Encoder. - - - -Note that the use of [`~diffusers.loaders.LoraLoaderMixin.load_lora_weights`] is preferred to [`~diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs`] for loading LoRA parameters. This is because -[`~diffusers.loaders.LoraLoaderMixin.load_lora_weights`] can handle the following situations: - -* LoRA parameters that don't have separate identifiers for the UNet and the text encoder (such as [`"patrickvonplaten/lora_dreambooth_dog_example"`](https://huggingface.co/patrickvonplaten/lora_dreambooth_dog_example)). So, you can just do: - - ```py - pipe.load_lora_weights(lora_model_path) - ``` - -* LoRA parameters that have separate identifiers for the UNet and the text encoder such as: [`"sayakpaul/dreambooth"`](https://huggingface.co/sayakpaul/dreambooth). - -**Note** that it is possible to provide a local directory path to [`~diffusers.loaders.LoraLoaderMixin.load_lora_weights`] as well as [`~diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs`]. To know about the supported inputs, -refer to the respective docstrings. - -## Unloading LoRA parameters - -You can call [`~diffusers.loaders.LoraLoaderMixin.unload_lora_weights`] on a pipeline to unload the LoRA parameters. - -## Supporting A1111 themed LoRA checkpoints from Diffusers - -This support was made possible because of our amazing contributors: [@takuma104](https://github.com/takuma104) and [@isidentical](https://github.com/isidentical). - -To provide seamless interoperability with A1111 to our users, we support loading A1111 formatted -LoRA checkpoints using [`~diffusers.loaders.LoraLoaderMixin.load_lora_weights`] in a limited capacity. -In this section, we explain how to load an A1111 formatted LoRA checkpoint from [CivitAI](https://civitai.com/) -in Diffusers and perform inference with it. - -First, download a checkpoint. We'll use -[this one](https://civitai.com/models/13239/light-and-shadow) for demonstration purposes. 
- -```bash -wget https://civitai.com/api/download/models/15603 -O light_and_shadow.safetensors -``` - -Next, we initialize a [`~DiffusionPipeline`]: - -```python -import torch - -from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler - -pipeline = StableDiffusionPipeline.from_pretrained( - "gsdf/Counterfeit-V2.5", torch_dtype=torch.float16, safety_checker=None -).to("cuda") -pipeline.scheduler = DPMSolverMultistepScheduler.from_config( - pipeline.scheduler.config, use_karras_sigmas=True -) -``` - -We then load the checkpoint downloaded from CivitAI: - -```python -pipeline.load_lora_weights(".", weight_name="light_and_shadow.safetensors") -``` - - - -If you're loading a checkpoint in the `safetensors` format, please ensure you have `safetensors` installed. - - - -And then it's time for running inference: - -```python -prompt = "masterpiece, best quality, 1girl, at dusk" -negative_prompt = ("(low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), " - "bad composition, inaccurate eyes, extra digit, fewer digits, (extra arms:1.2), large breasts") - -images = pipeline(prompt=prompt, - negative_prompt=negative_prompt, - width=512, - height=768, - num_inference_steps=15, - num_images_per_prompt=4, - generator=torch.manual_seed(0) -).images -``` - -Below is a comparison between the LoRA and the non-LoRA results: - -![lora_non_lora](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lora_non_lora_comparison.png) - -You have a similar checkpoint stored on the Hugging Face Hub, you can load it -directly with [`~diffusers.loaders.LoraLoaderMixin.load_lora_weights`] like so: - -```python -lora_model_id = "sayakpaul/civitai-light-shadow-lora" -lora_filename = "light_and_shadow.safetensors" -pipeline.load_lora_weights(lora_model_id, weight_name=lora_filename) -``` - -### Supporting Stable Diffusion XL LoRAs trained using the Kohya-trainer - -With this [PR](https://github.com/huggingface/diffusers/pull/4287), there should now be better support for loading Kohya-style LoRAs trained on Stable Diffusion XL (SDXL). 
- -Here are some example checkpoints we tried out: - -* SDXL 0.9: - * https://civitai.com/models/22279?modelVersionId=118556 - * https://civitai.com/models/104515/sdxlor30costumesrevue-starlight-saijoclaudine-lora - * https://civitai.com/models/108448/daiton-sdxl-test - * https://filebin.net/2ntfqqnapiu9q3zx/pixelbuildings128-v1.safetensors -* SDXL 1.0: - * https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_offset_example-lora_1.0.safetensors - -Here is an example of how to perform inference with these checkpoints in `diffusers`: - -```python -from diffusers import DiffusionPipeline -import torch - -base_model_id = "stabilityai/stable-diffusion-xl-base-0.9" -pipeline = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda") -pipeline.load_lora_weights(".", weight_name="Kamepan.safetensors") - -prompt = "anime screencap, glint, drawing, best quality, light smile, shy, a full body of a girl wearing wedding dress in the middle of the forest beneath the trees, fireflies, big eyes, 2d, cute, anime girl, waifu, cel shading, magical girl, vivid colors, (outline:1.1), manga anime artstyle, masterpiece, offical wallpaper, glint " -negative_prompt = "(deformed, bad quality, sketch, depth of field, blurry:1.1), grainy, bad anatomy, bad perspective, old, ugly, realistic, cartoon, disney, bad propotions" -generator = torch.manual_seed(2947883060) -num_inference_steps = 30 -guidance_scale = 7 - -image = pipeline( - prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=num_inference_steps, - generator=generator, guidance_scale=guidance_scale -).images[0] -image.save("Kamepan.png") -``` - -`Kamepan.safetensors` comes from https://civitai.com/models/22279?modelVersionId=118556 . - -If you notice carefully, the inference UX is exactly identical to what we presented in the sections above. - -Thanks to [@isidentical](https://github.com/isidentical) for helping us on integrating this feature. - -### Known limitations specific to the Kohya-styled LoRAs - -* SDXL LoRAs that have both the text encoders are currently leading to weird results. We're actively investigating the issue. -* When images don't looks similar to other UIs such ComfyUI, it can be beacause of multiple reasons as explained [here](https://github.com/huggingface/diffusers/pull/4287/#issuecomment-1655110736). 
\ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae_flax.py deleted file mode 100644 index e5c56b61a5a40942cdfe09953f0f195a344b0105..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae_flax.py +++ /dev/null @@ -1,39 +0,0 @@ -import unittest - -from diffusers import FlaxAutoencoderKL -from diffusers.utils import is_flax_available -from diffusers.utils.testing_utils import require_flax - -from .test_modeling_common_flax import FlaxModelTesterMixin - - -if is_flax_available(): - import jax - - -@require_flax -class FlaxAutoencoderKLTests(FlaxModelTesterMixin, unittest.TestCase): - model_class = FlaxAutoencoderKL - - @property - def dummy_input(self): - batch_size = 4 - num_channels = 3 - sizes = (32, 32) - - prng_key = jax.random.PRNGKey(0) - image = jax.random.uniform(prng_key, ((batch_size, num_channels) + sizes)) - - return {"sample": image, "prng_key": prng_key} - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "block_out_channels": [32, 64], - "in_channels": 3, - "out_channels": 3, - "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"], - "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"], - "latent_channels": 4, - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/guided_anchor_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/guided_anchor_head.py deleted file mode 100644 index 997ebb751ade2ebae3fce335a08c46f596c60913..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/guided_anchor_head.py +++ /dev/null @@ -1,860 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import bias_init_with_prob, normal_init -from mmcv.ops import DeformConv2d, MaskedConv2d -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_anchor_generator, - build_assigner, build_bbox_coder, build_sampler, - calc_region, images_to_levels, multi_apply, - multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class FeatureAdaption(nn.Module): - """Feature Adaption Module. - - Feature Adaption Module is implemented based on DCN v1. - It uses anchor shape prediction rather than feature map to - predict offsets of deform conv layer. - - Args: - in_channels (int): Number of channels in the input feature map. - out_channels (int): Number of channels in the output feature map. - kernel_size (int): Deformable conv kernel size. - deform_groups (int): Deformable conv group size. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4): - super(FeatureAdaption, self).__init__() - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 2, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - normal_init(self.conv_offset, std=0.1) - normal_init(self.conv_adaption, std=0.01) - - def forward(self, x, shape): - offset = self.conv_offset(shape.detach()) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class GuidedAnchorHead(AnchorHead): - """Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.). - - This GuidedAnchorHead will predict high-quality feature guided - anchors and locations where anchors will be kept in inference. - There are mainly 3 categories of bounding-boxes. - - - Sampled 9 pairs for target assignment. (approxes) - - The square boxes where the predicted anchors are based on. (squares) - - Guided anchors. - - Please refer to https://arxiv.org/abs/1901.03278 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. - approx_anchor_generator (dict): Config dict for approx generator - square_anchor_generator (dict): Config dict for square generator - anchor_coder (dict): Config dict for anchor coder - bbox_coder (dict): Config dict for bbox coder - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - deform_groups: (int): Group number of DCN in - FeatureAdaption module. - loc_filter_thr (float): Threshold to filter out unconcerned regions. - loss_loc (dict): Config of location loss. - loss_shape (dict): Config of anchor shape loss. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of bbox regression loss. 
- """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=8, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[8], - strides=[4, 8, 16, 32, 64]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - reg_decoded_bbox=False, - deform_groups=4, - loc_filter_thr=0.01, - train_cfg=None, - test_cfg=None, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)): # yapf: disable - super(AnchorHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.deform_groups = deform_groups - self.loc_filter_thr = loc_filter_thr - - # build approx_anchor_generator and square_anchor_generator - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - self.approx_anchor_generator = build_anchor_generator( - approx_anchor_generator) - self.square_anchor_generator = build_anchor_generator( - square_anchor_generator) - self.approxs_per_octave = self.approx_anchor_generator \ - .num_base_anchors[0] - - self.reg_decoded_bbox = reg_decoded_bbox - - # one anchor per location - self.num_anchors = 1 - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.loc_focal_loss = loss_loc['type'] in ['FocalLoss'] - self.sampling = loss_cls['type'] not in ['FocalLoss'] - self.ga_sampling = train_cfg is not None and hasattr( - train_cfg, 'ga_sampler') - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - - # build bbox_coder - self.anchor_coder = build_bbox_coder(anchor_coder) - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build losses - self.loss_loc = build_loss(loss_loc) - self.loss_shape = build_loss(loss_shape) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.ga_assigner = build_assigner(self.train_cfg.ga_assigner) - if self.ga_sampling: - ga_sampler_cfg = self.train_cfg.ga_sampler - else: - ga_sampler_cfg = dict(type='PseudoSampler') - self.ga_sampler = build_sampler(ga_sampler_cfg, context=self) - - self.fp16_enabled = False - - self._init_layers() - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.conv_loc = nn.Conv2d(self.in_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.in_channels, self.num_anchors * 2, 1) - self.feature_adaption = FeatureAdaption( - self.in_channels, - self.feat_channels, - kernel_size=3, - 
deform_groups=self.deform_groups) - self.conv_cls = MaskedConv2d(self.feat_channels, - self.num_anchors * self.cls_out_channels, - 1) - self.conv_reg = MaskedConv2d(self.feat_channels, self.num_anchors * 4, - 1) - - def init_weights(self): - normal_init(self.conv_cls, std=0.01) - normal_init(self.conv_reg, std=0.01) - - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_loc, std=0.01, bias=bias_cls) - normal_init(self.conv_shape, std=0.01) - - self.feature_adaption.init_weights() - - def forward_single(self, x): - loc_pred = self.conv_loc(x) - shape_pred = self.conv_shape(x) - x = self.feature_adaption(x, shape_pred) - # masked conv is only used during inference for speed-up - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.conv_cls(x, mask) - bbox_pred = self.conv_reg(x, mask) - return cls_score, bbox_pred, shape_pred, loc_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_sampled_approxs(self, featmap_sizes, img_metas, device='cuda'): - """Get sampled approxs and inside flags according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): device for returned tensors - - Returns: - tuple: approxes of each image, inside flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # approxes for one time - multi_level_approxs = self.approx_anchor_generator.grid_anchors( - featmap_sizes, device=device) - approxs_list = [multi_level_approxs for _ in range(num_imgs)] - - # for each image, we compute inside flags of multi level approxes - inside_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = [] - multi_level_approxs = approxs_list[img_id] - - # obtain valid flags for each approx first - multi_level_approx_flags = self.approx_anchor_generator \ - .valid_flags(featmap_sizes, - img_meta['pad_shape'], - device=device) - - for i, flags in enumerate(multi_level_approx_flags): - approxs = multi_level_approxs[i] - inside_flags_list = [] - for i in range(self.approxs_per_octave): - split_valid_flags = flags[i::self.approxs_per_octave] - split_approxs = approxs[i::self.approxs_per_octave, :] - inside_flags = anchor_inside_flags( - split_approxs, split_valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - inside_flags_list.append(inside_flags) - # inside_flag for a position is true if any anchor in this - # position is true - inside_flags = ( - torch.stack(inside_flags_list, 0).sum(dim=0) > 0) - multi_level_flags.append(inside_flags) - inside_flag_list.append(multi_level_flags) - return approxs_list, inside_flag_list - - def get_anchors(self, - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=False, - device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - shape_preds (list[tensor]): Multi-level shape predictions. - loc_preds (list[tensor]): Multi-level location predictions. - img_metas (list[dict]): Image meta info. - use_loc_filter (bool): Use loc filter or not. 
- device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image, guided anchors of each image, - loc masks of each image - """ - num_imgs = len(img_metas) - num_levels = len(featmap_sizes) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_anchors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - # for each image, we compute multi level guided anchors - guided_anchors_list = [] - loc_mask_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_guided_anchors = [] - multi_level_loc_mask = [] - for i in range(num_levels): - squares = squares_list[img_id][i] - shape_pred = shape_preds[i][img_id] - loc_pred = loc_preds[i][img_id] - guided_anchors, loc_mask = self._get_guided_anchors_single( - squares, - shape_pred, - loc_pred, - use_loc_filter=use_loc_filter) - multi_level_guided_anchors.append(guided_anchors) - multi_level_loc_mask.append(loc_mask) - guided_anchors_list.append(multi_level_guided_anchors) - loc_mask_list.append(multi_level_loc_mask) - return squares_list, guided_anchors_list, loc_mask_list - - def _get_guided_anchors_single(self, - squares, - shape_pred, - loc_pred, - use_loc_filter=False): - """Get guided anchors and loc masks for a single level. - - Args: - square (tensor): Squares of a single level. - shape_pred (tensor): Shape predections of a single level. - loc_pred (tensor): Loc predections of a single level. - use_loc_filter (list[tensor]): Use loc filter or not. - - Returns: - tuple: guided anchors, location masks - """ - # calculate location filtering mask - loc_pred = loc_pred.sigmoid().detach() - if use_loc_filter: - loc_mask = loc_pred >= self.loc_filter_thr - else: - loc_mask = loc_pred >= 0.0 - mask = loc_mask.permute(1, 2, 0).expand(-1, -1, self.num_anchors) - mask = mask.contiguous().view(-1) - # calculate guided anchors - squares = squares[mask] - anchor_deltas = shape_pred.permute(1, 2, 0).contiguous().view( - -1, 2).detach()[mask] - bbox_deltas = anchor_deltas.new_full(squares.size(), 0) - bbox_deltas[:, 2:] = anchor_deltas - guided_anchors = self.anchor_coder.decode( - squares, bbox_deltas, wh_ratio_clip=1e-6) - return guided_anchors, mask - - def ga_loc_targets(self, gt_bboxes_list, featmap_sizes): - """Compute location targets for guided anchoring. - - Each feature map is divided into positive, negative and ignore regions. - - positive regions: target 1, weight 1 - - ignore regions: target 0, weight 0 - - negative regions: target 0, weight 0.1 - - Args: - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - featmap_sizes (list[tuple]): Multi level sizes of each feature - maps. - - Returns: - tuple - """ - anchor_scale = self.approx_anchor_generator.octave_base_scale - anchor_strides = self.approx_anchor_generator.strides - # Currently only supports same stride in x and y direction. 
- for stride in anchor_strides: - assert (stride[0] == stride[1]) - anchor_strides = [stride[0] for stride in anchor_strides] - - center_ratio = self.train_cfg.center_ratio - ignore_ratio = self.train_cfg.ignore_ratio - img_per_gpu = len(gt_bboxes_list) - num_lvls = len(featmap_sizes) - r1 = (1 - center_ratio) / 2 - r2 = (1 - ignore_ratio) / 2 - all_loc_targets = [] - all_loc_weights = [] - all_ignore_map = [] - for lvl_id in range(num_lvls): - h, w = featmap_sizes[lvl_id] - loc_targets = torch.zeros( - img_per_gpu, - 1, - h, - w, - device=gt_bboxes_list[0].device, - dtype=torch.float32) - loc_weights = torch.full_like(loc_targets, -1) - ignore_map = torch.zeros_like(loc_targets) - all_loc_targets.append(loc_targets) - all_loc_weights.append(loc_weights) - all_ignore_map.append(ignore_map) - for img_id in range(img_per_gpu): - gt_bboxes = gt_bboxes_list[img_id] - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - # assign gt bboxes to different feature levels w.r.t. their scales - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - for gt_id in range(gt_bboxes.size(0)): - lvl = target_lvls[gt_id].item() - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[lvl] - # calculate ignore regions - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[lvl]) - # calculate positive (center) regions - ctr_x1, ctr_y1, ctr_x2, ctr_y2 = calc_region( - gt_, r1, featmap_sizes[lvl]) - all_loc_targets[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - all_loc_weights[lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 0 - all_loc_weights[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - # calculate ignore map on nearby low level feature - if lvl > 0: - d_lvl = lvl - 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[d_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[d_lvl]) - all_ignore_map[d_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - # calculate ignore map on nearby high level feature - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[u_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[u_lvl]) - all_ignore_map[u_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - for lvl_id in range(num_lvls): - # ignore negative regions w.r.t. ignore map - all_loc_weights[lvl_id][(all_loc_weights[lvl_id] < 0) - & (all_ignore_map[lvl_id] > 0)] = 0 - # set negative regions with weight 0.1 - all_loc_weights[lvl_id][all_loc_weights[lvl_id] < 0] = 0.1 - # loc average factor to balance loss - loc_avg_factor = sum( - [t.size(0) * t.size(-1) * t.size(-2) - for t in all_loc_targets]) / 200 - return all_loc_targets, all_loc_weights, loc_avg_factor - - def _ga_shape_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - img_meta, - unmap_outputs=True): - """Compute guided anchoring targets. - - This function returns sampled anchors and gt bboxes directly - rather than calculates regression targets. 
- - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image. - img_meta (dict): Meta info of a single image. - approxs_per_octave (int): number of approxs per octave - cfg (dict): RPN train configs. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple - """ - if not inside_flags.any(): - return (None, ) * 5 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.ga_assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.ga_sampler.sample(assign_result, squares, - gt_bboxes) - - bbox_anchors = torch.zeros_like(squares) - bbox_gts = torch.zeros_like(squares) - bbox_weights = torch.zeros_like(squares) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - bbox_anchors[pos_inds, :] = sampling_result.pos_bboxes - bbox_gts[pos_inds, :] = sampling_result.pos_gt_bboxes - bbox_weights[pos_inds, :] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - bbox_anchors = unmap(bbox_anchors, num_total_anchors, inside_flags) - bbox_gts = unmap(bbox_gts, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (bbox_anchors, bbox_gts, bbox_weights, pos_inds, neg_inds) - - def ga_shape_targets(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - unmap_outputs=True): - """Compute guided anchoring targets. - - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - (all_bbox_anchors, all_bbox_gts, all_bbox_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self._ga_shape_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - img_metas, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([bbox_anchors is None for bbox_anchors in all_bbox_anchors]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - bbox_anchors_list = images_to_levels(all_bbox_anchors, - num_level_squares) - bbox_gts_list = images_to_levels(all_bbox_gts, num_level_squares) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_squares) - return (bbox_anchors_list, bbox_gts_list, bbox_weights_list, - num_total_pos, num_total_neg) - - def loss_shape_single(self, shape_pred, bbox_anchors, bbox_gts, - anchor_weights, anchor_total_num): - shape_pred = shape_pred.permute(0, 2, 3, 1).contiguous().view(-1, 2) - bbox_anchors = bbox_anchors.contiguous().view(-1, 4) - bbox_gts = bbox_gts.contiguous().view(-1, 4) - anchor_weights = anchor_weights.contiguous().view(-1, 4) - bbox_deltas = bbox_anchors.new_full(bbox_anchors.size(), 0) - bbox_deltas[:, 2:] += shape_pred - # filter out negative samples to speed-up weighted_bounded_iou_loss - inds = torch.nonzero( - anchor_weights[:, 0] > 0, as_tuple=False).squeeze(1) - bbox_deltas_ = bbox_deltas[inds] - bbox_anchors_ = bbox_anchors[inds] - bbox_gts_ = bbox_gts[inds] - anchor_weights_ = anchor_weights[inds] - pred_anchors_ = self.anchor_coder.decode( - bbox_anchors_, bbox_deltas_, wh_ratio_clip=1e-6) - loss_shape = self.loss_shape( - pred_anchors_, - bbox_gts_, - anchor_weights_, - avg_factor=anchor_total_num) - return loss_shape - - def loss_loc_single(self, loc_pred, loc_target, loc_weight, - loc_avg_factor): - loss_loc = self.loss_loc( - loc_pred.reshape(-1, 1), - loc_target.reshape(-1).long(), - loc_weight.reshape(-1), - avg_factor=loc_avg_factor) - return loss_loc - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get loc targets - loc_targets, loc_weights, loc_avg_factor = self.ga_loc_targets( - gt_bboxes, featmap_sizes) - - # get sampled approxes - approxs_list, inside_flag_list = self.get_sampled_approxs( - featmap_sizes, img_metas, device=device) - # get 
squares and guided anchors - squares_list, guided_anchors_list, _ = self.get_anchors( - featmap_sizes, shape_preds, loc_preds, img_metas, device=device) - - # get shape targets - shape_targets = self.ga_shape_targets(approxs_list, inside_flag_list, - squares_list, gt_bboxes, - img_metas) - if shape_targets is None: - return None - (bbox_anchors_list, bbox_gts_list, anchor_weights_list, anchor_fg_num, - anchor_bg_num) = shape_targets - anchor_total_num = ( - anchor_fg_num if not self.ga_sampling else anchor_fg_num + - anchor_bg_num) - - # get anchor targets - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - guided_anchors_list, - inside_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [ - anchors.size(0) for anchors in guided_anchors_list[0] - ] - # concat all level anchors to a single tensor - concat_anchor_list = [] - for i in range(len(guided_anchors_list)): - concat_anchor_list.append(torch.cat(guided_anchors_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - # get classification and bbox regression losses - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # get anchor location loss - losses_loc = [] - for i in range(len(loc_preds)): - loss_loc = self.loss_loc_single( - loc_preds[i], - loc_targets[i], - loc_weights[i], - loc_avg_factor=loc_avg_factor) - losses_loc.append(loss_loc) - - # get anchor shape loss - losses_shape = [] - for i in range(len(shape_preds)): - loss_shape = self.loss_shape_single( - shape_preds[i], - bbox_anchors_list[i], - bbox_gts_list[i], - anchor_weights_list[i], - anchor_total_num=anchor_total_num) - losses_shape.append(loss_shape) - - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_shape=losses_shape, - loss_loc=losses_loc) - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) == len(shape_preds) == len( - loc_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - # get guided anchors - _, guided_anchors, loc_masks = self.get_anchors( - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=not self.training, - device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - guided_anchor_list = [ - guided_anchors[img_id][i].detach() for i in range(num_levels) - ] - loc_mask_list = [ - loc_masks[img_id][i].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, 
bbox_pred_list, - guided_anchor_list, - loc_mask_list, img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors, mask in zip(cls_scores, bbox_preds, - mlvl_anchors, - mlvl_masks): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - # if no location is kept, end. - if mask.sum() == 0: - continue - # reshape scores and bbox_pred - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask, :] - bbox_pred = bbox_pred[mask, :] - if scores.dim() == 0: - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - bbox_pred = bbox_pred.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - # multi class NMS - det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/Andy1621/uniformer_image_detection/tools/misc/browse_dataset.py b/spaces/Andy1621/uniformer_image_detection/tools/misc/browse_dataset.py deleted file mode 100644 index 0c9385fa70e12a912d8963212cc62bf94f83fa7c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/tools/misc/browse_dataset.py +++ /dev/null @@ -1,96 +0,0 @@ -import argparse -import os -from pathlib import Path - -import mmcv -from mmcv import Config, DictAction - -from mmdet.core.utils import mask2ndarray -from mmdet.core.visualization import imshow_det_bboxes -from mmdet.datasets.builder import build_dataset - - -def parse_args(): - parser = argparse.ArgumentParser(description='Browse a dataset') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--skip-type', - type=str, - nargs='+', - default=['DefaultFormatBundle', 'Normalize', 'Collect'], - help='skip some useless pipeline') - parser.add_argument( - '--output-dir', - default=None, - type=str, - help='If there is no display interface, you can save it') - 
parser.add_argument('--not-show', default=False, action='store_true') - parser.add_argument( - '--show-interval', - type=float, - default=2, - help='the interval of show (s)') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def retrieve_data_cfg(config_path, skip_type, cfg_options): - cfg = Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - train_data_cfg = cfg.data.train - train_data_cfg['pipeline'] = [ - x for x in train_data_cfg.pipeline if x['type'] not in skip_type - ] - - return cfg - - -def main(): - args = parse_args() - cfg = retrieve_data_cfg(args.config, args.skip_type, args.cfg_options) - - dataset = build_dataset(cfg.data.train) - - progress_bar = mmcv.ProgressBar(len(dataset)) - - for item in dataset: - filename = os.path.join(args.output_dir, - Path(item['filename']).name - ) if args.output_dir is not None else None - - gt_masks = item.get('gt_masks', None) - if gt_masks is not None: - gt_masks = mask2ndarray(gt_masks) - - imshow_det_bboxes( - item['img'], - item['gt_bboxes'], - item['gt_labels'], - gt_masks, - class_names=dataset.CLASSES, - show=not args.not_show, - wait_time=args.show_interval, - out_file=filename, - bbox_color=(255, 102, 61), - text_color=(255, 102, 61)) - - progress_bar.update() - - -if __name__ == '__main__': - main() diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/pspnet_r101-d8_512x1024_80k_fp16_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/pspnet_r101-d8_512x1024_80k_fp16_cityscapes.py deleted file mode 100644 index cb2c27e44f33170130a233abf0524d5e346656db..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fp16/pspnet_r101-d8_512x1024_80k_fp16_cityscapes.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../pspnet/pspnet_r101-d8_512x1024_80k_cityscapes.py' -# fp16 settings -optimizer_config = dict(type='Fp16OptimizerHook', loss_scale=512.) 
-# fp16 placeholder -fp16 = dict() diff --git a/spaces/AnnasBlackHat/Image-Similarity/src/model/similarity_interface.py b/spaces/AnnasBlackHat/Image-Similarity/src/model/similarity_interface.py deleted file mode 100644 index 318cdd972c2d2f758bd9b3dfdbb92cc9dfb28bee..0000000000000000000000000000000000000000 --- a/spaces/AnnasBlackHat/Image-Similarity/src/model/similarity_interface.py +++ /dev/null @@ -1,3 +0,0 @@ -class SimilarityInterface: - def extract_feature(img): - return [] \ No newline at end of file diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/plms.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/plms.py deleted file mode 100644 index 7002a365d27168ced0a04e9a4d83e088f8284eae..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/plms.py +++ /dev/null @@ -1,244 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like -from ldm.models.diffusion.sampling_util import norm_thresholding - - -class PLMSSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - if ddim_eta != 0: - raise ValueError('ddim_eta must be 0 for PLMS') - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - dynamic_threshold=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for PLMS sampling is {size}') - - samples, intermediates = self.plms_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ) - return samples, intermediates - - @torch.no_grad() - def plms_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = list(reversed(range(0,timesteps))) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running PLMS Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='PLMS Sampler', total=total_steps) - old_eps = [] - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - ts_next = torch.full((b,), time_range[min(i + 1, len(time_range) - 1)], device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic 
forward pass? - img = img_orig * mask + (1. - mask) * img - - outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - old_eps=old_eps, t_next=ts_next, - dynamic_threshold=dynamic_threshold) - img, pred_x0, e_t = outs - old_eps.append(e_t) - if len(old_eps) >= 4: - old_eps.pop(0) - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - def get_model_output(x, t): - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - return e_t - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - - def get_x_prev_and_pred_x0(e_t, index): - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - if dynamic_threshold is not None: - pred_x0 = norm_thresholding(pred_x0, dynamic_threshold) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - e_t = get_model_output(x, t) - if len(old_eps) == 0: - # Pseudo Improved Euler (2nd order) - x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index) - e_t_next = get_model_output(x_prev, t_next) - e_t_prime = (e_t + e_t_next) / 2 - elif len(old_eps) == 1: - # 2nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (3 * e_t - old_eps[-1]) / 2 - elif len(old_eps) == 2: - # 3nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12 - elif len(old_eps) >= 3: - # 4nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24 - - x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index) - - return x_prev, pred_x0, e_t diff --git a/spaces/AriaMei/TTSdemo/text/japanese.py b/spaces/AriaMei/TTSdemo/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/AriaMei/TTSdemo/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = 
pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/app.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/app.py deleted file mode 100644 index 1113b50ec27b83c371537ac56ce85ddde5bb0a13..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/app.py +++ /dev/null @@ -1,119 +0,0 @@ -import gradio as gr -import transformers -import torch -from dotenv import load_dotenv -from transformers import AutoTokenizer -from transformers import pipeline - - - -load_dotenv() -model = "meta-llama/Llama-2-7b-chat-hf" # meta-llama/Llama-2-7b-chat-hf - -tokenizer = AutoTokenizer.from_pretrained(model, use_auth_token=True) - -llama_pipeline = pipeline( - "text-generation", # LLM task - model=model, - torch_dtype=torch.float16, - device_map="auto", -) - - -def get_response(prompt: str) -> None: - """ - Generate a response from the Llama model. - - Parameters: - prompt (str): The user's input/question for the model. - - Returns: - None: Prints the model's response. - """ - sequences = llama_pipeline( - prompt, - do_sample=True, - top_k=10, - num_return_sequences=1, - eos_token_id=tokenizer.eos_token_id, - max_length=256, - ) - print("Chatbot:", sequences[0]['generated_text']) - -SYSTEM_PROMPT = """[INST] <> -You are a helpful bot. Your answers are clear and concise. 
-<> - -""" - -# Formatting function for message and history -def format_message(message: str, history: list, memory_limit: int = 3) -> str: - """ - Formats the message and history for the Llama model. - - Parameters: - message (str): Current message to send. - history (list): Past conversation history. - memory_limit (int): Limit on how many past interactions to consider. - - Returns: - str: Formatted message string - """ - # always keep len(history) <= memory_limit - if len(history) > memory_limit: - history = history[-memory_limit:] - - if len(history) == 0: - return SYSTEM_PROMPT + f"{message} [/INST]" - - formatted_message = SYSTEM_PROMPT + f"{history[0][0]} [/INST] {history[0][1]} " - - # Handle conversation history - for user_msg, model_answer in history[1:]: - formatted_message += f"[INST] {user_msg} [/INST] {model_answer} " - - # Handle the current message - formatted_message += f"[INST] {message} [/INST]" - - return formatted_message - - - -def get_llama_response(message: str, history: list) -> str: - """ - Generates a conversational response from the Llama model. - - Parameters: - message (str): User's input message. - history (list): Past conversation history. - - Returns: - str: Generated response from the Llama model. - """ - query = format_message(message, history) - response = "" - - sequences = llama_pipeline( - query, - do_sample=True, - top_k=10, - num_return_sequences=1, - eos_token_id=tokenizer.eos_token_id, - max_length=1024, - ) - - generated_text = sequences[0]['generated_text'] - response = generated_text[len(query):] # Remove the prompt from the output - - print("Chatbot:", response.strip()) - return response.strip() - - -def greet(name): - return "Hello " + name + "!!" -gr.ChatInterface(get_llama_response).launch() -#iface = gr.Interface(fn=greet, inputs="text", outputs="text") -#iface = gr.ChatInterface(get_llama_response).launch() -#iface.launch() - - diff --git a/spaces/Awesimo/jojogan/e4e/configs/transforms_config.py b/spaces/Awesimo/jojogan/e4e/configs/transforms_config.py deleted file mode 100644 index ac12b5d5ba0571f21715e0f6b24b9c1ebe84bf72..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/configs/transforms_config.py +++ /dev/null @@ -1,62 +0,0 @@ -from abc import abstractmethod -import torchvision.transforms as transforms - - -class TransformsConfig(object): - - def __init__(self, opts): - self.opts = opts - - @abstractmethod - def get_transforms(self): - pass - - -class EncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(EncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class CarsEncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(CarsEncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((192, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - 
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((192, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((192, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/transform.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/transform.py deleted file mode 100644 index de44b991d7ab0d920ffb769e1402f08e358d37f7..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/transform.py +++ /dev/null @@ -1,351 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -See "Data Augmentation" tutorial for an overview of the system: -https://detectron2.readthedocs.io/tutorials/augmentation.html -""" - -import numpy as np -import torch -import torch.nn.functional as F -from fvcore.transforms.transform import ( - CropTransform, - HFlipTransform, - NoOpTransform, - Transform, - TransformList, -) -from PIL import Image - -try: - import cv2 # noqa -except ImportError: - # OpenCV is an optional dependency at the moment - pass - -__all__ = [ - "ExtentTransform", - "ResizeTransform", - "RotationTransform", - "ColorTransform", - "PILColorTransform", -] - - -class ExtentTransform(Transform): - """ - Extracts a subregion from the source image and scales it to the output size. - - The fill color is used to map pixels from the source rect that fall outside - the source image. - - See: https://pillow.readthedocs.io/en/latest/PIL.html#PIL.ImageTransform.ExtentTransform - """ - - def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0): - """ - Args: - src_rect (x0, y0, x1, y1): src coordinates - output_size (h, w): dst image size - interp: PIL interpolation methods - fill: Fill color used when src_rect extends outside image - """ - super().__init__() - self._set_attributes(locals()) - - def apply_image(self, img, interp=None): - h, w = self.output_size - if len(img.shape) > 2 and img.shape[2] == 1: - pil_image = Image.fromarray(img[:, :, 0], mode="L") - else: - pil_image = Image.fromarray(img) - pil_image = pil_image.transform( - size=(w, h), - method=Image.EXTENT, - data=self.src_rect, - resample=interp if interp else self.interp, - fill=self.fill, - ) - ret = np.asarray(pil_image) - if len(img.shape) > 2 and img.shape[2] == 1: - ret = np.expand_dims(ret, -1) - return ret - - def apply_coords(self, coords): - # Transform image center from source coordinates into output coordinates - # and then map the new origin to the corner of the output image. - h, w = self.output_size - x0, y0, x1, y1 = self.src_rect - new_coords = coords.astype(np.float32) - new_coords[:, 0] -= 0.5 * (x0 + x1) - new_coords[:, 1] -= 0.5 * (y0 + y1) - new_coords[:, 0] *= w / (x1 - x0) - new_coords[:, 1] *= h / (y1 - y0) - new_coords[:, 0] += 0.5 * w - new_coords[:, 1] += 0.5 * h - return new_coords - - def apply_segmentation(self, segmentation): - segmentation = self.apply_image(segmentation, interp=Image.NEAREST) - return segmentation - - -class ResizeTransform(Transform): - """ - Resize the image to a target size. 
- """ - - def __init__(self, h, w, new_h, new_w, interp=None): - """ - Args: - h, w (int): original image size - new_h, new_w (int): new image size - interp: PIL interpolation methods, defaults to bilinear. - """ - # TODO decide on PIL vs opencv - super().__init__() - if interp is None: - interp = Image.BILINEAR - self._set_attributes(locals()) - - def apply_image(self, img, interp=None): - assert img.shape[:2] == (self.h, self.w) - assert len(img.shape) <= 4 - interp_method = interp if interp is not None else self.interp - - if img.dtype == np.uint8: - if len(img.shape) > 2 and img.shape[2] == 1: - pil_image = Image.fromarray(img[:, :, 0], mode="L") - else: - pil_image = Image.fromarray(img) - pil_image = pil_image.resize((self.new_w, self.new_h), interp_method) - ret = np.asarray(pil_image) - if len(img.shape) > 2 and img.shape[2] == 1: - ret = np.expand_dims(ret, -1) - else: - # PIL only supports uint8 - if any(x < 0 for x in img.strides): - img = np.ascontiguousarray(img) - img = torch.from_numpy(img) - shape = list(img.shape) - shape_4d = shape[:2] + [1] * (4 - len(shape)) + shape[2:] - img = img.view(shape_4d).permute(2, 3, 0, 1) # hw(c) -> nchw - _PIL_RESIZE_TO_INTERPOLATE_MODE = { - Image.NEAREST: "nearest", - Image.BILINEAR: "bilinear", - Image.BICUBIC: "bicubic", - } - mode = _PIL_RESIZE_TO_INTERPOLATE_MODE[interp_method] - align_corners = None if mode == "nearest" else False - img = F.interpolate( - img, (self.new_h, self.new_w), mode=mode, align_corners=align_corners - ) - shape[:2] = (self.new_h, self.new_w) - ret = img.permute(2, 3, 0, 1).view(shape).numpy() # nchw -> hw(c) - - return ret - - def apply_coords(self, coords): - coords[:, 0] = coords[:, 0] * (self.new_w * 1.0 / self.w) - coords[:, 1] = coords[:, 1] * (self.new_h * 1.0 / self.h) - return coords - - def apply_segmentation(self, segmentation): - segmentation = self.apply_image(segmentation, interp=Image.NEAREST) - return segmentation - - def inverse(self): - return ResizeTransform(self.new_h, self.new_w, self.h, self.w, self.interp) - - -class RotationTransform(Transform): - """ - This method returns a copy of this image, rotated the given - number of degrees counter clockwise around its center. 
- """ - - def __init__(self, h, w, angle, expand=True, center=None, interp=None): - """ - Args: - h, w (int): original image size - angle (float): degrees for rotation - expand (bool): choose if the image should be resized to fit the whole - rotated image (default), or simply cropped - center (tuple (width, height)): coordinates of the rotation center - if left to None, the center will be fit to the center of each image - center has no effect if expand=True because it only affects shifting - interp: cv2 interpolation method, default cv2.INTER_LINEAR - """ - super().__init__() - image_center = np.array((w / 2, h / 2)) - if center is None: - center = image_center - if interp is None: - interp = cv2.INTER_LINEAR - abs_cos, abs_sin = (abs(np.cos(np.deg2rad(angle))), abs(np.sin(np.deg2rad(angle)))) - if expand: - # find the new width and height bounds - bound_w, bound_h = np.rint( - [h * abs_sin + w * abs_cos, h * abs_cos + w * abs_sin] - ).astype(int) - else: - bound_w, bound_h = w, h - - self._set_attributes(locals()) - self.rm_coords = self.create_rotation_matrix() - # Needed because of this problem https://github.com/opencv/opencv/issues/11784 - self.rm_image = self.create_rotation_matrix(offset=-0.5) - - def apply_image(self, img, interp=None): - """ - img should be a numpy array, formatted as Height * Width * Nchannels - """ - if len(img) == 0 or self.angle % 360 == 0: - return img - assert img.shape[:2] == (self.h, self.w) - interp = interp if interp is not None else self.interp - return cv2.warpAffine(img, self.rm_image, (self.bound_w, self.bound_h), flags=interp) - - def apply_coords(self, coords): - """ - coords should be a N * 2 array-like, containing N couples of (x, y) points - """ - coords = np.asarray(coords, dtype=float) - if len(coords) == 0 or self.angle % 360 == 0: - return coords - return cv2.transform(coords[:, np.newaxis, :], self.rm_coords)[:, 0, :] - - def apply_segmentation(self, segmentation): - segmentation = self.apply_image(segmentation, interp=cv2.INTER_NEAREST) - return segmentation - - def create_rotation_matrix(self, offset=0): - center = (self.center[0] + offset, self.center[1] + offset) - rm = cv2.getRotationMatrix2D(tuple(center), self.angle, 1) - if self.expand: - # Find the coordinates of the center of rotation in the new image - # The only point for which we know the future coordinates is the center of the image - rot_im_center = cv2.transform(self.image_center[None, None, :] + offset, rm)[0, 0, :] - new_center = np.array([self.bound_w / 2, self.bound_h / 2]) + offset - rot_im_center - # shift the rotation center to the new coordinates - rm[:, 2] += new_center - return rm - - def inverse(self): - """ - The inverse is to rotate it back with expand, and crop to get the original shape. - """ - if not self.expand: # Not possible to inverse if a part of the image is lost - raise NotImplementedError() - rotation = RotationTransform( - self.bound_h, self.bound_w, -self.angle, True, None, self.interp - ) - crop = CropTransform( - (rotation.bound_w - self.w) // 2, (rotation.bound_h - self.h) // 2, self.w, self.h - ) - return TransformList([rotation, crop]) - - -class ColorTransform(Transform): - """ - Generic wrapper for any photometric transforms. - These transformations should only affect the color space and - not the coordinate space of the image (e.g. 
annotation - coordinates such as bounding boxes should not be changed) - """ - - def __init__(self, op): - """ - Args: - op (Callable): operation to be applied to the image, - which takes in an ndarray and returns an ndarray. - """ - if not callable(op): - raise ValueError("op parameter should be callable") - super().__init__() - self._set_attributes(locals()) - - def apply_image(self, img): - return self.op(img) - - def apply_coords(self, coords): - return coords - - def inverse(self): - return NoOpTransform() - - def apply_segmentation(self, segmentation): - return segmentation - - -class PILColorTransform(ColorTransform): - """ - Generic wrapper for PIL Photometric image transforms, - which affect the color space and not the coordinate - space of the image - """ - - def __init__(self, op): - """ - Args: - op (Callable): operation to be applied to the image, - which takes in a PIL Image and returns a transformed - PIL Image. - For reference on possible operations see: - - https://pillow.readthedocs.io/en/stable/ - """ - if not callable(op): - raise ValueError("op parameter should be callable") - super().__init__(op) - - def apply_image(self, img): - img = Image.fromarray(img) - return np.asarray(super().apply_image(img)) - - -def HFlip_rotated_box(transform, rotated_boxes): - """ - Apply the horizontal flip transform on rotated boxes. - - Args: - rotated_boxes (ndarray): Nx5 floating point array of - (x_center, y_center, width, height, angle_degrees) format - in absolute coordinates. - """ - # Transform x_center - rotated_boxes[:, 0] = transform.width - rotated_boxes[:, 0] - # Transform angle - rotated_boxes[:, 4] = -rotated_boxes[:, 4] - return rotated_boxes - - -def Resize_rotated_box(transform, rotated_boxes): - """ - Apply the resizing transform on rotated boxes. For details of how these (approximation) - formulas are derived, please refer to :meth:`RotatedBoxes.scale`. - - Args: - rotated_boxes (ndarray): Nx5 floating point array of - (x_center, y_center, width, height, angle_degrees) format - in absolute coordinates. 
- """ - scale_factor_x = transform.new_w * 1.0 / transform.w - scale_factor_y = transform.new_h * 1.0 / transform.h - rotated_boxes[:, 0] *= scale_factor_x - rotated_boxes[:, 1] *= scale_factor_y - theta = rotated_boxes[:, 4] * np.pi / 180.0 - c = np.cos(theta) - s = np.sin(theta) - rotated_boxes[:, 2] *= np.sqrt(np.square(scale_factor_x * c) + np.square(scale_factor_y * s)) - rotated_boxes[:, 3] *= np.sqrt(np.square(scale_factor_x * s) + np.square(scale_factor_y * c)) - rotated_boxes[:, 4] = np.arctan2(scale_factor_x * s, scale_factor_y * c) * 180 / np.pi - - return rotated_boxes - - -HFlipTransform.register_type("rotated_box", HFlip_rotated_box) -ResizeTransform.register_type("rotated_box", Resize_rotated_box) - -# not necessary any more with latest fvcore -NoOpTransform.register_type("rotated_box", lambda t, x: x) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/structures/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/structures/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git "a/spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/01_\360\237\216\245_Input_YouTube_Link.py" "b/spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/01_\360\237\216\245_Input_YouTube_Link.py" deleted file mode 100644 index 30d01fc545ff0a36c8a30a56fc67249613d4296b..0000000000000000000000000000000000000000 --- "a/spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/01_\360\237\216\245_Input_YouTube_Link.py" +++ /dev/null @@ -1,258 +0,0 @@ -import whisper -from pytube import YouTube -import requests -import time -import streamlit as st -from streamlit_lottie import st_lottie -import numpy as np -import os -from typing import Iterator -from io import StringIO -from utils import write_vtt, write_srt -import ffmpeg -from languages import LANGUAGES - -st.set_page_config(page_title="Auto Subtitled Video Generator", page_icon=":movie_camera:", layout="wide") - -# Define a function that we can use to load lottie files from a link. -@st.cache() -def load_lottieurl(url: str): - r = requests.get(url) - if r.status_code != 200: - return None - return r.json() - -col1, col2 = st.columns([1, 3]) -with col1: - lottie = load_lottieurl("https://assets8.lottiefiles.com/packages/lf20_jh9gfdye.json") - st_lottie(lottie) - -with col2: - st.write(""" - ## Auto Subtitled Video Generator - ##### Input a YouTube video link and get a video with subtitles. - ###### ➠ If you want to transcribe the video in its original language, select the task as "Transcribe" - ###### ➠ If you want to translate the subtitles to English, select the task as "Translate" - ###### I recommend starting with the base model and then experimenting with the larger models, the small and medium models often work well. 
""") - - -@st.cache(allow_output_mutation=True) -def populate_metadata(link): - yt = YouTube(link) - author = yt.author - title = yt.title - description = yt.description - thumbnail = yt.thumbnail_url - length = yt.length - views = yt.views - return author, title, description, thumbnail, length, views - - -@st.cache(allow_output_mutation=True) -def download_video(link): - yt = YouTube(link) - video = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - return video - - -def convert(seconds): - return time.strftime("%H:%M:%S", time.gmtime(seconds)) - - -loaded_model = whisper.load_model("base") -current_size = "None" - - -@st.cache(allow_output_mutation=True) -def change_model(current_size, size): - if current_size != size: - loaded_model = whisper.load_model(size) - return loaded_model - else: - raise Exception("Model size is the same as the current size.") - - -@st.cache(allow_output_mutation=True) -def inference(link, loaded_model, task): - yt = YouTube(link) - path = yt.streams.filter(only_audio=True)[0].download(filename="audio.mp3") - if task == "Transcribe": - options = dict(task="transcribe", best_of=5) - results = loaded_model.transcribe(path, **options) - vtt = getSubs(results["segments"], "vtt", 80) - srt = getSubs(results["segments"], "srt", 80) - lang = results["language"] - return results["text"], vtt, srt, lang - elif task == "Translate": - options = dict(task="translate", best_of=5) - results = loaded_model.transcribe(path, **options) - vtt = getSubs(results["segments"], "vtt", 80) - srt = getSubs(results["segments"], "srt", 80) - lang = results["language"] - return results["text"], vtt, srt, lang - else: - raise ValueError("Task not supported") - - -@st.cache(allow_output_mutation=True) -def getSubs(segments: Iterator[dict], format: str, maxLineWidth: int) -> str: - segmentStream = StringIO() - - if format == 'vtt': - write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth) - elif format == 'srt': - write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth) - else: - raise Exception("Unknown format " + format) - - segmentStream.seek(0) - return segmentStream.read() - - -def get_language_code(language): - if language in LANGUAGES.keys(): - detected_language = LANGUAGES[language] - return detected_language - else: - raise ValueError("Language not supported") - - -def generate_subtitled_video(video, audio, transcript): - video_file = ffmpeg.input(video) - audio_file = ffmpeg.input(audio) - ffmpeg.concat(video_file.filter("subtitles", transcript), audio_file, v=1, a=1).output("final.mp4").run(quiet=True, overwrite_output=True) - video_with_subs = open("final.mp4", "rb") - return video_with_subs - - -def main(): - size = st.selectbox("Select Model Size (The larger the model, the more accurate the transcription will be, but it will take longer)", ["tiny", "base", "small", "medium", "large"], index=1) - loaded_model = change_model(current_size, size) - st.write(f"Model is {'multilingual' if loaded_model.is_multilingual else 'English-only'} " - f"and has {sum(np.prod(p.shape) for p in loaded_model.parameters()):,} parameters.") - link = st.text_input("YouTube Link (The longer the video, the longer the processing time)") - task = st.selectbox("Select Task", ["Transcribe", "Translate"], index=0) - if task == "Transcribe": - if st.button("Transcribe"): - author, title, description, thumbnail, length, views = populate_metadata(link) - results = inference(link, loaded_model, task) - video = download_video(link) - 
lang = results[3] - detected_language = get_language_code(lang) - - col3, col4 = st.columns(2) - col5, col6, col7, col8 = st.columns(4) - col9, col10 = st.columns(2) - with col3: - st.video(video) - - # Write the results to a .txt file and download it. - with open("transcript.txt", "w+", encoding='utf8') as f: - f.writelines(results[0]) - f.close() - with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f: - datatxt = f.read() - - with open("transcript.vtt", "w+",encoding='utf8') as f: - f.writelines(results[1]) - f.close() - with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f: - datavtt = f.read() - - with open("transcript.srt", "w+",encoding='utf8') as f: - f.writelines(results[2]) - f.close() - with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f: - datasrt = f.read() - - with col5: - st.download_button(label="Download Transcript (.txt)", - data=datatxt, - file_name="transcript.txt") - with col6: - st.download_button(label="Download Transcript (.vtt)", - data=datavtt, - file_name="transcript.vtt") - with col7: - st.download_button(label="Download Transcript (.srt)", - data=datasrt, - file_name="transcript.srt") - with col9: - st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.") - with col10: - st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.") - - with col4: - with st.spinner("Generating Subtitled Video"): - video_with_subs = generate_subtitled_video(video, "audio.mp3", "transcript.srt") - st.video(video_with_subs) - st.balloons() - with col8: - st.download_button(label="Download Subtitled Video", - data=video_with_subs, - file_name=f"{title} with subtitles.mp4") - elif task == "Translate": - if st.button("Translate to English"): - author, title, description, thumbnail, length, views = populate_metadata(link) - results = inference(link, loaded_model, task) - video = download_video(link) - lang = results[3] - detected_language = get_language_code(lang) - - col3, col4 = st.columns(2) - col5, col6, col7, col8 = st.columns(4) - col9, col10 = st.columns(2) - with col3: - st.video(video) - - # Write the results to a .txt file and download it. - with open("transcript.txt", "w+", encoding='utf8') as f: - f.writelines(results[0]) - f.close() - with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f: - datatxt = f.read() - - with open("transcript.vtt", "w+",encoding='utf8') as f: - f.writelines(results[1]) - f.close() - with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f: - datavtt = f.read() - - with open("transcript.srt", "w+",encoding='utf8') as f: - f.writelines(results[2]) - f.close() - with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f: - datasrt = f.read() - with col5: - st.download_button(label="Download Transcript (.txt)", - data=datatxt, - file_name="transcript.txt") - with col6: - st.download_button(label="Download Transcript (.vtt)", - data=datavtt, - file_name="transcript.vtt") - with col7: - st.download_button(label="Download Transcript (.srt)", - data=datasrt, - file_name="transcript.srt") - with col9: - st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.") - with col10: - st.info("Streamlit refreshes after the download button is clicked. 
The data is cached so you can download the transcript again without having to transcribe the video again.") - - with col4: - with st.spinner("Generating Subtitled Video "): - video_with_subs = generate_subtitled_video(video, "audio.mp3", "transcript.srt") - st.video(video_with_subs) - st.balloons() - with col8: - st.download_button(label="Download Subtitled Video ", - data=video_with_subs, - file_name=f"{title} with subtitles.mp4") - else: - st.error("Please select a task.") - - -if __name__ == "__main__": - main() - st.markdown("###### Made with :heart: by [@BatuhanYılmaz](https://twitter.com/batuhan3326) [![this is an image link](https://i.imgur.com/thJhzOO.png)](https://www.buymeacoffee.com/batuhanylmz)") \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/8 Bola Piscina Apk Gua.md b/spaces/Benson/text-generation/Examples/8 Bola Piscina Apk Gua.md deleted file mode 100644 index 6902f426e1e9223846e4e04452a8f30130fedee7..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/8 Bola Piscina Apk Gua.md +++ /dev/null @@ -1,55 +0,0 @@ - -

8 Ball Pool APK Guide: How to Play Like a Pro

-

Do you love playing 8 Ball Pool on your Android device and want to improve your skills and win more matches? If so, you might be interested in using a guideline tool for 8 Ball Pool. A guideline tool is an app that helps you make accurate shots by extending the in-game aiming guideline. In this article, we explain what a guideline tool is, how it works, and which guideline tools are available for 8 Ball Pool. We also share tips and tricks for using a guideline tool effectively and efficiently, and we weigh the pros and cons of relying on one. By the end of this article, you will be able to play like a pro with a guideline tool.

-

What Is a Guideline Tool and How Does It Work?

-

A guideline tool is an app that helps you make accurate shots by extending the in-game guideline, the line that shows where the cue ball will travel when you strike it. Normally this guideline is limited in length and does not show the angle or direction of the target ball. With a guideline tool, you see an extended guideline that covers the whole table, plus the angle and direction of the target ball and the likely trajectories of both balls after the collision, so you can plan your shots better and avoid mistakes.

-

8 ball pool apk guide


DOWNLOAD --->>> https://bltlly.com/2v6Mhb



-

A guideline tool works by analyzing screenshots or using AI image recognition to detect the position and movement of the balls on the table, and then overlays an extended guideline on top of the game screen. You can launch the game from the app or switch between the two while playing. Some guideline tools also offer extra features such as bank (cushion) shots, a grid view mode, and a 3-line function, which help with more complex shots that involve bouncing off the rails or hitting several balls.
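
As a rough illustration of the overlay idea described above, here is a minimal Python sketch of how an extended guideline could be traced once the ball positions are known. Everything in it (the function name extend_guideline, the table dimensions, the hard-coded positions) is hypothetical and not taken from any specific app; real tools obtain the positions through screen capture and image recognition.

```python
# Hypothetical sketch: extend an aim line across the table, bouncing off the rails.
# Positions are in table coordinates; a real tool would obtain them from screen
# capture and image recognition rather than hard-coded values.
from dataclasses import dataclass

TABLE_W, TABLE_H = 2.0, 1.0  # assumed table size in arbitrary units

@dataclass
class Point:
    x: float
    y: float

def extend_guideline(cue: Point, aim: Point, length: float = 6.0, step: float = 0.01):
    """Trace the aim direction from the cue ball, reflecting off the cushions."""
    dx, dy = aim.x - cue.x, aim.y - cue.y
    norm = (dx * dx + dy * dy) ** 0.5
    dx, dy = dx / norm, dy / norm
    x, y = cue.x, cue.y
    path = [(x, y)]
    travelled = 0.0
    while travelled < length:
        x, y = x + dx * step, y + dy * step
        if x <= 0 or x >= TABLE_W:   # bounce off left/right rail
            dx = -dx
        if y <= 0 or y >= TABLE_H:   # bounce off top/bottom rail
            dy = -dy
        path.append((x, y))
        travelled += step
    return path  # points an overlay could draw on top of the game screen

if __name__ == "__main__":
    line = extend_guideline(Point(0.5, 0.5), Point(1.0, 0.8))
    print(f"guideline sampled at {len(line)} points, ends near {line[-1]}")
```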

-

The Best Guideline Tools for 8 Ball Pool

- -

Using a guideline tool for 8 Ball Pool can help you make accurate shots and win more matches, but it is not enough to rely on the tool alone: you also need to improve your own skills and strategy alongside it. Here are some tips and tricks for using a guideline tool effectively and efficiently.

-

How to Adjust the Sensitivity and Line Width

-

The most important settings in a guideline tool are the sensitivity and the line width. Sensitivity determines how fast the guideline moves when you drag your finger across the screen; line width determines how thick the guideline appears. Adjust both to your own preference and comfort level. Higher sensitivity lets you aim faster and more precisely but makes careless mistakes easier, while lower sensitivity gives smoother, slower control at the cost of speed. A thicker line is easier to see but can cover parts of the game screen; a thinner line keeps the screen clear but is easy to lose sight of if it is too faint.

-

To adjust the sensitivity and line width, open the app's settings menu and look for the corresponding options, which you can usually change with a slider or button. Test your settings in a practice game or a friendly match and experiment until you find the values that suit you best.

-

How to Use Bank Shots and Grid View Mode

- -

To use a tool's bank shot and grid view modes, enable or disable them from the app's settings menu or from a button on the game screen. With bank shot mode enabled, you see an extended guideline showing where the cue ball will go after bouncing off the rails, which helps you plan cushion shots and avoid scratching or fouling. With grid view mode enabled, a grid overlay appears on top of the game screen; use it to line up your cue, the cue ball, and the target ball, and to judge angles and distances between balls.

-

How to Use the 3-Line Function

-

A third feature found in some guideline tools is the 3-line function, which shows three lines instead of one when you aim: the normal guideline, showing where the cue ball will travel; the target-ball line, showing where the target ball will go once it is struck; and the cue-ball line, showing where the cue ball will go after the impact. This helps with more advanced shots that require predicting the movement of both balls after the collision.

-

To use the 3-line function, turn it on or off from the app's settings menu or from a button on the game screen. With it enabled, you can predict the movement of both balls after the collision, plan your shots accordingly, and avoid hitting or pocketing the wrong ball.
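
The prediction behind such a 3-line display can be approximated with simple geometry. The sketch below assumes ideal equal-mass, frictionless billiard physics and uses hypothetical names (predict_directions and its arguments); it splits the cue ball's velocity into a component along the line of centres, which the target ball takes, and a perpendicular component, which the cue ball keeps.

```python
# Hypothetical sketch of the geometry behind a "3-line" display, assuming ideal
# equal-mass, frictionless collisions: the target ball departs along the line of
# centres, while the cue ball keeps the perpendicular (tangent) component.
import math

def predict_directions(cue, target, aim_angle_deg):
    """Return (target_ball_direction, cue_ball_direction) as unit vectors."""
    a = math.radians(aim_angle_deg)
    v = (math.cos(a), math.sin(a))                 # cue ball direction before impact
    cx, cy = target[0] - cue[0], target[1] - cue[1]
    n = math.hypot(cx, cy)
    nx, ny = cx / n, cy / n                        # line of centres at impact
    dot = v[0] * nx + v[1] * ny                    # component along the line of centres
    target_dir = (nx, ny)                          # second line: target ball
    cue_after = (v[0] - dot * nx, v[1] - dot * ny) # third line: cue ball (tangent)
    m = math.hypot(cue_after[0], cue_after[1]) or 1.0
    return target_dir, (cue_after[0] / m, cue_after[1] / m)

if __name__ == "__main__":
    tgt, cue_after = predict_directions((0.0, 0.0), (1.0, 0.3), aim_angle_deg=10)
    print("target ball heads toward", tgt)
    print("cue ball deflects toward", cue_after)
```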

-

The Pros and Cons of Using a Guideline Tool for 8 Ball Pool

- -

The Pros of Using a Guideline Tool for 8 Ball Pool

-

Some of the benefits of using a guideline tool for 8 Ball Pool are:

-

-
    -
  • It can improve your accuracy and confidence when taking shots, helping you pot more balls and win more matches.
  • It can make playing 8 Ball Pool more fun and satisfying.
  • It can help you learn and practice new skills and strategies, improving your game knowledge.
  • It can save you time and money by helping you avoid losing coins on lost matches.
-

The Cons of Using a Guideline Tool for 8 Ball Pool

-

Some of the drawbacks of using a guideline tool for 8 Ball Pool are:

-
    -
  • It raises ethical and fair-play concerns: other players or the game's developers may consider it cheating or unfair.
  • It can cause technical problems with your device or game, such as glitches, crashes, or account bans.
  • It can create dependence on the tool, and your natural skills may decline if you rely on it too much.
-

Conclusion

- -

Frequently Asked Questions

-

Here are some frequently asked questions and answers about this 8 Ball Pool APK guide:

-
    -
  1. Is it legal to use a guideline tool for 8 Ball Pool?
     Using a guideline tool is not illegal, but it may violate the game's or the developer's terms of service or privacy policy. Be careful, download only from trusted sources, and avoid hack tools that may contain malware, viruses, or ads that could harm your device or game.
  2. Is it safe to use a guideline tool for 8 Ball Pool?
     Generally yes, but it can cause technical problems such as glitches, crashes, or bans. Back up your data, keep your device and game updated, and uninstall any unwanted or suspicious apps that might interfere with them.
  3. Is using a guideline tool for 8 Ball Pool cheating?
     It is not cheating in a strict sense, but other players or the developers may consider it unfair or unethical. Use it wisely and responsibly, mainly as a learning or practice aid, respect the game's rules, and avoid hack tools that give you an unfair advantage over other players.
  4. How do I download and install a guideline tool for 8 Ball Pool?
  5. How do I update a guideline tool for 8 Ball Pool?
     Go to Google Play Store or APKCombo and search for the guideline tool you have installed, such as 8 Pool Master, Aim Pool 2, or 8 Pool Guideline Ultimate. Check whether a new version is available and compatible with your device and game, tap the update button, and wait for the latest version to download and install. Then launch the app and check for new features or improvements.

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Can I Download There Is Day.md b/spaces/Benson/text-generation/Examples/Can I Download There Is Day.md deleted file mode 100644 index 0803b13fb3e64bdca8f260dc738b39c3c70d94cc..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Can I Download There Is Day.md +++ /dev/null @@ -1,65 +0,0 @@ - -

Can I Download Hay Day?

-

If you are looking for a fun, relaxing game that lets you experience the simple life of working the land, you might want to try Hay Day. Hay Day is a popular farming simulator with millions of players around the world. But can you download Hay Day on your device? The answer is yes, you can! In this article, we will tell you what Hay Day is, how to download it, and what the benefits of playing it are.

-

What Is Hay Day?

-

Hay Day is a game developed by Supercell, the same company behind other hits such as Clash of Clans and Brawl Stars. It was released in 2012 and has since become one of the most downloaded games on both Android and iOS. But what exactly is Hay Day?

-

can i download hay day


Download Zip ✔✔✔ https://bltlly.com/2v6LG3



-

A Farming Simulator Game

-

Hay Day lets you build your own farm, grow crops, raise animals, and make goods. You can harvest wheat, corn, carrots, and more, and use them to bake bread, make cheese, or produce sugar. You can also feed your chickens, cows, pigs, and other animals and collect eggs, milk, bacon, and wool. You can even fish in the lake or dig in the mine.

-

A Social Game

-

Hay Day is not just a solo game. You can play with friends and neighbors from all over the world, join or create a neighborhood, and chat with other players. You can also trade and sell your crops and goods with them, help them with their orders and requests, and compete in weekly events to earn rewards.

-

A Free-to-Play Game

- -

How to Download Hay Day

-

Downloading Hay Day is quick and easy. Depending on your device, follow these steps:

-

For Android Devices

-
    -
  1. Open the Google Play Store app on your device.
  2. Search for "Hay Day" in the search bar.
  3. Tap the "Install" button and wait for the download to finish.
  4. Open the app and enjoy playing Hay Day.
-

For iOS Devices

-
    -
  1. Open the App Store app on your device.
  2. Search for "Hay Day" in the search bar.
  3. Tap the "Get" button and enter your Apple ID password if prompted.
  4. Wait for the download to finish and open the app.
  5. Have fun playing Hay Day.
-

For Windows Devices

-

Unfortunately, Hay Day is not officially available for Windows devices. However, you can still play it on your PC or laptop by using an Android emulator, which is software that lets you run Android apps on your Windows machine. Here are the steps:

-
    -
  1. Download and install an Android emulator of your choice; popular options include BlueStacks, NoxPlayer, and LDPlayer.
  2. Open the emulator and sign in with your Google account.
  3. Open the Google Play Store inside the emulator, search for "Hay Day", and install it.
  4. Open the app and start playing Hay Day on your Windows device.
-

What Are the Benefits of Downloading Hay Day?

-

Downloading Hay Day can bring you many benefits, such as:

-

Enjoy Farming Anytime, Anywhere

-

With Hay Day, you can enjoy farming whenever and wherever you want. You can play offline or online, on your phone or tablet, or on your PC or laptop, and you can pause and resume your game at any time, growing your farm at your own pace and in your own style.

-

Customize and Decorate Your Farm

- -

Trade and Sell Goods with Friends and Neighbors

-

Hay Day is not only a farming game but also a trading game. You can trade and sell your goods with friends and neighbors through the roadside shop, the newspaper, the boat, or the truck, buy goods from them, or help them with their orders, earning coins and experience points along the way.

-

-

Explore the Valley and the Town

-

Hay Day is not limited to your farm. You can also explore the valley and the town, where you will find more activities and surprises: drive your car around the valley collecting tokens, or take the train to town and serve visitors, discovering new places and characters along the way.

-

Conclusion

-

Hay Day is a fun, relaxing game that lets you build your own farm and enjoy the simple life of working the land. You can download it on your Android or iOS device, or play it on Windows using an Android emulator. You can also play with friends and neighbors around the world, customize and decorate your farm, trade and sell goods, and explore the valley and the town. Hay Day is free to play without spending money, but you can buy diamonds if you want extra items or faster progress. If you are looking for a game that combines farming, trading, and socializing, Hay Day is the game for you.

-

Frequently Asked Questions

-
    -
  • Q: How can I get more diamonds in Hay Day?
  • A: You can get more diamonds by completing achievements, watching ads, finding them in mystery boxes, or buying them with real money.
  • Q: How can I level up faster in Hay Day?
  • A: You can level up faster by completing orders, harvesting crops, feeding animals, making goods, trading with friends and neighbors, or using Tom and boosters.
  • Q: How do I join or create a neighborhood in Hay Day?
  • Q: How do I take part in the derby in Hay Day?
  • A: You can take part in the derby by joining a neighborhood that is enrolled in it; you need to be at least level 18. You can then complete tasks from the derby board to earn points for your neighborhood.
  • Q: How can I contact Supercell support in Hay Day?
  • A: Tap the settings icon in the top-left corner of the screen, then tap "Help and Support". From there you can browse the FAQ or submit a request.

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis Fuego Avance Servidor Versi Terbaru.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Fuego Avance Servidor Versi Terbaru.md deleted file mode 100644 index 338867956a5a1b36f730fd7b770d4212e2703e8d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis Fuego Avance Servidor Versi Terbaru.md +++ /dev/null @@ -1,58 +0,0 @@ -
-

How to Download Free Fire Advance Server Versi Terbaru

-

Free Fire is one of the most popular battle royale games in the world, with millions of players enjoying its exciting gameplay and features. But did you know there is a special version of Free Fire that lets you try new updates before they reach the regular server? It is called the Free Fire Advance Server, and it is only available for a limited time and to a limited number of players. In this article, we will tell you everything you need to know about Free Fire Advance Server versi terbaru, that is, the latest version of the Free Fire Advance Server: what it is, how to register and log in, how to download and install it, and how to earn rewards from it. So if you want to be among the first players to experience new features and content in Free Fire, keep reading!

-

download free fire advance server versi terbaru


Download File >>> https://bltlly.com/2v6JK1



-

What Is the Free Fire Advance Server?

-

The Free Fire Advance Server is a test (beta) server that Garena opens as a place to try out new updates. Here you can test new content that has not yet been released, such as characters, pets, weapons, modes, and maps. Its purpose is to gather feedback from players and fix bugs or errors before the updates roll out to the regular server.

-

The Benefits of Playing on the Free Fire Advance Server

-

Playing on the Free Fire Advance Server has several benefits for players who want a different, exclusive game experience. Some of them are:

-
    -
  • You can access new content that is not available on the regular server.
  • You can send your suggestions and feedback to Garena and help improve the game.
  • You can earn rewards in the form of diamonds if you find and report bugs on the Advance Server.
  • You can have fun and challenge yourself with new gameplay elements.
- -

The Free Fire Advance Server and the regular server also have some differences that you should keep in mind before playing. Some of these differences are:

- The Advance Server has limited capacity and is only open for a certain period of time; you need to register and obtain an activation code to join it. The regular server is open to everyone and requires no code.

-

-

- The Advance Server may contain bugs or errors that affect gameplay, while the regular server is more stable and smooth.

-

- The Advance Server may not have all the features or content that the regular server has; the regular server offers more variety and options for players.

-

- The Advance Server may use different rules or settings than the regular server; for example, the ranking system, matchmaking, currency, and rewards can differ.

-

How to Register and Log In to the Free Fire Advance Server

-

If you want to play on the Free Fire Advance Server, you first need to register and log in. Here are the steps to do so:

-

The Steps to Register for the Free Fire Advance Server

-
    -
  1. Go to the official Free Fire Advance Server website at https://ff-advance.ff.garena.com/.
  2. Click the "Login Facebook" button and enter your Facebook account details.
  3. Fill in your personal information, such as your name, email, phone number, and Free Fire ID.
  4. Click the "Join Now" button and wait for the confirmation email.
  5. If you are selected, you will receive an email with an activation code and a link to download the APK file.
-

The Requirements to Join the Free Fire Advance Server

-

Not everyone can join the Free Fire Advance Server, as there are some requirements you must meet. Some of them are:

-
    -
  • You need a Facebook account that is linked to your Free Fire account.
  • You need enough storage space on your device to download and install the APK file.
  • You need to be willing to report any bugs you find on the Free Fire Advance Server.
-

How to Get the Free Fire Advance Server Activation Code

-

An activation code is a unique code you must enter the first time you log in to the Free Fire Advance Server; without it, you cannot access the game. Activation codes are given only to a limited number of players who register for the Advance Server. Here are some tips for getting one:

-
    -
  • Register as early as possible once registration opens; the sooner you sign up, the better your chances of receiving an activation code.
  • Check your email regularly for updates from Garena, which sometimes sends activation codes at random or based on certain criteria.
  • Follow the official Free Fire and Garena accounts, which may announce events or giveaways where you can win an activation code.
  • Be active and loyal in Free Fire; Garena may reward players who play frequently and spend money in the game with an activation code.
We hope you now understand how to download Free Fire Advance Server versi terbaru and enjoy its features. If you have any questions or comments, feel free to leave them below. Thank you for reading, and happy gaming!

-

Frequently Asked Questions

-

Here are some frequently asked questions and answers about the Free Fire Advance Server:

-
    -
  1. Q: When is the Free Fire Advance Server open?
     A: It is usually open for a few days or weeks before a major update is released on the regular server. Check the official website or the Free Fire and Garena social media accounts for the exact dates and times.
  2. Q: Can I play with my friends on the Free Fire Advance Server?
  3. Q: Will my progress and data carry over to the regular server?
     A: No. Your progress and data on the Advance Server will not transfer to the regular server; they are separate servers with different content and settings, so you will start from scratch on the regular server.
  4. Q: How can I get more diamonds on the Free Fire Advance Server?
     A: You can earn diamonds by reporting bugs or errors you find in the game; Garena rewards you based on the severity and validity of your reports. You may also get diamonds by taking part in events or giveaways that Garena hosts on the Advance Server.
  5. Q: How can I contact Garena if I have a problem or suggestion on the Free Fire Advance Server?
     A: Use the "Report" button in the game menu, email ff-advance@ff.garena.com, or visit the customer service page at https://ffsupport.zendesk.com/hc/en-us.

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/colorama/ansi.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/colorama/ansi.py deleted file mode 100644 index 11ec695ff79627463a0282d25079527562de9e42..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/colorama/ansi.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -''' -This module generates ANSI character codes to printing colors to terminals. -See: http://en.wikipedia.org/wiki/ANSI_escape_code -''' - -CSI = '\033[' -OSC = '\033]' -BEL = '\a' - - -def code_to_chars(code): - return CSI + str(code) + 'm' - -def set_title(title): - return OSC + '2;' + title + BEL - -def clear_screen(mode=2): - return CSI + str(mode) + 'J' - -def clear_line(mode=2): - return CSI + str(mode) + 'K' - - -class AnsiCodes(object): - def __init__(self): - # the subclasses declare class attributes which are numbers. - # Upon instantiation we define instance attributes, which are the same - # as the class attributes but wrapped with the ANSI escape sequence - for name in dir(self): - if not name.startswith('_'): - value = getattr(self, name) - setattr(self, name, code_to_chars(value)) - - -class AnsiCursor(object): - def UP(self, n=1): - return CSI + str(n) + 'A' - def DOWN(self, n=1): - return CSI + str(n) + 'B' - def FORWARD(self, n=1): - return CSI + str(n) + 'C' - def BACK(self, n=1): - return CSI + str(n) + 'D' - def POS(self, x=1, y=1): - return CSI + str(y) + ';' + str(x) + 'H' - - -class AnsiFore(AnsiCodes): - BLACK = 30 - RED = 31 - GREEN = 32 - YELLOW = 33 - BLUE = 34 - MAGENTA = 35 - CYAN = 36 - WHITE = 37 - RESET = 39 - - # These are fairly well supported, but not part of the standard. - LIGHTBLACK_EX = 90 - LIGHTRED_EX = 91 - LIGHTGREEN_EX = 92 - LIGHTYELLOW_EX = 93 - LIGHTBLUE_EX = 94 - LIGHTMAGENTA_EX = 95 - LIGHTCYAN_EX = 96 - LIGHTWHITE_EX = 97 - - -class AnsiBack(AnsiCodes): - BLACK = 40 - RED = 41 - GREEN = 42 - YELLOW = 43 - BLUE = 44 - MAGENTA = 45 - CYAN = 46 - WHITE = 47 - RESET = 49 - - # These are fairly well supported, but not part of the standard. 
- LIGHTBLACK_EX = 100 - LIGHTRED_EX = 101 - LIGHTGREEN_EX = 102 - LIGHTYELLOW_EX = 103 - LIGHTBLUE_EX = 104 - LIGHTMAGENTA_EX = 105 - LIGHTCYAN_EX = 106 - LIGHTWHITE_EX = 107 - - -class AnsiStyle(AnsiCodes): - BRIGHT = 1 - DIM = 2 - NORMAL = 22 - RESET_ALL = 0 - -Fore = AnsiFore() -Back = AnsiBack() -Style = AnsiStyle() -Cursor = AnsiCursor() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/idnadata.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/idnadata.py deleted file mode 100644 index 67db4625829680298b2a5a9032a379d870a00700..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/idnadata.py +++ /dev/null @@ -1,2151 +0,0 @@ -# This file is automatically generated by tools/idna-data - -__version__ = '15.0.0' -scripts = { - 'Greek': ( - 0x37000000374, - 0x37500000378, - 0x37a0000037e, - 0x37f00000380, - 0x38400000385, - 0x38600000387, - 0x3880000038b, - 0x38c0000038d, - 0x38e000003a2, - 0x3a3000003e2, - 0x3f000000400, - 0x1d2600001d2b, - 0x1d5d00001d62, - 0x1d6600001d6b, - 0x1dbf00001dc0, - 0x1f0000001f16, - 0x1f1800001f1e, - 0x1f2000001f46, - 0x1f4800001f4e, - 0x1f5000001f58, - 0x1f5900001f5a, - 0x1f5b00001f5c, - 0x1f5d00001f5e, - 0x1f5f00001f7e, - 0x1f8000001fb5, - 0x1fb600001fc5, - 0x1fc600001fd4, - 0x1fd600001fdc, - 0x1fdd00001ff0, - 0x1ff200001ff5, - 0x1ff600001fff, - 0x212600002127, - 0xab650000ab66, - 0x101400001018f, - 0x101a0000101a1, - 0x1d2000001d246, - ), - 'Han': ( - 0x2e8000002e9a, - 0x2e9b00002ef4, - 0x2f0000002fd6, - 0x300500003006, - 0x300700003008, - 0x30210000302a, - 0x30380000303c, - 0x340000004dc0, - 0x4e000000a000, - 0xf9000000fa6e, - 0xfa700000fada, - 0x16fe200016fe4, - 0x16ff000016ff2, - 0x200000002a6e0, - 0x2a7000002b73a, - 0x2b7400002b81e, - 0x2b8200002cea2, - 0x2ceb00002ebe1, - 0x2f8000002fa1e, - 0x300000003134b, - 0x31350000323b0, - ), - 'Hebrew': ( - 0x591000005c8, - 0x5d0000005eb, - 0x5ef000005f5, - 0xfb1d0000fb37, - 0xfb380000fb3d, - 0xfb3e0000fb3f, - 0xfb400000fb42, - 0xfb430000fb45, - 0xfb460000fb50, - ), - 'Hiragana': ( - 0x304100003097, - 0x309d000030a0, - 0x1b0010001b120, - 0x1b1320001b133, - 0x1b1500001b153, - 0x1f2000001f201, - ), - 'Katakana': ( - 0x30a1000030fb, - 0x30fd00003100, - 0x31f000003200, - 0x32d0000032ff, - 0x330000003358, - 0xff660000ff70, - 0xff710000ff9e, - 0x1aff00001aff4, - 0x1aff50001affc, - 0x1affd0001afff, - 0x1b0000001b001, - 0x1b1200001b123, - 0x1b1550001b156, - 0x1b1640001b168, - ), -} -joining_types = { - 0x600: 85, - 0x601: 85, - 0x602: 85, - 0x603: 85, - 0x604: 85, - 0x605: 85, - 0x608: 85, - 0x60b: 85, - 0x620: 68, - 0x621: 85, - 0x622: 82, - 0x623: 82, - 0x624: 82, - 0x625: 82, - 0x626: 68, - 0x627: 82, - 0x628: 68, - 0x629: 82, - 0x62a: 68, - 0x62b: 68, - 0x62c: 68, - 0x62d: 68, - 0x62e: 68, - 0x62f: 82, - 0x630: 82, - 0x631: 82, - 0x632: 82, - 0x633: 68, - 0x634: 68, - 0x635: 68, - 0x636: 68, - 0x637: 68, - 0x638: 68, - 0x639: 68, - 0x63a: 68, - 0x63b: 68, - 0x63c: 68, - 0x63d: 68, - 0x63e: 68, - 0x63f: 68, - 0x640: 67, - 0x641: 68, - 0x642: 68, - 0x643: 68, - 0x644: 68, - 0x645: 68, - 0x646: 68, - 0x647: 68, - 0x648: 82, - 0x649: 68, - 0x64a: 68, - 0x66e: 68, - 0x66f: 68, - 0x671: 82, - 0x672: 82, - 0x673: 82, - 0x674: 85, - 0x675: 82, - 0x676: 82, - 0x677: 82, - 0x678: 68, - 0x679: 68, - 0x67a: 68, - 0x67b: 68, - 0x67c: 68, - 0x67d: 68, - 0x67e: 68, - 0x67f: 68, - 0x680: 68, - 0x681: 68, - 0x682: 68, - 0x683: 68, - 0x684: 68, - 0x685: 68, - 0x686: 68, - 0x687: 68, - 0x688: 82, - 0x689: 82, - 0x68a: 82, - 0x68b: 82, - 0x68c: 82, - 
0x68d: 82, - 0x68e: 82, - 0x68f: 82, - 0x690: 82, - 0x691: 82, - 0x692: 82, - 0x693: 82, - 0x694: 82, - 0x695: 82, - 0x696: 82, - 0x697: 82, - 0x698: 82, - 0x699: 82, - 0x69a: 68, - 0x69b: 68, - 0x69c: 68, - 0x69d: 68, - 0x69e: 68, - 0x69f: 68, - 0x6a0: 68, - 0x6a1: 68, - 0x6a2: 68, - 0x6a3: 68, - 0x6a4: 68, - 0x6a5: 68, - 0x6a6: 68, - 0x6a7: 68, - 0x6a8: 68, - 0x6a9: 68, - 0x6aa: 68, - 0x6ab: 68, - 0x6ac: 68, - 0x6ad: 68, - 0x6ae: 68, - 0x6af: 68, - 0x6b0: 68, - 0x6b1: 68, - 0x6b2: 68, - 0x6b3: 68, - 0x6b4: 68, - 0x6b5: 68, - 0x6b6: 68, - 0x6b7: 68, - 0x6b8: 68, - 0x6b9: 68, - 0x6ba: 68, - 0x6bb: 68, - 0x6bc: 68, - 0x6bd: 68, - 0x6be: 68, - 0x6bf: 68, - 0x6c0: 82, - 0x6c1: 68, - 0x6c2: 68, - 0x6c3: 82, - 0x6c4: 82, - 0x6c5: 82, - 0x6c6: 82, - 0x6c7: 82, - 0x6c8: 82, - 0x6c9: 82, - 0x6ca: 82, - 0x6cb: 82, - 0x6cc: 68, - 0x6cd: 82, - 0x6ce: 68, - 0x6cf: 82, - 0x6d0: 68, - 0x6d1: 68, - 0x6d2: 82, - 0x6d3: 82, - 0x6d5: 82, - 0x6dd: 85, - 0x6ee: 82, - 0x6ef: 82, - 0x6fa: 68, - 0x6fb: 68, - 0x6fc: 68, - 0x6ff: 68, - 0x70f: 84, - 0x710: 82, - 0x712: 68, - 0x713: 68, - 0x714: 68, - 0x715: 82, - 0x716: 82, - 0x717: 82, - 0x718: 82, - 0x719: 82, - 0x71a: 68, - 0x71b: 68, - 0x71c: 68, - 0x71d: 68, - 0x71e: 82, - 0x71f: 68, - 0x720: 68, - 0x721: 68, - 0x722: 68, - 0x723: 68, - 0x724: 68, - 0x725: 68, - 0x726: 68, - 0x727: 68, - 0x728: 82, - 0x729: 68, - 0x72a: 82, - 0x72b: 68, - 0x72c: 82, - 0x72d: 68, - 0x72e: 68, - 0x72f: 82, - 0x74d: 82, - 0x74e: 68, - 0x74f: 68, - 0x750: 68, - 0x751: 68, - 0x752: 68, - 0x753: 68, - 0x754: 68, - 0x755: 68, - 0x756: 68, - 0x757: 68, - 0x758: 68, - 0x759: 82, - 0x75a: 82, - 0x75b: 82, - 0x75c: 68, - 0x75d: 68, - 0x75e: 68, - 0x75f: 68, - 0x760: 68, - 0x761: 68, - 0x762: 68, - 0x763: 68, - 0x764: 68, - 0x765: 68, - 0x766: 68, - 0x767: 68, - 0x768: 68, - 0x769: 68, - 0x76a: 68, - 0x76b: 82, - 0x76c: 82, - 0x76d: 68, - 0x76e: 68, - 0x76f: 68, - 0x770: 68, - 0x771: 82, - 0x772: 68, - 0x773: 82, - 0x774: 82, - 0x775: 68, - 0x776: 68, - 0x777: 68, - 0x778: 82, - 0x779: 82, - 0x77a: 68, - 0x77b: 68, - 0x77c: 68, - 0x77d: 68, - 0x77e: 68, - 0x77f: 68, - 0x7ca: 68, - 0x7cb: 68, - 0x7cc: 68, - 0x7cd: 68, - 0x7ce: 68, - 0x7cf: 68, - 0x7d0: 68, - 0x7d1: 68, - 0x7d2: 68, - 0x7d3: 68, - 0x7d4: 68, - 0x7d5: 68, - 0x7d6: 68, - 0x7d7: 68, - 0x7d8: 68, - 0x7d9: 68, - 0x7da: 68, - 0x7db: 68, - 0x7dc: 68, - 0x7dd: 68, - 0x7de: 68, - 0x7df: 68, - 0x7e0: 68, - 0x7e1: 68, - 0x7e2: 68, - 0x7e3: 68, - 0x7e4: 68, - 0x7e5: 68, - 0x7e6: 68, - 0x7e7: 68, - 0x7e8: 68, - 0x7e9: 68, - 0x7ea: 68, - 0x7fa: 67, - 0x840: 82, - 0x841: 68, - 0x842: 68, - 0x843: 68, - 0x844: 68, - 0x845: 68, - 0x846: 82, - 0x847: 82, - 0x848: 68, - 0x849: 82, - 0x84a: 68, - 0x84b: 68, - 0x84c: 68, - 0x84d: 68, - 0x84e: 68, - 0x84f: 68, - 0x850: 68, - 0x851: 68, - 0x852: 68, - 0x853: 68, - 0x854: 82, - 0x855: 68, - 0x856: 82, - 0x857: 82, - 0x858: 82, - 0x860: 68, - 0x861: 85, - 0x862: 68, - 0x863: 68, - 0x864: 68, - 0x865: 68, - 0x866: 85, - 0x867: 82, - 0x868: 68, - 0x869: 82, - 0x86a: 82, - 0x870: 82, - 0x871: 82, - 0x872: 82, - 0x873: 82, - 0x874: 82, - 0x875: 82, - 0x876: 82, - 0x877: 82, - 0x878: 82, - 0x879: 82, - 0x87a: 82, - 0x87b: 82, - 0x87c: 82, - 0x87d: 82, - 0x87e: 82, - 0x87f: 82, - 0x880: 82, - 0x881: 82, - 0x882: 82, - 0x883: 67, - 0x884: 67, - 0x885: 67, - 0x886: 68, - 0x887: 85, - 0x888: 85, - 0x889: 68, - 0x88a: 68, - 0x88b: 68, - 0x88c: 68, - 0x88d: 68, - 0x88e: 82, - 0x890: 85, - 0x891: 85, - 0x8a0: 68, - 0x8a1: 68, - 0x8a2: 68, - 0x8a3: 68, - 0x8a4: 68, - 0x8a5: 68, - 0x8a6: 68, - 0x8a7: 68, - 
0x8a8: 68, - 0x8a9: 68, - 0x8aa: 82, - 0x8ab: 82, - 0x8ac: 82, - 0x8ad: 85, - 0x8ae: 82, - 0x8af: 68, - 0x8b0: 68, - 0x8b1: 82, - 0x8b2: 82, - 0x8b3: 68, - 0x8b4: 68, - 0x8b5: 68, - 0x8b6: 68, - 0x8b7: 68, - 0x8b8: 68, - 0x8b9: 82, - 0x8ba: 68, - 0x8bb: 68, - 0x8bc: 68, - 0x8bd: 68, - 0x8be: 68, - 0x8bf: 68, - 0x8c0: 68, - 0x8c1: 68, - 0x8c2: 68, - 0x8c3: 68, - 0x8c4: 68, - 0x8c5: 68, - 0x8c6: 68, - 0x8c7: 68, - 0x8c8: 68, - 0x8e2: 85, - 0x1806: 85, - 0x1807: 68, - 0x180a: 67, - 0x180e: 85, - 0x1820: 68, - 0x1821: 68, - 0x1822: 68, - 0x1823: 68, - 0x1824: 68, - 0x1825: 68, - 0x1826: 68, - 0x1827: 68, - 0x1828: 68, - 0x1829: 68, - 0x182a: 68, - 0x182b: 68, - 0x182c: 68, - 0x182d: 68, - 0x182e: 68, - 0x182f: 68, - 0x1830: 68, - 0x1831: 68, - 0x1832: 68, - 0x1833: 68, - 0x1834: 68, - 0x1835: 68, - 0x1836: 68, - 0x1837: 68, - 0x1838: 68, - 0x1839: 68, - 0x183a: 68, - 0x183b: 68, - 0x183c: 68, - 0x183d: 68, - 0x183e: 68, - 0x183f: 68, - 0x1840: 68, - 0x1841: 68, - 0x1842: 68, - 0x1843: 68, - 0x1844: 68, - 0x1845: 68, - 0x1846: 68, - 0x1847: 68, - 0x1848: 68, - 0x1849: 68, - 0x184a: 68, - 0x184b: 68, - 0x184c: 68, - 0x184d: 68, - 0x184e: 68, - 0x184f: 68, - 0x1850: 68, - 0x1851: 68, - 0x1852: 68, - 0x1853: 68, - 0x1854: 68, - 0x1855: 68, - 0x1856: 68, - 0x1857: 68, - 0x1858: 68, - 0x1859: 68, - 0x185a: 68, - 0x185b: 68, - 0x185c: 68, - 0x185d: 68, - 0x185e: 68, - 0x185f: 68, - 0x1860: 68, - 0x1861: 68, - 0x1862: 68, - 0x1863: 68, - 0x1864: 68, - 0x1865: 68, - 0x1866: 68, - 0x1867: 68, - 0x1868: 68, - 0x1869: 68, - 0x186a: 68, - 0x186b: 68, - 0x186c: 68, - 0x186d: 68, - 0x186e: 68, - 0x186f: 68, - 0x1870: 68, - 0x1871: 68, - 0x1872: 68, - 0x1873: 68, - 0x1874: 68, - 0x1875: 68, - 0x1876: 68, - 0x1877: 68, - 0x1878: 68, - 0x1880: 85, - 0x1881: 85, - 0x1882: 85, - 0x1883: 85, - 0x1884: 85, - 0x1885: 84, - 0x1886: 84, - 0x1887: 68, - 0x1888: 68, - 0x1889: 68, - 0x188a: 68, - 0x188b: 68, - 0x188c: 68, - 0x188d: 68, - 0x188e: 68, - 0x188f: 68, - 0x1890: 68, - 0x1891: 68, - 0x1892: 68, - 0x1893: 68, - 0x1894: 68, - 0x1895: 68, - 0x1896: 68, - 0x1897: 68, - 0x1898: 68, - 0x1899: 68, - 0x189a: 68, - 0x189b: 68, - 0x189c: 68, - 0x189d: 68, - 0x189e: 68, - 0x189f: 68, - 0x18a0: 68, - 0x18a1: 68, - 0x18a2: 68, - 0x18a3: 68, - 0x18a4: 68, - 0x18a5: 68, - 0x18a6: 68, - 0x18a7: 68, - 0x18a8: 68, - 0x18aa: 68, - 0x200c: 85, - 0x200d: 67, - 0x202f: 85, - 0x2066: 85, - 0x2067: 85, - 0x2068: 85, - 0x2069: 85, - 0xa840: 68, - 0xa841: 68, - 0xa842: 68, - 0xa843: 68, - 0xa844: 68, - 0xa845: 68, - 0xa846: 68, - 0xa847: 68, - 0xa848: 68, - 0xa849: 68, - 0xa84a: 68, - 0xa84b: 68, - 0xa84c: 68, - 0xa84d: 68, - 0xa84e: 68, - 0xa84f: 68, - 0xa850: 68, - 0xa851: 68, - 0xa852: 68, - 0xa853: 68, - 0xa854: 68, - 0xa855: 68, - 0xa856: 68, - 0xa857: 68, - 0xa858: 68, - 0xa859: 68, - 0xa85a: 68, - 0xa85b: 68, - 0xa85c: 68, - 0xa85d: 68, - 0xa85e: 68, - 0xa85f: 68, - 0xa860: 68, - 0xa861: 68, - 0xa862: 68, - 0xa863: 68, - 0xa864: 68, - 0xa865: 68, - 0xa866: 68, - 0xa867: 68, - 0xa868: 68, - 0xa869: 68, - 0xa86a: 68, - 0xa86b: 68, - 0xa86c: 68, - 0xa86d: 68, - 0xa86e: 68, - 0xa86f: 68, - 0xa870: 68, - 0xa871: 68, - 0xa872: 76, - 0xa873: 85, - 0x10ac0: 68, - 0x10ac1: 68, - 0x10ac2: 68, - 0x10ac3: 68, - 0x10ac4: 68, - 0x10ac5: 82, - 0x10ac6: 85, - 0x10ac7: 82, - 0x10ac8: 85, - 0x10ac9: 82, - 0x10aca: 82, - 0x10acb: 85, - 0x10acc: 85, - 0x10acd: 76, - 0x10ace: 82, - 0x10acf: 82, - 0x10ad0: 82, - 0x10ad1: 82, - 0x10ad2: 82, - 0x10ad3: 68, - 0x10ad4: 68, - 0x10ad5: 68, - 0x10ad6: 68, - 0x10ad7: 76, - 0x10ad8: 68, - 0x10ad9: 68, - 
0x10ada: 68, - 0x10adb: 68, - 0x10adc: 68, - 0x10add: 82, - 0x10ade: 68, - 0x10adf: 68, - 0x10ae0: 68, - 0x10ae1: 82, - 0x10ae2: 85, - 0x10ae3: 85, - 0x10ae4: 82, - 0x10aeb: 68, - 0x10aec: 68, - 0x10aed: 68, - 0x10aee: 68, - 0x10aef: 82, - 0x10b80: 68, - 0x10b81: 82, - 0x10b82: 68, - 0x10b83: 82, - 0x10b84: 82, - 0x10b85: 82, - 0x10b86: 68, - 0x10b87: 68, - 0x10b88: 68, - 0x10b89: 82, - 0x10b8a: 68, - 0x10b8b: 68, - 0x10b8c: 82, - 0x10b8d: 68, - 0x10b8e: 82, - 0x10b8f: 82, - 0x10b90: 68, - 0x10b91: 82, - 0x10ba9: 82, - 0x10baa: 82, - 0x10bab: 82, - 0x10bac: 82, - 0x10bad: 68, - 0x10bae: 68, - 0x10baf: 85, - 0x10d00: 76, - 0x10d01: 68, - 0x10d02: 68, - 0x10d03: 68, - 0x10d04: 68, - 0x10d05: 68, - 0x10d06: 68, - 0x10d07: 68, - 0x10d08: 68, - 0x10d09: 68, - 0x10d0a: 68, - 0x10d0b: 68, - 0x10d0c: 68, - 0x10d0d: 68, - 0x10d0e: 68, - 0x10d0f: 68, - 0x10d10: 68, - 0x10d11: 68, - 0x10d12: 68, - 0x10d13: 68, - 0x10d14: 68, - 0x10d15: 68, - 0x10d16: 68, - 0x10d17: 68, - 0x10d18: 68, - 0x10d19: 68, - 0x10d1a: 68, - 0x10d1b: 68, - 0x10d1c: 68, - 0x10d1d: 68, - 0x10d1e: 68, - 0x10d1f: 68, - 0x10d20: 68, - 0x10d21: 68, - 0x10d22: 82, - 0x10d23: 68, - 0x10f30: 68, - 0x10f31: 68, - 0x10f32: 68, - 0x10f33: 82, - 0x10f34: 68, - 0x10f35: 68, - 0x10f36: 68, - 0x10f37: 68, - 0x10f38: 68, - 0x10f39: 68, - 0x10f3a: 68, - 0x10f3b: 68, - 0x10f3c: 68, - 0x10f3d: 68, - 0x10f3e: 68, - 0x10f3f: 68, - 0x10f40: 68, - 0x10f41: 68, - 0x10f42: 68, - 0x10f43: 68, - 0x10f44: 68, - 0x10f45: 85, - 0x10f51: 68, - 0x10f52: 68, - 0x10f53: 68, - 0x10f54: 82, - 0x10f70: 68, - 0x10f71: 68, - 0x10f72: 68, - 0x10f73: 68, - 0x10f74: 82, - 0x10f75: 82, - 0x10f76: 68, - 0x10f77: 68, - 0x10f78: 68, - 0x10f79: 68, - 0x10f7a: 68, - 0x10f7b: 68, - 0x10f7c: 68, - 0x10f7d: 68, - 0x10f7e: 68, - 0x10f7f: 68, - 0x10f80: 68, - 0x10f81: 68, - 0x10fb0: 68, - 0x10fb1: 85, - 0x10fb2: 68, - 0x10fb3: 68, - 0x10fb4: 82, - 0x10fb5: 82, - 0x10fb6: 82, - 0x10fb7: 85, - 0x10fb8: 68, - 0x10fb9: 82, - 0x10fba: 82, - 0x10fbb: 68, - 0x10fbc: 68, - 0x10fbd: 82, - 0x10fbe: 68, - 0x10fbf: 68, - 0x10fc0: 85, - 0x10fc1: 68, - 0x10fc2: 82, - 0x10fc3: 82, - 0x10fc4: 68, - 0x10fc5: 85, - 0x10fc6: 85, - 0x10fc7: 85, - 0x10fc8: 85, - 0x10fc9: 82, - 0x10fca: 68, - 0x10fcb: 76, - 0x110bd: 85, - 0x110cd: 85, - 0x1e900: 68, - 0x1e901: 68, - 0x1e902: 68, - 0x1e903: 68, - 0x1e904: 68, - 0x1e905: 68, - 0x1e906: 68, - 0x1e907: 68, - 0x1e908: 68, - 0x1e909: 68, - 0x1e90a: 68, - 0x1e90b: 68, - 0x1e90c: 68, - 0x1e90d: 68, - 0x1e90e: 68, - 0x1e90f: 68, - 0x1e910: 68, - 0x1e911: 68, - 0x1e912: 68, - 0x1e913: 68, - 0x1e914: 68, - 0x1e915: 68, - 0x1e916: 68, - 0x1e917: 68, - 0x1e918: 68, - 0x1e919: 68, - 0x1e91a: 68, - 0x1e91b: 68, - 0x1e91c: 68, - 0x1e91d: 68, - 0x1e91e: 68, - 0x1e91f: 68, - 0x1e920: 68, - 0x1e921: 68, - 0x1e922: 68, - 0x1e923: 68, - 0x1e924: 68, - 0x1e925: 68, - 0x1e926: 68, - 0x1e927: 68, - 0x1e928: 68, - 0x1e929: 68, - 0x1e92a: 68, - 0x1e92b: 68, - 0x1e92c: 68, - 0x1e92d: 68, - 0x1e92e: 68, - 0x1e92f: 68, - 0x1e930: 68, - 0x1e931: 68, - 0x1e932: 68, - 0x1e933: 68, - 0x1e934: 68, - 0x1e935: 68, - 0x1e936: 68, - 0x1e937: 68, - 0x1e938: 68, - 0x1e939: 68, - 0x1e93a: 68, - 0x1e93b: 68, - 0x1e93c: 68, - 0x1e93d: 68, - 0x1e93e: 68, - 0x1e93f: 68, - 0x1e940: 68, - 0x1e941: 68, - 0x1e942: 68, - 0x1e943: 68, - 0x1e94b: 84, -} -codepoint_classes = { - 'PVALID': ( - 0x2d0000002e, - 0x300000003a, - 0x610000007b, - 0xdf000000f7, - 0xf800000100, - 0x10100000102, - 0x10300000104, - 0x10500000106, - 0x10700000108, - 0x1090000010a, - 0x10b0000010c, - 0x10d0000010e, - 0x10f00000110, 
- 0x11100000112, - 0x11300000114, - 0x11500000116, - 0x11700000118, - 0x1190000011a, - 0x11b0000011c, - 0x11d0000011e, - 0x11f00000120, - 0x12100000122, - 0x12300000124, - 0x12500000126, - 0x12700000128, - 0x1290000012a, - 0x12b0000012c, - 0x12d0000012e, - 0x12f00000130, - 0x13100000132, - 0x13500000136, - 0x13700000139, - 0x13a0000013b, - 0x13c0000013d, - 0x13e0000013f, - 0x14200000143, - 0x14400000145, - 0x14600000147, - 0x14800000149, - 0x14b0000014c, - 0x14d0000014e, - 0x14f00000150, - 0x15100000152, - 0x15300000154, - 0x15500000156, - 0x15700000158, - 0x1590000015a, - 0x15b0000015c, - 0x15d0000015e, - 0x15f00000160, - 0x16100000162, - 0x16300000164, - 0x16500000166, - 0x16700000168, - 0x1690000016a, - 0x16b0000016c, - 0x16d0000016e, - 0x16f00000170, - 0x17100000172, - 0x17300000174, - 0x17500000176, - 0x17700000178, - 0x17a0000017b, - 0x17c0000017d, - 0x17e0000017f, - 0x18000000181, - 0x18300000184, - 0x18500000186, - 0x18800000189, - 0x18c0000018e, - 0x19200000193, - 0x19500000196, - 0x1990000019c, - 0x19e0000019f, - 0x1a1000001a2, - 0x1a3000001a4, - 0x1a5000001a6, - 0x1a8000001a9, - 0x1aa000001ac, - 0x1ad000001ae, - 0x1b0000001b1, - 0x1b4000001b5, - 0x1b6000001b7, - 0x1b9000001bc, - 0x1bd000001c4, - 0x1ce000001cf, - 0x1d0000001d1, - 0x1d2000001d3, - 0x1d4000001d5, - 0x1d6000001d7, - 0x1d8000001d9, - 0x1da000001db, - 0x1dc000001de, - 0x1df000001e0, - 0x1e1000001e2, - 0x1e3000001e4, - 0x1e5000001e6, - 0x1e7000001e8, - 0x1e9000001ea, - 0x1eb000001ec, - 0x1ed000001ee, - 0x1ef000001f1, - 0x1f5000001f6, - 0x1f9000001fa, - 0x1fb000001fc, - 0x1fd000001fe, - 0x1ff00000200, - 0x20100000202, - 0x20300000204, - 0x20500000206, - 0x20700000208, - 0x2090000020a, - 0x20b0000020c, - 0x20d0000020e, - 0x20f00000210, - 0x21100000212, - 0x21300000214, - 0x21500000216, - 0x21700000218, - 0x2190000021a, - 0x21b0000021c, - 0x21d0000021e, - 0x21f00000220, - 0x22100000222, - 0x22300000224, - 0x22500000226, - 0x22700000228, - 0x2290000022a, - 0x22b0000022c, - 0x22d0000022e, - 0x22f00000230, - 0x23100000232, - 0x2330000023a, - 0x23c0000023d, - 0x23f00000241, - 0x24200000243, - 0x24700000248, - 0x2490000024a, - 0x24b0000024c, - 0x24d0000024e, - 0x24f000002b0, - 0x2b9000002c2, - 0x2c6000002d2, - 0x2ec000002ed, - 0x2ee000002ef, - 0x30000000340, - 0x34200000343, - 0x3460000034f, - 0x35000000370, - 0x37100000372, - 0x37300000374, - 0x37700000378, - 0x37b0000037e, - 0x39000000391, - 0x3ac000003cf, - 0x3d7000003d8, - 0x3d9000003da, - 0x3db000003dc, - 0x3dd000003de, - 0x3df000003e0, - 0x3e1000003e2, - 0x3e3000003e4, - 0x3e5000003e6, - 0x3e7000003e8, - 0x3e9000003ea, - 0x3eb000003ec, - 0x3ed000003ee, - 0x3ef000003f0, - 0x3f3000003f4, - 0x3f8000003f9, - 0x3fb000003fd, - 0x43000000460, - 0x46100000462, - 0x46300000464, - 0x46500000466, - 0x46700000468, - 0x4690000046a, - 0x46b0000046c, - 0x46d0000046e, - 0x46f00000470, - 0x47100000472, - 0x47300000474, - 0x47500000476, - 0x47700000478, - 0x4790000047a, - 0x47b0000047c, - 0x47d0000047e, - 0x47f00000480, - 0x48100000482, - 0x48300000488, - 0x48b0000048c, - 0x48d0000048e, - 0x48f00000490, - 0x49100000492, - 0x49300000494, - 0x49500000496, - 0x49700000498, - 0x4990000049a, - 0x49b0000049c, - 0x49d0000049e, - 0x49f000004a0, - 0x4a1000004a2, - 0x4a3000004a4, - 0x4a5000004a6, - 0x4a7000004a8, - 0x4a9000004aa, - 0x4ab000004ac, - 0x4ad000004ae, - 0x4af000004b0, - 0x4b1000004b2, - 0x4b3000004b4, - 0x4b5000004b6, - 0x4b7000004b8, - 0x4b9000004ba, - 0x4bb000004bc, - 0x4bd000004be, - 0x4bf000004c0, - 0x4c2000004c3, - 0x4c4000004c5, - 0x4c6000004c7, - 0x4c8000004c9, - 0x4ca000004cb, - 
0x4cc000004cd, - 0x4ce000004d0, - 0x4d1000004d2, - 0x4d3000004d4, - 0x4d5000004d6, - 0x4d7000004d8, - 0x4d9000004da, - 0x4db000004dc, - 0x4dd000004de, - 0x4df000004e0, - 0x4e1000004e2, - 0x4e3000004e4, - 0x4e5000004e6, - 0x4e7000004e8, - 0x4e9000004ea, - 0x4eb000004ec, - 0x4ed000004ee, - 0x4ef000004f0, - 0x4f1000004f2, - 0x4f3000004f4, - 0x4f5000004f6, - 0x4f7000004f8, - 0x4f9000004fa, - 0x4fb000004fc, - 0x4fd000004fe, - 0x4ff00000500, - 0x50100000502, - 0x50300000504, - 0x50500000506, - 0x50700000508, - 0x5090000050a, - 0x50b0000050c, - 0x50d0000050e, - 0x50f00000510, - 0x51100000512, - 0x51300000514, - 0x51500000516, - 0x51700000518, - 0x5190000051a, - 0x51b0000051c, - 0x51d0000051e, - 0x51f00000520, - 0x52100000522, - 0x52300000524, - 0x52500000526, - 0x52700000528, - 0x5290000052a, - 0x52b0000052c, - 0x52d0000052e, - 0x52f00000530, - 0x5590000055a, - 0x56000000587, - 0x58800000589, - 0x591000005be, - 0x5bf000005c0, - 0x5c1000005c3, - 0x5c4000005c6, - 0x5c7000005c8, - 0x5d0000005eb, - 0x5ef000005f3, - 0x6100000061b, - 0x62000000640, - 0x64100000660, - 0x66e00000675, - 0x679000006d4, - 0x6d5000006dd, - 0x6df000006e9, - 0x6ea000006f0, - 0x6fa00000700, - 0x7100000074b, - 0x74d000007b2, - 0x7c0000007f6, - 0x7fd000007fe, - 0x8000000082e, - 0x8400000085c, - 0x8600000086b, - 0x87000000888, - 0x8890000088f, - 0x898000008e2, - 0x8e300000958, - 0x96000000964, - 0x96600000970, - 0x97100000984, - 0x9850000098d, - 0x98f00000991, - 0x993000009a9, - 0x9aa000009b1, - 0x9b2000009b3, - 0x9b6000009ba, - 0x9bc000009c5, - 0x9c7000009c9, - 0x9cb000009cf, - 0x9d7000009d8, - 0x9e0000009e4, - 0x9e6000009f2, - 0x9fc000009fd, - 0x9fe000009ff, - 0xa0100000a04, - 0xa0500000a0b, - 0xa0f00000a11, - 0xa1300000a29, - 0xa2a00000a31, - 0xa3200000a33, - 0xa3500000a36, - 0xa3800000a3a, - 0xa3c00000a3d, - 0xa3e00000a43, - 0xa4700000a49, - 0xa4b00000a4e, - 0xa5100000a52, - 0xa5c00000a5d, - 0xa6600000a76, - 0xa8100000a84, - 0xa8500000a8e, - 0xa8f00000a92, - 0xa9300000aa9, - 0xaaa00000ab1, - 0xab200000ab4, - 0xab500000aba, - 0xabc00000ac6, - 0xac700000aca, - 0xacb00000ace, - 0xad000000ad1, - 0xae000000ae4, - 0xae600000af0, - 0xaf900000b00, - 0xb0100000b04, - 0xb0500000b0d, - 0xb0f00000b11, - 0xb1300000b29, - 0xb2a00000b31, - 0xb3200000b34, - 0xb3500000b3a, - 0xb3c00000b45, - 0xb4700000b49, - 0xb4b00000b4e, - 0xb5500000b58, - 0xb5f00000b64, - 0xb6600000b70, - 0xb7100000b72, - 0xb8200000b84, - 0xb8500000b8b, - 0xb8e00000b91, - 0xb9200000b96, - 0xb9900000b9b, - 0xb9c00000b9d, - 0xb9e00000ba0, - 0xba300000ba5, - 0xba800000bab, - 0xbae00000bba, - 0xbbe00000bc3, - 0xbc600000bc9, - 0xbca00000bce, - 0xbd000000bd1, - 0xbd700000bd8, - 0xbe600000bf0, - 0xc0000000c0d, - 0xc0e00000c11, - 0xc1200000c29, - 0xc2a00000c3a, - 0xc3c00000c45, - 0xc4600000c49, - 0xc4a00000c4e, - 0xc5500000c57, - 0xc5800000c5b, - 0xc5d00000c5e, - 0xc6000000c64, - 0xc6600000c70, - 0xc8000000c84, - 0xc8500000c8d, - 0xc8e00000c91, - 0xc9200000ca9, - 0xcaa00000cb4, - 0xcb500000cba, - 0xcbc00000cc5, - 0xcc600000cc9, - 0xcca00000cce, - 0xcd500000cd7, - 0xcdd00000cdf, - 0xce000000ce4, - 0xce600000cf0, - 0xcf100000cf4, - 0xd0000000d0d, - 0xd0e00000d11, - 0xd1200000d45, - 0xd4600000d49, - 0xd4a00000d4f, - 0xd5400000d58, - 0xd5f00000d64, - 0xd6600000d70, - 0xd7a00000d80, - 0xd8100000d84, - 0xd8500000d97, - 0xd9a00000db2, - 0xdb300000dbc, - 0xdbd00000dbe, - 0xdc000000dc7, - 0xdca00000dcb, - 0xdcf00000dd5, - 0xdd600000dd7, - 0xdd800000de0, - 0xde600000df0, - 0xdf200000df4, - 0xe0100000e33, - 0xe3400000e3b, - 0xe4000000e4f, - 0xe5000000e5a, - 0xe8100000e83, - 0xe8400000e85, - 
0xe8600000e8b, - 0xe8c00000ea4, - 0xea500000ea6, - 0xea700000eb3, - 0xeb400000ebe, - 0xec000000ec5, - 0xec600000ec7, - 0xec800000ecf, - 0xed000000eda, - 0xede00000ee0, - 0xf0000000f01, - 0xf0b00000f0c, - 0xf1800000f1a, - 0xf2000000f2a, - 0xf3500000f36, - 0xf3700000f38, - 0xf3900000f3a, - 0xf3e00000f43, - 0xf4400000f48, - 0xf4900000f4d, - 0xf4e00000f52, - 0xf5300000f57, - 0xf5800000f5c, - 0xf5d00000f69, - 0xf6a00000f6d, - 0xf7100000f73, - 0xf7400000f75, - 0xf7a00000f81, - 0xf8200000f85, - 0xf8600000f93, - 0xf9400000f98, - 0xf9900000f9d, - 0xf9e00000fa2, - 0xfa300000fa7, - 0xfa800000fac, - 0xfad00000fb9, - 0xfba00000fbd, - 0xfc600000fc7, - 0x10000000104a, - 0x10500000109e, - 0x10d0000010fb, - 0x10fd00001100, - 0x120000001249, - 0x124a0000124e, - 0x125000001257, - 0x125800001259, - 0x125a0000125e, - 0x126000001289, - 0x128a0000128e, - 0x1290000012b1, - 0x12b2000012b6, - 0x12b8000012bf, - 0x12c0000012c1, - 0x12c2000012c6, - 0x12c8000012d7, - 0x12d800001311, - 0x131200001316, - 0x13180000135b, - 0x135d00001360, - 0x138000001390, - 0x13a0000013f6, - 0x14010000166d, - 0x166f00001680, - 0x16810000169b, - 0x16a0000016eb, - 0x16f1000016f9, - 0x170000001716, - 0x171f00001735, - 0x174000001754, - 0x17600000176d, - 0x176e00001771, - 0x177200001774, - 0x1780000017b4, - 0x17b6000017d4, - 0x17d7000017d8, - 0x17dc000017de, - 0x17e0000017ea, - 0x18100000181a, - 0x182000001879, - 0x1880000018ab, - 0x18b0000018f6, - 0x19000000191f, - 0x19200000192c, - 0x19300000193c, - 0x19460000196e, - 0x197000001975, - 0x1980000019ac, - 0x19b0000019ca, - 0x19d0000019da, - 0x1a0000001a1c, - 0x1a2000001a5f, - 0x1a6000001a7d, - 0x1a7f00001a8a, - 0x1a9000001a9a, - 0x1aa700001aa8, - 0x1ab000001abe, - 0x1abf00001acf, - 0x1b0000001b4d, - 0x1b5000001b5a, - 0x1b6b00001b74, - 0x1b8000001bf4, - 0x1c0000001c38, - 0x1c4000001c4a, - 0x1c4d00001c7e, - 0x1cd000001cd3, - 0x1cd400001cfb, - 0x1d0000001d2c, - 0x1d2f00001d30, - 0x1d3b00001d3c, - 0x1d4e00001d4f, - 0x1d6b00001d78, - 0x1d7900001d9b, - 0x1dc000001e00, - 0x1e0100001e02, - 0x1e0300001e04, - 0x1e0500001e06, - 0x1e0700001e08, - 0x1e0900001e0a, - 0x1e0b00001e0c, - 0x1e0d00001e0e, - 0x1e0f00001e10, - 0x1e1100001e12, - 0x1e1300001e14, - 0x1e1500001e16, - 0x1e1700001e18, - 0x1e1900001e1a, - 0x1e1b00001e1c, - 0x1e1d00001e1e, - 0x1e1f00001e20, - 0x1e2100001e22, - 0x1e2300001e24, - 0x1e2500001e26, - 0x1e2700001e28, - 0x1e2900001e2a, - 0x1e2b00001e2c, - 0x1e2d00001e2e, - 0x1e2f00001e30, - 0x1e3100001e32, - 0x1e3300001e34, - 0x1e3500001e36, - 0x1e3700001e38, - 0x1e3900001e3a, - 0x1e3b00001e3c, - 0x1e3d00001e3e, - 0x1e3f00001e40, - 0x1e4100001e42, - 0x1e4300001e44, - 0x1e4500001e46, - 0x1e4700001e48, - 0x1e4900001e4a, - 0x1e4b00001e4c, - 0x1e4d00001e4e, - 0x1e4f00001e50, - 0x1e5100001e52, - 0x1e5300001e54, - 0x1e5500001e56, - 0x1e5700001e58, - 0x1e5900001e5a, - 0x1e5b00001e5c, - 0x1e5d00001e5e, - 0x1e5f00001e60, - 0x1e6100001e62, - 0x1e6300001e64, - 0x1e6500001e66, - 0x1e6700001e68, - 0x1e6900001e6a, - 0x1e6b00001e6c, - 0x1e6d00001e6e, - 0x1e6f00001e70, - 0x1e7100001e72, - 0x1e7300001e74, - 0x1e7500001e76, - 0x1e7700001e78, - 0x1e7900001e7a, - 0x1e7b00001e7c, - 0x1e7d00001e7e, - 0x1e7f00001e80, - 0x1e8100001e82, - 0x1e8300001e84, - 0x1e8500001e86, - 0x1e8700001e88, - 0x1e8900001e8a, - 0x1e8b00001e8c, - 0x1e8d00001e8e, - 0x1e8f00001e90, - 0x1e9100001e92, - 0x1e9300001e94, - 0x1e9500001e9a, - 0x1e9c00001e9e, - 0x1e9f00001ea0, - 0x1ea100001ea2, - 0x1ea300001ea4, - 0x1ea500001ea6, - 0x1ea700001ea8, - 0x1ea900001eaa, - 0x1eab00001eac, - 0x1ead00001eae, - 0x1eaf00001eb0, - 0x1eb100001eb2, - 
0x1eb300001eb4, - 0x1eb500001eb6, - 0x1eb700001eb8, - 0x1eb900001eba, - 0x1ebb00001ebc, - 0x1ebd00001ebe, - 0x1ebf00001ec0, - 0x1ec100001ec2, - 0x1ec300001ec4, - 0x1ec500001ec6, - 0x1ec700001ec8, - 0x1ec900001eca, - 0x1ecb00001ecc, - 0x1ecd00001ece, - 0x1ecf00001ed0, - 0x1ed100001ed2, - 0x1ed300001ed4, - 0x1ed500001ed6, - 0x1ed700001ed8, - 0x1ed900001eda, - 0x1edb00001edc, - 0x1edd00001ede, - 0x1edf00001ee0, - 0x1ee100001ee2, - 0x1ee300001ee4, - 0x1ee500001ee6, - 0x1ee700001ee8, - 0x1ee900001eea, - 0x1eeb00001eec, - 0x1eed00001eee, - 0x1eef00001ef0, - 0x1ef100001ef2, - 0x1ef300001ef4, - 0x1ef500001ef6, - 0x1ef700001ef8, - 0x1ef900001efa, - 0x1efb00001efc, - 0x1efd00001efe, - 0x1eff00001f08, - 0x1f1000001f16, - 0x1f2000001f28, - 0x1f3000001f38, - 0x1f4000001f46, - 0x1f5000001f58, - 0x1f6000001f68, - 0x1f7000001f71, - 0x1f7200001f73, - 0x1f7400001f75, - 0x1f7600001f77, - 0x1f7800001f79, - 0x1f7a00001f7b, - 0x1f7c00001f7d, - 0x1fb000001fb2, - 0x1fb600001fb7, - 0x1fc600001fc7, - 0x1fd000001fd3, - 0x1fd600001fd8, - 0x1fe000001fe3, - 0x1fe400001fe8, - 0x1ff600001ff7, - 0x214e0000214f, - 0x218400002185, - 0x2c3000002c60, - 0x2c6100002c62, - 0x2c6500002c67, - 0x2c6800002c69, - 0x2c6a00002c6b, - 0x2c6c00002c6d, - 0x2c7100002c72, - 0x2c7300002c75, - 0x2c7600002c7c, - 0x2c8100002c82, - 0x2c8300002c84, - 0x2c8500002c86, - 0x2c8700002c88, - 0x2c8900002c8a, - 0x2c8b00002c8c, - 0x2c8d00002c8e, - 0x2c8f00002c90, - 0x2c9100002c92, - 0x2c9300002c94, - 0x2c9500002c96, - 0x2c9700002c98, - 0x2c9900002c9a, - 0x2c9b00002c9c, - 0x2c9d00002c9e, - 0x2c9f00002ca0, - 0x2ca100002ca2, - 0x2ca300002ca4, - 0x2ca500002ca6, - 0x2ca700002ca8, - 0x2ca900002caa, - 0x2cab00002cac, - 0x2cad00002cae, - 0x2caf00002cb0, - 0x2cb100002cb2, - 0x2cb300002cb4, - 0x2cb500002cb6, - 0x2cb700002cb8, - 0x2cb900002cba, - 0x2cbb00002cbc, - 0x2cbd00002cbe, - 0x2cbf00002cc0, - 0x2cc100002cc2, - 0x2cc300002cc4, - 0x2cc500002cc6, - 0x2cc700002cc8, - 0x2cc900002cca, - 0x2ccb00002ccc, - 0x2ccd00002cce, - 0x2ccf00002cd0, - 0x2cd100002cd2, - 0x2cd300002cd4, - 0x2cd500002cd6, - 0x2cd700002cd8, - 0x2cd900002cda, - 0x2cdb00002cdc, - 0x2cdd00002cde, - 0x2cdf00002ce0, - 0x2ce100002ce2, - 0x2ce300002ce5, - 0x2cec00002ced, - 0x2cee00002cf2, - 0x2cf300002cf4, - 0x2d0000002d26, - 0x2d2700002d28, - 0x2d2d00002d2e, - 0x2d3000002d68, - 0x2d7f00002d97, - 0x2da000002da7, - 0x2da800002daf, - 0x2db000002db7, - 0x2db800002dbf, - 0x2dc000002dc7, - 0x2dc800002dcf, - 0x2dd000002dd7, - 0x2dd800002ddf, - 0x2de000002e00, - 0x2e2f00002e30, - 0x300500003008, - 0x302a0000302e, - 0x303c0000303d, - 0x304100003097, - 0x30990000309b, - 0x309d0000309f, - 0x30a1000030fb, - 0x30fc000030ff, - 0x310500003130, - 0x31a0000031c0, - 0x31f000003200, - 0x340000004dc0, - 0x4e000000a48d, - 0xa4d00000a4fe, - 0xa5000000a60d, - 0xa6100000a62c, - 0xa6410000a642, - 0xa6430000a644, - 0xa6450000a646, - 0xa6470000a648, - 0xa6490000a64a, - 0xa64b0000a64c, - 0xa64d0000a64e, - 0xa64f0000a650, - 0xa6510000a652, - 0xa6530000a654, - 0xa6550000a656, - 0xa6570000a658, - 0xa6590000a65a, - 0xa65b0000a65c, - 0xa65d0000a65e, - 0xa65f0000a660, - 0xa6610000a662, - 0xa6630000a664, - 0xa6650000a666, - 0xa6670000a668, - 0xa6690000a66a, - 0xa66b0000a66c, - 0xa66d0000a670, - 0xa6740000a67e, - 0xa67f0000a680, - 0xa6810000a682, - 0xa6830000a684, - 0xa6850000a686, - 0xa6870000a688, - 0xa6890000a68a, - 0xa68b0000a68c, - 0xa68d0000a68e, - 0xa68f0000a690, - 0xa6910000a692, - 0xa6930000a694, - 0xa6950000a696, - 0xa6970000a698, - 0xa6990000a69a, - 0xa69b0000a69c, - 0xa69e0000a6e6, - 0xa6f00000a6f2, - 0xa7170000a720, - 
0xa7230000a724, - 0xa7250000a726, - 0xa7270000a728, - 0xa7290000a72a, - 0xa72b0000a72c, - 0xa72d0000a72e, - 0xa72f0000a732, - 0xa7330000a734, - 0xa7350000a736, - 0xa7370000a738, - 0xa7390000a73a, - 0xa73b0000a73c, - 0xa73d0000a73e, - 0xa73f0000a740, - 0xa7410000a742, - 0xa7430000a744, - 0xa7450000a746, - 0xa7470000a748, - 0xa7490000a74a, - 0xa74b0000a74c, - 0xa74d0000a74e, - 0xa74f0000a750, - 0xa7510000a752, - 0xa7530000a754, - 0xa7550000a756, - 0xa7570000a758, - 0xa7590000a75a, - 0xa75b0000a75c, - 0xa75d0000a75e, - 0xa75f0000a760, - 0xa7610000a762, - 0xa7630000a764, - 0xa7650000a766, - 0xa7670000a768, - 0xa7690000a76a, - 0xa76b0000a76c, - 0xa76d0000a76e, - 0xa76f0000a770, - 0xa7710000a779, - 0xa77a0000a77b, - 0xa77c0000a77d, - 0xa77f0000a780, - 0xa7810000a782, - 0xa7830000a784, - 0xa7850000a786, - 0xa7870000a789, - 0xa78c0000a78d, - 0xa78e0000a790, - 0xa7910000a792, - 0xa7930000a796, - 0xa7970000a798, - 0xa7990000a79a, - 0xa79b0000a79c, - 0xa79d0000a79e, - 0xa79f0000a7a0, - 0xa7a10000a7a2, - 0xa7a30000a7a4, - 0xa7a50000a7a6, - 0xa7a70000a7a8, - 0xa7a90000a7aa, - 0xa7af0000a7b0, - 0xa7b50000a7b6, - 0xa7b70000a7b8, - 0xa7b90000a7ba, - 0xa7bb0000a7bc, - 0xa7bd0000a7be, - 0xa7bf0000a7c0, - 0xa7c10000a7c2, - 0xa7c30000a7c4, - 0xa7c80000a7c9, - 0xa7ca0000a7cb, - 0xa7d10000a7d2, - 0xa7d30000a7d4, - 0xa7d50000a7d6, - 0xa7d70000a7d8, - 0xa7d90000a7da, - 0xa7f20000a7f5, - 0xa7f60000a7f8, - 0xa7fa0000a828, - 0xa82c0000a82d, - 0xa8400000a874, - 0xa8800000a8c6, - 0xa8d00000a8da, - 0xa8e00000a8f8, - 0xa8fb0000a8fc, - 0xa8fd0000a92e, - 0xa9300000a954, - 0xa9800000a9c1, - 0xa9cf0000a9da, - 0xa9e00000a9ff, - 0xaa000000aa37, - 0xaa400000aa4e, - 0xaa500000aa5a, - 0xaa600000aa77, - 0xaa7a0000aac3, - 0xaadb0000aade, - 0xaae00000aaf0, - 0xaaf20000aaf7, - 0xab010000ab07, - 0xab090000ab0f, - 0xab110000ab17, - 0xab200000ab27, - 0xab280000ab2f, - 0xab300000ab5b, - 0xab600000ab69, - 0xabc00000abeb, - 0xabec0000abee, - 0xabf00000abfa, - 0xac000000d7a4, - 0xfa0e0000fa10, - 0xfa110000fa12, - 0xfa130000fa15, - 0xfa1f0000fa20, - 0xfa210000fa22, - 0xfa230000fa25, - 0xfa270000fa2a, - 0xfb1e0000fb1f, - 0xfe200000fe30, - 0xfe730000fe74, - 0x100000001000c, - 0x1000d00010027, - 0x100280001003b, - 0x1003c0001003e, - 0x1003f0001004e, - 0x100500001005e, - 0x10080000100fb, - 0x101fd000101fe, - 0x102800001029d, - 0x102a0000102d1, - 0x102e0000102e1, - 0x1030000010320, - 0x1032d00010341, - 0x103420001034a, - 0x103500001037b, - 0x103800001039e, - 0x103a0000103c4, - 0x103c8000103d0, - 0x104280001049e, - 0x104a0000104aa, - 0x104d8000104fc, - 0x1050000010528, - 0x1053000010564, - 0x10597000105a2, - 0x105a3000105b2, - 0x105b3000105ba, - 0x105bb000105bd, - 0x1060000010737, - 0x1074000010756, - 0x1076000010768, - 0x1078000010786, - 0x10787000107b1, - 0x107b2000107bb, - 0x1080000010806, - 0x1080800010809, - 0x1080a00010836, - 0x1083700010839, - 0x1083c0001083d, - 0x1083f00010856, - 0x1086000010877, - 0x108800001089f, - 0x108e0000108f3, - 0x108f4000108f6, - 0x1090000010916, - 0x109200001093a, - 0x10980000109b8, - 0x109be000109c0, - 0x10a0000010a04, - 0x10a0500010a07, - 0x10a0c00010a14, - 0x10a1500010a18, - 0x10a1900010a36, - 0x10a3800010a3b, - 0x10a3f00010a40, - 0x10a6000010a7d, - 0x10a8000010a9d, - 0x10ac000010ac8, - 0x10ac900010ae7, - 0x10b0000010b36, - 0x10b4000010b56, - 0x10b6000010b73, - 0x10b8000010b92, - 0x10c0000010c49, - 0x10cc000010cf3, - 0x10d0000010d28, - 0x10d3000010d3a, - 0x10e8000010eaa, - 0x10eab00010ead, - 0x10eb000010eb2, - 0x10efd00010f1d, - 0x10f2700010f28, - 0x10f3000010f51, - 0x10f7000010f86, - 0x10fb000010fc5, - 
0x10fe000010ff7, - 0x1100000011047, - 0x1106600011076, - 0x1107f000110bb, - 0x110c2000110c3, - 0x110d0000110e9, - 0x110f0000110fa, - 0x1110000011135, - 0x1113600011140, - 0x1114400011148, - 0x1115000011174, - 0x1117600011177, - 0x11180000111c5, - 0x111c9000111cd, - 0x111ce000111db, - 0x111dc000111dd, - 0x1120000011212, - 0x1121300011238, - 0x1123e00011242, - 0x1128000011287, - 0x1128800011289, - 0x1128a0001128e, - 0x1128f0001129e, - 0x1129f000112a9, - 0x112b0000112eb, - 0x112f0000112fa, - 0x1130000011304, - 0x113050001130d, - 0x1130f00011311, - 0x1131300011329, - 0x1132a00011331, - 0x1133200011334, - 0x113350001133a, - 0x1133b00011345, - 0x1134700011349, - 0x1134b0001134e, - 0x1135000011351, - 0x1135700011358, - 0x1135d00011364, - 0x113660001136d, - 0x1137000011375, - 0x114000001144b, - 0x114500001145a, - 0x1145e00011462, - 0x11480000114c6, - 0x114c7000114c8, - 0x114d0000114da, - 0x11580000115b6, - 0x115b8000115c1, - 0x115d8000115de, - 0x1160000011641, - 0x1164400011645, - 0x116500001165a, - 0x11680000116b9, - 0x116c0000116ca, - 0x117000001171b, - 0x1171d0001172c, - 0x117300001173a, - 0x1174000011747, - 0x118000001183b, - 0x118c0000118ea, - 0x118ff00011907, - 0x119090001190a, - 0x1190c00011914, - 0x1191500011917, - 0x1191800011936, - 0x1193700011939, - 0x1193b00011944, - 0x119500001195a, - 0x119a0000119a8, - 0x119aa000119d8, - 0x119da000119e2, - 0x119e3000119e5, - 0x11a0000011a3f, - 0x11a4700011a48, - 0x11a5000011a9a, - 0x11a9d00011a9e, - 0x11ab000011af9, - 0x11c0000011c09, - 0x11c0a00011c37, - 0x11c3800011c41, - 0x11c5000011c5a, - 0x11c7200011c90, - 0x11c9200011ca8, - 0x11ca900011cb7, - 0x11d0000011d07, - 0x11d0800011d0a, - 0x11d0b00011d37, - 0x11d3a00011d3b, - 0x11d3c00011d3e, - 0x11d3f00011d48, - 0x11d5000011d5a, - 0x11d6000011d66, - 0x11d6700011d69, - 0x11d6a00011d8f, - 0x11d9000011d92, - 0x11d9300011d99, - 0x11da000011daa, - 0x11ee000011ef7, - 0x11f0000011f11, - 0x11f1200011f3b, - 0x11f3e00011f43, - 0x11f5000011f5a, - 0x11fb000011fb1, - 0x120000001239a, - 0x1248000012544, - 0x12f9000012ff1, - 0x1300000013430, - 0x1344000013456, - 0x1440000014647, - 0x1680000016a39, - 0x16a4000016a5f, - 0x16a6000016a6a, - 0x16a7000016abf, - 0x16ac000016aca, - 0x16ad000016aee, - 0x16af000016af5, - 0x16b0000016b37, - 0x16b4000016b44, - 0x16b5000016b5a, - 0x16b6300016b78, - 0x16b7d00016b90, - 0x16e6000016e80, - 0x16f0000016f4b, - 0x16f4f00016f88, - 0x16f8f00016fa0, - 0x16fe000016fe2, - 0x16fe300016fe5, - 0x16ff000016ff2, - 0x17000000187f8, - 0x1880000018cd6, - 0x18d0000018d09, - 0x1aff00001aff4, - 0x1aff50001affc, - 0x1affd0001afff, - 0x1b0000001b123, - 0x1b1320001b133, - 0x1b1500001b153, - 0x1b1550001b156, - 0x1b1640001b168, - 0x1b1700001b2fc, - 0x1bc000001bc6b, - 0x1bc700001bc7d, - 0x1bc800001bc89, - 0x1bc900001bc9a, - 0x1bc9d0001bc9f, - 0x1cf000001cf2e, - 0x1cf300001cf47, - 0x1da000001da37, - 0x1da3b0001da6d, - 0x1da750001da76, - 0x1da840001da85, - 0x1da9b0001daa0, - 0x1daa10001dab0, - 0x1df000001df1f, - 0x1df250001df2b, - 0x1e0000001e007, - 0x1e0080001e019, - 0x1e01b0001e022, - 0x1e0230001e025, - 0x1e0260001e02b, - 0x1e0300001e06e, - 0x1e08f0001e090, - 0x1e1000001e12d, - 0x1e1300001e13e, - 0x1e1400001e14a, - 0x1e14e0001e14f, - 0x1e2900001e2af, - 0x1e2c00001e2fa, - 0x1e4d00001e4fa, - 0x1e7e00001e7e7, - 0x1e7e80001e7ec, - 0x1e7ed0001e7ef, - 0x1e7f00001e7ff, - 0x1e8000001e8c5, - 0x1e8d00001e8d7, - 0x1e9220001e94c, - 0x1e9500001e95a, - 0x200000002a6e0, - 0x2a7000002b73a, - 0x2b7400002b81e, - 0x2b8200002cea2, - 0x2ceb00002ebe1, - 0x300000003134b, - 0x31350000323b0, - ), - 'CONTEXTJ': ( - 0x200c0000200e, - 
), - 'CONTEXTO': ( - 0xb7000000b8, - 0x37500000376, - 0x5f3000005f5, - 0x6600000066a, - 0x6f0000006fa, - 0x30fb000030fc, - ), -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/__init__.py deleted file mode 100644 index c6fa38212fb559a9b51fe36b72892839efae63f5..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/__init__.py +++ /dev/null @@ -1,102 +0,0 @@ -""" -Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more -""" -from __future__ import absolute_import - -# Set default logging handler to avoid "No handler found" warnings. -import logging -import warnings -from logging import NullHandler - -from . import exceptions -from ._version import __version__ -from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url -from .filepost import encode_multipart_formdata -from .poolmanager import PoolManager, ProxyManager, proxy_from_url -from .response import HTTPResponse -from .util.request import make_headers -from .util.retry import Retry -from .util.timeout import Timeout -from .util.url import get_host - -# === NOTE TO REPACKAGERS AND VENDORS === -# Please delete this block, this logic is only -# for urllib3 being distributed via PyPI. -# See: https://github.com/urllib3/urllib3/issues/2680 -try: - import urllib3_secure_extra # type: ignore # noqa: F401 -except ImportError: - pass -else: - warnings.warn( - "'urllib3[secure]' extra is deprecated and will be removed " - "in a future release of urllib3 2.x. Read more in this issue: " - "https://github.com/urllib3/urllib3/issues/2680", - category=DeprecationWarning, - stacklevel=2, - ) - -__author__ = "Andrey Petrov (andrey.petrov@shazow.net)" -__license__ = "MIT" -__version__ = __version__ - -__all__ = ( - "HTTPConnectionPool", - "HTTPSConnectionPool", - "PoolManager", - "ProxyManager", - "HTTPResponse", - "Retry", - "Timeout", - "add_stderr_logger", - "connection_from_url", - "disable_warnings", - "encode_multipart_formdata", - "get_host", - "make_headers", - "proxy_from_url", -) - -logging.getLogger(__name__).addHandler(NullHandler()) - - -def add_stderr_logger(level=logging.DEBUG): - """ - Helper for quickly adding a StreamHandler to the logger. Useful for - debugging. - - Returns the handler after adding it. - """ - # This method needs to be in this __init__.py to get the __name__ correct - # even if urllib3 is vendored within another package. - logger = logging.getLogger(__name__) - handler = logging.StreamHandler() - handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s")) - logger.addHandler(handler) - logger.setLevel(level) - logger.debug("Added a stderr logging handler to logger: %s", __name__) - return handler - - -# ... Clean up. -del NullHandler - - -# All warning filters *must* be appended unless you're really certain that they -# shouldn't be: otherwise, it's very hard for users to use most Python -# mechanisms to silence them. -# SecurityWarning's always go off by default. -warnings.simplefilter("always", exceptions.SecurityWarning, append=True) -# SubjectAltNameWarning's should go off once per host -warnings.simplefilter("default", exceptions.SubjectAltNameWarning, append=True) -# InsecurePlatformWarning's don't vary between requests, so we keep it default. 
-warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True) -# SNIMissingWarnings should go off only once. -warnings.simplefilter("default", exceptions.SNIMissingWarning, append=True) - - -def disable_warnings(category=exceptions.HTTPWarning): - """ - Helper for quickly disabling all urllib3 warnings. - """ - warnings.simplefilter("ignore", category) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/__init__.py deleted file mode 100644 index 3c50c5dcfeeda2efed282200a5c5cc8c5f7542f7..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -from .__about__ import ( - __author__, - __copyright__, - __email__, - __license__, - __summary__, - __title__, - __uri__, - __version__, -) - -__all__ = [ - "__title__", - "__summary__", - "__uri__", - "__version__", - "__author__", - "__email__", - "__license__", - "__copyright__", -] diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/abstract/updateable_api_resource.py b/spaces/Boadiwaa/Recipes/openai/api_resources/abstract/updateable_api_resource.py deleted file mode 100644 index e7289d12d3005217c0d2cffb771a9cdf6696998a..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/api_resources/abstract/updateable_api_resource.py +++ /dev/null @@ -1,10 +0,0 @@ -from urllib.parse import quote_plus - -from openai.api_resources.abstract.api_resource import APIResource - - -class UpdateableAPIResource(APIResource): - @classmethod - def modify(cls, sid, **params): - url = "%s/%s" % (cls.class_url(), quote_plus(sid)) - return cls._static_request("post", url, **params) diff --git a/spaces/CVPR/LIVE/thrust/thrust/adjacent_difference.h b/spaces/CVPR/LIVE/thrust/thrust/adjacent_difference.h deleted file mode 100644 index 838beabe5fd62ab6bba85ec5e12319a587f9accc..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/adjacent_difference.h +++ /dev/null @@ -1,246 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file adjacent_difference.h - * \brief Compute difference between consecutive elements of a range - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup transformations Transformations - * \{ - */ - - -/*! \p adjacent_difference calculates the differences of adjacent elements in the - * range [first, last). That is, \*first is assigned to - * \*result, and, for each iterator \p i in the range - * [first + 1, last), the difference of \*i and *(i - 1) - * is assigned to \*(result + (i - first)). 
- * - * This version of \p adjacent_difference uses operator- to calculate - * differences. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the input range. - * \param last The end of the input range. - * \param result The beginning of the output range. - * \return The iterator result + (last - first) - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator is a model of Input Iterator, - * and \c x and \c y are objects of \p InputIterator's \c value_type, then \c x - \c is defined, - * and \p InputIterator's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types, - * and the return type of x - y is convertible to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \remark Note that \p result is permitted to be the same iterator as \p first. This is - * useful for computing differences "in place". - * - * The following code snippet demonstrates how to use \p adjacent_difference to compute - * the difference between adjacent elements of a range using the \p thrust::device execution policy: - * - * \code - * #include - * #include - * #include - * ... - * int h_data[8] = {1, 2, 1, 2, 1, 2, 1, 2}; - * thrust::device_vector d_data(h_data, h_data + 8); - * thrust::device_vector d_result(8); - * - * thrust::adjacent_difference(thrust::device, d_data.begin(), d_data.end(), d_result.begin()); - * - * // d_result is now [1, 1, -1, 1, -1, 1, -1, 1] - * \endcode - * - * \see http://www.sgi.com/tech/stl/adjacent_difference.html - * \see inclusive_scan - */ -template -__host__ __device__ -OutputIterator adjacent_difference(const thrust::detail::execution_policy_base &exec, - InputIterator first, InputIterator last, - OutputIterator result); - -/*! \p adjacent_difference calculates the differences of adjacent elements in the - * range [first, last). That is, *first is assigned to - * \*result, and, for each iterator \p i in the range - * [first + 1, last), binary_op(\*i, \*(i - 1)) is assigned to - * \*(result + (i - first)). - * - * This version of \p adjacent_difference uses the binary function \p binary_op to - * calculate differences. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the input range. - * \param last The end of the input range. - * \param result The beginning of the output range. - * \param binary_op The binary function used to compute differences. - * \return The iterator result + (last - first) - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator is a model of Input Iterator, - * and \p InputIterator's \c value_type is convertible to \p BinaryFunction's \c first_argument_type and \c second_argument_type, - * and \p InputIterator's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam BinaryFunction's \c result_type is convertible to a type in \p OutputIterator's set of \c value_types. - * - * \remark Note that \p result is permitted to be the same iterator as \p first. This is - * useful for computing differences "in place". 
- * - * The following code snippet demonstrates how to use \p adjacent_difference to compute - * the sum between adjacent elements of a range using the \p thrust::device execution policy: - * - * \code - * #include - * #include - * #include - * #include - * ... - * int h_data[8] = {1, 2, 1, 2, 1, 2, 1, 2}; - * thrust::device_vector d_data(h_data, h_data + 8); - * thrust::device_vector d_result(8); - * - * thrust::adjacent_difference(thrust::device, d_data.begin(), d_data.end(), d_result.begin(), thrust::plus()); - * - * // d_result is now [1, 3, 3, 3, 3, 3, 3, 3] - * \endcode - * - * \see http://www.sgi.com/tech/stl/adjacent_difference.html - * \see inclusive_scan - */ -template -__host__ __device__ -OutputIterator adjacent_difference(const thrust::detail::execution_policy_base &exec, - InputIterator first, InputIterator last, - OutputIterator result, - BinaryFunction binary_op); - -/*! \p adjacent_difference calculates the differences of adjacent elements in the - * range [first, last). That is, \*first is assigned to - * \*result, and, for each iterator \p i in the range - * [first + 1, last), the difference of \*i and *(i - 1) - * is assigned to \*(result + (i - first)). - * - * This version of \p adjacent_difference uses operator- to calculate - * differences. - * - * \param first The beginning of the input range. - * \param last The end of the input range. - * \param result The beginning of the output range. - * \return The iterator result + (last - first) - * - * \tparam InputIterator is a model of Input Iterator, - * and \c x and \c y are objects of \p InputIterator's \c value_type, then \c x - \c is defined, - * and \p InputIterator's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types, - * and the return type of x - y is convertible to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \remark Note that \p result is permitted to be the same iterator as \p first. This is - * useful for computing differences "in place". - * - * The following code snippet demonstrates how to use \p adjacent_difference to compute - * the difference between adjacent elements of a range. - * - * \code - * #include - * #include - * ... - * int h_data[8] = {1, 2, 1, 2, 1, 2, 1, 2}; - * thrust::device_vector d_data(h_data, h_data + 8); - * thrust::device_vector d_result(8); - * - * thrust::adjacent_difference(d_data.begin(), d_data.end(), d_result.begin()); - * - * // d_result is now [1, 1, -1, 1, -1, 1, -1, 1] - * \endcode - * - * \see http://www.sgi.com/tech/stl/adjacent_difference.html - * \see inclusive_scan - */ -template -OutputIterator adjacent_difference(InputIterator first, InputIterator last, - OutputIterator result); - -/*! \p adjacent_difference calculates the differences of adjacent elements in the - * range [first, last). That is, *first is assigned to - * \*result, and, for each iterator \p i in the range - * [first + 1, last), binary_op(\*i, \*(i - 1)) is assigned to - * \*(result + (i - first)). - * - * This version of \p adjacent_difference uses the binary function \p binary_op to - * calculate differences. - * - * \param first The beginning of the input range. - * \param last The end of the input range. - * \param result The beginning of the output range. - * \param binary_op The binary function used to compute differences. 
- * \return The iterator result + (last - first) - * - * \tparam InputIterator is a model of Input Iterator, - * and \p InputIterator's \c value_type is convertible to \p BinaryFunction's \c first_argument_type and \c second_argument_type, - * and \p InputIterator's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam BinaryFunction's \c result_type is convertible to a type in \p OutputIterator's set of \c value_types. - * - * \remark Note that \p result is permitted to be the same iterator as \p first. This is - * useful for computing differences "in place". - * - * The following code snippet demonstrates how to use \p adjacent_difference to compute - * the sum between adjacent elements of a range. - * - * \code - * #include - * #include - * #include - * ... - * int h_data[8] = {1, 2, 1, 2, 1, 2, 1, 2}; - * thrust::device_vector d_data(h_data, h_data + 8); - * thrust::device_vector d_result(8); - * - * thrust::adjacent_difference(d_data.begin(), d_data.end(), d_result.begin(), thrust::plus()); - * - * // d_result is now [1, 3, 3, 3, 3, 3, 3, 3] - * \endcode - * - * \see http://www.sgi.com/tech/stl/adjacent_difference.html - * \see inclusive_scan - */ -template -OutputIterator adjacent_difference(InputIterator first, InputIterator last, - OutputIterator result, - BinaryFunction binary_op); - -/*! \} - */ - -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/polymorphic_adaptor.h b/spaces/CVPR/LIVE/thrust/thrust/mr/polymorphic_adaptor.h deleted file mode 100644 index d5d98bf8382e9544605a6689e4bc2611b55f960d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/mr/polymorphic_adaptor.h +++ /dev/null @@ -1,56 +0,0 @@ -/* - * Copyright 2018-2019 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include "memory_resource.h" - -namespace thrust -{ -namespace mr -{ - -template -class polymorphic_adaptor_resource THRUST_FINAL : public memory_resource -{ -public: - polymorphic_adaptor_resource(memory_resource * t) : upstream_resource(t) - { - } - - virtual Pointer do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - return upstream_resource->allocate(bytes, alignment); - } - - virtual void do_deallocate(Pointer p, std::size_t bytes, std::size_t alignment) THRUST_OVERRIDE - { - return upstream_resource->deallocate(p, bytes, alignment); - } - - __host__ __device__ - virtual bool do_is_equal(const memory_resource & other) const THRUST_NOEXCEPT THRUST_OVERRIDE - { - return upstream_resource->is_equal(other); - } - -private: - memory_resource * upstream_resource; -}; - -} // end mr -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/unique_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/unique_by_key.h deleted file mode 100644 index dcf9acd42cd730a8f42bedb01407cf75137a86fb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/unique_by_key.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a fill of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the unique_by_key.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch unique_by_key - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_UNIQUE_BY_KEY_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/unique_by_key.h> -#include __THRUST_HOST_SYSTEM_UNIQUE_BY_KEY_HEADER -#undef __THRUST_HOST_SYSTEM_UNIQUE_BY_KEY_HEADER - -#define __THRUST_DEVICE_SYSTEM_UNIQUE_BY_KEY_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/unique_by_key.h> -#include __THRUST_DEVICE_SYSTEM_UNIQUE_BY_KEY_HEADER -#undef __THRUST_DEVICE_SYSTEM_UNIQUE_BY_KEY_HEADER - diff --git a/spaces/CVPR/TokenCut/app.py b/spaces/CVPR/TokenCut/app.py deleted file mode 100644 index d02400df0101b8023d4a31b00950bb18ea1cc028..0000000000000000000000000000000000000000 --- a/spaces/CVPR/TokenCut/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import os -import gradio as gr -from pathlib import Path - - -os.system("git clone https://github.com/YangtaoWANG95/TokenCut.git") -os.chdir("TokenCut") -os.system("wget https://raw.githubusercontent.com/YangtaoWANG95/TokenCut/master/examples/VOC07_000064.jpg -O parrot.jpg") - -def inference(img): - os.system("python main_tokencut.py --image_path "+img+" --visualize all --resize 320") - filename = Path(img).stem - return "./outputs/TokenCut-vit_small16_k/"+filename+"_TokenCut_attn.jpg","./outputs/TokenCut-vit_small16_k/"+filename+"_TokenCut_pred.jpg" - -title="TokenCut" -description="Gradio demo for TokenCut: Self-Supervised Transformers for Unsupervised Object Discovery using Normalized Cut. To use it, simply upload your image or click on one of the examples to load them. We resize the smaller edge of the image to 320 to accelerate inference time. Read more at the links below" - -article = "
Self-Supervised Transformers for Unsupervised Object Discovery using Normalized Cut | Github Repo
" - -examples=[['parrot.jpg']] -gr.Interface(inference,gr.inputs.Image(type="filepath"),[gr.outputs.Image(type="filepath",label="TokenCut_attn"),gr.outputs.Image(type="filepath",label="TokenCut_predication")],title=title,description=description,article=article,examples=examples).launch(enable_queue=True) - diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/masks/countless/countless2d.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/masks/countless/countless2d.py deleted file mode 100644 index dc27b73affa20ab1a8a199542469a10aaf1f555a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/evaluation/masks/countless/countless2d.py +++ /dev/null @@ -1,529 +0,0 @@ -from __future__ import print_function, division - -""" -COUNTLESS performance test in Python. - -python countless2d.py ./images/NAMEOFIMAGE -""" - -import six -from six.moves import range -from collections import defaultdict -from functools import reduce -import operator -import io -import os -from PIL import Image -import math -import numpy as np -import random -import sys -import time -from tqdm import tqdm -from scipy import ndimage - -def simplest_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab = a * (a == b) # PICK(A,B) - ac = a * (a == c) # PICK(A,C) - bc = b * (b == c) # PICK(B,C) - - a = ab | ac | bc # Bitwise OR, safe b/c non-matches are zeroed - - return a + (a == 0) * d # AB || AC || BC || D - -def quick_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - bc = b * (b == c) # PICK(B,C) - - a = ab_ac | bc # (PICK(A,B) || PICK(A,C)) or PICK(B,C) - return a + (a == 0) * d # AB || AC || BC || D - -def quickest_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
- factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - ab_ac |= b * (b == c) # PICK(B,C) - return ab_ac + (ab_ac == 0) * d # AB || AC || BC || D - -def quick_countless_xor(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab = a ^ (a ^ b) # a or b - ab += (ab != a) * ((ab ^ (ab ^ c)) - b) # b or c - ab += (ab == c) * ((ab ^ (ab ^ d)) - c) # c or d - return ab - -def stippled_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm - that treats zero as "background" and inflates lone - pixels. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - ab_ac |= b * (b == c) # PICK(B,C) - - nonzero = a + (a == 0) * (b + (b == 0) * c) - return ab_ac + (ab_ac == 0) * (d + (d == 0) * nonzero) # AB || AC || BC || D - -def zero_corrected_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - # allows us to prevent losing 1/2 a bit of information - # at the top end by using a bigger type. Without this 255 is handled incorrectly. - data, upgraded = upgrade_type(data) - - # offset from zero, raw countless doesn't handle 0 correctly - # we'll remove the extra 1 at the end. - data += 1 - - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
- factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab = a * (a == b) # PICK(A,B) - ac = a * (a == c) # PICK(A,C) - bc = b * (b == c) # PICK(B,C) - - a = ab | ac | bc # Bitwise OR, safe b/c non-matches are zeroed - - result = a + (a == 0) * d - 1 # a or d - 1 - - if upgraded: - return downgrade_type(result) - - # only need to reset data if we weren't upgraded - # b/c no copy was made in that case - data -= 1 - - return result - -def countless_extreme(data): - nonzeros = np.count_nonzero(data) - # print("nonzeros", nonzeros) - - N = reduce(operator.mul, data.shape) - - if nonzeros == N: - print("quick") - return quick_countless(data) - elif np.count_nonzero(data + 1) == N: - print("quick") - # print("upper", nonzeros) - return quick_countless(data) - else: - return countless(data) - - -def countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - # allows us to prevent losing 1/2 a bit of information - # at the top end by using a bigger type. Without this 255 is handled incorrectly. - data, upgraded = upgrade_type(data) - - # offset from zero, raw countless doesn't handle 0 correctly - # we'll remove the extra 1 at the end. - data += 1 - - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - ab_ac |= b * (b == c) # PICK(B,C) - result = ab_ac + (ab_ac == 0) * d - 1 # (matches or d) - 1 - - if upgraded: - return downgrade_type(result) - - # only need to reset data if we weren't upgraded - # b/c no copy was made in that case - data -= 1 - - return result - -def upgrade_type(arr): - dtype = arr.dtype - - if dtype == np.uint8: - return arr.astype(np.uint16), True - elif dtype == np.uint16: - return arr.astype(np.uint32), True - elif dtype == np.uint32: - return arr.astype(np.uint64), True - - return arr, False - -def downgrade_type(arr): - dtype = arr.dtype - - if dtype == np.uint64: - return arr.astype(np.uint32) - elif dtype == np.uint32: - return arr.astype(np.uint16) - elif dtype == np.uint16: - return arr.astype(np.uint8) - - return arr - -def odd_to_even(image): - """ - To facilitate 2x2 downsampling segmentation, change an odd sized image into an even sized one. - Works by mirroring the starting 1 pixel edge of the image on odd shaped sides. - - e.g. turn a 3x3x5 image into a 4x4x5 (the x and y are what are getting downsampled) - - For example: [ 3, 2, 4 ] => [ 3, 3, 2, 4 ] which is now easy to downsample. - - """ - shape = np.array(image.shape) - - offset = (shape % 2)[:2] # x,y offset - - # detect if we're dealing with an even - # image. if so it's fine, just return. 
- if not np.any(offset): - return image - - oddshape = image.shape[:2] + offset - oddshape = np.append(oddshape, shape[2:]) - oddshape = oddshape.astype(int) - - newimg = np.empty(shape=oddshape, dtype=image.dtype) - - ox,oy = offset - sx,sy = oddshape - - newimg[0,0] = image[0,0] # corner - newimg[ox:sx,0] = image[:,0] # x axis line - newimg[0,oy:sy] = image[0,:] # y axis line - - return newimg - -def counting(array): - factor = (2, 2, 1) - shape = array.shape - - while len(shape) < 4: - array = np.expand_dims(array, axis=-1) - shape = array.shape - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(shape, factor)) - output = np.zeros(output_shape, dtype=array.dtype) - - for chan in range(0, shape[3]): - for z in range(0, shape[2]): - for x in range(0, shape[0], 2): - for y in range(0, shape[1], 2): - block = array[ x:x+2, y:y+2, z, chan ] # 2x2 block - - hashtable = defaultdict(int) - for subx, suby in np.ndindex(block.shape[0], block.shape[1]): - hashtable[block[subx, suby]] += 1 - - best = (0, 0) - for segid, val in six.iteritems(hashtable): - if best[1] < val: - best = (segid, val) - - output[ x // 2, y // 2, chan ] = best[0] - - return output - -def ndzoom(array): - if len(array.shape) == 3: - ratio = ( 1 / 2.0, 1 / 2.0, 1.0 ) - else: - ratio = ( 1 / 2.0, 1 / 2.0) - return ndimage.interpolation.zoom(array, ratio, order=1) - -def countless_if(array): - factor = (2, 2, 1) - shape = array.shape - - if len(shape) < 3: - array = array[ :,:, np.newaxis ] - shape = array.shape - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(shape, factor)) - output = np.zeros(output_shape, dtype=array.dtype) - - for chan in range(0, shape[2]): - for x in range(0, shape[0], 2): - for y in range(0, shape[1], 2): - block = array[ x:x+2, y:y+2, chan ] # 2x2 block - - if block[0,0] == block[1,0]: - pick = block[0,0] - elif block[0,0] == block[0,1]: - pick = block[0,0] - elif block[1,0] == block[0,1]: - pick = block[1,0] - else: - pick = block[1,1] - - output[ x // 2, y // 2, chan ] = pick - - return np.squeeze(output) - -def downsample_with_averaging(array): - """ - Downsample x by factor using averaging. - - @return: The downsampled array, of the same type as x. - """ - - if len(array.shape) == 3: - factor = (2,2,1) - else: - factor = (2,2) - - if np.array_equal(factor[:3], np.array([1,1,1])): - return array - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(array.shape, factor)) - temp = np.zeros(output_shape, float) - counts = np.zeros(output_shape, np.int) - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - indexing_expr = tuple(np.s_[:s] for s in part.shape) - temp[indexing_expr] += part - counts[indexing_expr] += 1 - return np.cast[array.dtype](temp / counts) - -def downsample_with_max_pooling(array): - - factor = (2,2) - - if np.all(np.array(factor, int) == 1): - return array - - sections = [] - - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - output = sections[0].copy() - - for section in sections[1:]: - np.maximum(output, section, output) - - return output - -def striding(array): - """Downsample x by factor using striding. - - @return: The downsampled array, of the same type as x. 
- """ - factor = (2,2) - if np.all(np.array(factor, int) == 1): - return array - return array[tuple(np.s_[::f] for f in factor)] - -def benchmark(): - filename = sys.argv[1] - img = Image.open(filename) - data = np.array(img.getdata(), dtype=np.uint8) - - if len(data.shape) == 1: - n_channels = 1 - reshape = (img.height, img.width) - else: - n_channels = min(data.shape[1], 3) - data = data[:, :n_channels] - reshape = (img.height, img.width, n_channels) - - data = data.reshape(reshape).astype(np.uint8) - - methods = [ - simplest_countless, - quick_countless, - quick_countless_xor, - quickest_countless, - stippled_countless, - zero_corrected_countless, - countless, - downsample_with_averaging, - downsample_with_max_pooling, - ndzoom, - striding, - # countless_if, - # counting, - ] - - formats = { - 1: 'L', - 3: 'RGB', - 4: 'RGBA' - } - - if not os.path.exists('./results'): - os.mkdir('./results') - - N = 500 - img_size = float(img.width * img.height) / 1024.0 / 1024.0 - print("N = %d, %dx%d (%.2f MPx) %d chan, %s" % (N, img.width, img.height, img_size, n_channels, filename)) - print("Algorithm\tMPx/sec\tMB/sec\tSec") - for fn in methods: - print(fn.__name__, end='') - sys.stdout.flush() - - start = time.time() - # tqdm is here to show you what's going on the first time you run it. - # Feel free to remove it to get slightly more accurate timing results. - for _ in tqdm(range(N), desc=fn.__name__, disable=True): - result = fn(data) - end = time.time() - print("\r", end='') - - total_time = (end - start) - mpx = N * img_size / total_time - mbytes = N * img_size * n_channels / total_time - # Output in tab separated format to enable copy-paste into excel/numbers - print("%s\t%.3f\t%.3f\t%.2f" % (fn.__name__, mpx, mbytes, total_time)) - outimg = Image.fromarray(np.squeeze(result), formats[n_channels]) - outimg.save('./results/{}.png'.format(fn.__name__, "PNG")) - -if __name__ == '__main__': - benchmark() - - -# Example results: -# N = 5, 1024x1024 (1.00 MPx) 1 chan, images/gray_segmentation.png -# Function MPx/sec MB/sec Sec -# simplest_countless 752.855 752.855 0.01 -# quick_countless 920.328 920.328 0.01 -# zero_corrected_countless 534.143 534.143 0.01 -# countless 644.247 644.247 0.01 -# downsample_with_averaging 372.575 372.575 0.01 -# downsample_with_max_pooling 974.060 974.060 0.01 -# ndzoom 137.517 137.517 0.04 -# striding 38550.588 38550.588 0.00 -# countless_if 4.377 4.377 1.14 -# counting 0.117 0.117 42.85 - -# Run without non-numpy implementations: -# N = 2000, 1024x1024 (1.00 MPx) 1 chan, images/gray_segmentation.png -# Algorithm MPx/sec MB/sec Sec -# simplest_countless 800.522 800.522 2.50 -# quick_countless 945.420 945.420 2.12 -# quickest_countless 947.256 947.256 2.11 -# stippled_countless 544.049 544.049 3.68 -# zero_corrected_countless 575.310 575.310 3.48 -# countless 646.684 646.684 3.09 -# downsample_with_averaging 385.132 385.132 5.19 -# downsample_with_max_poolin 988.361 988.361 2.02 -# ndzoom 163.104 163.104 12.26 -# striding 81589.340 81589.340 0.02 - - - - diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/can_can_need/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/can_can_need/__init__.py deleted file mode 100644 index b10f4784d0663ba7d59e726e9e97efb44dd19967..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/can_can_need/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from pathlib import Path -from typing import List - -from meme_generator import add_meme -from pil_utils import BuildImage - -img_dir = 
Path(__file__).parent / "images" - - -def can_can_need(images: List[BuildImage], texts, args): - frame = BuildImage.open(img_dir / "0.jpg") - frame.paste( - images[1].convert("RGBA").circle().resize((340, 340)), (120, 21), alpha=True - ).paste( - images[0].convert("RGBA").circle().resize((300, 300)), (611, 718), alpha=True - ) - return frame.save_jpg() - - -add_meme("can_can_need", can_can_need, min_images=2, max_images=2, keywords=["看看你的"]) diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/my_friend/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/my_friend/__init__.py deleted file mode 100644 index 21b8aa44954389a72c029a2bf52ddd3a4b6b190a..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/my_friend/__init__.py +++ /dev/null @@ -1,79 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage, Text2Image -from pydantic import Field - -from meme_generator import MemeArgsModel, MemeArgsParser, MemeArgsType, add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - -help = "指定名字" - -parser = MemeArgsParser() -parser.add_argument("-n", "--name", type=str, default="", help=help) - - -class Model(MemeArgsModel): - name: str = Field("", description=help) - - -def my_friend(images: List[BuildImage], texts: List[str], args: Model): - name = args.name or (args.user_infos[-1].name if args.user_infos else "") or "朋友" - img = images[0].convert("RGBA").circle().resize((100, 100)) - - name_img = Text2Image.from_text(name, 25, fill="#868894").to_image() - name_w, name_h = name_img.size - if name_w >= 600: - raise TextOverLength(name) - - corner1 = BuildImage.open(img_dir / "corner1.png") - corner2 = BuildImage.open(img_dir / "corner2.png") - corner3 = BuildImage.open(img_dir / "corner3.png") - corner4 = BuildImage.open(img_dir / "corner4.png") - label = BuildImage.open(img_dir / "label.png") - - def make_dialog(text: str) -> BuildImage: - text_img = Text2Image.from_text(text, 40).wrap(600).to_image() - text_w, text_h = text_img.size - box_w = max(text_w, name_w + 15) + 140 - box_h = max(text_h + 103, 150) - box = BuildImage.new("RGBA", (box_w, box_h)) - box.paste(corner1, (0, 0)) - box.paste(corner2, (0, box_h - 75)) - box.paste(corner3, (text_w + 70, 0)) - box.paste(corner4, (text_w + 70, box_h - 75)) - box.paste(BuildImage.new("RGBA", (text_w, box_h - 40), "white"), (70, 20)) - box.paste(BuildImage.new("RGBA", (text_w + 88, box_h - 150), "white"), (27, 75)) - box.paste(text_img, (70, 17 + (box_h - 40 - text_h) // 2), alpha=True) - - dialog = BuildImage.new("RGBA", (box.width + 130, box.height + 60), "#eaedf4") - dialog.paste(img, (20, 20), alpha=True) - dialog.paste(box, (130, 60), alpha=True) - dialog.paste(label, (160, 25)) - dialog.paste(name_img, (260, 22 + (35 - name_h) // 2), alpha=True) - return dialog - - dialogs = [make_dialog(text) for text in texts] - frame_w = max((dialog.width for dialog in dialogs)) - frame_h = sum((dialog.height for dialog in dialogs)) - frame = BuildImage.new("RGBA", (frame_w, frame_h), "#eaedf4") - current_h = 0 - for dialog in dialogs: - frame.paste(dialog, (0, current_h)) - current_h += dialog.height - return frame.save_jpg() - - -add_meme( - "my_friend", - my_friend, - min_images=1, - max_images=1, - min_texts=1, - max_texts=10, - default_texts=["让我康康"], - args_type=MemeArgsType(parser, Model), - keywords=["我朋友说"], -) diff --git a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/__init__.py 
b/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/video_processor.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/video_processor.py deleted file mode 100644 index b272e318230c544748818476bbe4caa1fc9f847d..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/video_processor.py +++ /dev/null @@ -1,237 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import torch -from video_llama.common.registry import registry -from decord import VideoReader -import decord -import numpy as np -from video_llama.processors import transforms_video -from video_llama.processors.base_processor import BaseProcessor -from video_llama.processors.randaugment import VideoRandomAugment -from video_llama.processors import functional_video as F -from omegaconf import OmegaConf -from torchvision import transforms -import random as rnd - - -MAX_INT = registry.get("MAX_INT") -decord.bridge.set_bridge("torch") - -def load_video(video_path, n_frms=MAX_INT, height=-1, width=-1, sampling="uniform", return_msg = False): - decord.bridge.set_bridge("torch") - vr = VideoReader(uri=video_path, height=height, width=width) - - vlen = len(vr) - start, end = 0, vlen - - n_frms = min(n_frms, vlen) - - if sampling == "uniform": - indices = np.arange(start, end, vlen / n_frms).astype(int).tolist() - elif sampling == "headtail": - indices_h = sorted(rnd.sample(range(vlen // 2), n_frms // 2)) - indices_t = sorted(rnd.sample(range(vlen // 2, vlen), n_frms // 2)) - indices = indices_h + indices_t - else: - raise NotImplementedError - - # get_batch -> T, H, W, C - temp_frms = vr.get_batch(indices) - # print(type(temp_frms)) - tensor_frms = torch.from_numpy(temp_frms) if type(temp_frms) is not torch.Tensor else temp_frms - frms = tensor_frms.permute(3, 0, 1, 2).float() # (C, T, H, W) - - if not return_msg: - return frms - - fps = float(vr.get_avg_fps()) - sec = ", ".join([str(round(f / fps, 1)) for f in indices]) - # " " should be added in the start and end - msg = f"The video contains {len(indices)} frames sampled at {sec} seconds. 
" - return frms, msg - - -class AlproVideoBaseProcessor(BaseProcessor): - def __init__(self, mean=None, std=None, n_frms=MAX_INT): - if mean is None: - mean = (0.48145466, 0.4578275, 0.40821073) - if std is None: - std = (0.26862954, 0.26130258, 0.27577711) - - self.normalize = transforms_video.NormalizeVideo(mean, std) - - self.n_frms = n_frms - - -class ToUint8(object): - def __init__(self): - pass - - def __call__(self, tensor): - return tensor.to(torch.uint8) - - def __repr__(self): - return self.__class__.__name__ - - -class ToTHWC(object): - """ - Args: - clip (torch.tensor, dtype=torch.uint8): Size is (C, T, H, W) - Return: - clip (torch.tensor, dtype=torch.float): Size is (T, H, W, C) - """ - - def __init__(self): - pass - - def __call__(self, tensor): - return tensor.permute(1, 2, 3, 0) - - def __repr__(self): - return self.__class__.__name__ - - -class ResizeVideo(object): - def __init__(self, target_size, interpolation_mode="bilinear"): - self.target_size = target_size - self.interpolation_mode = interpolation_mode - - def __call__(self, clip): - """ - Args: - clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W) - Returns: - torch.tensor: central cropping of video clip. Size is - (C, T, crop_size, crop_size) - """ - return F.resize(clip, self.target_size, self.interpolation_mode) - - def __repr__(self): - return self.__class__.__name__ + "(resize_size={0})".format(self.target_size) - - -@registry.register_processor("alpro_video_train") -class AlproVideoTrainProcessor(AlproVideoBaseProcessor): - def __init__( - self, - image_size=384, - mean=None, - std=None, - min_scale=0.5, - max_scale=1.0, - n_frms=MAX_INT, - ): - super().__init__(mean=mean, std=std, n_frms=n_frms) - - self.image_size = image_size - - self.transform = transforms.Compose( - [ - # Video size is (C, T, H, W) - transforms_video.RandomResizedCropVideo( - image_size, - scale=(min_scale, max_scale), - interpolation_mode="bicubic", - ), - ToTHWC(), # C, T, H, W -> T, H, W, C - ToUint8(), - transforms_video.ToTensorVideo(), # T, H, W, C -> C, T, H, W - self.normalize, - ] - ) - - def __call__(self, vpath): - """ - Args: - clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W) - Returns: - torch.tensor: video clip after transforms. Size is (C, T, size, size). - """ - clip = load_video( - video_path=vpath, - n_frms=self.n_frms, - height=self.image_size, - width=self.image_size, - sampling="headtail", - ) - - return self.transform(clip) - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - image_size = cfg.get("image_size", 256) - - mean = cfg.get("mean", None) - std = cfg.get("std", None) - - min_scale = cfg.get("min_scale", 0.5) - max_scale = cfg.get("max_scale", 1.0) - - n_frms = cfg.get("n_frms", MAX_INT) - - return cls( - image_size=image_size, - mean=mean, - std=std, - min_scale=min_scale, - max_scale=max_scale, - n_frms=n_frms, - ) - - -@registry.register_processor("alpro_video_eval") -class AlproVideoEvalProcessor(AlproVideoBaseProcessor): - def __init__(self, image_size=256, mean=None, std=None, n_frms=MAX_INT): - super().__init__(mean=mean, std=std, n_frms=n_frms) - - self.image_size = image_size - - # Input video size is (C, T, H, W) - self.transform = transforms.Compose( - [ - # frames will be resized during decord loading. 
- ToUint8(), # C, T, H, W - ToTHWC(), # T, H, W, C - transforms_video.ToTensorVideo(), # C, T, H, W - self.normalize, # C, T, H, W - ] - ) - - def __call__(self, vpath): - """ - Args: - clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W) - Returns: - torch.tensor: video clip after transforms. Size is (C, T, size, size). - """ - clip = load_video( - video_path=vpath, - n_frms=self.n_frms, - height=self.image_size, - width=self.image_size, - ) - - return self.transform(clip) - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - image_size = cfg.get("image_size", 256) - - mean = cfg.get("mean", None) - std = cfg.get("std", None) - - n_frms = cfg.get("n_frms", MAX_INT) - - return cls(image_size=image_size, mean=mean, std=std, n_frms=n_frms) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_g_v_a_r.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_g_v_a_r.py deleted file mode 100644 index 11485bf09aee04a15307d094fdead26e7e4572ea..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_g_v_a_r.py +++ /dev/null @@ -1,284 +0,0 @@ -from collections import UserDict, deque -from functools import partial -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import array -import itertools -import logging -import struct -import sys -import fontTools.ttLib.tables.TupleVariation as tv - - -log = logging.getLogger(__name__) -TupleVariation = tv.TupleVariation - - -# https://www.microsoft.com/typography/otspec/gvar.htm -# https://www.microsoft.com/typography/otspec/otvarcommonformats.htm -# -# Apple's documentation of 'gvar': -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6gvar.html -# -# FreeType2 source code for parsing 'gvar': -# http://git.savannah.gnu.org/cgit/freetype/freetype2.git/tree/src/truetype/ttgxvar.c - -GVAR_HEADER_FORMAT = """ - > # big endian - version: H - reserved: H - axisCount: H - sharedTupleCount: H - offsetToSharedTuples: I - glyphCount: H - flags: H - offsetToGlyphVariationData: I -""" - -GVAR_HEADER_SIZE = sstruct.calcsize(GVAR_HEADER_FORMAT) - - -class _LazyDict(UserDict): - def __init__(self, data): - super().__init__() - self.data = data - - def __getitem__(self, k): - v = self.data[k] - if callable(v): - v = v() - self.data[k] = v - return v - - -class table__g_v_a_r(DefaultTable.DefaultTable): - dependencies = ["fvar", "glyf"] - - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.version, self.reserved = 1, 0 - self.variations = {} - - def compile(self, ttFont): - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - sharedTuples = tv.compileSharedTuples( - axisTags, itertools.chain(*self.variations.values()) - ) - sharedTupleIndices = {coord: i for i, coord in enumerate(sharedTuples)} - sharedTupleSize = sum([len(c) for c in sharedTuples]) - compiledGlyphs = self.compileGlyphs_(ttFont, axisTags, sharedTupleIndices) - offset = 0 - offsets = [] - for glyph in compiledGlyphs: - offsets.append(offset) - offset += len(glyph) - offsets.append(offset) - compiledOffsets, tableFormat = self.compileOffsets_(offsets) - - header = {} - header["version"] = self.version - header["reserved"] = self.reserved - header["axisCount"] = len(axisTags) - header["sharedTupleCount"] = len(sharedTuples) - header["offsetToSharedTuples"] = GVAR_HEADER_SIZE + 
len(compiledOffsets) - header["glyphCount"] = len(compiledGlyphs) - header["flags"] = tableFormat - header["offsetToGlyphVariationData"] = ( - header["offsetToSharedTuples"] + sharedTupleSize - ) - compiledHeader = sstruct.pack(GVAR_HEADER_FORMAT, header) - - result = [compiledHeader, compiledOffsets] - result.extend(sharedTuples) - result.extend(compiledGlyphs) - return b"".join(result) - - def compileGlyphs_(self, ttFont, axisTags, sharedCoordIndices): - result = [] - glyf = ttFont["glyf"] - for glyphName in ttFont.getGlyphOrder(): - variations = self.variations.get(glyphName, []) - if not variations: - result.append(b"") - continue - pointCountUnused = 0 # pointCount is actually unused by compileGlyph - result.append( - compileGlyph_( - variations, pointCountUnused, axisTags, sharedCoordIndices - ) - ) - return result - - def decompile(self, data, ttFont): - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - glyphs = ttFont.getGlyphOrder() - sstruct.unpack(GVAR_HEADER_FORMAT, data[0:GVAR_HEADER_SIZE], self) - assert len(glyphs) == self.glyphCount - assert len(axisTags) == self.axisCount - offsets = self.decompileOffsets_( - data[GVAR_HEADER_SIZE:], - tableFormat=(self.flags & 1), - glyphCount=self.glyphCount, - ) - sharedCoords = tv.decompileSharedTuples( - axisTags, self.sharedTupleCount, data, self.offsetToSharedTuples - ) - variations = {} - offsetToData = self.offsetToGlyphVariationData - glyf = ttFont["glyf"] - - def decompileVarGlyph(glyphName, gid): - gvarData = data[ - offsetToData + offsets[gid] : offsetToData + offsets[gid + 1] - ] - if not gvarData: - return [] - glyph = glyf[glyphName] - numPointsInGlyph = self.getNumPoints_(glyph) - return decompileGlyph_(numPointsInGlyph, sharedCoords, axisTags, gvarData) - - for gid in range(self.glyphCount): - glyphName = glyphs[gid] - variations[glyphName] = partial(decompileVarGlyph, glyphName, gid) - self.variations = _LazyDict(variations) - - if ttFont.lazy is False: # Be lazy for None and True - self.ensureDecompiled() - - def ensureDecompiled(self, recurse=False): - # The recurse argument is unused, but part of the signature of - # ensureDecompiled across the library. - # Use a zero-length deque to consume the lazy dict - deque(self.variations.values(), maxlen=0) - - @staticmethod - def decompileOffsets_(data, tableFormat, glyphCount): - if tableFormat == 0: - # Short format: array of UInt16 - offsets = array.array("H") - offsetsSize = (glyphCount + 1) * 2 - else: - # Long format: array of UInt32 - offsets = array.array("I") - offsetsSize = (glyphCount + 1) * 4 - offsets.frombytes(data[0:offsetsSize]) - if sys.byteorder != "big": - offsets.byteswap() - - # In the short format, offsets need to be multiplied by 2. - # This is not documented in Apple's TrueType specification, - # but can be inferred from the FreeType implementation, and - # we could verify it with two sample GX fonts. - if tableFormat == 0: - offsets = [off * 2 for off in offsets] - - return offsets - - @staticmethod - def compileOffsets_(offsets): - """Packs a list of offsets into a 'gvar' offset table. - - Returns a pair (bytestring, tableFormat). Bytestring is the - packed offset table. Format indicates whether the table - uses short (tableFormat=0) or long (tableFormat=1) integers. - The returned tableFormat should get packed into the flags field - of the 'gvar' header. 
- """ - assert len(offsets) >= 2 - for i in range(1, len(offsets)): - assert offsets[i - 1] <= offsets[i] - if max(offsets) <= 0xFFFF * 2: - packed = array.array("H", [n >> 1 for n in offsets]) - tableFormat = 0 - else: - packed = array.array("I", offsets) - tableFormat = 1 - if sys.byteorder != "big": - packed.byteswap() - return (packed.tobytes(), tableFormat) - - def toXML(self, writer, ttFont): - writer.simpletag("version", value=self.version) - writer.newline() - writer.simpletag("reserved", value=self.reserved) - writer.newline() - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - for glyphName in ttFont.getGlyphNames(): - variations = self.variations.get(glyphName) - if not variations: - continue - writer.begintag("glyphVariations", glyph=glyphName) - writer.newline() - for gvar in variations: - gvar.toXML(writer, axisTags) - writer.endtag("glyphVariations") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": - self.version = safeEval(attrs["value"]) - elif name == "reserved": - self.reserved = safeEval(attrs["value"]) - elif name == "glyphVariations": - if not hasattr(self, "variations"): - self.variations = {} - glyphName = attrs["glyph"] - glyph = ttFont["glyf"][glyphName] - numPointsInGlyph = self.getNumPoints_(glyph) - glyphVariations = [] - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - if name == "tuple": - gvar = TupleVariation({}, [None] * numPointsInGlyph) - glyphVariations.append(gvar) - for tupleElement in content: - if isinstance(tupleElement, tuple): - tupleName, tupleAttrs, tupleContent = tupleElement - gvar.fromXML(tupleName, tupleAttrs, tupleContent) - self.variations[glyphName] = glyphVariations - - @staticmethod - def getNumPoints_(glyph): - NUM_PHANTOM_POINTS = 4 - - if glyph.isComposite(): - return len(glyph.components) + NUM_PHANTOM_POINTS - elif glyph.isVarComposite(): - count = 0 - for component in glyph.components: - count += component.getPointCount() - return count + NUM_PHANTOM_POINTS - else: - # Empty glyphs (eg. space, nonmarkingreturn) have no "coordinates" attribute. - return len(getattr(glyph, "coordinates", [])) + NUM_PHANTOM_POINTS - - -def compileGlyph_(variations, pointCount, axisTags, sharedCoordIndices): - tupleVariationCount, tuples, data = tv.compileTupleVariationStore( - variations, pointCount, axisTags, sharedCoordIndices - ) - if tupleVariationCount == 0: - return b"" - result = [struct.pack(">HH", tupleVariationCount, 4 + len(tuples)), tuples, data] - if (len(tuples) + len(data)) % 2 != 0: - result.append(b"\0") # padding - return b"".join(result) - - -def decompileGlyph_(pointCount, sharedTuples, axisTags, data): - if len(data) < 4: - return [] - tupleVariationCount, offsetToData = struct.unpack(">HH", data[:4]) - dataPos = offsetToData - return tv.decompileTupleVariationStore( - "gvar", - axisTags, - tupleVariationCount, - pointCount, - sharedTuples, - data, - 4, - offsetToData, - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/__init__.py deleted file mode 100644 index ba798ceeafd786f3dc6423160a74ac9de9d38cd3..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/__init__.py +++ /dev/null @@ -1,1444 +0,0 @@ -""" -Module for dealing with 'gvar'-style font variations, also known as run-time -interpolation. 
- -The ideas here are very similar to MutatorMath. There is even code to read -MutatorMath .designspace files in the varLib.designspace module. - -For now, if you run this file on a designspace file, it tries to find -ttf-interpolatable files for the masters and build a variable-font from -them. Such ttf-interpolatable and designspace files can be generated from -a Glyphs source, eg., using noto-source as an example: - - $ fontmake -o ttf-interpolatable -g NotoSansArabic-MM.glyphs - -Then you can make a variable-font this way: - - $ fonttools varLib master_ufo/NotoSansArabic.designspace - -API *will* change in near future. -""" -from typing import List -from fontTools.misc.vector import Vector -from fontTools.misc.roundTools import noRound, otRound -from fontTools.misc.fixedTools import floatToFixed as fl2fi -from fontTools.misc.textTools import Tag, tostr -from fontTools.ttLib import TTFont, newTable -from fontTools.ttLib.tables._f_v_a_r import Axis, NamedInstance -from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates, dropImpliedOnCurvePoints -from fontTools.ttLib.tables.ttProgram import Program -from fontTools.ttLib.tables.TupleVariation import TupleVariation -from fontTools.ttLib.tables import otTables as ot -from fontTools.ttLib.tables.otBase import OTTableWriter -from fontTools.varLib import builder, models, varStore -from fontTools.varLib.merger import VariationMerger, COLRVariationMerger -from fontTools.varLib.mvar import MVAR_ENTRIES -from fontTools.varLib.iup import iup_delta_optimize -from fontTools.varLib.featureVars import addFeatureVariations -from fontTools.designspaceLib import DesignSpaceDocument, InstanceDescriptor -from fontTools.designspaceLib.split import splitInterpolable, splitVariableFonts -from fontTools.varLib.stat import buildVFStatTable -from fontTools.colorLib.builder import buildColrV1 -from fontTools.colorLib.unbuilder import unbuildColrV1 -from functools import partial -from collections import OrderedDict, defaultdict, namedtuple -import os.path -import logging -from copy import deepcopy -from pprint import pformat -from re import fullmatch -from .errors import VarLibError, VarLibValidationError - -log = logging.getLogger("fontTools.varLib") - -# This is a lib key for the designspace document. The value should be -# an OpenType feature tag, to be used as the FeatureVariations feature. -# If present, the DesignSpace flag is ignored. -FEAVAR_FEATURETAG_LIB_KEY = "com.github.fonttools.varLib.featureVarsFeatureTag" - -# -# Creation routines -# - - -def _add_fvar(font, axes, instances: List[InstanceDescriptor]): - """ - Add 'fvar' table to font. - - axes is an ordered dictionary of DesignspaceAxis objects. - - instances is list of dictionary objects with 'location', 'stylename', - and possibly 'postscriptfontname' entries. - """ - - assert axes - assert isinstance(axes, OrderedDict) - - log.info("Generating fvar") - - fvar = newTable("fvar") - nameTable = font["name"] - - for a in axes.values(): - axis = Axis() - axis.axisTag = Tag(a.tag) - # TODO Skip axes that have no variation. 
- axis.minValue, axis.defaultValue, axis.maxValue = ( - a.minimum, - a.default, - a.maximum, - ) - axis.axisNameID = nameTable.addMultilingualName( - a.labelNames, font, minNameID=256 - ) - axis.flags = int(a.hidden) - fvar.axes.append(axis) - - for instance in instances: - # Filter out discrete axis locations - coordinates = { - name: value for name, value in instance.location.items() if name in axes - } - - if "en" not in instance.localisedStyleName: - if not instance.styleName: - raise VarLibValidationError( - f"Instance at location '{coordinates}' must have a default English " - "style name ('stylename' attribute on the instance element or a " - "stylename element with an 'xml:lang=\"en\"' attribute)." - ) - localisedStyleName = dict(instance.localisedStyleName) - localisedStyleName["en"] = tostr(instance.styleName) - else: - localisedStyleName = instance.localisedStyleName - - psname = instance.postScriptFontName - - inst = NamedInstance() - inst.subfamilyNameID = nameTable.addMultilingualName(localisedStyleName) - if psname is not None: - psname = tostr(psname) - inst.postscriptNameID = nameTable.addName(psname) - inst.coordinates = { - axes[k].tag: axes[k].map_backward(v) for k, v in coordinates.items() - } - # inst.coordinates = {axes[k].tag:v for k,v in coordinates.items()} - fvar.instances.append(inst) - - assert "fvar" not in font - font["fvar"] = fvar - - return fvar - - -def _add_avar(font, axes, mappings, axisTags): - """ - Add 'avar' table to font. - - axes is an ordered dictionary of AxisDescriptor objects. - """ - - assert axes - assert isinstance(axes, OrderedDict) - - log.info("Generating avar") - - avar = newTable("avar") - - interesting = False - vals_triples = {} - for axis in axes.values(): - # Currently, some rasterizers require that the default value maps - # (-1 to -1, 0 to 0, and 1 to 1) be present for all the segment - # maps, even when the default normalization mapping for the axis - # was not modified. - # https://github.com/googlei18n/fontmake/issues/295 - # https://github.com/fonttools/fonttools/issues/1011 - # TODO(anthrotype) revert this (and 19c4b37) when issue is fixed - curve = avar.segments[axis.tag] = {-1.0: -1.0, 0.0: 0.0, 1.0: 1.0} - - keys_triple = (axis.minimum, axis.default, axis.maximum) - vals_triple = tuple(axis.map_forward(v) for v in keys_triple) - vals_triples[axis.tag] = vals_triple - - if not axis.map: - continue - - items = sorted(axis.map) - keys = [item[0] for item in items] - vals = [item[1] for item in items] - - # Current avar requirements. We don't have to enforce - # these on the designer and can deduce some ourselves, - # but for now just enforce them. - if axis.minimum != min(keys): - raise VarLibValidationError( - f"Axis '{axis.name}': there must be a mapping for the axis minimum " - f"value {axis.minimum} and it must be the lowest input mapping value." - ) - if axis.maximum != max(keys): - raise VarLibValidationError( - f"Axis '{axis.name}': there must be a mapping for the axis maximum " - f"value {axis.maximum} and it must be the highest input mapping value." - ) - if axis.default not in keys: - raise VarLibValidationError( - f"Axis '{axis.name}': there must be a mapping for the axis default " - f"value {axis.default}." - ) - # No duplicate input values (output values can be >= their preceeding value). - if len(set(keys)) != len(keys): - raise VarLibValidationError( - f"Axis '{axis.name}': All axis mapping input='...' values must be " - "unique, but we found duplicates." 
- ) - # Ascending values - if sorted(vals) != vals: - raise VarLibValidationError( - f"Axis '{axis.name}': mapping output values must be in ascending order." - ) - - keys = [models.normalizeValue(v, keys_triple) for v in keys] - vals = [models.normalizeValue(v, vals_triple) for v in vals] - - if all(k == v for k, v in zip(keys, vals)): - continue - interesting = True - - curve.update(zip(keys, vals)) - - assert 0.0 in curve and curve[0.0] == 0.0 - assert -1.0 not in curve or curve[-1.0] == -1.0 - assert +1.0 not in curve or curve[+1.0] == +1.0 - # curve.update({-1.0: -1.0, 0.0: 0.0, 1.0: 1.0}) - - if mappings: - interesting = True - - hiddenAxes = [axis for axis in axes.values() if axis.hidden] - - inputLocations = [ - { - axes[name].tag: models.normalizeValue(v, vals_triples[axes[name].tag]) - for name, v in mapping.inputLocation.items() - } - for mapping in mappings - ] - outputLocations = [ - { - axes[name].tag: models.normalizeValue(v, vals_triples[axes[name].tag]) - for name, v in mapping.outputLocation.items() - } - for mapping in mappings - ] - assert len(inputLocations) == len(outputLocations) - - # If base-master is missing, insert it at zero location. - if not any(all(v == 0 for k, v in loc.items()) for loc in inputLocations): - inputLocations.insert(0, {}) - outputLocations.insert(0, {}) - - model = models.VariationModel(inputLocations, axisTags) - storeBuilder = varStore.OnlineVarStoreBuilder(axisTags) - storeBuilder.setModel(model) - varIdxes = {} - for tag in axisTags: - masterValues = [] - for vo, vi in zip(outputLocations, inputLocations): - if tag not in vo: - masterValues.append(0) - continue - v = vo[tag] - vi.get(tag, 0) - masterValues.append(fl2fi(v, 14)) - varIdxes[tag] = storeBuilder.storeMasters(masterValues)[1] - - store = storeBuilder.finish() - optimized = store.optimize() - varIdxes = {axis: optimized[value] for axis, value in varIdxes.items()} - - varIdxMap = builder.buildDeltaSetIndexMap(varIdxes[t] for t in axisTags) - - avar.majorVersion = 2 - avar.table = ot.avar() - avar.table.VarIdxMap = varIdxMap - avar.table.VarStore = store - - assert "avar" not in font - if not interesting: - log.info("No need for avar") - avar = None - else: - font["avar"] = avar - - return avar - - -def _add_stat(font): - # Note: this function only gets called by old code that calls `build()` - # directly. 
Newer code that wants to benefit from STAT data from the - # designspace should call `build_many()` - - if "STAT" in font: - return - - from ..otlLib.builder import buildStatTable - - fvarTable = font["fvar"] - axes = [dict(tag=a.axisTag, name=a.axisNameID) for a in fvarTable.axes] - buildStatTable(font, axes) - - -_MasterData = namedtuple("_MasterData", ["glyf", "hMetrics", "vMetrics"]) - - -def _add_gvar(font, masterModel, master_ttfs, tolerance=0.5, optimize=True): - if tolerance < 0: - raise ValueError("`tolerance` must be a positive number.") - - log.info("Generating gvar") - assert "gvar" not in font - gvar = font["gvar"] = newTable("gvar") - glyf = font["glyf"] - defaultMasterIndex = masterModel.reverseMapping[0] - - master_datas = [ - _MasterData( - m["glyf"], m["hmtx"].metrics, getattr(m.get("vmtx"), "metrics", None) - ) - for m in master_ttfs - ] - - for glyph in font.getGlyphOrder(): - log.debug("building gvar for glyph '%s'", glyph) - isComposite = glyf[glyph].isComposite() - - allData = [ - m.glyf._getCoordinatesAndControls(glyph, m.hMetrics, m.vMetrics) - for m in master_datas - ] - - if allData[defaultMasterIndex][1].numberOfContours != 0: - # If the default master is not empty, interpret empty non-default masters - # as missing glyphs from a sparse master - allData = [ - d if d is not None and d[1].numberOfContours != 0 else None - for d in allData - ] - - model, allData = masterModel.getSubModel(allData) - - allCoords = [d[0] for d in allData] - allControls = [d[1] for d in allData] - control = allControls[0] - if not models.allEqual(allControls): - log.warning("glyph %s has incompatible masters; skipping" % glyph) - continue - del allControls - - # Update gvar - gvar.variations[glyph] = [] - deltas = model.getDeltas( - allCoords, round=partial(GlyphCoordinates.__round__, round=round) - ) - supports = model.supports - assert len(deltas) == len(supports) - - # Prepare for IUP optimization - origCoords = deltas[0] - endPts = control.endPts - - for i, (delta, support) in enumerate(zip(deltas[1:], supports[1:])): - if all(v == 0 for v in delta.array) and not isComposite: - continue - var = TupleVariation(support, delta) - if optimize: - delta_opt = iup_delta_optimize( - delta, origCoords, endPts, tolerance=tolerance - ) - - if None in delta_opt: - """In composite glyphs, there should be one 0 entry - to make sure the gvar entry is written to the font. - - This is to work around an issue with macOS 10.14 and can be - removed once the behaviour of macOS is changed. - - https://github.com/fonttools/fonttools/issues/1381 - """ - if all(d is None for d in delta_opt): - delta_opt = [(0, 0)] + [None] * (len(delta_opt) - 1) - # Use "optimized" version only if smaller... - var_opt = TupleVariation(support, delta_opt) - - axis_tags = sorted( - support.keys() - ) # Shouldn't matter that this is different from fvar...? 
- tupleData, auxData = var.compile(axis_tags) - unoptimized_len = len(tupleData) + len(auxData) - tupleData, auxData = var_opt.compile(axis_tags) - optimized_len = len(tupleData) + len(auxData) - - if optimized_len < unoptimized_len: - var = var_opt - - gvar.variations[glyph].append(var) - - -def _remove_TTHinting(font): - for tag in ("cvar", "cvt ", "fpgm", "prep"): - if tag in font: - del font[tag] - maxp = font["maxp"] - for attr in ( - "maxTwilightPoints", - "maxStorage", - "maxFunctionDefs", - "maxInstructionDefs", - "maxStackElements", - "maxSizeOfInstructions", - ): - setattr(maxp, attr, 0) - maxp.maxZones = 1 - font["glyf"].removeHinting() - # TODO: Modify gasp table to deactivate gridfitting for all ranges? - - -def _merge_TTHinting(font, masterModel, master_ttfs): - log.info("Merging TT hinting") - assert "cvar" not in font - - # Check that the existing hinting is compatible - - # fpgm and prep table - - for tag in ("fpgm", "prep"): - all_pgms = [m[tag].program for m in master_ttfs if tag in m] - if not all_pgms: - continue - font_pgm = getattr(font.get(tag), "program", None) - if any(pgm != font_pgm for pgm in all_pgms): - log.warning( - "Masters have incompatible %s tables, hinting is discarded." % tag - ) - _remove_TTHinting(font) - return - - # glyf table - - font_glyf = font["glyf"] - master_glyfs = [m["glyf"] for m in master_ttfs] - for name, glyph in font_glyf.glyphs.items(): - all_pgms = [getattr(glyf.get(name), "program", None) for glyf in master_glyfs] - if not any(all_pgms): - continue - glyph.expand(font_glyf) - font_pgm = getattr(glyph, "program", None) - if any(pgm != font_pgm for pgm in all_pgms if pgm): - log.warning( - "Masters have incompatible glyph programs in glyph '%s', hinting is discarded." - % name - ) - # TODO Only drop hinting from this glyph. - _remove_TTHinting(font) - return - - # cvt table - - all_cvs = [Vector(m["cvt "].values) if "cvt " in m else None for m in master_ttfs] - - nonNone_cvs = models.nonNone(all_cvs) - if not nonNone_cvs: - # There is no cvt table to make a cvar table from, we're done here. - return - - if not models.allEqual(len(c) for c in nonNone_cvs): - log.warning("Masters have incompatible cvt tables, hinting is discarded.") - _remove_TTHinting(font) - return - - variations = [] - deltas, supports = masterModel.getDeltasAndSupports( - all_cvs, round=round - ) # builtin round calls into Vector.__round__, which uses builtin round as we like - for i, (delta, support) in enumerate(zip(deltas[1:], supports[1:])): - if all(v == 0 for v in delta): - continue - var = TupleVariation(support, delta) - variations.append(var) - - # We can build the cvar table now. 
- if variations: - cvar = font["cvar"] = newTable("cvar") - cvar.version = 1 - cvar.variations = variations - - -_MetricsFields = namedtuple( - "_MetricsFields", - ["tableTag", "metricsTag", "sb1", "sb2", "advMapping", "vOrigMapping"], -) - -HVAR_FIELDS = _MetricsFields( - tableTag="HVAR", - metricsTag="hmtx", - sb1="LsbMap", - sb2="RsbMap", - advMapping="AdvWidthMap", - vOrigMapping=None, -) - -VVAR_FIELDS = _MetricsFields( - tableTag="VVAR", - metricsTag="vmtx", - sb1="TsbMap", - sb2="BsbMap", - advMapping="AdvHeightMap", - vOrigMapping="VOrgMap", -) - - -def _add_HVAR(font, masterModel, master_ttfs, axisTags): - _add_VHVAR(font, masterModel, master_ttfs, axisTags, HVAR_FIELDS) - - -def _add_VVAR(font, masterModel, master_ttfs, axisTags): - _add_VHVAR(font, masterModel, master_ttfs, axisTags, VVAR_FIELDS) - - -def _add_VHVAR(font, masterModel, master_ttfs, axisTags, tableFields): - tableTag = tableFields.tableTag - assert tableTag not in font - log.info("Generating " + tableTag) - VHVAR = newTable(tableTag) - tableClass = getattr(ot, tableTag) - vhvar = VHVAR.table = tableClass() - vhvar.Version = 0x00010000 - - glyphOrder = font.getGlyphOrder() - - # Build list of source font advance widths for each glyph - metricsTag = tableFields.metricsTag - advMetricses = [m[metricsTag].metrics for m in master_ttfs] - - # Build list of source font vertical origin coords for each glyph - if tableTag == "VVAR" and "VORG" in master_ttfs[0]: - vOrigMetricses = [m["VORG"].VOriginRecords for m in master_ttfs] - defaultYOrigs = [m["VORG"].defaultVertOriginY for m in master_ttfs] - vOrigMetricses = list(zip(vOrigMetricses, defaultYOrigs)) - else: - vOrigMetricses = None - - metricsStore, advanceMapping, vOrigMapping = _get_advance_metrics( - font, - masterModel, - master_ttfs, - axisTags, - glyphOrder, - advMetricses, - vOrigMetricses, - ) - - vhvar.VarStore = metricsStore - if advanceMapping is None: - setattr(vhvar, tableFields.advMapping, None) - else: - setattr(vhvar, tableFields.advMapping, advanceMapping) - if vOrigMapping is not None: - setattr(vhvar, tableFields.vOrigMapping, vOrigMapping) - setattr(vhvar, tableFields.sb1, None) - setattr(vhvar, tableFields.sb2, None) - - font[tableTag] = VHVAR - return - - -def _get_advance_metrics( - font, - masterModel, - master_ttfs, - axisTags, - glyphOrder, - advMetricses, - vOrigMetricses=None, -): - vhAdvanceDeltasAndSupports = {} - vOrigDeltasAndSupports = {} - for glyph in glyphOrder: - vhAdvances = [ - metrics[glyph][0] if glyph in metrics else None for metrics in advMetricses - ] - vhAdvanceDeltasAndSupports[glyph] = masterModel.getDeltasAndSupports( - vhAdvances, round=round - ) - - singleModel = models.allEqual(id(v[1]) for v in vhAdvanceDeltasAndSupports.values()) - - if vOrigMetricses: - singleModel = False - for glyph in glyphOrder: - # We need to supply a vOrigs tuple with non-None default values - # for each glyph. vOrigMetricses contains values only for those - # glyphs which have a non-default vOrig. 
-            vOrigs = [
-                metrics[glyph] if glyph in metrics else defaultVOrig
-                for metrics, defaultVOrig in vOrigMetricses
-            ]
-            vOrigDeltasAndSupports[glyph] = masterModel.getDeltasAndSupports(
-                vOrigs, round=round
-            )
-
-    directStore = None
-    if singleModel:
-        # Build direct mapping
-        supports = next(iter(vhAdvanceDeltasAndSupports.values()))[1][1:]
-        varTupleList = builder.buildVarRegionList(supports, axisTags)
-        varTupleIndexes = list(range(len(supports)))
-        varData = builder.buildVarData(varTupleIndexes, [], optimize=False)
-        for glyphName in glyphOrder:
-            varData.addItem(vhAdvanceDeltasAndSupports[glyphName][0], round=noRound)
-        varData.optimize()
-        directStore = builder.buildVarStore(varTupleList, [varData])
-
-    # Build optimized indirect mapping
-    storeBuilder = varStore.OnlineVarStoreBuilder(axisTags)
-    advMapping = {}
-    for glyphName in glyphOrder:
-        deltas, supports = vhAdvanceDeltasAndSupports[glyphName]
-        storeBuilder.setSupports(supports)
-        advMapping[glyphName] = storeBuilder.storeDeltas(deltas, round=noRound)
-
-    if vOrigMetricses:
-        vOrigMap = {}
-        for glyphName in glyphOrder:
-            deltas, supports = vOrigDeltasAndSupports[glyphName]
-            storeBuilder.setSupports(supports)
-            vOrigMap[glyphName] = storeBuilder.storeDeltas(deltas, round=noRound)
-
-    indirectStore = storeBuilder.finish()
-    mapping2 = indirectStore.optimize(use_NO_VARIATION_INDEX=False)
-    advMapping = [mapping2[advMapping[g]] for g in glyphOrder]
-    advanceMapping = builder.buildVarIdxMap(advMapping, glyphOrder)
-
-    if vOrigMetricses:
-        vOrigMap = [mapping2[vOrigMap[g]] for g in glyphOrder]
-
-    useDirect = False
-    vOrigMapping = None
-    if directStore:
-        # Compile both, see which is more compact
-
-        writer = OTTableWriter()
-        directStore.compile(writer, font)
-        directSize = len(writer.getAllData())
-
-        writer = OTTableWriter()
-        indirectStore.compile(writer, font)
-        advanceMapping.compile(writer, font)
-        indirectSize = len(writer.getAllData())
-
-        useDirect = directSize < indirectSize
-
-    if useDirect:
-        metricsStore = directStore
-        advanceMapping = None
-    else:
-        metricsStore = indirectStore
-        if vOrigMetricses:
-            vOrigMapping = builder.buildVarIdxMap(vOrigMap, glyphOrder)
-
-    return metricsStore, advanceMapping, vOrigMapping
-
-
-def _add_MVAR(font, masterModel, master_ttfs, axisTags):
-    log.info("Generating MVAR")
-
-    store_builder = varStore.OnlineVarStoreBuilder(axisTags)
-
-    records = []
-    lastTableTag = None
-    fontTable = None
-    tables = None
-    # HACK: we need to special-case post.underlineThickness and .underlinePosition
-    # and unilaterally/arbitrarily define a sentinel value to distinguish the case
-    # when a post table is present in a given master simply because that's where
-    # the glyph names in TrueType must be stored, but the underline values are not
-    # meant to be used for building MVAR's deltas. The value of -0x8000 (-32768),
-    # the minimum FWord (int16) value, was chosen for its unlikelihood to appear
-    # in real-world underline position/thickness values.
-    specialTags = {"unds": -0x8000, "undo": -0x8000}
-
-    for tag, (tableTag, itemName) in sorted(MVAR_ENTRIES.items(), key=lambda kv: kv[1]):
-        # For each tag, fetch the associated table from all fonts (or not when we are
-        # still looking at a tag from the same tables) and set up the variation model
-        # for them.
- if tableTag != lastTableTag: - tables = fontTable = None - if tableTag in font: - fontTable = font[tableTag] - tables = [] - for master in master_ttfs: - if tableTag not in master or ( - tag in specialTags - and getattr(master[tableTag], itemName) == specialTags[tag] - ): - tables.append(None) - else: - tables.append(master[tableTag]) - model, tables = masterModel.getSubModel(tables) - store_builder.setModel(model) - lastTableTag = tableTag - - if tables is None: # Tag not applicable to the master font. - continue - - # TODO support gasp entries - - master_values = [getattr(table, itemName) for table in tables] - if models.allEqual(master_values): - base, varIdx = master_values[0], None - else: - base, varIdx = store_builder.storeMasters(master_values) - setattr(fontTable, itemName, base) - - if varIdx is None: - continue - log.info(" %s: %s.%s %s", tag, tableTag, itemName, master_values) - rec = ot.MetricsValueRecord() - rec.ValueTag = tag - rec.VarIdx = varIdx - records.append(rec) - - assert "MVAR" not in font - if records: - store = store_builder.finish() - # Optimize - mapping = store.optimize() - for rec in records: - rec.VarIdx = mapping[rec.VarIdx] - - MVAR = font["MVAR"] = newTable("MVAR") - mvar = MVAR.table = ot.MVAR() - mvar.Version = 0x00010000 - mvar.Reserved = 0 - mvar.VarStore = store - # XXX these should not be hard-coded but computed automatically - mvar.ValueRecordSize = 8 - mvar.ValueRecordCount = len(records) - mvar.ValueRecord = sorted(records, key=lambda r: r.ValueTag) - - -def _add_BASE(font, masterModel, master_ttfs, axisTags): - log.info("Generating BASE") - - merger = VariationMerger(masterModel, axisTags, font) - merger.mergeTables(font, master_ttfs, ["BASE"]) - store = merger.store_builder.finish() - - if not store: - return - base = font["BASE"].table - assert base.Version == 0x00010000 - base.Version = 0x00010001 - base.VarStore = store - - -def _merge_OTL(font, model, master_fonts, axisTags): - log.info("Merging OpenType Layout tables") - merger = VariationMerger(model, axisTags, font) - - merger.mergeTables(font, master_fonts, ["GSUB", "GDEF", "GPOS"]) - store = merger.store_builder.finish() - if not store: - return - try: - GDEF = font["GDEF"].table - assert GDEF.Version <= 0x00010002 - except KeyError: - font["GDEF"] = newTable("GDEF") - GDEFTable = font["GDEF"] = newTable("GDEF") - GDEF = GDEFTable.table = ot.GDEF() - GDEF.GlyphClassDef = None - GDEF.AttachList = None - GDEF.LigCaretList = None - GDEF.MarkAttachClassDef = None - GDEF.MarkGlyphSetsDef = None - - GDEF.Version = 0x00010003 - GDEF.VarStore = store - - # Optimize - varidx_map = store.optimize() - GDEF.remap_device_varidxes(varidx_map) - if "GPOS" in font: - font["GPOS"].table.remap_device_varidxes(varidx_map) - - -def _add_GSUB_feature_variations(font, axes, internal_axis_supports, rules, featureTag): - def normalize(name, value): - return models.normalizeLocation({name: value}, internal_axis_supports)[name] - - log.info("Generating GSUB FeatureVariations") - - axis_tags = {name: axis.tag for name, axis in axes.items()} - - conditional_subs = [] - for rule in rules: - region = [] - for conditions in rule.conditionSets: - space = {} - for condition in conditions: - axis_name = condition["name"] - if condition["minimum"] is not None: - minimum = normalize(axis_name, condition["minimum"]) - else: - minimum = -1.0 - if condition["maximum"] is not None: - maximum = normalize(axis_name, condition["maximum"]) - else: - maximum = 1.0 - tag = axis_tags[axis_name] - space[tag] = (minimum, maximum) - 
region.append(space) - - subs = {k: v for k, v in rule.subs} - - conditional_subs.append((region, subs)) - - addFeatureVariations(font, conditional_subs, featureTag) - - -_DesignSpaceData = namedtuple( - "_DesignSpaceData", - [ - "axes", - "axisMappings", - "internal_axis_supports", - "base_idx", - "normalized_master_locs", - "masters", - "instances", - "rules", - "rulesProcessingLast", - "lib", - ], -) - - -def _add_CFF2(varFont, model, master_fonts): - from .cff import merge_region_fonts - - glyphOrder = varFont.getGlyphOrder() - if "CFF2" not in varFont: - from .cff import convertCFFtoCFF2 - - convertCFFtoCFF2(varFont) - ordered_fonts_list = model.reorderMasters(master_fonts, model.reverseMapping) - # re-ordering the master list simplifies building the CFF2 data item lists. - merge_region_fonts(varFont, model, ordered_fonts_list, glyphOrder) - - -def _add_COLR(font, model, master_fonts, axisTags, colr_layer_reuse=True): - merger = COLRVariationMerger( - model, axisTags, font, allowLayerReuse=colr_layer_reuse - ) - merger.mergeTables(font, master_fonts) - store = merger.store_builder.finish() - - colr = font["COLR"].table - if store: - mapping = store.optimize() - colr.VarStore = store - varIdxes = [mapping[v] for v in merger.varIdxes] - colr.VarIndexMap = builder.buildDeltaSetIndexMap(varIdxes) - - -def load_designspace(designspace): - # TODO: remove this and always assume 'designspace' is a DesignSpaceDocument, - # never a file path, as that's already handled by caller - if hasattr(designspace, "sources"): # Assume a DesignspaceDocument - ds = designspace - else: # Assume a file path - ds = DesignSpaceDocument.fromfile(designspace) - - masters = ds.sources - if not masters: - raise VarLibValidationError("Designspace must have at least one source.") - instances = ds.instances - - # TODO: Use fontTools.designspaceLib.tagForAxisName instead. - standard_axis_map = OrderedDict( - [ - ("weight", ("wght", {"en": "Weight"})), - ("width", ("wdth", {"en": "Width"})), - ("slant", ("slnt", {"en": "Slant"})), - ("optical", ("opsz", {"en": "Optical Size"})), - ("italic", ("ital", {"en": "Italic"})), - ] - ) - - # Setup axes - if not ds.axes: - raise VarLibValidationError(f"Designspace must have at least one axis.") - - axes = OrderedDict() - for axis_index, axis in enumerate(ds.axes): - axis_name = axis.name - if not axis_name: - if not axis.tag: - raise VarLibValidationError(f"Axis at index {axis_index} needs a tag.") - axis_name = axis.name = axis.tag - - if axis_name in standard_axis_map: - if axis.tag is None: - axis.tag = standard_axis_map[axis_name][0] - if not axis.labelNames: - axis.labelNames.update(standard_axis_map[axis_name][1]) - else: - if not axis.tag: - raise VarLibValidationError(f"Axis at index {axis_index} needs a tag.") - if not axis.labelNames: - axis.labelNames["en"] = tostr(axis_name) - - axes[axis_name] = axis - log.info("Axes:\n%s", pformat([axis.asdict() for axis in axes.values()])) - - axisMappings = ds.axisMappings - if axisMappings: - log.info("Mappings:\n%s", pformat(axisMappings)) - - # Check all master and instance locations are valid and fill in defaults - for obj in masters + instances: - obj_name = obj.name or obj.styleName or "" - loc = obj.getFullDesignLocation(ds) - obj.designLocation = loc - if loc is None: - raise VarLibValidationError( - f"Source or instance '{obj_name}' has no location." - ) - for axis_name in loc.keys(): - if axis_name not in axes: - raise VarLibValidationError( - f"Location axis '{axis_name}' unknown for '{obj_name}'." 
- ) - for axis_name, axis in axes.items(): - v = axis.map_backward(loc[axis_name]) - if not (axis.minimum <= v <= axis.maximum): - raise VarLibValidationError( - f"Source or instance '{obj_name}' has out-of-range location " - f"for axis '{axis_name}': is mapped to {v} but must be in " - f"mapped range [{axis.minimum}..{axis.maximum}] (NOTE: all " - "values are in user-space)." - ) - - # Normalize master locations - - internal_master_locs = [o.getFullDesignLocation(ds) for o in masters] - log.info("Internal master locations:\n%s", pformat(internal_master_locs)) - - # TODO This mapping should ideally be moved closer to logic in _add_fvar/avar - internal_axis_supports = {} - for axis in axes.values(): - triple = (axis.minimum, axis.default, axis.maximum) - internal_axis_supports[axis.name] = [axis.map_forward(v) for v in triple] - log.info("Internal axis supports:\n%s", pformat(internal_axis_supports)) - - normalized_master_locs = [ - models.normalizeLocation(m, internal_axis_supports) - for m in internal_master_locs - ] - log.info("Normalized master locations:\n%s", pformat(normalized_master_locs)) - - # Find base master - base_idx = None - for i, m in enumerate(normalized_master_locs): - if all(v == 0 for v in m.values()): - if base_idx is not None: - raise VarLibValidationError( - "More than one base master found in Designspace." - ) - base_idx = i - if base_idx is None: - raise VarLibValidationError( - "Base master not found; no master at default location?" - ) - log.info("Index of base master: %s", base_idx) - - return _DesignSpaceData( - axes, - axisMappings, - internal_axis_supports, - base_idx, - normalized_master_locs, - masters, - instances, - ds.rules, - ds.rulesProcessingLast, - ds.lib, - ) - - -# https://docs.microsoft.com/en-us/typography/opentype/spec/os2#uswidthclass -WDTH_VALUE_TO_OS2_WIDTH_CLASS = { - 50: 1, - 62.5: 2, - 75: 3, - 87.5: 4, - 100: 5, - 112.5: 6, - 125: 7, - 150: 8, - 200: 9, -} - - -def set_default_weight_width_slant(font, location): - if "OS/2" in font: - if "wght" in location: - weight_class = otRound(max(1, min(location["wght"], 1000))) - if font["OS/2"].usWeightClass != weight_class: - log.info("Setting OS/2.usWeightClass = %s", weight_class) - font["OS/2"].usWeightClass = weight_class - - if "wdth" in location: - # map 'wdth' axis (50..200) to OS/2.usWidthClass (1..9), rounding to closest - widthValue = min(max(location["wdth"], 50), 200) - widthClass = otRound( - models.piecewiseLinearMap(widthValue, WDTH_VALUE_TO_OS2_WIDTH_CLASS) - ) - if font["OS/2"].usWidthClass != widthClass: - log.info("Setting OS/2.usWidthClass = %s", widthClass) - font["OS/2"].usWidthClass = widthClass - - if "slnt" in location and "post" in font: - italicAngle = max(-90, min(location["slnt"], 90)) - if font["post"].italicAngle != italicAngle: - log.info("Setting post.italicAngle = %s", italicAngle) - font["post"].italicAngle = italicAngle - - -def drop_implied_oncurve_points(*masters: TTFont) -> int: - """Drop impliable on-curve points from all the simple glyphs in masters. - - In TrueType glyf outlines, on-curve points can be implied when they are located - exactly at the midpoint of the line connecting two consecutive off-curve points. - - The input masters' glyf tables are assumed to contain same-named glyphs that are - interpolatable. Oncurve points are only dropped if they can be implied for all - the masters. The fonts are modified in-place. - - Args: - masters: The TTFont(s) to modify - - Returns: - The total number of points that were dropped if any. 
- - Reference: - https://developer.apple.com/fonts/TrueType-Reference-Manual/RM01/Chap1.html - """ - - count = 0 - glyph_masters = defaultdict(list) - # multiple DS source may point to the same TTFont object and we want to - # avoid processing the same glyph twice as they are modified in-place - for font in {id(m): m for m in masters}.values(): - glyf = font["glyf"] - for glyphName in glyf.keys(): - glyph_masters[glyphName].append(glyf[glyphName]) - count = 0 - for glyphName, glyphs in glyph_masters.items(): - try: - dropped = dropImpliedOnCurvePoints(*glyphs) - except ValueError as e: - # we don't fail for incompatible glyphs in _add_gvar so we shouldn't here - log.warning("Failed to drop implied oncurves for %r: %s", glyphName, e) - else: - count += len(dropped) - return count - - -def build_many( - designspace: DesignSpaceDocument, - master_finder=lambda s: s, - exclude=[], - optimize=True, - skip_vf=lambda vf_name: False, - colr_layer_reuse=True, - drop_implied_oncurves=False, -): - """ - Build variable fonts from a designspace file, version 5 which can define - several VFs, or version 4 which has implicitly one VF covering the whole doc. - - If master_finder is set, it should be a callable that takes master - filename as found in designspace file and map it to master font - binary as to be opened (eg. .ttf or .otf). - - skip_vf can be used to skip building some of the variable fonts defined in - the input designspace. It's a predicate that takes as argument the name - of the variable font and returns `bool`. - - Always returns a Dict[str, TTFont] keyed by VariableFontDescriptor.name - """ - res = {} - # varLib.build (used further below) by default only builds an incomplete 'STAT' - # with an empty AxisValueArray--unless the VF inherited 'STAT' from its base master. - # Designspace version 5 can also be used to define 'STAT' labels or customize - # axes ordering, etc. To avoid overwriting a pre-existing 'STAT' or redoing the - # same work twice, here we check if designspace contains any 'STAT' info before - # proceeding to call buildVFStatTable for each VF. - # https://github.com/fonttools/fonttools/pull/3024 - # https://github.com/fonttools/fonttools/issues/3045 - doBuildStatFromDSv5 = ( - "STAT" not in exclude - and designspace.formatTuple >= (5, 0) - and ( - any(a.axisLabels or a.axisOrdering is not None for a in designspace.axes) - or designspace.locationLabels - ) - ) - for _location, subDoc in splitInterpolable(designspace): - for name, vfDoc in splitVariableFonts(subDoc): - if skip_vf(name): - log.debug(f"Skipping variable TTF font: {name}") - continue - vf = build( - vfDoc, - master_finder, - exclude=exclude, - optimize=optimize, - colr_layer_reuse=colr_layer_reuse, - drop_implied_oncurves=drop_implied_oncurves, - )[0] - if doBuildStatFromDSv5: - buildVFStatTable(vf, designspace, name) - res[name] = vf - return res - - -def build( - designspace, - master_finder=lambda s: s, - exclude=[], - optimize=True, - colr_layer_reuse=True, - drop_implied_oncurves=False, -): - """ - Build variation font from a designspace file. - - If master_finder is set, it should be a callable that takes master - filename as found in designspace file and map it to master font - binary as to be opened (eg. .ttf or .otf). 
- """ - if hasattr(designspace, "sources"): # Assume a DesignspaceDocument - pass - else: # Assume a file path - designspace = DesignSpaceDocument.fromfile(designspace) - - ds = load_designspace(designspace) - log.info("Building variable font") - - log.info("Loading master fonts") - master_fonts = load_masters(designspace, master_finder) - - # TODO: 'master_ttfs' is unused except for return value, remove later - master_ttfs = [] - for master in master_fonts: - try: - master_ttfs.append(master.reader.file.name) - except AttributeError: - master_ttfs.append(None) # in-memory fonts have no path - - if drop_implied_oncurves and "glyf" in master_fonts[ds.base_idx]: - drop_count = drop_implied_oncurve_points(*master_fonts) - log.info( - "Dropped %s on-curve points from simple glyphs in the 'glyf' table", - drop_count, - ) - - # Copy the base master to work from it - vf = deepcopy(master_fonts[ds.base_idx]) - - if "DSIG" in vf: - del vf["DSIG"] - - # TODO append masters as named-instances as well; needs .designspace change. - fvar = _add_fvar(vf, ds.axes, ds.instances) - if "STAT" not in exclude: - _add_stat(vf) - - # Map from axis names to axis tags... - normalized_master_locs = [ - {ds.axes[k].tag: v for k, v in loc.items()} for loc in ds.normalized_master_locs - ] - # From here on, we use fvar axes only - axisTags = [axis.axisTag for axis in fvar.axes] - - # Assume single-model for now. - model = models.VariationModel(normalized_master_locs, axisOrder=axisTags) - assert 0 == model.mapping[ds.base_idx] - - log.info("Building variations tables") - if "avar" not in exclude: - _add_avar(vf, ds.axes, ds.axisMappings, axisTags) - if "BASE" not in exclude and "BASE" in vf: - _add_BASE(vf, model, master_fonts, axisTags) - if "MVAR" not in exclude: - _add_MVAR(vf, model, master_fonts, axisTags) - if "HVAR" not in exclude: - _add_HVAR(vf, model, master_fonts, axisTags) - if "VVAR" not in exclude and "vmtx" in vf: - _add_VVAR(vf, model, master_fonts, axisTags) - if "GDEF" not in exclude or "GPOS" not in exclude: - _merge_OTL(vf, model, master_fonts, axisTags) - if "gvar" not in exclude and "glyf" in vf: - _add_gvar(vf, model, master_fonts, optimize=optimize) - if "cvar" not in exclude and "glyf" in vf: - _merge_TTHinting(vf, model, master_fonts) - if "GSUB" not in exclude and ds.rules: - featureTag = ds.lib.get( - FEAVAR_FEATURETAG_LIB_KEY, "rclt" if ds.rulesProcessingLast else "rvrn" - ) - _add_GSUB_feature_variations( - vf, ds.axes, ds.internal_axis_supports, ds.rules, featureTag - ) - if "CFF2" not in exclude and ("CFF " in vf or "CFF2" in vf): - _add_CFF2(vf, model, master_fonts) - if "post" in vf: - # set 'post' to format 2 to keep the glyph names dropped from CFF2 - post = vf["post"] - if post.formatType != 2.0: - post.formatType = 2.0 - post.extraNames = [] - post.mapping = {} - if "COLR" not in exclude and "COLR" in vf and vf["COLR"].version > 0: - _add_COLR(vf, model, master_fonts, axisTags, colr_layer_reuse) - - set_default_weight_width_slant( - vf, location={axis.axisTag: axis.defaultValue for axis in vf["fvar"].axes} - ) - - for tag in exclude: - if tag in vf: - del vf[tag] - - # TODO: Only return vf for 4.0+, the rest is unused. - return vf, model, master_ttfs - - -def _open_font(path, master_finder=lambda s: s): - # load TTFont masters from given 'path': this can be either a .TTX or an - # OpenType binary font; or if neither of these, try use the 'master_finder' - # callable to resolve the path to a valid .TTX or OpenType font binary. 
- from fontTools.ttx import guessFileType - - master_path = os.path.normpath(path) - tp = guessFileType(master_path) - if tp is None: - # not an OpenType binary/ttx, fall back to the master finder. - master_path = master_finder(master_path) - tp = guessFileType(master_path) - if tp in ("TTX", "OTX"): - font = TTFont() - font.importXML(master_path) - elif tp in ("TTF", "OTF", "WOFF", "WOFF2"): - font = TTFont(master_path) - else: - raise VarLibValidationError("Invalid master path: %r" % master_path) - return font - - -def load_masters(designspace, master_finder=lambda s: s): - """Ensure that all SourceDescriptor.font attributes have an appropriate TTFont - object loaded, or else open TTFont objects from the SourceDescriptor.path - attributes. - - The paths can point to either an OpenType font, a TTX file, or a UFO. In the - latter case, use the provided master_finder callable to map from UFO paths to - the respective master font binaries (e.g. .ttf, .otf or .ttx). - - Return list of master TTFont objects in the same order they are listed in the - DesignSpaceDocument. - """ - for master in designspace.sources: - # If a SourceDescriptor has a layer name, demand that the compiled TTFont - # be supplied by the caller. This spares us from modifying MasterFinder. - if master.layerName and master.font is None: - raise VarLibValidationError( - f"Designspace source '{master.name or ''}' specified a " - "layer name but lacks the required TTFont object in the 'font' " - "attribute." - ) - - return designspace.loadSourceFonts(_open_font, master_finder=master_finder) - - -class MasterFinder(object): - def __init__(self, template): - self.template = template - - def __call__(self, src_path): - fullname = os.path.abspath(src_path) - dirname, basename = os.path.split(fullname) - stem, ext = os.path.splitext(basename) - path = self.template.format( - fullname=fullname, - dirname=dirname, - basename=basename, - stem=stem, - ext=ext, - ) - return os.path.normpath(path) - - -def main(args=None): - """Build variable fonts from a designspace file and masters""" - from argparse import ArgumentParser - from fontTools import configLogger - - parser = ArgumentParser(prog="varLib", description=main.__doc__) - parser.add_argument("designspace") - output_group = parser.add_mutually_exclusive_group() - output_group.add_argument( - "-o", metavar="OUTPUTFILE", dest="outfile", default=None, help="output file" - ) - output_group.add_argument( - "-d", - "--output-dir", - metavar="OUTPUTDIR", - default=None, - help="output dir (default: same as input designspace file)", - ) - parser.add_argument( - "-x", - metavar="TAG", - dest="exclude", - action="append", - default=[], - help="exclude table", - ) - parser.add_argument( - "--disable-iup", - dest="optimize", - action="store_false", - help="do not perform IUP optimization", - ) - parser.add_argument( - "--no-colr-layer-reuse", - dest="colr_layer_reuse", - action="store_false", - help="do not rebuild variable COLR table to optimize COLR layer reuse", - ) - parser.add_argument( - "--drop-implied-oncurves", - action="store_true", - help=( - "drop on-curve points that can be implied when exactly in the middle of " - "two off-curve points (only applies to TrueType fonts)" - ), - ) - parser.add_argument( - "--master-finder", - default="master_ttf_interpolatable/{stem}.ttf", - help=( - "templated string used for finding binary font " - "files given the source file names defined in the " - "designspace document. 
The following special strings " - "are defined: {fullname} is the absolute source file " - "name; {basename} is the file name without its " - "directory; {stem} is the basename without the file " - "extension; {ext} is the source file extension; " - "{dirname} is the directory of the absolute file " - 'name. The default value is "%(default)s".' - ), - ) - parser.add_argument( - "--variable-fonts", - default=".*", - metavar="VF_NAME", - help=( - "Filter the list of variable fonts produced from the input " - "Designspace v5 file. By default all listed variable fonts are " - "generated. To generate a specific variable font (or variable fonts) " - 'that match a given "name" attribute, you can pass as argument ' - "the full name or a regular expression. E.g.: --variable-fonts " - '"MyFontVF_WeightOnly"; or --variable-fonts "MyFontVFItalic_.*".' - ), - ) - logging_group = parser.add_mutually_exclusive_group(required=False) - logging_group.add_argument( - "-v", "--verbose", action="store_true", help="Run more verbosely." - ) - logging_group.add_argument( - "-q", "--quiet", action="store_true", help="Turn verbosity off." - ) - options = parser.parse_args(args) - - configLogger( - level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO") - ) - - designspace_filename = options.designspace - designspace = DesignSpaceDocument.fromfile(designspace_filename) - - vf_descriptors = designspace.getVariableFonts() - if not vf_descriptors: - parser.error(f"No variable fonts in given designspace {designspace.path!r}") - - vfs_to_build = [] - for vf in vf_descriptors: - # Skip variable fonts that do not match the user's inclusion regex if given. - if not fullmatch(options.variable_fonts, vf.name): - continue - vfs_to_build.append(vf) - - if not vfs_to_build: - parser.error(f"No variable fonts matching {options.variable_fonts!r}") - - if options.outfile is not None and len(vfs_to_build) > 1: - parser.error( - "can't specify -o because there are multiple VFs to build; " - "use --output-dir, or select a single VF with --variable-fonts" - ) - - output_dir = options.output_dir - if output_dir is None: - output_dir = os.path.dirname(designspace_filename) - - vf_name_to_output_path = {} - if len(vfs_to_build) == 1 and options.outfile is not None: - vf_name_to_output_path[vfs_to_build[0].name] = options.outfile - else: - for vf in vfs_to_build: - filename = vf.filename if vf.filename is not None else vf.name + ".{ext}" - vf_name_to_output_path[vf.name] = os.path.join(output_dir, filename) - - finder = MasterFinder(options.master_finder) - - vfs = build_many( - designspace, - finder, - exclude=options.exclude, - optimize=options.optimize, - colr_layer_reuse=options.colr_layer_reuse, - drop_implied_oncurves=options.drop_implied_oncurves, - ) - - for vf_name, vf in vfs.items(): - ext = "otf" if vf.sfntVersion == "OTTO" else "ttf" - output_path = vf_name_to_output_path[vf_name].format(ext=ext) - output_dir = os.path.dirname(output_path) - if output_dir: - os.makedirs(output_dir, exist_ok=True) - log.info("Saving variation font %s", output_path) - vf.save(output_path) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) > 1: - sys.exit(main()) - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/varStore.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/varStore.py deleted file mode 100644 index 
55d70e278d4ead2f4734d32d577152f058598c04..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/varStore.py +++ /dev/null @@ -1,703 +0,0 @@ -from fontTools.misc.roundTools import noRound, otRound -from fontTools.misc.intTools import bit_count -from fontTools.ttLib.tables import otTables as ot -from fontTools.varLib.models import supportScalar -from fontTools.varLib.builder import ( - buildVarRegionList, - buildVarStore, - buildVarRegion, - buildVarData, -) -from functools import partial -from collections import defaultdict -from heapq import heappush, heappop - - -NO_VARIATION_INDEX = ot.NO_VARIATION_INDEX -ot.VarStore.NO_VARIATION_INDEX = NO_VARIATION_INDEX - - -def _getLocationKey(loc): - return tuple(sorted(loc.items(), key=lambda kv: kv[0])) - - -class OnlineVarStoreBuilder(object): - def __init__(self, axisTags): - self._axisTags = axisTags - self._regionMap = {} - self._regionList = buildVarRegionList([], axisTags) - self._store = buildVarStore(self._regionList, []) - self._data = None - self._model = None - self._supports = None - self._varDataIndices = {} - self._varDataCaches = {} - self._cache = {} - - def setModel(self, model): - self.setSupports(model.supports) - self._model = model - - def setSupports(self, supports): - self._model = None - self._supports = list(supports) - if not self._supports[0]: - del self._supports[0] # Drop base master support - self._cache = {} - self._data = None - - def finish(self, optimize=True): - self._regionList.RegionCount = len(self._regionList.Region) - self._store.VarDataCount = len(self._store.VarData) - for data in self._store.VarData: - data.ItemCount = len(data.Item) - data.calculateNumShorts(optimize=optimize) - return self._store - - def _add_VarData(self): - regionMap = self._regionMap - regionList = self._regionList - - regions = self._supports - regionIndices = [] - for region in regions: - key = _getLocationKey(region) - idx = regionMap.get(key) - if idx is None: - varRegion = buildVarRegion(region, self._axisTags) - idx = regionMap[key] = len(regionList.Region) - regionList.Region.append(varRegion) - regionIndices.append(idx) - - # Check if we have one already... - key = tuple(regionIndices) - varDataIdx = self._varDataIndices.get(key) - if varDataIdx is not None: - self._outer = varDataIdx - self._data = self._store.VarData[varDataIdx] - self._cache = self._varDataCaches[key] - if len(self._data.Item) == 0xFFFF: - # This is full. Need new one. - varDataIdx = None - - if varDataIdx is None: - self._data = buildVarData(regionIndices, [], optimize=False) - self._outer = len(self._store.VarData) - self._store.VarData.append(self._data) - self._varDataIndices[key] = self._outer - if key not in self._varDataCaches: - self._varDataCaches[key] = {} - self._cache = self._varDataCaches[key] - - def storeMasters(self, master_values, *, round=round): - deltas = self._model.getDeltas(master_values, round=round) - base = deltas.pop(0) - return base, self.storeDeltas(deltas, round=noRound) - - def storeDeltas(self, deltas, *, round=round): - deltas = [round(d) for d in deltas] - if len(deltas) == len(self._supports) + 1: - deltas = tuple(deltas[1:]) - else: - assert len(deltas) == len(self._supports) - deltas = tuple(deltas) - - varIdx = self._cache.get(deltas) - if varIdx is not None: - return varIdx - - if not self._data: - self._add_VarData() - inner = len(self._data.Item) - if inner == 0xFFFF: - # Full array. Start new one. 
- self._add_VarData() - return self.storeDeltas(deltas) - self._data.addItem(deltas, round=noRound) - - varIdx = (self._outer << 16) + inner - self._cache[deltas] = varIdx - return varIdx - - -def VarData_addItem(self, deltas, *, round=round): - deltas = [round(d) for d in deltas] - - countUs = self.VarRegionCount - countThem = len(deltas) - if countUs + 1 == countThem: - deltas = tuple(deltas[1:]) - else: - assert countUs == countThem, (countUs, countThem) - deltas = tuple(deltas) - self.Item.append(list(deltas)) - self.ItemCount = len(self.Item) - - -ot.VarData.addItem = VarData_addItem - - -def VarRegion_get_support(self, fvar_axes): - return { - fvar_axes[i].axisTag: (reg.StartCoord, reg.PeakCoord, reg.EndCoord) - for i, reg in enumerate(self.VarRegionAxis) - if reg.PeakCoord != 0 - } - - -ot.VarRegion.get_support = VarRegion_get_support - - -def VarStore___bool__(self): - return bool(self.VarData) - - -ot.VarStore.__bool__ = VarStore___bool__ - - -class VarStoreInstancer(object): - def __init__(self, varstore, fvar_axes, location={}): - self.fvar_axes = fvar_axes - assert varstore is None or varstore.Format == 1 - self._varData = varstore.VarData if varstore else [] - self._regions = varstore.VarRegionList.Region if varstore else [] - self.setLocation(location) - - def setLocation(self, location): - self.location = dict(location) - self._clearCaches() - - def _clearCaches(self): - self._scalars = {} - - def _getScalar(self, regionIdx): - scalar = self._scalars.get(regionIdx) - if scalar is None: - support = self._regions[regionIdx].get_support(self.fvar_axes) - scalar = supportScalar(self.location, support) - self._scalars[regionIdx] = scalar - return scalar - - @staticmethod - def interpolateFromDeltasAndScalars(deltas, scalars): - delta = 0.0 - for d, s in zip(deltas, scalars): - if not s: - continue - delta += d * s - return delta - - def __getitem__(self, varidx): - major, minor = varidx >> 16, varidx & 0xFFFF - if varidx == NO_VARIATION_INDEX: - return 0.0 - varData = self._varData - scalars = [self._getScalar(ri) for ri in varData[major].VarRegionIndex] - deltas = varData[major].Item[minor] - return self.interpolateFromDeltasAndScalars(deltas, scalars) - - def interpolateFromDeltas(self, varDataIndex, deltas): - varData = self._varData - scalars = [self._getScalar(ri) for ri in varData[varDataIndex].VarRegionIndex] - return self.interpolateFromDeltasAndScalars(deltas, scalars) - - -# -# Optimizations -# -# retainFirstMap - If true, major 0 mappings are retained. Deltas for unused indices are zeroed -# advIdxes - Set of major 0 indices for advance deltas to be listed first. Other major 0 indices follow. - - -def VarStore_subset_varidxes( - self, varIdxes, optimize=True, retainFirstMap=False, advIdxes=set() -): - # Sort out used varIdxes by major/minor. 
- used = {} - for varIdx in varIdxes: - if varIdx == NO_VARIATION_INDEX: - continue - major = varIdx >> 16 - minor = varIdx & 0xFFFF - d = used.get(major) - if d is None: - d = used[major] = set() - d.add(minor) - del varIdxes - - # - # Subset VarData - # - - varData = self.VarData - newVarData = [] - varDataMap = {NO_VARIATION_INDEX: NO_VARIATION_INDEX} - for major, data in enumerate(varData): - usedMinors = used.get(major) - if usedMinors is None: - continue - newMajor = len(newVarData) - newVarData.append(data) - - items = data.Item - newItems = [] - if major == 0 and retainFirstMap: - for minor in range(len(items)): - newItems.append( - items[minor] if minor in usedMinors else [0] * len(items[minor]) - ) - varDataMap[minor] = minor - else: - if major == 0: - minors = sorted(advIdxes) + sorted(usedMinors - advIdxes) - else: - minors = sorted(usedMinors) - for minor in minors: - newMinor = len(newItems) - newItems.append(items[minor]) - varDataMap[(major << 16) + minor] = (newMajor << 16) + newMinor - - data.Item = newItems - data.ItemCount = len(data.Item) - - data.calculateNumShorts(optimize=optimize) - - self.VarData = newVarData - self.VarDataCount = len(self.VarData) - - self.prune_regions() - - return varDataMap - - -ot.VarStore.subset_varidxes = VarStore_subset_varidxes - - -def VarStore_prune_regions(self): - """Remove unused VarRegions.""" - # - # Subset VarRegionList - # - - # Collect. - usedRegions = set() - for data in self.VarData: - usedRegions.update(data.VarRegionIndex) - # Subset. - regionList = self.VarRegionList - regions = regionList.Region - newRegions = [] - regionMap = {} - for i in sorted(usedRegions): - regionMap[i] = len(newRegions) - newRegions.append(regions[i]) - regionList.Region = newRegions - regionList.RegionCount = len(regionList.Region) - # Map. - for data in self.VarData: - data.VarRegionIndex = [regionMap[i] for i in data.VarRegionIndex] - - -ot.VarStore.prune_regions = VarStore_prune_regions - - -def _visit(self, func): - """Recurse down from self, if type of an object is ot.Device, - call func() on it. 
Works on otData-style classes.""" - - if type(self) == ot.Device: - func(self) - - elif isinstance(self, list): - for that in self: - _visit(that, func) - - elif hasattr(self, "getConverters") and not hasattr(self, "postRead"): - for conv in self.getConverters(): - that = getattr(self, conv.name, None) - if that is not None: - _visit(that, func) - - elif isinstance(self, ot.ValueRecord): - for that in self.__dict__.values(): - _visit(that, func) - - -def _Device_recordVarIdx(self, s): - """Add VarIdx in this Device table (if any) to the set s.""" - if self.DeltaFormat == 0x8000: - s.add((self.StartSize << 16) + self.EndSize) - - -def Object_collect_device_varidxes(self, varidxes): - adder = partial(_Device_recordVarIdx, s=varidxes) - _visit(self, adder) - - -ot.GDEF.collect_device_varidxes = Object_collect_device_varidxes -ot.GPOS.collect_device_varidxes = Object_collect_device_varidxes - - -def _Device_mapVarIdx(self, mapping, done): - """Map VarIdx in this Device table (if any) through mapping.""" - if id(self) in done: - return - done.add(id(self)) - if self.DeltaFormat == 0x8000: - varIdx = mapping[(self.StartSize << 16) + self.EndSize] - self.StartSize = varIdx >> 16 - self.EndSize = varIdx & 0xFFFF - - -def Object_remap_device_varidxes(self, varidxes_map): - mapper = partial(_Device_mapVarIdx, mapping=varidxes_map, done=set()) - _visit(self, mapper) - - -ot.GDEF.remap_device_varidxes = Object_remap_device_varidxes -ot.GPOS.remap_device_varidxes = Object_remap_device_varidxes - - -class _Encoding(object): - def __init__(self, chars): - self.chars = chars - self.width = bit_count(chars) - self.columns = self._columns(chars) - self.overhead = self._characteristic_overhead(self.columns) - self.items = set() - - def append(self, row): - self.items.add(row) - - def extend(self, lst): - self.items.update(lst) - - def get_room(self): - """Maximum number of bytes that can be added to characteristic - while still being beneficial to merge it into another one.""" - count = len(self.items) - return max(0, (self.overhead - 1) // count - self.width) - - room = property(get_room) - - def get_gain(self): - """Maximum possible byte gain from merging this into another - characteristic.""" - count = len(self.items) - return max(0, self.overhead - count) - - gain = property(get_gain) - - def gain_sort_key(self): - return self.gain, self.chars - - def width_sort_key(self): - return self.width, self.chars - - @staticmethod - def _characteristic_overhead(columns): - """Returns overhead in bytes of encoding this characteristic - as a VarData.""" - c = 4 + 6 # 4 bytes for LOffset, 6 bytes for VarData header - c += bit_count(columns) * 2 - return c - - @staticmethod - def _columns(chars): - cols = 0 - i = 1 - while chars: - if chars & 0b1111: - cols |= i - chars >>= 4 - i <<= 1 - return cols - - def gain_from_merging(self, other_encoding): - combined_chars = other_encoding.chars | self.chars - combined_width = bit_count(combined_chars) - combined_columns = self.columns | other_encoding.columns - combined_overhead = _Encoding._characteristic_overhead(combined_columns) - combined_gain = ( - +self.overhead - + other_encoding.overhead - - combined_overhead - - (combined_width - self.width) * len(self.items) - - (combined_width - other_encoding.width) * len(other_encoding.items) - ) - return combined_gain - - -class _EncodingDict(dict): - def __missing__(self, chars): - r = self[chars] = _Encoding(chars) - return r - - def add_row(self, row): - chars = self._row_characteristics(row) - self[chars].append(row) - - 
@staticmethod - def _row_characteristics(row): - """Returns encoding characteristics for a row.""" - longWords = False - - chars = 0 - i = 1 - for v in row: - if v: - chars += i - if not (-128 <= v <= 127): - chars += i * 0b0010 - if not (-32768 <= v <= 32767): - longWords = True - break - i <<= 4 - - if longWords: - # Redo; only allow 2byte/4byte encoding - chars = 0 - i = 1 - for v in row: - if v: - chars += i * 0b0011 - if not (-32768 <= v <= 32767): - chars += i * 0b1100 - i <<= 4 - - return chars - - -def VarStore_optimize(self, use_NO_VARIATION_INDEX=True, quantization=1): - """Optimize storage. Returns mapping from old VarIdxes to new ones.""" - - # Overview: - # - # For each VarData row, we first extend it with zeroes to have - # one column per region in VarRegionList. We then group the - # rows into _Encoding objects, by their "characteristic" bitmap. - # The characteristic bitmap is a binary number representing how - # many bytes each column of the data takes up to encode. Each - # column is encoded in four bits. For example, if a column has - # only values in the range -128..127, it would only have a single - # bit set in the characteristic bitmap for that column. If it has - # values in the range -32768..32767, it would have two bits set. - # The number of ones in the characteristic bitmap is the "width" - # of the encoding. - # - # Each encoding as such has a number of "active" (ie. non-zero) - # columns. The overhead of encoding the characteristic bitmap - # is 10 bytes, plus 2 bytes per active column. - # - # When an encoding is merged into another one, if the characteristic - # of the old encoding is a subset of the new one, then the overhead - # of the old encoding is completely eliminated. However, each row - # now would require more bytes to encode, to the tune of one byte - # per characteristic bit that is active in the new encoding but not - # in the old one. The number of bits that can be added to an encoding - # while still beneficial to merge it into another encoding is called - # the "room" for that encoding. - # - # The "gain" of an encodings is the maximum number of bytes we can - # save by merging it into another encoding. The "gain" of merging - # two encodings is how many bytes we save by doing so. - # - # High-level algorithm: - # - # - Each encoding has a minimal way to encode it. However, because - # of the overhead of encoding the characteristic bitmap, it may - # be beneficial to merge two encodings together, if there is - # gain in doing so. As such, we need to search for the best - # such successive merges. - # - # Algorithm: - # - # - Put all encodings into a "todo" list. - # - # - Sort todo list by decreasing gain (for stability). - # - # - Make a priority-queue of the gain from combining each two - # encodings in the todo list. The priority queue is sorted by - # decreasing gain. Only positive gains are included. - # - # - While priority queue is not empty: - # - Pop the first item from the priority queue, - # - Merge the two encodings it represents, - # - Remove the two encodings from the todo list, - # - Insert positive gains from combining the new encoding with - # all existing todo list items into the priority queue, - # - If a todo list item with the same characteristic bitmap as - # the new encoding exists, remove it from the todo list and - # merge it into the new encoding. - # - Insert the new encoding into the todo list, - # - # - Encode all remaining items in the todo list. 
- - # TODO - # Check that no two VarRegions are the same; if they are, fold them. - - n = len(self.VarRegionList.Region) # Number of columns - zeroes = [0] * n - - front_mapping = {} # Map from old VarIdxes to full row tuples - - encodings = _EncodingDict() - - # Collect all items into a set of full rows (with lots of zeroes.) - for major, data in enumerate(self.VarData): - regionIndices = data.VarRegionIndex - - for minor, item in enumerate(data.Item): - row = list(zeroes) - - if quantization == 1: - for regionIdx, v in zip(regionIndices, item): - row[regionIdx] += v - else: - for regionIdx, v in zip(regionIndices, item): - row[regionIdx] += ( - round(v / quantization) * quantization - ) # TODO https://github.com/fonttools/fonttools/pull/3126#discussion_r1205439785 - - row = tuple(row) - - if use_NO_VARIATION_INDEX and not any(row): - front_mapping[(major << 16) + minor] = None - continue - - encodings.add_row(row) - front_mapping[(major << 16) + minor] = row - - # Prepare for the main algorithm. - todo = sorted(encodings.values(), key=_Encoding.gain_sort_key) - del encodings - - # Repeatedly pick two best encodings to combine, and combine them. - - heap = [] - for i, encoding in enumerate(todo): - for j in range(i + 1, len(todo)): - other_encoding = todo[j] - combining_gain = encoding.gain_from_merging(other_encoding) - if combining_gain > 0: - heappush(heap, (-combining_gain, i, j)) - - while heap: - _, i, j = heappop(heap) - if todo[i] is None or todo[j] is None: - continue - - encoding, other_encoding = todo[i], todo[j] - todo[i], todo[j] = None, None - - # Combine the two encodings - combined_chars = other_encoding.chars | encoding.chars - combined_encoding = _Encoding(combined_chars) - combined_encoding.extend(encoding.items) - combined_encoding.extend(other_encoding.items) - - for k, enc in enumerate(todo): - if enc is None: - continue - - # In the unlikely event that the same encoding exists already, - # combine it. - if enc.chars == combined_chars: - combined_encoding.extend(enc.items) - todo[k] = None - continue - - combining_gain = combined_encoding.gain_from_merging(enc) - if combining_gain > 0: - heappush(heap, (-combining_gain, k, len(todo))) - - todo.append(combined_encoding) - - encodings = [encoding for encoding in todo if encoding is not None] - - # Assemble final store. - back_mapping = {} # Mapping from full rows to new VarIdxes - encodings.sort(key=_Encoding.width_sort_key) - self.VarData = [] - for major, encoding in enumerate(encodings): - data = ot.VarData() - self.VarData.append(data) - data.VarRegionIndex = range(n) - data.VarRegionCount = len(data.VarRegionIndex) - data.Item = sorted(encoding.items) - for minor, item in enumerate(data.Item): - back_mapping[item] = (major << 16) + minor - - # Compile final mapping. - varidx_map = {NO_VARIATION_INDEX: NO_VARIATION_INDEX} - for k, v in front_mapping.items(): - varidx_map[k] = back_mapping[v] if v is not None else NO_VARIATION_INDEX - - # Remove unused regions. - self.prune_regions() - - # Recalculate things and go home. 
- self.VarRegionList.RegionCount = len(self.VarRegionList.Region) - self.VarDataCount = len(self.VarData) - for data in self.VarData: - data.ItemCount = len(data.Item) - data.optimize() - - return varidx_map - - -ot.VarStore.optimize = VarStore_optimize - - -def main(args=None): - """Optimize a font's GDEF variation store""" - from argparse import ArgumentParser - from fontTools import configLogger - from fontTools.ttLib import TTFont - from fontTools.ttLib.tables.otBase import OTTableWriter - - parser = ArgumentParser(prog="varLib.varStore", description=main.__doc__) - parser.add_argument("--quantization", type=int, default=1) - parser.add_argument("fontfile") - parser.add_argument("outfile", nargs="?") - options = parser.parse_args(args) - - # TODO: allow user to configure logging via command-line options - configLogger(level="INFO") - - quantization = options.quantization - fontfile = options.fontfile - outfile = options.outfile - - font = TTFont(fontfile) - gdef = font["GDEF"] - store = gdef.table.VarStore - - writer = OTTableWriter() - store.compile(writer, font) - size = len(writer.getAllData()) - print("Before: %7d bytes" % size) - - varidx_map = store.optimize(quantization=quantization) - - writer = OTTableWriter() - store.compile(writer, font) - size = len(writer.getAllData()) - print("After: %7d bytes" % size) - - if outfile is not None: - gdef.table.remap_device_varidxes(varidx_map) - if "GPOS" in font: - font["GPOS"].table.remap_device_varidxes(varidx_map) - - font.save(outfile) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) > 1: - sys.exit(main()) - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/Damnbro/andite-anything-v4.0/app.py b/spaces/Damnbro/andite-anything-v4.0/app.py deleted file mode 100644 index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000 --- a/spaces/Damnbro/andite-anything-v4.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/andite/anything-v4.0").launch() \ No newline at end of file diff --git a/spaces/Dauzy/whisper-webui/src/vad.py b/spaces/Dauzy/whisper-webui/src/vad.py deleted file mode 100644 index e68ee7391e93f539a05d548601f2d87168bb1282..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/src/vad.py +++ /dev/null @@ -1,568 +0,0 @@ -from abc import ABC, abstractmethod -from collections import Counter, deque -import time - -from typing import Any, Deque, Iterator, List, Dict - -from pprint import pprint -from src.hooks.progressListener import ProgressListener -from src.hooks.subTaskProgressListener import SubTaskProgressListener -from src.hooks.whisperProgressHook import create_progress_listener_handle -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -from src.segments import merge_timestamps -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -# Workaround for https://github.com/tensorflow/tensorflow/issues/48797 -try: - import tensorflow as tf -except ModuleNotFoundError: - # Error handling - pass - -import torch - -import ffmpeg -import numpy as np - -from src.utils import format_timestamp -from enum import Enum - -class NonSpeechStrategy(Enum): - """ - Ignore non-speech frames segments. - """ - SKIP = 1 - """ - Just treat non-speech segments as speech. - """ - CREATE_SEGMENT = 2 - """ - Expand speech segments into subsequent non-speech segments. 
- """ - EXPAND_SEGMENT = 3 - -# Defaults for Silero -SPEECH_TRESHOLD = 0.3 - -# Minimum size of segments to process -MIN_SEGMENT_DURATION = 1 - -# The maximum time for texts from old segments to be used in the next segment -MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled) -PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this - -VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio - -class TranscriptionConfig(ABC): - def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - self.non_speech_strategy = non_speech_strategy - self.segment_padding_left = segment_padding_left - self.segment_padding_right = segment_padding_right - self.max_silent_period = max_silent_period - self.max_merge_size = max_merge_size - self.max_prompt_window = max_prompt_window - self.initial_segment_index = initial_segment_index - -class PeriodicTranscriptionConfig(TranscriptionConfig): - def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index) - self.periodic_duration = periodic_duration - -class AbstractTranscription(ABC): - def __init__(self, sampling_rate: int = 16000): - self.sampling_rate = sampling_rate - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - return load_audio(str, self.sampling_rate, start_time, duration) - - def is_transcribe_timestamps_fast(self): - """ - Determine if get_transcribe_timestamps is fast enough to not need parallelization. - """ - return False - - @abstractmethod - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - return - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method, - after merging the given segments using the specified configuration. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. 
- """ - merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size, - config.segment_padding_left, config.segment_padding_right) - - if config.non_speech_strategy != NonSpeechStrategy.SKIP: - # Expand segments to include the gaps between them - if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT): - # When we have a prompt window, we create speech segments betwen each segment if we exceed the merge size - merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size) - elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT: - # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment) - merged = self.expand_gaps(merged, total_duration=total_duration) - else: - raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy)) - - print("Transcribing non-speech:") - pprint(merged) - return merged - - def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig, - progressListener: ProgressListener = None): - """ - Transcribe the given audo file. - - Parameters - ---------- - audio: str - The audio file. - whisperCallable: WhisperCallback - A callback object to call to transcribe each segment. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - - try: - max_audio_duration = self.get_audio_duration(audio, config) - timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration) - - # Get speech timestamps from full audio file - merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration) - - # A deque of transcribed segments that is passed to the next segment as a prompt - prompt_window = deque() - - print("Processing timestamps:") - pprint(merged) - - result = { - 'text': "", - 'segments': [], - 'language': "" - } - languageCounter = Counter() - detected_language = None - - segment_index = config.initial_segment_index - - # Calculate progress - progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0 - progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged]) - - # For each time segment, run whisper - for segment in merged: - segment_index += 1 - segment_start = segment['start'] - segment_end = segment['end'] - segment_expand_amount = segment.get('expand_amount', 0) - segment_gap = segment.get('gap', False) - - segment_duration = segment_end - segment_start - - if segment_duration < MIN_SEGMENT_DURATION: - continue - - # Audio to run on Whisper - segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration)) - # Previous segments to use as a prompt - segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None - - # Detected language - detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None - - print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ", - segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language) - - perf_start_time = time.perf_counter() - - scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration, - sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration) - segment_result = 
whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener) - - perf_end_time = time.perf_counter() - print("Whisper took {} seconds".format(perf_end_time - perf_start_time)) - - adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration) - - # Propagate expand amount to the segments - if (segment_expand_amount > 0): - segment_without_expansion = segment_duration - segment_expand_amount - - for adjusted_segment in adjusted_segments: - adjusted_segment_end = adjusted_segment['end'] - - # Add expand amount if the segment got expanded - if (adjusted_segment_end > segment_without_expansion): - adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion - - # Append to output - result['text'] += segment_result['text'] - result['segments'].extend(adjusted_segments) - - # Increment detected language - if not segment_gap: - languageCounter[segment_result['language']] += 1 - - # Update prompt window - self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config) - - if detected_language is not None: - result['language'] = detected_language - finally: - # Notify progress listener that we are done - if progressListener is not None: - progressListener.on_finished() - return result - - def get_audio_duration(self, audio: str, config: TranscriptionConfig): - return get_audio_duration(audio) - - def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig): - if (config.max_prompt_window is not None and config.max_prompt_window > 0): - # Add segments to the current prompt window (unless it is a speech gap) - if not segment_gap: - for segment in adjusted_segments: - if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB: - prompt_window.append(segment) - - while (len(prompt_window) > 0): - first_end_time = prompt_window[0].get('end', 0) - # Time expanded in the segments should be discounted from the prompt window - first_expand_time = prompt_window[0].get('expand_amount', 0) - - if (first_end_time - first_expand_time < segment_end - config.max_prompt_window): - prompt_window.popleft() - else: - break - - def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float): - result = [] - last_end_time = 0 - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - if (last_end_time != segment_start): - delta = segment_start - last_end_time - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } ) - - last_end_time = segment_end - result.append(segment) - - # Also include total duration if specified - if (total_duration is not None and last_end_time < total_duration): - delta = total_duration - segment_start - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } ) - - return result - - # Expand the end time of each segment to the start of the next segment - def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 
1): - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - # Expand if the gap actually exists - if (delta >= 0): - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - - result.append(current_segment) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - if (last_segment['end'] < total_duration): - last_segment = last_segment.copy() - last_segment['end'] = total_duration - result[-1] = last_segment - - return result - - def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - expanded = False - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - if (max_expand_size is not None and delta <= max_expand_size): - # Just expand the current segment - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - expanded = True - - result.append(current_segment) - - # Add a gap to the next segment if needed - if (delta >= 0 and not expanded): - result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } ) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - delta = total_duration - last_segment['end'] - - if (delta > 0): - if (max_expand_size is not None and delta <= max_expand_size): - # Expand the last segment - last_segment = last_segment.copy() - last_segment['expand_amount'] = delta - last_segment['end'] = total_duration - result[-1] = last_segment - else: - result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } ) - - return result - - def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None): - result = [] - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - # Filter segments? 
- if (max_source_time is not None): - if (segment_start > max_source_time): - continue - segment_end = min(max_source_time, segment_end) - - new_segment = segment.copy() - - # Add to start and end - new_segment['start'] = segment_start + adjust_seconds - new_segment['end'] = segment_end + adjust_seconds - - # Handle words - if ('words' in new_segment): - for word in new_segment['words']: - # Adjust start and end - word['start'] = word['start'] + adjust_seconds - word['end'] = word['end'] + adjust_seconds - - result.append(new_segment) - return result - - def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float): - result = [] - - for entry in timestamps: - start = entry['start'] - end = entry['end'] - - result.append({ - 'start': start * factor, - 'end': end * factor - }) - return result - - -class VadSileroTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None): - super().__init__(sampling_rate=sampling_rate) - self.model = None - self.cache = cache - self._initialize_model() - - def _initialize_model(self): - if (self.cache is not None): - model_key = "VadSileroTranscription" - self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model) - print("Loaded Silerio model from cache.") - else: - self.model, self.get_speech_timestamps = self._create_model() - print("Created Silerio model") - - def _create_model(self): - model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad') - - # Silero does not benefit from multi-threading - torch.set_num_threads(1) # JIT - (get_speech_timestamps, _, _, _, _) = utils - - return model, get_speech_timestamps - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - result = [] - - print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time)) - perf_start_time = time.perf_counter() - - # Divide procesisng of audio into chunks - chunk_start = start_time - - while (chunk_start < end_time): - chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK) - - print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration))) - wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration)) - - sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD) - seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate) - adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration) - - #pprint(adjusted) - - result.extend(adjusted) - chunk_start += chunk_duration - - perf_end_time = time.perf_counter() - print("VAD processing took {} seconds".format(perf_end_time - perf_start_time)) - - return result - - def __getstate__(self): - # We only need the sampling rate - return { 'sampling_rate': self.sampling_rate } - - def __setstate__(self, state): - self.sampling_rate = state['sampling_rate'] - self.model = None - # Use the global cache - self.cache = GLOBAL_MODEL_CACHE - self._initialize_model() - -# A very simple VAD that just marks every N seconds as speech -class VadPeriodicTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000): - super().__init__(sampling_rate=sampling_rate) - - def is_transcribe_timestamps_fast(self): - # This is a very fast VAD - no 
need to parallelize it - return True - - def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float): - result = [] - - # Generate a timestamp every N seconds - start_timestamp = start_time - - while (start_timestamp < end_time): - end_timestamp = min(start_timestamp + config.periodic_duration, end_time) - segment_duration = end_timestamp - start_timestamp - - # Minimum duration is 1 second - if (segment_duration >= 1): - result.append( { 'start': start_timestamp, 'end': end_timestamp } ) - - start_timestamp = end_timestamp - - return result - -def get_audio_duration(file: str): - return float(ffmpeg.probe(file)["format"]["duration"]) - -def load_audio(file: str, sample_rate: int = 16000, - start_time: str = None, duration: str = None): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - start_time: str - The start time, using the standard FFMPEG time duration syntax, or None to disable. - - duration: str - The duration, using the standard FFMPEG time duration syntax, or None to disable. - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - inputArgs = {'threads': 0} - - if (start_time is not None): - inputArgs['ss'] = start_time - if (duration is not None): - inputArgs['t'] = duration - - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - out, _ = ( - ffmpeg.input(file, **inputArgs) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate) - .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 \ No newline at end of file diff --git a/spaces/DeeeTeeee01/SentimentAnalysis/app.py b/spaces/DeeeTeeee01/SentimentAnalysis/app.py deleted file mode 100644 index 64bf20baa659c5317a6bd1df334ff12ac865dfa1..0000000000000000000000000000000000000000 --- a/spaces/DeeeTeeee01/SentimentAnalysis/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import streamlit as st -import transformers -import torch - -# Load the model and tokenizer -model = transformers.AutoModelForSequenceClassification.from_pretrained("DeeeTeeee01/mytest_trainer_roberta-base") -tokenizer = transformers.AutoTokenizer.from_pretrained("DeeeTeeee01/mytest_trainer_roberta-base") - -# Define the function for sentiment analysis -@st.cache_resource -def predict_sentiment(text): - # Load the pipeline - pipeline = transformers.pipeline("sentiment-analysis", model = "DeeeTeeee01/mytest_trainer_roberta-base", tokenizer= "DeeeTeeee01/mytest_trainer_roberta-base") - - - # Predict the sentiment - prediction = pipeline(text) - sentiment = prediction[0]["label"] - score = prediction[0]["score"] - - return sentiment, score - -# Setting the page configurations -st.set_page_config( - page_title="Sentiment Analysis App", - page_icon=":smile:", - layout="wide", - initial_sidebar_state="auto", -) - -# Add description and title -st.write(""" -# Twit Analyzer -Please type your text and click the Predict button to know if your text has a positive, negative or neutral sentiment! 
-""") - -# Add image -image = st.image("sentiment.jpeg", width=400) - -# Get user input -text = st.text_input("Type here:") - -# Add Predict button -predict_button = st.button("Predict") - -# Define the CSS style for the app -st.markdown( -""" - -""", -unsafe_allow_html=True -) - -# Show sentiment output -if predict_button and text: - sentiment, score = predict_sentiment(text) - if sentiment == "Positive": - st.success(f"The sentiment is {sentiment} with a score of {score*100:.2f}%!") - elif sentiment == "Negative": - st.error(f"The sentiment is {sentiment} with a score of {score*100:.2f}%!") - else: - st.warning(f"The sentiment is {sentiment} with a score of {score*100:.2f}%!") - - - - -# import streamlit as st -# import transformers -# import torch - -# # Load the model and tokenizer -# model = transformers.AutoModelForSequenceClassification.from_pretrained("DeeeTeeee01/twitter-xlm-roberta-base-sentiment_dee") -# tokenizer = transformers.AutoTokenizer.from_pretrained("DeeeTeeee01/twitter-xlm-roberta-base-sentiment_dee") - -# # Define the function for sentiment analysis -# @st.cache_resource -# def predict_sentiment(text): -# # Load the pipeline. -# pipeline = transformers.pipeline("sentiment-analysis") - -# # Predict the sentiment. -# prediction = pipeline(text) -# sentiment = prediction[0]["label"] -# score = prediction[0]["score"] - -# return sentiment, score - -# # Setting the page configurations -# st.set_page_config( -# page_title="Sentiment Analysis App", -# page_icon=":smile:", -# layout="wide", -# initial_sidebar_state="auto", -# ) - -# # Add description and title -# st.write(""" -# # Predict if your text is Positive, Negative or Nuetral ... -# Please type your text and press ENTER key to know if your text is positive, negative, or neutral sentiment! -# """) - - -# # Add image -# image = st.image("sentiment.jpeg", width=400) - -# # Get user input -# text = st.text_input("Type here:") - -# # Define the CSS style for the app -# st.markdown( -# """ -# -# """, -# unsafe_allow_html=True -# ) - -# # Show sentiment output -# if text: -# sentiment, score = predict_sentiment(text) -# if sentiment == "Positive": -# st.success(f"The sentiment is {sentiment} with a score of {score*100:.2f}%!") -# elif sentiment == "Negative": -# st.error(f"The sentiment is {sentiment} with a score of {score*100:.2f}%!") -# else: -# st.warning(f"The sentiment is {sentiment} with a score of {score*100:.2f}%!") - - - - - - diff --git a/spaces/DrSong/ChatGLM-6B-ChatBot/app.py b/spaces/DrSong/ChatGLM-6B-ChatBot/app.py deleted file mode 100644 index c83d372bb94d59e27d627bd57412c1c732480420..0000000000000000000000000000000000000000 --- a/spaces/DrSong/ChatGLM-6B-ChatBot/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import psutil -import gradio as gr - -from functools import partial -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -mem = psutil.virtual_memory() - -tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) - -model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).quantize(bits=4, compile_parallel_kernel=True, parallel_num=2).cpu().float() - -def chat(query, history=[]): - _, history = model.chat(tokenizer, query, history, max_length=512) - return history, history - -description = "This is an unofficial chatbot application based on open source model ChatGLM-6B(https://github.com/THUDM/ChatGLM-6B), running on cpu(therefore max_length is limited to 512). 
\nIf you want to use this chat bot in your space, 'Duplicate this space' by click the button close to 'Linked Models'. \n" -title = "ChatGLM-6B Chatbot" -examples = [["Hello?"], ["你好。"], ["介绍清华"]] - -chatbot_interface = gr.Interface( - fn=chat, - title=title, - description=description, - examples=examples, - inputs=["text", "state"], - outputs=["chatbot", "state"] -) - -chatbot_interface.launch() diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/custom_ops.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/custom_ops.py deleted file mode 100644 index 702471e2006af6858345c1225c1e55b0acd17d32..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/custom_ops.py +++ /dev/null @@ -1,181 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""TensorFlow custom ops builder. -""" - -import glob -import os -import re -import uuid -import hashlib -import tempfile -import shutil -import tensorflow as tf -from tensorflow.python.client import device_lib # pylint: disable=no-name-in-module - -from .. import util - -#---------------------------------------------------------------------------- -# Global configs. - -cuda_cache_path = None -cuda_cache_version_tag = 'v1' -do_not_hash_included_headers = True # Speed up compilation by assuming that headers included by the CUDA code never change. -verbose = True # Print status messages to stdout. - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - hostx64_paths = sorted(glob.glob('C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64'), reverse=True) - if hostx64_paths != []: - return hostx64_paths[0] - hostx64_paths = sorted(glob.glob('C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64'), reverse=True) - if hostx64_paths != []: - return hostx64_paths[0] - hostx64_paths = sorted(glob.glob('C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64'), reverse=True) - if hostx64_paths != []: - return hostx64_paths[0] - vc_bin_dir = 'C:/Program Files (x86)/Microsoft Visual Studio 14.0/vc/bin' - if os.path.isdir(vc_bin_dir): - return vc_bin_dir - return None - -def _get_compute_cap(device): - caps_str = device.physical_device_desc - m = re.search('compute capability: (\\d+).(\\d+)', caps_str) - major = m.group(1) - minor = m.group(2) - return (major, minor) - -def _get_cuda_gpu_arch_string(): - gpus = [x for x in device_lib.list_local_devices() if x.device_type == 'GPU'] - if len(gpus) == 0: - raise RuntimeError('No GPU devices found') - (major, minor) = _get_compute_cap(gpus[0]) - return 'sm_%s%s' % (major, minor) - -def _run_cmd(cmd): - with os.popen(cmd) as pipe: - output = pipe.read() - status = pipe.close() - if status is not None: - raise RuntimeError('NVCC returned an error. 
See below for full command line and output log:\n\n%s\n\n%s' % (cmd, output)) - -def _prepare_nvcc_cli(opts): - cmd = 'nvcc ' + opts.strip() - cmd += ' --disable-warnings' - cmd += ' --include-path "%s"' % tf.sysconfig.get_include() - cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'protobuf_archive', 'src') - cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'com_google_absl') - cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'eigen_archive') - - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - # Require that _find_compiler_bindir succeeds on Windows. Allow - # nvcc to use whatever is the default on Linux. - if os.name == 'nt': - raise RuntimeError('Could not find MSVC/GCC/CLANG installation on this computer. Check compiler_bindir_search_path list in "%s".' % __file__) - else: - cmd += ' --compiler-bindir "%s"' % compiler_bindir - cmd += ' 2>&1' - return cmd - -#---------------------------------------------------------------------------- -# Main entry point. - -_plugin_cache = dict() - -def get_plugin(cuda_file, extra_nvcc_options=[]): - cuda_file_base = os.path.basename(cuda_file) - cuda_file_name, cuda_file_ext = os.path.splitext(cuda_file_base) - - # Already in cache? - if cuda_file in _plugin_cache: - return _plugin_cache[cuda_file] - - # Setup plugin. - if verbose: - print('Setting up TensorFlow plugin "%s": ' % cuda_file_base, end='', flush=True) - try: - # Hash CUDA source. - md5 = hashlib.md5() - with open(cuda_file, 'rb') as f: - md5.update(f.read()) - md5.update(b'\n') - - # Hash headers included by the CUDA code by running it through the preprocessor. - if not do_not_hash_included_headers: - if verbose: - print('Preprocessing... ', end='', flush=True) - with tempfile.TemporaryDirectory() as tmp_dir: - tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + cuda_file_ext) - _run_cmd(_prepare_nvcc_cli('"%s" --preprocess -o "%s" --keep --keep-dir "%s"' % (cuda_file, tmp_file, tmp_dir))) - with open(tmp_file, 'rb') as f: - bad_file_str = ('"' + cuda_file.replace('\\', '/') + '"').encode('utf-8') # __FILE__ in error check macros - good_file_str = ('"' + cuda_file_base + '"').encode('utf-8') - for ln in f: - if not ln.startswith(b'# ') and not ln.startswith(b'#line '): # ignore line number pragmas - ln = ln.replace(bad_file_str, good_file_str) - md5.update(ln) - md5.update(b'\n') - - # Select compiler configs. - compile_opts = '' - if os.name == 'nt': - compile_opts += '"%s"' % os.path.join(tf.sysconfig.get_lib(), 'python', '_pywrap_tensorflow_internal.lib') - elif os.name == 'posix': - compile_opts += f' --compiler-options \'-fPIC\'' - compile_opts += f' --compiler-options \'{" ".join(tf.sysconfig.get_compile_flags())}\'' - compile_opts += f' --linker-options \'{" ".join(tf.sysconfig.get_link_flags())}\'' - else: - assert False # not Windows or Linux, w00t? - compile_opts += f' --gpu-architecture={_get_cuda_gpu_arch_string()}' - compile_opts += ' --use_fast_math' - for opt in extra_nvcc_options: - compile_opts += ' ' + opt - nvcc_cmd = _prepare_nvcc_cli(compile_opts) - - # Hash build configuration. - md5.update(('nvcc_cmd: ' + nvcc_cmd).encode('utf-8') + b'\n') - md5.update(('tf.VERSION: ' + tf.VERSION).encode('utf-8') + b'\n') - md5.update(('cuda_cache_version_tag: ' + cuda_cache_version_tag).encode('utf-8') + b'\n') - - # Compile if not already compiled. 
- cache_dir = util.make_cache_dir_path('tflib-cudacache') if cuda_cache_path is None else cuda_cache_path - bin_file_ext = '.dll' if os.name == 'nt' else '.so' - bin_file = os.path.join(cache_dir, cuda_file_name + '_' + md5.hexdigest() + bin_file_ext) - if not os.path.isfile(bin_file): - if verbose: - print('Compiling... ', end='', flush=True) - with tempfile.TemporaryDirectory() as tmp_dir: - tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + bin_file_ext) - _run_cmd(nvcc_cmd + ' "%s" --shared -o "%s" --keep --keep-dir "%s"' % (cuda_file, tmp_file, tmp_dir)) - os.makedirs(cache_dir, exist_ok=True) - intermediate_file = os.path.join(cache_dir, cuda_file_name + '_' + uuid.uuid4().hex + '_tmp' + bin_file_ext) - shutil.copyfile(tmp_file, intermediate_file) - os.rename(intermediate_file, bin_file) # atomic - - # Load. - if verbose: - print('Loading... ', end='', flush=True) - plugin = tf.load_op_library(bin_file) - - # Add to cache. - _plugin_cache[cuda_file] = plugin - if verbose: - print('Done.', flush=True) - return plugin - - except: - if verbose: - print('Failed!', flush=True) - raise - -#---------------------------------------------------------------------------- diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/evaluation/__init__.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/evaluation/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/fpn.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/fpn.py deleted file mode 100644 index 7df65a178ce4a105d5c803ff5aa18aa56c44d374..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/fpn.py +++ /dev/null @@ -1,312 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -from typing import Callable, Dict, List, Optional, Tuple, Union - -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.init import xavier_uniform_, constant_, uniform_, normal_ -from torch.cuda.amp import autocast - -from detectron2.config import configurable -from detectron2.layers import Conv2d, DeformConv, ShapeSpec, get_norm -from detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from ..transformer_decoder.position_encoding import PositionEmbeddingSine -from ..transformer_decoder.transformer import TransformerEncoder, TransformerEncoderLayer, _get_clones, _get_activation_fn - - -def build_pixel_decoder(cfg, input_shape): - """ - Build a pixel decoder from `cfg.MODEL.MASK_FORMER.PIXEL_DECODER_NAME`. - """ - name = cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME - model = SEM_SEG_HEADS_REGISTRY.get(name)(cfg, input_shape) - forward_features = getattr(model, "forward_features", None) - if not callable(forward_features): - raise ValueError( - "Only SEM_SEG_HEADS with forward_features method can be used as pixel decoder. " - f"Please implement forward_features for {name} to only return mask features." - ) - return model - - -# This is a modified FPN decoder. -@SEM_SEG_HEADS_REGISTRY.register() -class BasePixelDecoder(nn.Module): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - ): - """ - NOTE: this interface is experimental. 
- Args: - input_shape: shapes (channels and stride) of the input features - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. - norm (str or callable): normalization for all conv layers - """ - super().__init__() - - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - feature_channels = [v.channels for k, v in input_shape] - - lateral_convs = [] - output_convs = [] - - use_bias = norm == "" - for idx, in_channels in enumerate(feature_channels): - if idx == len(self.in_features) - 1: - output_norm = get_norm(norm, conv_dim) - output_conv = Conv2d( - in_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(output_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(None) - output_convs.append(output_conv) - else: - lateral_norm = get_norm(norm, conv_dim) - output_norm = get_norm(norm, conv_dim) - - lateral_conv = Conv2d( - in_channels, conv_dim, kernel_size=1, bias=use_bias, norm=lateral_norm - ) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(lateral_conv) - weight_init.c2_xavier_fill(output_conv) - self.add_module("adapter_{}".format(idx + 1), lateral_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(lateral_conv) - output_convs.append(output_conv) - # Place convs into top-down order (from low to high resolution) - # to make the top-down computation in forward clearer. 
- self.lateral_convs = lateral_convs[::-1] - self.output_convs = output_convs[::-1] - - self.mask_dim = mask_dim - self.mask_features = Conv2d( - conv_dim, - mask_dim, - kernel_size=3, - stride=1, - padding=1, - ) - weight_init.c2_xavier_fill(self.mask_features) - - self.maskformer_num_feature_levels = 3 # always use 3 scales - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = {} - ret["input_shape"] = { - k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - } - ret["conv_dim"] = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["norm"] = cfg.MODEL.SEM_SEG_HEAD.NORM - return ret - - def forward_features(self, features): - multi_scale_features = [] - num_cur_levels = 0 - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[::-1]): - x = features[f] - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - if lateral_conv is None: - y = output_conv(x) - else: - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(y, size=cur_fpn.shape[-2:], mode="nearest") - y = output_conv(y) - if num_cur_levels < self.maskformer_num_feature_levels: - multi_scale_features.append(y) - num_cur_levels += 1 - return self.mask_features(y), None, multi_scale_features - - def forward(self, features, targets=None): - logger = logging.getLogger(__name__) - logger.warning("Calling forward() may cause unpredicted behavior of PixelDecoder module.") - return self.forward_features(features) - - -class TransformerEncoderOnly(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, pos_embed): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - if mask is not None: - mask = mask.flatten(1) - - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) - return memory.permute(1, 2, 0).view(bs, c, h, w) - - -# This is a modified FPN decoder with extra Transformer encoder that processes the lowest-resolution feature map. -@SEM_SEG_HEADS_REGISTRY.register() -class TransformerEncoderPixelDecoder(BasePixelDecoder): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - transformer_dropout: float, - transformer_nheads: int, - transformer_dim_feedforward: int, - transformer_enc_layers: int, - transformer_pre_norm: bool, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - ): - """ - NOTE: this interface is experimental. 
- Args: - input_shape: shapes (channels and stride) of the input features - transformer_dropout: dropout probability in transformer - transformer_nheads: number of heads in transformer - transformer_dim_feedforward: dimension of feedforward network - transformer_enc_layers: number of transformer encoder layers - transformer_pre_norm: whether to use pre-layernorm or not - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. - norm (str or callable): normalization for all conv layers - """ - super().__init__(input_shape, conv_dim=conv_dim, mask_dim=mask_dim, norm=norm) - - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - feature_strides = [v.stride for k, v in input_shape] - feature_channels = [v.channels for k, v in input_shape] - - in_channels = feature_channels[len(self.in_features) - 1] - self.input_proj = Conv2d(in_channels, conv_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.input_proj) - self.transformer = TransformerEncoderOnly( - d_model=conv_dim, - dropout=transformer_dropout, - nhead=transformer_nheads, - dim_feedforward=transformer_dim_feedforward, - num_encoder_layers=transformer_enc_layers, - normalize_before=transformer_pre_norm, - ) - N_steps = conv_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - # update layer - use_bias = norm == "" - output_norm = get_norm(norm, conv_dim) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(output_conv) - delattr(self, "layer_{}".format(len(self.in_features))) - self.add_module("layer_{}".format(len(self.in_features)), output_conv) - self.output_convs[0] = output_conv - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["transformer_dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT - ret["transformer_nheads"] = cfg.MODEL.MASK_FORMER.NHEADS - ret["transformer_dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD - ret[ - "transformer_enc_layers" - ] = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS # a separate config - ret["transformer_pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM - return ret - - def forward_features(self, features): - multi_scale_features = [] - num_cur_levels = 0 - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[::-1]): - x = features[f] - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - if lateral_conv is None: - transformer = self.input_proj(x) - pos = self.pe_layer(x) - transformer = self.transformer(transformer, None, pos) - y = output_conv(transformer) - # save intermediate feature as input to Transformer decoder - transformer_encoder_features = transformer - else: - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(y, size=cur_fpn.shape[-2:], mode="nearest") - y = output_conv(y) - if num_cur_levels < self.maskformer_num_feature_levels: - multi_scale_features.append(y) - num_cur_levels += 1 - return self.mask_features(y), transformer_encoder_features, multi_scale_features - - def forward(self, features, targets=None): - logger = logging.getLogger(__name__) - logger.warning("Calling forward() may cause unpredicted behavior of 
PixelDecoder module.") - return self.forward_features(features) diff --git a/spaces/Enderfga/mtCNN_sysu/train.py b/spaces/Enderfga/mtCNN_sysu/train.py deleted file mode 100644 index 637bb1aea59c5c468a3e091d4bc0b8c4dc3a24ca..0000000000000000000000000000000000000000 --- a/spaces/Enderfga/mtCNN_sysu/train.py +++ /dev/null @@ -1,351 +0,0 @@ -from utils.dataloader import TrainImageReader,convert_image_to_tensor,ImageDB -import datetime -import os -from utils.models import PNet,RNet,ONet,LossFn -import torch -#from torch.autograd import Variable 新版本中已弃用 -import utils.config as config -import argparse -import sys -sys.path.append(os.getcwd()) -import numpy as np - - - -def compute_accuracy(prob_cls, gt_cls): - - prob_cls = torch.squeeze(prob_cls) - gt_cls = torch.squeeze(gt_cls) - - #we only need the detection which >= 0 - mask = torch.ge(gt_cls,0) - #get valid element - valid_gt_cls = torch.masked_select(gt_cls,mask) - valid_prob_cls = torch.masked_select(prob_cls,mask) - size = min(valid_gt_cls.size()[0], valid_prob_cls.size()[0]) - prob_ones = torch.ge(valid_prob_cls,0.6).float() - right_ones = torch.eq(prob_ones,valid_gt_cls).float() - - ## if size == 0 meaning that your gt_labels are all negative, landmark or part - - return torch.div(torch.mul(torch.sum(right_ones),float(1.0)),float(size)) ## divided by zero meaning that your gt_labels are all negative, landmark or part - - -def train_pnet(model_store_path, end_epoch,imdb, - batch_size,frequent=10,base_lr=0.01,lr_epoch_decay=[9],use_cuda=True,load=''): - - #create lr_list - lr_epoch_decay.append(end_epoch+1) - lr_list = np.zeros(end_epoch) - lr_t = base_lr - for i in range(len(lr_epoch_decay)): - if i==0: - lr_list[0:lr_epoch_decay[i]-1]=lr_t - else: - lr_list[lr_epoch_decay[i-1]-1:lr_epoch_decay[i]-1]=lr_t - lr_t*=0.1 - - - if not os.path.exists(model_store_path): - os.makedirs(model_store_path) - - lossfn = LossFn() - net = PNet(is_train=True, use_cuda=use_cuda) - if load!='': - net.load_state_dict(torch.load(load)) - print('model loaded',load) - net.train() - - if use_cuda: - net.cuda() - - - optimizer = torch.optim.Adam(net.parameters(), lr=lr_list[0]) - #optimizer = torch.optim.SGD(net.parameters(), lr=lr_list[0]) - - train_data=TrainImageReader(imdb,12,batch_size,shuffle=True) - - #frequent = 10 - for cur_epoch in range(1,end_epoch+1): - train_data.reset() # shuffle - for param in optimizer.param_groups: - param['lr'] = lr_list[cur_epoch-1] - for batch_idx,(image,(gt_label,gt_bbox,gt_landmark))in enumerate(train_data): - - im_tensor = [ convert_image_to_tensor(image[i,:,:,:]) for i in range(image.shape[0]) ] - im_tensor = torch.stack(im_tensor) - - im_tensor.requires_grad = True - gt_label = torch.from_numpy(gt_label).float() - gt_label.requires_grad = True - - gt_bbox = torch.from_numpy(gt_bbox).float() - gt_bbox.requires_grad = True - # gt_landmark = Variable(torch.from_numpy(gt_landmark).float()) - - if use_cuda: - im_tensor = im_tensor.cuda() - gt_label = gt_label.cuda() - gt_bbox = gt_bbox.cuda() - # gt_landmark = gt_landmark.cuda() - - cls_pred, box_offset_pred = net(im_tensor) - # all_loss, cls_loss, offset_loss = lossfn.loss(gt_label=label_y,gt_offset=bbox_y, pred_label=cls_pred, pred_offset=box_offset_pred) - - cls_loss = lossfn.cls_loss(gt_label,cls_pred) - box_offset_loss = lossfn.box_loss(gt_label,gt_bbox,box_offset_pred) - # landmark_loss = lossfn.landmark_loss(gt_label,gt_landmark,landmark_offset_pred) - - all_loss = cls_loss*1.0+box_offset_loss*0.5 - - if batch_idx %frequent==0: - 
accuracy=compute_accuracy(cls_pred,gt_label) - - show1 = accuracy.data.cpu().numpy() - show2 = cls_loss.data.cpu().numpy() - show3 = box_offset_loss.data.cpu().numpy() - # show4 = landmark_loss.data.cpu().numpy() - show5 = all_loss.data.cpu().numpy() - - print("%s : Epoch: %d, Step: %d, accuracy: %s, det loss: %s, bbox loss: %s, all_loss: %s, lr:%s "%(datetime.datetime.now(),cur_epoch,batch_idx, show1,show2,show3,show5,lr_list[cur_epoch-1])) - - optimizer.zero_grad() - all_loss.backward() - optimizer.step() - - torch.save(net.state_dict(), os.path.join(model_store_path,"pnet_epoch_%d.pt" % cur_epoch)) - torch.save(net, os.path.join(model_store_path,"pnet_epoch_model_%d.pkl" % cur_epoch)) - - - - -def train_rnet(model_store_path, end_epoch,imdb, - batch_size,frequent=50,base_lr=0.01,lr_epoch_decay=[9],use_cuda=True,load=''): - - #create lr_list - lr_epoch_decay.append(end_epoch+1) - lr_list = np.zeros(end_epoch) - lr_t = base_lr - for i in range(len(lr_epoch_decay)): - if i==0: - lr_list[0:lr_epoch_decay[i]-1]=lr_t - else: - lr_list[lr_epoch_decay[i-1]-1:lr_epoch_decay[i]-1]=lr_t - lr_t*=0.1 - #print(lr_list) - if not os.path.exists(model_store_path): - os.makedirs(model_store_path) - - lossfn = LossFn() - net = RNet(is_train=True, use_cuda=use_cuda) - net.train() - if load!='': - net.load_state_dict(torch.load(load)) - print('model loaded',load) - if use_cuda: - net.cuda() - - - optimizer = torch.optim.Adam(net.parameters(), lr=base_lr) - - train_data=TrainImageReader(imdb,24,batch_size,shuffle=True) - - - for cur_epoch in range(1,end_epoch+1): - train_data.reset() - for param in optimizer.param_groups: - param['lr'] = lr_list[cur_epoch-1] - - for batch_idx,(image,(gt_label,gt_bbox,gt_landmark))in enumerate(train_data): - - im_tensor = [ convert_image_to_tensor(image[i,:,:,:]) for i in range(image.shape[0]) ] - im_tensor = torch.stack(im_tensor) - - im_tensor.requires_grad = True - gt_label = torch.from_numpy(gt_label).float() - gt_label.requires_grad = True - - gt_bbox = torch.from_numpy(gt_bbox).float() - gt_bbox.requires_grad = True - gt_landmark = torch.from_numpy(gt_landmark).float() - gt_landmark.requires_grad = True - - if use_cuda: - im_tensor = im_tensor.cuda() - gt_label = gt_label.cuda() - gt_bbox = gt_bbox.cuda() - gt_landmark = gt_landmark.cuda() - - cls_pred, box_offset_pred = net(im_tensor) - # all_loss, cls_loss, offset_loss = lossfn.loss(gt_label=label_y,gt_offset=bbox_y, pred_label=cls_pred, pred_offset=box_offset_pred) - - cls_loss = lossfn.cls_loss(gt_label,cls_pred) - box_offset_loss = lossfn.box_loss(gt_label,gt_bbox,box_offset_pred) - # landmark_loss = lossfn.landmark_loss(gt_label,gt_landmark,landmark_offset_pred) - - all_loss = cls_loss*1.0+box_offset_loss*0.5 - - if batch_idx%frequent==0: - accuracy=compute_accuracy(cls_pred,gt_label) - - show1 = accuracy.data.cpu().numpy() - show2 = cls_loss.data.cpu().numpy() - show3 = box_offset_loss.data.cpu().numpy() - # show4 = landmark_loss.data.cpu().numpy() - show5 = all_loss.data.cpu().numpy() - - print("%s : Epoch: %d, Step: %d, accuracy: %s, det loss: %s, bbox loss: %s, all_loss: %s, lr:%s "%(datetime.datetime.now(), cur_epoch, batch_idx, show1, show2, show3, show5, lr_list[cur_epoch-1])) - - optimizer.zero_grad() - all_loss.backward() - optimizer.step() - - torch.save(net.state_dict(), os.path.join(model_store_path,"rnet_epoch_%d.pt" % cur_epoch)) - torch.save(net, os.path.join(model_store_path,"rnet_epoch_model_%d.pkl" % cur_epoch)) - - -def train_onet(model_store_path, end_epoch,imdb, - 
batch_size,frequent=50,base_lr=0.01,lr_epoch_decay=[9],use_cuda=True,load=''): - #create lr_list - lr_epoch_decay.append(end_epoch+1) - lr_list = np.zeros(end_epoch) - lr_t = base_lr - for i in range(len(lr_epoch_decay)): - if i==0: - lr_list[0:lr_epoch_decay[i]-1]=lr_t - else: - lr_list[lr_epoch_decay[i-1]-1:lr_epoch_decay[i]-1]=lr_t - lr_t*=0.1 - #print(lr_list) - - if not os.path.exists(model_store_path): - os.makedirs(model_store_path) - - lossfn = LossFn() - net = ONet(is_train=True) - if load!='': - net.load_state_dict(torch.load(load)) - print('model loaded',load) - net.train() - #print(use_cuda) - if use_cuda: - net.cuda() - - - optimizer = torch.optim.Adam(net.parameters(), lr=base_lr) - - train_data=TrainImageReader(imdb,48,batch_size,shuffle=True) - - - for cur_epoch in range(1,end_epoch+1): - - train_data.reset() - for param in optimizer.param_groups: - param['lr'] = lr_list[cur_epoch-1] - for batch_idx,(image,(gt_label,gt_bbox,gt_landmark))in enumerate(train_data): - # print("batch id {0}".format(batch_idx)) - im_tensor = [ convert_image_to_tensor(image[i,:,:,:]) for i in range(image.shape[0]) ] - im_tensor = torch.stack(im_tensor) - - im_tensor.requires_grad = True - gt_label = torch.from_numpy(gt_label).float() - gt_label.requires_grad = True - - gt_bbox = torch.from_numpy(gt_bbox).float() - gt_bbox.requires_grad = True - gt_landmark = torch.from_numpy(gt_landmark).float() - gt_landmark.requires_grad = True - - if use_cuda: - im_tensor = im_tensor.cuda() - gt_label = gt_label.cuda() - gt_bbox = gt_bbox.cuda() - gt_landmark = gt_landmark.cuda() - - cls_pred, box_offset_pred, landmark_offset_pred = net(im_tensor) - - # all_loss, cls_loss, offset_loss = lossfn.loss(gt_label=label_y,gt_offset=bbox_y, pred_label=cls_pred, pred_offset=box_offset_pred) - - cls_loss = lossfn.cls_loss(gt_label,cls_pred) - box_offset_loss = lossfn.box_loss(gt_label,gt_bbox,box_offset_pred) - landmark_loss = lossfn.landmark_loss(gt_label,gt_landmark,landmark_offset_pred) - - all_loss = cls_loss*0.8+box_offset_loss*0.6+landmark_loss*1.5 - - if batch_idx%frequent==0: - accuracy=compute_accuracy(cls_pred,gt_label) - - show1 = accuracy.data.cpu().numpy() - show2 = cls_loss.data.cpu().numpy() - show3 = box_offset_loss.data.cpu().numpy() - show4 = landmark_loss.data.cpu().numpy() - show5 = all_loss.data.cpu().numpy() - - print("%s : Epoch: %d, Step: %d, accuracy: %s, det loss: %s, bbox loss: %s, landmark loss: %s, all_loss: %s, lr:%s "%(datetime.datetime.now(),cur_epoch,batch_idx, show1,show2,show3,show4,show5,base_lr)) - #print("%s : Epoch: %d, Step: %d, accuracy: %s, det loss: %s, bbox loss: %s, all_loss: %s, lr:%s "%(datetime.datetime.now(),cur_epoch,batch_idx, show1,show2,show3,show5,lr_list[cur_epoch-1])) - - optimizer.zero_grad() - all_loss.backward() - optimizer.step() - - torch.save(net.state_dict(), os.path.join(model_store_path,"onet_epoch_%d.pt" % cur_epoch)) - torch.save(net, os.path.join(model_store_path,"onet_epoch_model_%d.pkl" % cur_epoch)) - - - - - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train MTCNN', - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - parser.add_argument('--net', dest='net', help='which net to train', type=str) - - parser.add_argument('--anno_file', dest='annotation_file', help='training data annotation file', type=str) - parser.add_argument('--model_path', dest='model_store_path', help='training model store directory', - default=config.MODEL_STORE_DIR, type=str) - parser.add_argument('--end_epoch', dest='end_epoch', help='end epoch 
of training', - default=config.END_EPOCH, type=int) - parser.add_argument('--frequent', dest='frequent', help='frequency of logging', - default=200, type=int) - parser.add_argument('--lr', dest='lr', help='learning rate', - default=config.TRAIN_LR, type=float) - parser.add_argument('--batch_size', dest='batch_size', help='train batch size', - default=config.TRAIN_BATCH_SIZE, type=int) - parser.add_argument('--gpu', dest='use_cuda', help='train with gpu', - default=config.USE_CUDA, type=bool) - parser.add_argument('--load', dest='load', help='load model', type=str) - - args = parser.parse_args() - return args - -def train_net(annotation_file, model_store_path, - end_epoch=16, frequent=200, lr=0.01,lr_epoch_decay=[9], - batch_size=128, use_cuda=False,load='',net='pnet'): - if net=='pnet': - annotation_file = os.path.join(config.ANNO_STORE_DIR,config.PNET_TRAIN_IMGLIST_FILENAME) - elif net=='rnet': - annotation_file = os.path.join(config.ANNO_STORE_DIR,config.RNET_TRAIN_IMGLIST_FILENAME) - elif net=='onet': - annotation_file = os.path.join(config.ANNO_STORE_DIR,config.ONET_TRAIN_IMGLIST_FILENAME) - imagedb = ImageDB(annotation_file) - gt_imdb = imagedb.load_imdb() - print('DATASIZE',len(gt_imdb)) - gt_imdb = imagedb.append_flipped_images(gt_imdb) - print('FLIP DATASIZE',len(gt_imdb)) - if net=="pnet": - print("Training Pnet:") - train_pnet(model_store_path=model_store_path, end_epoch=end_epoch, imdb=gt_imdb, batch_size=batch_size, frequent=frequent, base_lr=lr,lr_epoch_decay=lr_epoch_decay, use_cuda=use_cuda,load=load) - elif net=="rnet": - print("Training Rnet:") - train_rnet(model_store_path=model_store_path, end_epoch=end_epoch, imdb=gt_imdb, batch_size=batch_size, frequent=frequent, base_lr=lr,lr_epoch_decay=lr_epoch_decay, use_cuda=use_cuda,load=load) - elif net=="onet": - print("Training Onet:") - train_onet(model_store_path=model_store_path, end_epoch=end_epoch, imdb=gt_imdb, batch_size=batch_size, frequent=frequent, base_lr=lr,lr_epoch_decay=lr_epoch_decay, use_cuda=use_cuda,load=load) - -if __name__ == '__main__': - args = parse_args() - lr_epoch_decay = [9] - train_net(annotation_file=args.annotation_file, model_store_path=args.model_store_path, - end_epoch=args.end_epoch, frequent=args.frequent, lr=args.lr,lr_epoch_decay=lr_epoch_decay,batch_size=args.batch_size, use_cuda=args.use_cuda,load=args.load,net=args.net) \ No newline at end of file diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Feynlee/Receipt_Parser/app.py b/spaces/Feynlee/Receipt_Parser/app.py deleted file mode 100644 index b2970455cf84fa4ba72975cd95787146b904cf73..0000000000000000000000000000000000000000 --- a/spaces/Feynlee/Receipt_Parser/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import cv2 -import re -import pytesseract -import numpy as np -import gradio as gr -import pandas as pd -from matplotlib import pyplot as plt - - -# get grayscale image -def get_grayscale(image): - return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - -# noise removal -def remove_noise(image): - return cv2.medianBlur(image,5) - -#thresholding -def thresholding(image, thresh_hold=0, which='ostu'): - if which == 'ostu': - return cv2.threshold(image, thresh_hold, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] - elif which == 'simple': - _, img = cv2.threshold(image,thresh_hold,255,cv2.THRESH_BINARY) - 
return img - elif which == 'adaptive': - return cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2) - -#dilation -def dilate(image): - kernel = np.ones((5,5),np.uint8) - return cv2.dilate(image, kernel, iterations = 1) - -#erosion -def erode(image): - kernel = np.ones((5,5),np.uint8) - return cv2.erode(image, kernel, iterations = 1) - -#opening - erosion followed by dilation -def opening(image): - kernel = np.ones((5,5),np.uint8) - return cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel) - -#canny edge detection -def canny(image): - return cv2.Canny(image, 100, 200) - -#skew correction -def deskew(image): - coords = np.column_stack(np.where(image > 0)) - angle = cv2.minAreaRect(coords)[-1] - if angle < -45: - angle = -(90 + angle) - else: - angle = -angle - (h, w) = image.shape[:2] - center = (w // 2, h // 2) - M = cv2.getRotationMatrix2D(center, angle, 1.0) - rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE) - return rotated - -#template matching -def match_template(image, template): - return cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED) - -def show_cvimg(img, figsize=(15, 15)): - fig, ax = plt.subplots(dpi=80, figsize=figsize) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - ax.imshow(img) - - -def extract_purchased_items(txt): - pat = '.*\s*(FB|FA|NB)' - p = re.compile("(.*) (\d+[,\.\/\:\']*\d+) (FB|FA|NB)") - - txts = txt.split('\n') - items = [] - not_parsed = [] - for t in txts: - if re.match(pat, t): - result = p.search(t) - if result is not None: - items.append({'item': result.group(1), - 'price': re.sub('[,\.\/\:\']', '.', result.group(2)), - 'type': result.group(3) - }) - else: - not_parsed.append({'not parsed': t}) - return pd.DataFrame(items), pd.DataFrame(not_parsed) - - -def parse_receipt(img, **kwargs): - # preprocessing - gray = get_grayscale(img) - thresh = thresholding(gray, **kwargs) - - # ocr - custom_config = r'--oem 3 --psm 6' - txt = pytesseract.image_to_string(thresh, config=custom_config) - - return extract_purchased_items(txt) - - -iface = gr.Interface(fn=parse_receipt, inputs="image", outputs=["dataframe", "dataframe"]) -iface.launch() \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/encoder.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/encoder.py deleted file mode 100644 index 670b5bb7682b16bea1644d036eddc0466cfefd9b..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/encoder.py +++ /dev/null @@ -1,12 +0,0 @@ -class SpeechEncoder(object): - def __init__(self,vec_path = "pretrain/checkpoint_best_legacy_500.pt",device=None): - self.model = None #This is Model - self.hidden_dim = 768 - pass - - def encoder(self,wav): - ''' - input: wav:[batchsize,signal_length] - output: embedding:[batchsize,wav_frame,hidden_dim] - ''' - pass \ No newline at end of file diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/__init__.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/__init__.py deleted file mode 100644 index a3c197bb932cfc9cf3447b7a3b52ce76db262fc9..0000000000000000000000000000000000000000 --- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -""" -A codebase for performing model inference with a text-conditional diffusion model. 
-""" diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/train.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/train.py deleted file mode 100644 index e3887e16bc17d833ca578abb049929063f30d902..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/train.py +++ /dev/null @@ -1,204 +0,0 @@ -from torch import optim -from torch.utils.data import DataLoader -from torchvision.utils import save_image -from tqdm import trange - -from Dataloader import * -from .utils import image_quality -from .utils.cls import CyclicLR -from .utils.prepare_images import * - -train_folder = "./dataset/train" -test_folder = "./dataset/test" - -img_dataset = ImageDBData( - db_file="dataset/images.db", - db_table="train_images_size_128_noise_1_rgb", - max_images=24, -) -img_data = DataLoader(img_dataset, batch_size=6, shuffle=True, num_workers=6) - -total_batch = len(img_data) -print(len(img_dataset)) - -test_dataset = ImageDBData( - db_file="dataset/test2.db", - db_table="test_images_size_128_noise_1_rgb", - max_images=None, -) -num_test = len(test_dataset) -test_data = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=1) - -criteria = nn.L1Loss() - -model = CARN_V2( - color_channels=3, - mid_channels=64, - conv=nn.Conv2d, - single_conv_size=3, - single_conv_group=1, - scale=2, - activation=nn.LeakyReLU(0.1), - SEBlock=True, - repeat_blocks=3, - atrous=(1, 1, 1), -) - -model.total_parameters() - - -# model.initialize_weights_xavier_uniform() - -# fp16 training is available in GPU only -model = network_to_half(model) -model = model.cuda() -model.load_state_dict(torch.load("CARN_model_checkpoint.pt")) - -learning_rate = 1e-4 -weight_decay = 1e-6 -optimizer = optim.Adam( - model.parameters(), lr=learning_rate, weight_decay=weight_decay, amsgrad=True -) -# optimizer = optim.SGD(model.parameters(), momentum=0.9, nesterov=True, weight_decay=weight_decay, lr=learning_rate) - -# optimizer = FP16_Optimizer(optimizer, static_loss_scale=128.0, verbose=False) -# optimizer.load_state_dict(torch.load("CARN_adam_checkpoint.pt")) - -last_iter = -1 # torch.load("CARN_scheduler_last_iter") -scheduler = CyclicLR( - optimizer, - base_lr=1e-4, - max_lr=1e-4, - step_size=3 * total_batch, - mode="triangular", - last_batch_iteration=last_iter, -) -train_loss = [] -train_ssim = [] -train_psnr = [] - -test_loss = [] -test_ssim = [] -test_psnr = [] - -# train_loss = torch.load("train_loss.pt") -# train_ssim = torch.load("train_ssim.pt") -# train_psnr = torch.load("train_psnr.pt") -# -# test_loss = torch.load("test_loss.pt") -# test_ssim = torch.load("test_ssim.pt") -# test_psnr = torch.load("test_psnr.pt") - - -counter = 0 -iteration = 2 -ibar = trange( - iteration, - ascii=True, - maxinterval=1, - postfix={"avg_loss": 0, "train_ssim": 0, "test_ssim": 0}, -) -for i in ibar: - # batch_loss = [] - # insample_ssim = [] - # insample_psnr = [] - for index, batch in enumerate(img_data): - scheduler.batch_step() - lr_img, hr_img = batch - lr_img = lr_img.cuda().half() - hr_img = hr_img.cuda() - - # model.zero_grad() - optimizer.zero_grad() - outputs = model.forward(lr_img) - outputs = outputs.float() - loss = criteria(outputs, hr_img) - # loss.backward() - optimizer.backward(loss) - # nn.utils.clip_grad_norm_(model.parameters(), 5) - optimizer.step() - - counter += 1 - # train_loss.append(loss.item()) - - ssim = image_quality.msssim(outputs, hr_img).item() - psnr = image_quality.psnr(outputs, hr_img).item() - - ibar.set_postfix( - ratio=index / 
total_batch, - loss=loss.item(), - ssim=ssim, - batch=index, - psnr=psnr, - lr=scheduler.current_lr, - ) - train_loss.append(loss.item()) - train_ssim.append(ssim) - train_psnr.append(psnr) - - # +++++++++++++++++++++++++++++++++++++ - # save checkpoints by iterations - # ------------------------------------- - - if (counter + 1) % 500 == 0: - torch.save(model.state_dict(), "CARN_model_checkpoint.pt") - torch.save(optimizer.state_dict(), "CARN_adam_checkpoint.pt") - torch.save(train_loss, "train_loss.pt") - torch.save(train_ssim, "train_ssim.pt") - torch.save(train_psnr, "train_psnr.pt") - torch.save(scheduler.last_batch_iteration, "CARN_scheduler_last_iter.pt") - - # +++++++++++++++++++++++++++++++++++++ - # End of One Epoch - # ------------------------------------- - - # one_ite_loss = np.mean(batch_loss) - # one_ite_ssim = np.mean(insample_ssim) - # one_ite_psnr = np.mean(insample_psnr) - - # print(f"One iteration loss {one_ite_loss}, ssim {one_ite_ssim}, psnr {one_ite_psnr}") - # train_loss.append(one_ite_loss) - # train_ssim.append(one_ite_ssim) - # train_psnr.append(one_ite_psnr) - - torch.save(model.state_dict(), "CARN_model_checkpoint.pt") - # torch.save(scheduler, "CARN_scheduler_optim.pt") - torch.save(optimizer.state_dict(), "CARN_adam_checkpoint.pt") - torch.save(train_loss, "train_loss.pt") - torch.save(train_ssim, "train_ssim.pt") - torch.save(train_psnr, "train_psnr.pt") - # torch.save(scheduler.last_batch_iteration, "CARN_scheduler_last_iter.pt") - - # +++++++++++++++++++++++++++++++++++++ - # Test - # ------------------------------------- - - with torch.no_grad(): - ssim = [] - batch_loss = [] - psnr = [] - for index, test_batch in enumerate(test_data): - lr_img, hr_img = test_batch - lr_img = lr_img.cuda() - hr_img = hr_img.cuda() - - lr_img_up = model(lr_img) - lr_img_up = lr_img_up.float() - loss = criteria(lr_img_up, hr_img) - - save_image([lr_img_up[0], hr_img[0]], f"check_test_imgs/{index}.png") - batch_loss.append(loss.item()) - ssim.append(image_quality.msssim(lr_img_up, hr_img).item()) - psnr.append(image_quality.psnr(lr_img_up, hr_img).item()) - - test_ssim.append(np.mean(ssim)) - test_loss.append(np.mean(batch_loss)) - test_psnr.append(np.mean(psnr)) - - torch.save(test_loss, "test_loss.pt") - torch.save(test_ssim, "test_ssim.pt") - torch.save(test_psnr, "test_psnr.pt") - -# import subprocess - -# subprocess.call(["shutdown", "/s"]) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/detectors_resnext.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 57d032fe37ed82d5ba24e761bdc014cc0ee5ac64..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,122 +0,0 @@ -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
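-
-        For intuition, the grouped-convolution width computed below is
-        width = floor(planes * base_width / base_channels) * groups when groups > 1;
-        e.g. with planes=64, groups=32, base_width=4 and base_channels=64 this gives
-        floor(64 * 4 / 64) * 32 = 128 channels for the 3x3 group convolution.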
- """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. 
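-
-    Both values are simply forwarded into every residual stage through
-    ``make_res_layer`` below; everything else is inherited from DetectoRS_ResNet.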
-    """
-
-    arch_settings = {
-        50: (Bottleneck, (3, 4, 6, 3)),
-        101: (Bottleneck, (3, 4, 23, 3)),
-        152: (Bottleneck, (3, 8, 36, 3))
-    }
-
-    def __init__(self, groups=1, base_width=4, **kwargs):
-        self.groups = groups
-        self.base_width = base_width
-        super(DetectoRS_ResNeXt, self).__init__(**kwargs)
-
-    def make_res_layer(self, **kwargs):
-        return super().make_res_layer(
-            groups=self.groups,
-            base_width=self.base_width,
-            base_channels=self.base_channels,
-            **kwargs)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index 145cadb24016eeea87fccff8171c5b0dfb78f7ab..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = [
-    '../_base_/models/pspnet_r50-d8.py',
-    '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
-    '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
-    decode_head=dict(align_corners=True),
-    auxiliary_head=dict(align_corners=True),
-    test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/compression/encodec_base_24khz.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/compression/encodec_base_24khz.py
deleted file mode 100644
index 117b2b1e496ca31b3d614672b472c9213cedb4ad..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/compression/encodec_base_24khz.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Grid search file: simply list all the experiments you want in `explorer`.
-Any new experiment added there will be scheduled.
-You can cancel an experiment by commenting its line.
-
-This grid shows how to train a base causal EnCodec model at 24 kHz.
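-
-The `explorer` below binds the `compression/encodec_base_24khz` solver and an example
-dataset, then schedules a single 8-GPU run on the configured SLURM partitions. Grids like
-this one are normally scheduled through Dora's grid runner rather than executed directly;
-the exact launch command depends on how the AudioCraft package is set up.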
-""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=8, partition=partitions) - # base causal EnCodec trained on monophonic audio sampled at 24 kHz - launcher.bind_(solver='compression/encodec_base_24khz') - # replace this by the desired dataset - launcher.bind_(dset='audio/example') - # launch xp - launcher() diff --git a/spaces/GuyYariv/AudioToken/modules/fga/atten.py b/spaces/GuyYariv/AudioToken/modules/fga/atten.py deleted file mode 100644 index 701a29f7efb0ed3e2d73d78016342b8a36f57e16..0000000000000000000000000000000000000000 --- a/spaces/GuyYariv/AudioToken/modules/fga/atten.py +++ /dev/null @@ -1,303 +0,0 @@ -#!/usr/bin/env python -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable -from itertools import product, permutations, combinations_with_replacement, chain - - -class Unary(nn.Module): - def __init__(self, embed_size): - """ - Captures local entity information - :param embed_size: the embedding dimension - """ - super(Unary, self).__init__() - self.embed = nn.Conv1d(embed_size, embed_size, 1) - self.feature_reduce = nn.Conv1d(embed_size, 1, 1) - - def forward(self, X): - X = X.transpose(1, 2) - - X_embed = self.embed(X) - - X_nl_embed = F.dropout(F.relu(X_embed)) - X_poten = self.feature_reduce(X_nl_embed) - return X_poten.squeeze(1) - - -class Pairwise(nn.Module): - def __init__(self, embed_x_size, x_spatial_dim=None, embed_y_size=None, y_spatial_dim=None): - """ - Captures interaction between utilities or entities of the same utility - :param embed_x_size: the embedding dimension of the first utility - :param x_spatial_dim: the spatial dimension of the first utility for batch norm and weighted marginalization - :param embed_y_size: the embedding dimension of the second utility (none for self-interactions) - :param y_spatial_dim: the spatial dimension of the second utility for batch norm and weighted marginalization - """ - - super(Pairwise, self).__init__() - embed_y_size = embed_y_size if y_spatial_dim is not None else embed_x_size - self.y_spatial_dim = y_spatial_dim if y_spatial_dim is not None else x_spatial_dim - - self.embed_size = max(embed_x_size, embed_y_size) - self.x_spatial_dim = x_spatial_dim - - self.embed_X = nn.Conv1d(embed_x_size, self.embed_size, 1) - self.embed_Y = nn.Conv1d(embed_y_size, self.embed_size, 1) - if x_spatial_dim is not None: - self.normalize_S = nn.BatchNorm1d(self.x_spatial_dim * self.y_spatial_dim) - - self.margin_X = nn.Conv1d(self.y_spatial_dim, 1, 1) - self.margin_Y = nn.Conv1d(self.x_spatial_dim, 1, 1) - - def forward(self, X, Y=None): - - X_t = X.transpose(1, 2) - Y_t = Y.transpose(1, 2) if Y is not None else X_t - - - X_embed = self.embed_X(X_t) - Y_embed = self.embed_Y(Y_t) - - X_norm = F.normalize(X_embed) - Y_norm = F.normalize(Y_embed) - - S = X_norm.transpose(1, 2).bmm(Y_norm) - if self.x_spatial_dim is not None: - S = self.normalize_S(S.view(-1, self.x_spatial_dim * self.y_spatial_dim)) \ - .view(-1, self.x_spatial_dim, self.y_spatial_dim) - - X_poten = self.margin_X(S.transpose(1, 2)).transpose(1, 2).squeeze(2) - Y_poten = self.margin_Y(S).transpose(1, 2).squeeze(2) - else: - X_poten = S.mean(dim=2, keepdim=False) - Y_poten = S.mean(dim=1, keepdim=False) - - if Y is None: - return X_poten - else: - return X_poten, Y_poten - - -class Atten(nn.Module): - def __init__(self, 
util_e, sharing_factor_weights=[], prior_flag=False, - sizes=[], size_force=False, pairwise_flag=True, - unary_flag=True, self_flag=True): - """ - The class performs an attention on a given list of utilities representation. - :param util_e: the embedding dimensions - :param sharing_factor_weights: To share weights, provide a dict of tuples: - {idx: (num_utils, connected utils) - Note, for efficiency, the shared utils (i.e., history, are connected to ans - and question only. - TODO: connections between shared utils - :param prior_flag: is prior factor provided - :param sizes: the spatial simension (used for batch-norm and weighted marginalization) - :param size_force: force spatial size with adaptive avg pooling. - :param pairwise_flag: use pairwise interaction between utilities - :param unary_flag: use local information - :param self_flag: use self interactions between utilitie's entities - """ - super(Atten, self).__init__() - self.util_e = util_e - - self.prior_flag = prior_flag - - self.n_utils = len(util_e) - - self.spatial_pool = nn.ModuleDict() - - self.un_models = nn.ModuleList() - - self.self_flag = self_flag - self.pairwise_flag = pairwise_flag - self.unary_flag = unary_flag - self.size_force = size_force - - if len(sizes) == 0: - sizes = [None for _ in util_e] - - self.sharing_factor_weights = sharing_factor_weights - - #force the provided size - for idx, e_dim in enumerate(util_e): - self.un_models.append(Unary(e_dim)) - if self.size_force: - self.spatial_pool[str(idx)] = nn.AdaptiveAvgPool1d(sizes[idx]) - - #Pairwise - self.pp_models = nn.ModuleDict() - for ((idx1, e_dim_1), (idx2, e_dim_2)) \ - in combinations_with_replacement(enumerate(util_e), 2): - # self - if self.self_flag and idx1 == idx2: - self.pp_models[str(idx1)] = Pairwise(e_dim_1, sizes[idx1]) - else: - if pairwise_flag: - if idx1 in self.sharing_factor_weights: - # not connected - if idx2 not in self.sharing_factor_weights[idx1][1]: - continue - if idx2 in self.sharing_factor_weights: - # not connected - if idx1 not in self.sharing_factor_weights[idx2][1]: - continue - self.pp_models[str((idx1, idx2))] = Pairwise(e_dim_1, sizes[idx1], e_dim_2, sizes[idx2]) - - # Handle reduce potentials (with scalars) - self.reduce_potentials = nn.ModuleList() - - self.num_of_potentials = dict() - - self.default_num_of_potentials = 0 - - if self.self_flag: - self.default_num_of_potentials += 1 - if self.unary_flag: - self.default_num_of_potentials += 1 - if self.prior_flag: - self.default_num_of_potentials += 1 - for idx in range(self.n_utils): - self.num_of_potentials[idx] = self.default_num_of_potentials - - ''' - All other utilities - ''' - if pairwise_flag: - for idx, (num_utils, connected_utils) in sharing_factor_weights: - for c_u in connected_utils: - self.num_of_potentials[c_u] += num_utils - self.num_of_potentials[idx] += 1 - for k in self.num_of_potentials: - if k not in self.sharing_factor_weights: - self.num_of_potentials[k] += (self.n_utils - 1) \ - - len(sharing_factor_weights) - - for idx in range(self.n_utils): - self.reduce_potentials.append(nn.Conv1d(self.num_of_potentials[idx], - 1, 1, bias=False)) - - def forward(self, utils, priors=None): - assert self.n_utils == len(utils) - assert (priors is None and not self.prior_flag) \ - or (priors is not None - and self.prior_flag - and len(priors) == self.n_utils) - b_size = utils[0].size(0) - util_factors = dict() - attention = list() - - #Force size, constant size is used for pairwise batch normalization - if self.size_force: - for i, (num_utils, _) in 
self.sharing_factor_weights.items(): - if str(i) not in self.spatial_pool.keys(): - continue - else: - high_util = utils[i] - high_util = high_util.view(num_utils * b_size, high_util.size(2), high_util.size(3)) - high_util = high_util.transpose(1, 2) - utils[i] = self.spatial_pool[str(i)](high_util).transpose(1, 2) - - for i in range(self.n_utils): - if i in self.sharing_factor_weights \ - or str(i) not in self.spatial_pool.keys(): - continue - utils[i] = utils[i].transpose(1, 2) - utils[i] = self.spatial_pool[str(i)](utils[i]).transpose(1, 2) - if self.prior_flag and priors[i] is not None: - priors[i] = self.spatial_pool[str(i)](priors[i].unsqueeze(1)).squeeze(1) - - # handle Shared weights - for i, (num_utils, connected_list) in self.sharing_factor_weights: - if self.unary_flag: - util_factors.setdefault(i, []).append(self.un_models[i](utils[i])) - - if self.self_flag: - util_factors.setdefault(i, []).append(self.pp_models[str(i)](utils[i])) - - if self.pairwise_flag: - for j in connected_list: - other_util = utils[j] - expanded_util = other_util.unsqueeze(1).expand(b_size, - num_utils, - other_util.size(1), - other_util.size(2)).contiguous().view( - b_size * num_utils, - other_util.size(1), - other_util.size(2)) - - if i < j: - factor_ij, factor_ji = self.pp_models[str((i, j))](utils[i], expanded_util) - else: - factor_ji, factor_ij = self.pp_models[str((j, i))](expanded_util, utils[i]) - util_factors[i].append(factor_ij) - util_factors.setdefault(j, []).append(factor_ji.view(b_size, num_utils, factor_ji.size(1))) - - # handle local factors - for i in range(self.n_utils): - if i in self.sharing_factor_weights: - continue - if self.unary_flag: - util_factors.setdefault(i, []).append(self.un_models[i](utils[i])) - if self.self_flag: - util_factors.setdefault(i, []).append(self.pp_models[str(i)](utils[i])) - - # joint - if self.pairwise_flag: - for (i, j) in combinations_with_replacement(range(self.n_utils), 2): - if i in self.sharing_factor_weights \ - or j in self.sharing_factor_weights: - continue - if i == j: - continue - else: - factor_ij, factor_ji = self.pp_models[str((i, j))](utils[i], utils[j]) - util_factors.setdefault(i, []).append(factor_ij) - util_factors.setdefault(j, []).append(factor_ji) - - # perform attention - for i in range(self.n_utils): - if self.prior_flag: - prior = priors[i] \ - if priors[i] is not None \ - else torch.zeros_like(util_factors[i][0], requires_grad=False).cuda() - - util_factors[i].append(prior) - - util_factors[i] = torch.cat([p if len(p.size()) == 3 else p.unsqueeze(1) - for p in util_factors[i]], dim=1) - util_factors[i] = self.reduce_potentials[i](util_factors[i]).squeeze(1) - util_factors[i] = F.softmax(util_factors[i], dim=1).unsqueeze(2) - attention.append(torch.bmm(utils[i].transpose(1, 2), util_factors[i]).squeeze(2)) - - return attention - - -class NaiveAttention(nn.Module): - def __init__(self): - """ - Used for ablation analysis - removing attention. 
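-        Roughly: instead of learned attention weights, each utility is pooled directly,
-        using the supplied prior as fixed weights when one is given and a plain mean over
-        its entities otherwise.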
- """ - super(NaiveAttention, self).__init__() - - def forward(self, utils, priors): - atten = [] - spatial_atten = [] - for u, p in zip(utils, priors): - if type(u) is tuple: - u = u[1] - num_elements = u.shape[0] - if p is not None: - u = u.view(-1, u.shape[-2], u.shape[-1]) - p = p.view(-1, p.shape[-2], p.shape[-1]) - spatial_atten.append( - torch.bmm(p.transpose(1, 2), u).squeeze(2).view(num_elements, -1, u.shape[-2], u.shape[-1])) - else: - spatial_atten.append(u.mean(2)) - continue - if p is not None: - atten.append(torch.bmm(u.transpose(1, 2), p.unsqueeze(2)).squeeze(2)) - else: - atten.append(u.mean(1)) - return atten, spatial_atten \ No newline at end of file diff --git a/spaces/HighCWu/GPEN/__init_paths.py b/spaces/HighCWu/GPEN/__init_paths.py deleted file mode 100644 index cd0eb1f793f7a6237099d75cb66496f5d32a693c..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GPEN/__init_paths.py +++ /dev/null @@ -1,21 +0,0 @@ -''' -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -''' -import os.path as osp -import sys - -def add_path(path): - if path not in sys.path: - sys.path.insert(0, path) - -this_dir = osp.dirname(__file__) - -path = osp.join(this_dir, 'retinaface') -add_path(path) - -path = osp.join(this_dir, 'sr_model') -add_path(path) - -path = osp.join(this_dir, 'face_model') -add_path(path) diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/image_degradation/__init__.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/ldm/modules/image_degradation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr -from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light diff --git a/spaces/Illumotion/Koboldcpp/otherarch/rwkv_vocab.cpp b/spaces/Illumotion/Koboldcpp/otherarch/rwkv_vocab.cpp deleted file mode 100644 index 1df437bfe2489aedefa22e51e06770a1eeb0dd04..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/rwkv_vocab.cpp +++ /dev/null @@ -1,87 +0,0 @@ -#include -#include -#include -#include - -#include "expose.h" - -std::vector rwkv_vocab; -std::vector special = {"Ā","ā","Ă","ă","Ą","ą","Ć","ć","Ĉ","ĉ","Ċ","ċ","Č","č","Ď","ď","Đ","đ","Ē","ē","Ĕ","ĕ","Ė","ė","Ę","ę","Ě","ě","Ĝ","ĝ","Ğ","ğ","Ġ","!","\"","#","$","%","&","\'","(",")","*","+",",","-",".","/","0","1","2","3","4","5","6","7","8","9",":",";","<","=",">","?","@","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z","[","\\","]","^","_","`","a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","{","|","}","~","ġ","Ģ","ģ","Ĥ","ĥ","Ħ","ħ","Ĩ","ĩ","Ī","ī","Ĭ","ĭ","Į","į","İ","ı","IJ","ij","Ĵ","ĵ","Ķ","ķ","ĸ","Ĺ","ĺ","Ļ","ļ","Ľ","ľ","Ŀ","ŀ","Ł","ł","¡","¢","£","¤","¥","¦","§","¨","©","ª","«","¬","Ń","®","¯","°","±","²","³","´","µ","¶","·","¸","¹","º","»","¼","½","¾","¿","À","Á","Â","Ã","Ä","Å","Æ","Ç","È","É","Ê","Ë","Ì","Í","Î","Ï","Ð","Ñ","Ò","Ó","Ô","Õ","Ö","×","Ø","Ù","Ú","Û","Ü","Ý","Þ","ß","à","á","â","ã","ä","å","æ","ç","è","é","ê","ë","ì","í","î","ï","ð","ñ","ò","ó","ô","õ","ö","÷","ø","ù","ú","û","ü","ý","þ","ÿ"}; - -static void replaceAll(std::string& str, const std::string& from, 
const std::string& to) { - if(from.empty()) - return; - size_t start_pos = 0; - while((start_pos = str.find(from, start_pos)) != std::string::npos) { - str.replace(start_pos, from.length(), to); - start_pos += to.length(); // In case 'to' contains 'from', like replacing 'x' with 'yx' - } -} - -static std::string hexToUnicode(const std::string& hexString) { - std::string unicodeString; - for (size_t i = 0; i < hexString.length(); i += 2) { - std::string byteString = hexString.substr(i, 2); - unsigned int byteValue = std::stoi(byteString, nullptr, 16); - unicodeString += static_cast(byteValue); - } - return unicodeString; -} - -void read_rwkv_vocab() -{ - std::string line; - auto filepath = executable_path+ "rwkv_vocab.embd"; - printf("\nReading vocab from %s",filepath.c_str()); - std::ifstream myfile(filepath); - if (myfile.is_open()) - { - int slen = special.size(); - while (myfile.good()) - { - getline(myfile, line); - for(int i=0;i>"); - while (myfile.good()) - { - getline(myfile, line); - unicodeString = hexToUnicode(line); - // printf("\n%d: %s",idx,unicodeString.c_str()); - rwkv_vocab.push_back(unicodeString); - ++idx; - } - myfile.close(); - } - - else - { - std::cout << "Unable to open RWKV world vocab file"; - } -} \ No newline at end of file diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless2d.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless2d.py deleted file mode 100644 index dc27b73affa20ab1a8a199542469a10aaf1f555a..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless2d.py +++ /dev/null @@ -1,529 +0,0 @@ -from __future__ import print_function, division - -""" -COUNTLESS performance test in Python. - -python countless2d.py ./images/NAMEOFIMAGE -""" - -import six -from six.moves import range -from collections import defaultdict -from functools import reduce -import operator -import io -import os -from PIL import Image -import math -import numpy as np -import random -import sys -import time -from tqdm import tqdm -from scipy import ndimage - -def simplest_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab = a * (a == b) # PICK(A,B) - ac = a * (a == c) # PICK(A,C) - bc = b * (b == c) # PICK(B,C) - - a = ab | ac | bc # Bitwise OR, safe b/c non-matches are zeroed - - return a + (a == 0) * d # AB || AC || BC || D - -def quick_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
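-    # Concretely, for a 2N x 2N input each of a, b, c, d below is N x N and holds,
-    # respectively, the top-left, top-right, bottom-left and bottom-right pixel of
-    # every 2x2 block.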
- factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - bc = b * (b == c) # PICK(B,C) - - a = ab_ac | bc # (PICK(A,B) || PICK(A,C)) or PICK(B,C) - return a + (a == 0) * d # AB || AC || BC || D - -def quickest_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - ab_ac |= b * (b == c) # PICK(B,C) - return ab_ac + (ab_ac == 0) * d # AB || AC || BC || D - -def quick_countless_xor(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab = a ^ (a ^ b) # a or b - ab += (ab != a) * ((ab ^ (ab ^ c)) - b) # b or c - ab += (ab == c) * ((ab ^ (ab ^ d)) - c) # c or d - return ab - -def stippled_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm - that treats zero as "background" and inflates lone - pixels. - - data is a 2D numpy array with even dimensions. - """ - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - ab_ac |= b * (b == c) # PICK(B,C) - - nonzero = a + (a == 0) * (b + (b == 0) * c) - return ab_ac + (ab_ac == 0) * (d + (d == 0) * nonzero) # AB || AC || BC || D - -def zero_corrected_countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - # allows us to prevent losing 1/2 a bit of information - # at the top end by using a bigger type. Without this 255 is handled incorrectly. - data, upgraded = upgrade_type(data) - - # offset from zero, raw countless doesn't handle 0 correctly - # we'll remove the extra 1 at the end. - data += 1 - - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
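-    # Illustrative example: a 2x2 block [[1, 1], [2, 0]] becomes [[2, 2], [3, 1]] after
-    # the +1 offset above; PICK(A,B) = 2 while PICK(A,C) and PICK(B,C) are 0, so the
-    # block downsamples to 2 - 1 = 1, the majority label of the original block.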
- factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab = a * (a == b) # PICK(A,B) - ac = a * (a == c) # PICK(A,C) - bc = b * (b == c) # PICK(B,C) - - a = ab | ac | bc # Bitwise OR, safe b/c non-matches are zeroed - - result = a + (a == 0) * d - 1 # a or d - 1 - - if upgraded: - return downgrade_type(result) - - # only need to reset data if we weren't upgraded - # b/c no copy was made in that case - data -= 1 - - return result - -def countless_extreme(data): - nonzeros = np.count_nonzero(data) - # print("nonzeros", nonzeros) - - N = reduce(operator.mul, data.shape) - - if nonzeros == N: - print("quick") - return quick_countless(data) - elif np.count_nonzero(data + 1) == N: - print("quick") - # print("upper", nonzeros) - return quick_countless(data) - else: - return countless(data) - - -def countless(data): - """ - Vectorized implementation of downsampling a 2D - image by 2 on each side using the COUNTLESS algorithm. - - data is a 2D numpy array with even dimensions. - """ - # allows us to prevent losing 1/2 a bit of information - # at the top end by using a bigger type. Without this 255 is handled incorrectly. - data, upgraded = upgrade_type(data) - - # offset from zero, raw countless doesn't handle 0 correctly - # we'll remove the extra 1 at the end. - data += 1 - - sections = [] - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - a, b, c, d = sections - - ab_ac = a * ((a == b) | (a == c)) # PICK(A,B) || PICK(A,C) w/ optimization - ab_ac |= b * (b == c) # PICK(B,C) - result = ab_ac + (ab_ac == 0) * d - 1 # (matches or d) - 1 - - if upgraded: - return downgrade_type(result) - - # only need to reset data if we weren't upgraded - # b/c no copy was made in that case - data -= 1 - - return result - -def upgrade_type(arr): - dtype = arr.dtype - - if dtype == np.uint8: - return arr.astype(np.uint16), True - elif dtype == np.uint16: - return arr.astype(np.uint32), True - elif dtype == np.uint32: - return arr.astype(np.uint64), True - - return arr, False - -def downgrade_type(arr): - dtype = arr.dtype - - if dtype == np.uint64: - return arr.astype(np.uint32) - elif dtype == np.uint32: - return arr.astype(np.uint16) - elif dtype == np.uint16: - return arr.astype(np.uint8) - - return arr - -def odd_to_even(image): - """ - To facilitate 2x2 downsampling segmentation, change an odd sized image into an even sized one. - Works by mirroring the starting 1 pixel edge of the image on odd shaped sides. - - e.g. turn a 3x3x5 image into a 4x4x5 (the x and y are what are getting downsampled) - - For example: [ 3, 2, 4 ] => [ 3, 3, 2, 4 ] which is now easy to downsample. - - """ - shape = np.array(image.shape) - - offset = (shape % 2)[:2] # x,y offset - - # detect if we're dealing with an even - # image. if so it's fine, just return. 
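-    # (offset holds a 1 for each odd x/y dimension and a 0 for each even one, so an
-    # all-zero offset means the first two dimensions are already even.)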
- if not np.any(offset): - return image - - oddshape = image.shape[:2] + offset - oddshape = np.append(oddshape, shape[2:]) - oddshape = oddshape.astype(int) - - newimg = np.empty(shape=oddshape, dtype=image.dtype) - - ox,oy = offset - sx,sy = oddshape - - newimg[0,0] = image[0,0] # corner - newimg[ox:sx,0] = image[:,0] # x axis line - newimg[0,oy:sy] = image[0,:] # y axis line - - return newimg - -def counting(array): - factor = (2, 2, 1) - shape = array.shape - - while len(shape) < 4: - array = np.expand_dims(array, axis=-1) - shape = array.shape - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(shape, factor)) - output = np.zeros(output_shape, dtype=array.dtype) - - for chan in range(0, shape[3]): - for z in range(0, shape[2]): - for x in range(0, shape[0], 2): - for y in range(0, shape[1], 2): - block = array[ x:x+2, y:y+2, z, chan ] # 2x2 block - - hashtable = defaultdict(int) - for subx, suby in np.ndindex(block.shape[0], block.shape[1]): - hashtable[block[subx, suby]] += 1 - - best = (0, 0) - for segid, val in six.iteritems(hashtable): - if best[1] < val: - best = (segid, val) - - output[ x // 2, y // 2, chan ] = best[0] - - return output - -def ndzoom(array): - if len(array.shape) == 3: - ratio = ( 1 / 2.0, 1 / 2.0, 1.0 ) - else: - ratio = ( 1 / 2.0, 1 / 2.0) - return ndimage.interpolation.zoom(array, ratio, order=1) - -def countless_if(array): - factor = (2, 2, 1) - shape = array.shape - - if len(shape) < 3: - array = array[ :,:, np.newaxis ] - shape = array.shape - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(shape, factor)) - output = np.zeros(output_shape, dtype=array.dtype) - - for chan in range(0, shape[2]): - for x in range(0, shape[0], 2): - for y in range(0, shape[1], 2): - block = array[ x:x+2, y:y+2, chan ] # 2x2 block - - if block[0,0] == block[1,0]: - pick = block[0,0] - elif block[0,0] == block[0,1]: - pick = block[0,0] - elif block[1,0] == block[0,1]: - pick = block[1,0] - else: - pick = block[1,1] - - output[ x // 2, y // 2, chan ] = pick - - return np.squeeze(output) - -def downsample_with_averaging(array): - """ - Downsample x by factor using averaging. - - @return: The downsampled array, of the same type as x. - """ - - if len(array.shape) == 3: - factor = (2,2,1) - else: - factor = (2,2) - - if np.array_equal(factor[:3], np.array([1,1,1])): - return array - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(array.shape, factor)) - temp = np.zeros(output_shape, float) - counts = np.zeros(output_shape, np.int) - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - indexing_expr = tuple(np.s_[:s] for s in part.shape) - temp[indexing_expr] += part - counts[indexing_expr] += 1 - return np.cast[array.dtype](temp / counts) - -def downsample_with_max_pooling(array): - - factor = (2,2) - - if np.all(np.array(factor, int) == 1): - return array - - sections = [] - - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - output = sections[0].copy() - - for section in sections[1:]: - np.maximum(output, section, output) - - return output - -def striding(array): - """Downsample x by factor using striding. - - @return: The downsampled array, of the same type as x. 
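-
-    For example, a (4, 4) input comes back as (2, 2): every second row and column is
-    kept and the rest are simply dropped.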
- """ - factor = (2,2) - if np.all(np.array(factor, int) == 1): - return array - return array[tuple(np.s_[::f] for f in factor)] - -def benchmark(): - filename = sys.argv[1] - img = Image.open(filename) - data = np.array(img.getdata(), dtype=np.uint8) - - if len(data.shape) == 1: - n_channels = 1 - reshape = (img.height, img.width) - else: - n_channels = min(data.shape[1], 3) - data = data[:, :n_channels] - reshape = (img.height, img.width, n_channels) - - data = data.reshape(reshape).astype(np.uint8) - - methods = [ - simplest_countless, - quick_countless, - quick_countless_xor, - quickest_countless, - stippled_countless, - zero_corrected_countless, - countless, - downsample_with_averaging, - downsample_with_max_pooling, - ndzoom, - striding, - # countless_if, - # counting, - ] - - formats = { - 1: 'L', - 3: 'RGB', - 4: 'RGBA' - } - - if not os.path.exists('./results'): - os.mkdir('./results') - - N = 500 - img_size = float(img.width * img.height) / 1024.0 / 1024.0 - print("N = %d, %dx%d (%.2f MPx) %d chan, %s" % (N, img.width, img.height, img_size, n_channels, filename)) - print("Algorithm\tMPx/sec\tMB/sec\tSec") - for fn in methods: - print(fn.__name__, end='') - sys.stdout.flush() - - start = time.time() - # tqdm is here to show you what's going on the first time you run it. - # Feel free to remove it to get slightly more accurate timing results. - for _ in tqdm(range(N), desc=fn.__name__, disable=True): - result = fn(data) - end = time.time() - print("\r", end='') - - total_time = (end - start) - mpx = N * img_size / total_time - mbytes = N * img_size * n_channels / total_time - # Output in tab separated format to enable copy-paste into excel/numbers - print("%s\t%.3f\t%.3f\t%.2f" % (fn.__name__, mpx, mbytes, total_time)) - outimg = Image.fromarray(np.squeeze(result), formats[n_channels]) - outimg.save('./results/{}.png'.format(fn.__name__, "PNG")) - -if __name__ == '__main__': - benchmark() - - -# Example results: -# N = 5, 1024x1024 (1.00 MPx) 1 chan, images/gray_segmentation.png -# Function MPx/sec MB/sec Sec -# simplest_countless 752.855 752.855 0.01 -# quick_countless 920.328 920.328 0.01 -# zero_corrected_countless 534.143 534.143 0.01 -# countless 644.247 644.247 0.01 -# downsample_with_averaging 372.575 372.575 0.01 -# downsample_with_max_pooling 974.060 974.060 0.01 -# ndzoom 137.517 137.517 0.04 -# striding 38550.588 38550.588 0.00 -# countless_if 4.377 4.377 1.14 -# counting 0.117 0.117 42.85 - -# Run without non-numpy implementations: -# N = 2000, 1024x1024 (1.00 MPx) 1 chan, images/gray_segmentation.png -# Algorithm MPx/sec MB/sec Sec -# simplest_countless 800.522 800.522 2.50 -# quick_countless 945.420 945.420 2.12 -# quickest_countless 947.256 947.256 2.11 -# stippled_countless 544.049 544.049 3.68 -# zero_corrected_countless 575.310 575.310 3.48 -# countless 646.684 646.684 3.09 -# downsample_with_averaging 385.132 385.132 5.19 -# downsample_with_max_poolin 988.361 988.361 2.02 -# ndzoom 163.104 163.104 12.26 -# striding 81589.340 81589.340 0.02 - - - - diff --git a/spaces/JMalott/ai_architecture/min_dalle/models/dalle_bart_encoder.py b/spaces/JMalott/ai_architecture/min_dalle/models/dalle_bart_encoder.py deleted file mode 100644 index e67a0ed77615a9d1004d22052530f0f97209aa72..0000000000000000000000000000000000000000 --- a/spaces/JMalott/ai_architecture/min_dalle/models/dalle_bart_encoder.py +++ /dev/null @@ -1,142 +0,0 @@ -from typing import List -import torch -from torch import nn, BoolTensor, FloatTensor, LongTensor - - -class GLU(nn.Module): - def __init__(self, 
count_in_out: int, count_middle: int): - super().__init__() - self.gelu = nn.GELU() - self.ln0 = nn.LayerNorm(count_in_out) - self.ln1 = nn.LayerNorm(count_middle) - self.fc0 = nn.Linear(count_in_out, count_middle, bias=False) - self.fc1 = nn.Linear(count_in_out, count_middle, bias=False) - self.fc2 = nn.Linear(count_middle, count_in_out, bias=False) - - def forward(self, z: FloatTensor) -> FloatTensor: - z = self.ln0.forward(z) - w = self.fc0.forward(z) - w = self.gelu.forward(w) - v = self.fc1.forward(z) - z = self.ln1.forward(w * v) - z = self.fc2.forward(z) - return z - - -class AttentionBase(nn.Module): - def __init__(self, head_count: int, embed_count: int): - super().__init__() - self.head_count = head_count - self.embed_count = embed_count - - self.k_proj = nn.Linear(embed_count, embed_count, bias=False) - self.v_proj = nn.Linear(embed_count, embed_count, bias=False) - self.q_proj = nn.Linear(embed_count, embed_count, bias=False) - self.out_proj = nn.Linear(embed_count, embed_count, bias=False) - - def forward( - self, - keys: FloatTensor, - values: FloatTensor, - queries: FloatTensor, - attention_mask: BoolTensor - ) -> FloatTensor: - keys = keys.reshape(keys.shape[:2] + (self.head_count, -1)) - values = values.reshape(values.shape[:2] + (self.head_count, -1)) - queries = queries.reshape(queries.shape[:2] + (self.head_count, -1)) - queries /= queries.shape[-1] ** 0.5 - - attention_bias = (1 - attention_mask.to(torch.float32)) * -1e12 - attention_weights: FloatTensor = torch.einsum( - 'bqhc,bkhc->bhqk', - queries, - keys - ) - attention_weights += attention_bias[:, None, None, :] - attention_weights = torch.softmax(attention_weights, -1) - attention_output: FloatTensor = torch.einsum( - "bhqk,bkhc->bqhc", - attention_weights, - values - ) - shape = attention_output.shape[:2] + (self.embed_count,) - attention_output = attention_output.reshape(shape) - attention_output = self.out_proj.forward(attention_output) - return attention_output - - -class EncoderSelfAttention(AttentionBase): - def forward( - self, - encoder_state: FloatTensor, - attention_mask: BoolTensor - ) -> FloatTensor: - keys = self.k_proj.forward(encoder_state) - values = self.v_proj.forward(encoder_state) - queries = self.q_proj.forward(encoder_state) - return super().forward(keys, values, queries, attention_mask) - - -class EncoderLayer(nn.Module): - def __init__(self, embed_count: int, head_count: int, glu_embed_count: int): - super().__init__() - self.pre_self_attn_layer_norm = nn.LayerNorm(embed_count) - self.self_attn = EncoderSelfAttention(head_count, embed_count) - self.self_attn_layer_norm = nn.LayerNorm(embed_count) - self.glu = GLU(embed_count, glu_embed_count) - - def forward( - self, - encoder_state: FloatTensor, - attention_mask: BoolTensor - ) -> FloatTensor: - residual = encoder_state - encoder_state = self.pre_self_attn_layer_norm.forward(encoder_state) - encoder_state = self.self_attn.forward(encoder_state, attention_mask) - encoder_state = self.self_attn_layer_norm.forward(encoder_state) - encoder_state = residual + encoder_state - residual = encoder_state - encoder_state = self.glu.forward(encoder_state) - encoder_state = residual + encoder_state - return encoder_state - - -class DalleBartEncoder(nn.Module): - def __init__( - self, - layer_count: int, - embed_count: int, - attention_head_count: int, - text_vocab_count: int, - text_token_count: int, - glu_embed_count: int, - device: str - ): - super().__init__() - self.text_vocab_count = text_vocab_count - self.embed_tokens = 
nn.Embedding(text_vocab_count, embed_count) - self.embed_positions = nn.Embedding(text_token_count, embed_count) - self.layers: List[EncoderLayer] = nn.ModuleList([ - EncoderLayer( - embed_count = embed_count, - head_count = attention_head_count, - glu_embed_count = glu_embed_count - ) - for _ in range(layer_count) - ]) - self.layernorm_embedding = nn.LayerNorm(embed_count) - self.final_ln = nn.LayerNorm(embed_count) - token_indices = torch.arange(text_token_count, device=device) - self.pose_tokens = torch.stack([token_indices] * 2) - - def forward(self, text_tokens: LongTensor) -> FloatTensor: - attention_mask = text_tokens.not_equal(1) - encoder_state = ( - self.embed_tokens.forward(text_tokens) + - self.embed_positions.forward(self.pose_tokens) - ) - encoder_state = self.layernorm_embedding.forward(encoder_state) - for layer in self.layers: - encoder_state = layer.forward(encoder_state, attention_mask) - encoder_state = self.final_ln.forward(encoder_state) - return encoder_state \ No newline at end of file diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/MiDaS/run.py b/spaces/Jacks2003/3D_Photo_Inpainting/MiDaS/run.py deleted file mode 100644 index a483d2850a81b3520b80097eff4bb9367ef6a144..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/MiDaS/run.py +++ /dev/null @@ -1,81 +0,0 @@ -"""Compute depth maps for images in the input folder. -""" -import os -import glob -import torch -# from monodepth_net import MonoDepthNet -# import utils -import matplotlib.pyplot as plt -import numpy as np -import cv2 -import imageio - - -def run_depth(img_names, input_path, output_path, model_path, Net, utils, target_w=None): - """Run MonoDepthNN to compute depth maps. - - Args: - input_path (str): path to input folder - output_path (str): path to output folder - model_path (str): path to saved model - """ - print("initialize") - - # select device - device = torch.device("cpu") - print("device: %s" % device) - - # load network - model = Net(model_path) - model.to(device) - model.eval() - - # get input - # img_names = glob.glob(os.path.join(input_path, "*")) - num_images = len(img_names) - - # create output folder - os.makedirs(output_path, exist_ok=True) - - print("start processing") - - for ind, img_name in enumerate(img_names): - - print(" processing {} ({}/{})".format(img_name, ind + 1, num_images)) - - # input - img = utils.read_image(img_name) - w = img.shape[1] - scale = 640. 
/ max(img.shape[0], img.shape[1]) - target_height, target_width = int(round(img.shape[0] * scale)), int(round(img.shape[1] * scale)) - img_input = utils.resize_image(img) - print(img_input.shape) - img_input = img_input.to(device) - # compute - with torch.no_grad(): - out = model.forward(img_input) - - depth = utils.resize_depth(out, target_width, target_height) - img = cv2.resize((img * 255).astype(np.uint8), (target_width, target_height), interpolation=cv2.INTER_AREA) - - filename = os.path.join( - output_path, os.path.splitext(os.path.basename(img_name))[0] - ) - np.save(filename + '.npy', depth) - utils.write_depth(filename, depth, bits=2) - - print("finished") - - -# if __name__ == "__main__": -# # set paths -# INPUT_PATH = "image" -# OUTPUT_PATH = "output" -# MODEL_PATH = "model.pt" - -# # set torch options -# torch.backends.cudnn.enabled = True -# torch.backends.cudnn.benchmark = True - -# # compute depth maps -# run_depth(INPUT_PATH, OUTPUT_PATH, MODEL_PATH, Net, target_w=640) diff --git a/spaces/Jarvis2301/Aku/mel_processing.py b/spaces/Jarvis2301/Aku/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if 
torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/JohnTan38/ChatGPT_LangChain/README.md b/spaces/JohnTan38/ChatGPT_LangChain/README.md deleted file mode 100644 index 4ec2156f811f5904e0ed9300805c661d2f11f16d..0000000000000000000000000000000000000000 --- a/spaces/JohnTan38/ChatGPT_LangChain/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGPT LangChain -emoji: 🏢 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KGHL/img-to-music/constants.py b/spaces/KGHL/img-to-music/constants.py deleted file mode 100644 index 86863d1b778d4c66f0d8e1e0b699f1bb937c1d50..0000000000000000000000000000000000000000 --- a/spaces/KGHL/img-to-music/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import os - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') - -MUBERT_MODE = "loop" -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future 
pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) \ No newline at end of file diff --git a/spaces/KPCGD/bingo/src/components/chat-list.tsx b/spaces/KPCGD/bingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
- {messages.map((message, index) => ( - - - {index < messages.length - 1 && ( - - )} - - ))} -
- ) -} diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/pisa_ssd_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/pisa_ssd_head.py deleted file mode 100644 index ec09cb40a9c95d3f9889d736b80dfccef07f6fd1..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/pisa_ssd_head.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, List, Union - -import torch -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import InstanceList, OptInstanceList -from ..losses import CrossEntropyLoss, SmoothL1Loss, carl_loss, isr_p -from ..utils import multi_apply -from .ssd_head import SSDHead - - -# TODO: add loss evaluator for SSD -@MODELS.register_module() -class PISASSDHead(SSDHead): - """Implementation of `PISA SSD head `_ - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (Sequence[int]): Number of channels in the input feature - map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Defaults to 0. - feat_channels (int): Number of hidden channels when stacked_convs - > 0. Defaults to 256. - use_depthwise (bool): Whether to use DepthwiseSeparableConv. - Defaults to False. - conv_cfg (:obj:`ConfigDict` or dict, Optional): Dictionary to construct - and config conv layer. Defaults to None. - norm_cfg (:obj:`ConfigDict` or dict, Optional): Dictionary to construct - and config norm layer. Defaults to None. - act_cfg (:obj:`ConfigDict` or dict, Optional): Dictionary to construct - and config activation layer. Defaults to None. - anchor_generator (:obj:`ConfigDict` or dict): Config dict for anchor - generator. - bbox_coder (:obj:`ConfigDict` or dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Defaults to False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (:obj:`ConfigDict` or dict, Optional): Training config of - anchor head. - test_cfg (:obj:`ConfigDict` or dict, Optional): Testing config of - anchor head. - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict], Optional): Initialization config dict. - """ # noqa: W605 - - def loss_by_feat( - self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None - ) -> Dict[str, Union[List[Tensor], Tensor]]: - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Union[List[Tensor], Tensor]]: A dictionary of loss - components. 
the dict has components below: - - - loss_cls (list[Tensor]): A list containing each feature map \ - classification loss. - - loss_bbox (list[Tensor]): A list containing each feature map \ - regression loss. - - loss_carl (Tensor): The loss of CARL. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, batch_img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - batch_gt_instances, - batch_img_metas, - batch_gt_instances_ignore=batch_gt_instances_ignore, - unmap_outputs=False, - return_sampling_results=True) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - avg_factor, sampling_results_list) = cls_reg_targets - - num_images = len(batch_img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - isr_cfg = self.train_cfg.get('isr', None) - all_targets = (all_labels.view(-1), all_label_weights.view(-1), - all_bbox_targets.view(-1, - 4), all_bbox_weights.view(-1, 4)) - # apply ISR-P - if isr_cfg is not None: - all_targets = isr_p( - all_cls_scores.view(-1, all_cls_scores.size(-1)), - all_bbox_preds.view(-1, 4), - all_targets, - torch.cat(all_anchors), - sampling_results_list, - loss_cls=CrossEntropyLoss(), - bbox_coder=self.bbox_coder, - **self.train_cfg['isr'], - num_class=self.num_classes) - (new_labels, new_label_weights, new_bbox_targets, - new_bbox_weights) = all_targets - all_labels = new_labels.view(all_labels.shape) - all_label_weights = new_label_weights.view(all_label_weights.shape) - all_bbox_targets = new_bbox_targets.view(all_bbox_targets.shape) - all_bbox_weights = new_bbox_weights.view(all_bbox_weights.shape) - - # add CARL loss - carl_loss_cfg = self.train_cfg.get('carl', None) - if carl_loss_cfg is not None: - loss_carl = carl_loss( - all_cls_scores.view(-1, all_cls_scores.size(-1)), - all_targets[0], - all_bbox_preds.view(-1, 4), - all_targets[2], - SmoothL1Loss(beta=1.), - **self.train_cfg['carl'], - avg_factor=avg_factor, - num_class=self.num_classes) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' 
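-        # Per-level classification and regression losses, computed on the (possibly
-        # ISR-P reweighted) targets; the CARL loss is appended to the dict below when configured.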
- - losses_cls, losses_bbox = multi_apply( - self.loss_by_feat_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - avg_factor=avg_factor) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - if carl_loss_cfg is not None: - loss_dict.update(loss_carl) - return loss_dict diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/necks/rfp.py b/spaces/KyanChen/RSPrompter/mmdet/models/necks/rfp.py deleted file mode 100644 index 7ec9b3753c5031bb12a2b4c88733f13bf27c44e2..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/necks/rfp.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmengine.model import BaseModule, ModuleList, constant_init, xavier_init - -from mmdet.registry import MODELS -from .fpn import FPN - - -class ASPP(BaseModule): - """ASPP (Atrous Spatial Pyramid Pooling) - - This is an implementation of the ASPP module used in DetectoRS - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of channels produced by this module - dilations (tuple[int]): Dilations of the four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - dilations=(1, 3, 6, 1), - init_cfg=dict(type='Kaiming', layer='Conv2d')): - super().__init__(init_cfg) - assert dilations[-1] == 1 - self.aspp = nn.ModuleList() - for dilation in dilations: - kernel_size = 3 if dilation > 1 else 1 - padding = dilation if dilation > 1 else 0 - conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=1, - dilation=dilation, - padding=padding, - bias=True) - self.aspp.append(conv) - self.gap = nn.AdaptiveAvgPool2d(1) - - def forward(self, x): - avg_x = self.gap(x) - out = [] - for aspp_idx in range(len(self.aspp)): - inp = avg_x if (aspp_idx == len(self.aspp) - 1) else x - out.append(F.relu_(self.aspp[aspp_idx](inp))) - out[-1] = out[-1].expand_as(out[-2]) - out = torch.cat(out, dim=1) - return out - - -@MODELS.register_module() -class RFP(FPN): - """RFP (Recursive Feature Pyramid) - - This is an implementation of RFP in `DetectoRS - `_. Different from standard FPN, the - input of RFP should be multi level features along with origin input image - of backbone. - - Args: - rfp_steps (int): Number of unrolled steps of RFP. - rfp_backbone (dict): Configuration of the backbone for RFP. - aspp_out_channels (int): Number of output channels of ASPP module. - aspp_dilations (tuple[int]): Dilation rates of four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - rfp_steps, - rfp_backbone, - aspp_out_channels, - aspp_dilations=(1, 3, 6, 1), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super().__init__(init_cfg=init_cfg, **kwargs) - self.rfp_steps = rfp_steps - # Be careful! 
Pretrained weights cannot be loaded when use - # nn.ModuleList - self.rfp_modules = ModuleList() - for rfp_idx in range(1, rfp_steps): - rfp_module = MODELS.build(rfp_backbone) - self.rfp_modules.append(rfp_module) - self.rfp_aspp = ASPP(self.out_channels, aspp_out_channels, - aspp_dilations) - self.rfp_weight = nn.Conv2d( - self.out_channels, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=True) - - def init_weights(self): - # Avoid using super().init_weights(), which may alter the default - # initialization of the modules in self.rfp_modules that have missing - # keys in the pretrained checkpoint. - for convs in [self.lateral_convs, self.fpn_convs]: - for m in convs.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - for rfp_idx in range(self.rfp_steps - 1): - self.rfp_modules[rfp_idx].init_weights() - constant_init(self.rfp_weight, 0) - - def forward(self, inputs): - inputs = list(inputs) - assert len(inputs) == len(self.in_channels) + 1 # +1 for input image - img = inputs.pop(0) - # FPN forward - x = super().forward(tuple(inputs)) - for rfp_idx in range(self.rfp_steps - 1): - rfp_feats = [x[0]] + list( - self.rfp_aspp(x[i]) for i in range(1, len(x))) - x_idx = self.rfp_modules[rfp_idx].rfp_forward(img, rfp_feats) - # FPN forward - x_idx = super().forward(x_idx) - x_new = [] - for ft_idx in range(len(x_idx)): - add_weight = torch.sigmoid(self.rfp_weight(x_idx[ft_idx])) - x_new.append(add_weight * x_idx[ft_idx] + - (1 - add_weight) * x[ft_idx]) - x = x_new - return x diff --git a/spaces/LZRi/LZR-Bert-VITS2/losses.py b/spaces/LZRi/LZR-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/LZRi/LZR-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Lamai/LAMAIGPT/autogpt/commands/improve_code.py b/spaces/Lamai/LAMAIGPT/autogpt/commands/improve_code.py deleted file mode 100644 index e3440d8b7c6ee8cb62d73df48623ab757c973c59..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/commands/improve_code.py +++ /dev/null @@ -1,29 +0,0 @@ -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def improve_code(suggestions: list[str], code: str) -> str: - """ - A function that takes in code and suggestions and returns a response from create - chat completion api call. - - Parameters: - suggestions (List): A list of suggestions around what needs to be improved. - code (str): Code to be improved. - Returns: - A result string from create chat completion. Improved code in response. - """ - - function_string = ( - "def generate_improved_code(suggestions: List[str], code: str) -> str:" - ) - args = [json.dumps(suggestions), code] - description_string = ( - "Improves the provided code based on the suggestions" - " provided, making no other changes." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/LuxOAI/ResumeBud/app.py b/spaces/LuxOAI/ResumeBud/app.py deleted file mode 100644 index 396ea2232f683cced156cb24272c9f31d9d78ed1..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ResumeBud/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import openai -import os -from dotenv import load_dotenv -from collections import Counter -import time - -# Load the API key from the .env file -load_dotenv() -openai.api_key = os.getenv("OPENAI_API_KEY") - -# List of keywords to track -keywords = ["teacher", "educator", "childcare specialist", "afterschool specialist", - "youth development", "babysitter", "daycare assistant", - "daycare aide", "youth program"] - -def count_keywords(text): - text = text.lower() - keyword_counts = Counter() - for keyword in keywords: - keyword_counts[keyword] = text.count(keyword) - return keyword_counts - -def complete_prompt(prompt): - try: - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0, - max_tokens=824, - top_p=1, - frequency_penalty=0, - presence_penalty=0 - ) - completion = response.choices[0].text.strip() - keyword_counts = count_keywords(prompt + completion) - return completion, keyword_counts - except Exception as e: - return str(e), {} - -# Define Gradio interface -iface = gr.Interface( - fn=complete_prompt, - inputs=gr.inputs.Textbox(lines=20, placeholder="Enter the prompt here..."), - outputs=["text", "json"], - title="ResumePAL", - description="Developed by Alex Leschik", -) - -# Launch the Gradio app -iface.launch() diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/models/networks.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/models/networks.py deleted file mode 100644 index 6c4b08664b7ea139b310b658a63d2e46e61d8d75..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/models/networks.py +++ /dev/null @@ -1,875 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
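-# This module defines the networks used for global restoration: the DCDCv2
-# encoder/decoder generator, resnet blocks, patch-based non-local attention,
-# the multiscale PatchGAN discriminator, and the GAN / perceptual (VGG) losses.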
- -import torch -import torch.nn as nn -import functools -from torch.autograd import Variable -import numpy as np -from torch.nn.utils import spectral_norm - -# from util.util import SwitchNorm2d -import torch.nn.functional as F - -############################################################################### -# Functions -############################################################################### -def weights_init(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(0.0, 0.02) - elif classname.find("BatchNorm2d") != -1: - m.weight.data.normal_(1.0, 0.02) - m.bias.data.fill_(0) - - -def get_norm_layer(norm_type="instance"): - if norm_type == "batch": - norm_layer = functools.partial(nn.BatchNorm2d, affine=True) - elif norm_type == "instance": - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False) - elif norm_type == "spectral": - norm_layer = spectral_norm() - elif norm_type == "SwitchNorm": - norm_layer = SwitchNorm2d - else: - raise NotImplementedError("normalization layer [%s] is not found" % norm_type) - return norm_layer - - -def print_network(net): - if isinstance(net, list): - net = net[0] - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print(net) - print("Total number of parameters: %d" % num_params) - - -def define_G(input_nc, output_nc, ngf, netG, k_size=3, n_downsample_global=3, n_blocks_global=9, n_local_enhancers=1, - n_blocks_local=3, norm='instance', gpu_ids=[], opt=None): - - norm_layer = get_norm_layer(norm_type=norm) - if netG == 'global': - # if opt.self_gen: - if opt.use_v2: - netG = GlobalGenerator_DCDCv2(input_nc, output_nc, ngf, k_size, n_downsample_global, norm_layer, opt=opt) - else: - netG = GlobalGenerator_v2(input_nc, output_nc, ngf, k_size, n_downsample_global, n_blocks_global, norm_layer, opt=opt) - else: - raise('generator not implemented!') - print(netG) - if len(gpu_ids) > 0: - assert(torch.cuda.is_available()) - netG.cuda(gpu_ids[0]) - netG.apply(weights_init) - return netG - - -def define_D(input_nc, ndf, n_layers_D, opt, norm='instance', use_sigmoid=False, num_D=1, getIntermFeat=False, gpu_ids=[]): - norm_layer = get_norm_layer(norm_type=norm) - netD = MultiscaleDiscriminator(input_nc, opt, ndf, n_layers_D, norm_layer, use_sigmoid, num_D, getIntermFeat) - print(netD) - if len(gpu_ids) > 0: - assert(torch.cuda.is_available()) - netD.cuda(gpu_ids[0]) - netD.apply(weights_init) - return netD - - - -class GlobalGenerator_DCDCv2(nn.Module): - def __init__( - self, - input_nc, - output_nc, - ngf=64, - k_size=3, - n_downsampling=8, - norm_layer=nn.BatchNorm2d, - padding_type="reflect", - opt=None, - ): - super(GlobalGenerator_DCDCv2, self).__init__() - activation = nn.ReLU(True) - - model = [ - nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, min(ngf, opt.mc), kernel_size=7, padding=0), - norm_layer(ngf), - activation, - ] - ### downsample - for i in range(opt.start_r): - mult = 2 ** i - model += [ - nn.Conv2d( - min(ngf * mult, opt.mc), - min(ngf * mult * 2, opt.mc), - kernel_size=k_size, - stride=2, - padding=1, - ), - norm_layer(min(ngf * mult * 2, opt.mc)), - activation, - ] - for i in range(opt.start_r, n_downsampling - 1): - mult = 2 ** i - model += [ - nn.Conv2d( - min(ngf * mult, opt.mc), - min(ngf * mult * 2, opt.mc), - kernel_size=k_size, - stride=2, - padding=1, - ), - norm_layer(min(ngf * mult * 2, opt.mc)), - activation, - ] - model += [ - ResnetBlock( - min(ngf * mult * 2, opt.mc), - padding_type=padding_type, - activation=activation, - 
norm_layer=norm_layer, - opt=opt, - ) - ] - model += [ - ResnetBlock( - min(ngf * mult * 2, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - mult = 2 ** (n_downsampling - 1) - - if opt.spatio_size == 32: - model += [ - nn.Conv2d( - min(ngf * mult, opt.mc), - min(ngf * mult * 2, opt.mc), - kernel_size=k_size, - stride=2, - padding=1, - ), - norm_layer(min(ngf * mult * 2, opt.mc)), - activation, - ] - if opt.spatio_size == 64: - model += [ - ResnetBlock( - min(ngf * mult * 2, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - model += [ - ResnetBlock( - min(ngf * mult * 2, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - # model += [nn.Conv2d(min(ngf * mult * 2, opt.mc), min(ngf, opt.mc), 1, 1)] - if opt.feat_dim > 0: - model += [nn.Conv2d(min(ngf * mult * 2, opt.mc), opt.feat_dim, 1, 1)] - self.encoder = nn.Sequential(*model) - - # decode - model = [] - if opt.feat_dim > 0: - model += [nn.Conv2d(opt.feat_dim, min(ngf * mult * 2, opt.mc), 1, 1)] - # model += [nn.Conv2d(min(ngf, opt.mc), min(ngf * mult * 2, opt.mc), 1, 1)] - o_pad = 0 if k_size == 4 else 1 - mult = 2 ** n_downsampling - model += [ - ResnetBlock( - min(ngf * mult, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - - if opt.spatio_size == 32: - model += [ - nn.ConvTranspose2d( - min(ngf * mult, opt.mc), - min(int(ngf * mult / 2), opt.mc), - kernel_size=k_size, - stride=2, - padding=1, - output_padding=o_pad, - ), - norm_layer(min(int(ngf * mult / 2), opt.mc)), - activation, - ] - if opt.spatio_size == 64: - model += [ - ResnetBlock( - min(ngf * mult, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - - for i in range(1, n_downsampling - opt.start_r): - mult = 2 ** (n_downsampling - i) - model += [ - ResnetBlock( - min(ngf * mult, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - model += [ - ResnetBlock( - min(ngf * mult, opt.mc), - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - ) - ] - model += [ - nn.ConvTranspose2d( - min(ngf * mult, opt.mc), - min(int(ngf * mult / 2), opt.mc), - kernel_size=k_size, - stride=2, - padding=1, - output_padding=o_pad, - ), - norm_layer(min(int(ngf * mult / 2), opt.mc)), - activation, - ] - for i in range(n_downsampling - opt.start_r, n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [ - nn.ConvTranspose2d( - min(ngf * mult, opt.mc), - min(int(ngf * mult / 2), opt.mc), - kernel_size=k_size, - stride=2, - padding=1, - output_padding=o_pad, - ), - norm_layer(min(int(ngf * mult / 2), opt.mc)), - activation, - ] - if opt.use_segmentation_model: - model += [nn.ReflectionPad2d(3), nn.Conv2d(min(ngf, opt.mc), output_nc, kernel_size=7, padding=0)] - else: - model += [ - nn.ReflectionPad2d(3), - nn.Conv2d(min(ngf, opt.mc), output_nc, kernel_size=7, padding=0), - nn.Tanh(), - ] - self.decoder = nn.Sequential(*model) - - def forward(self, input, flow="enc_dec"): - if flow == "enc": - return self.encoder(input) - elif flow == "dec": - return self.decoder(input) - elif flow == "enc_dec": - x = self.encoder(input) - x = self.decoder(x) - return x - - -# Define a resnet block -class ResnetBlock(nn.Module): - def __init__( - self, dim, padding_type, norm_layer, opt, activation=nn.ReLU(True), use_dropout=False, 
dilation=1 - ): - super(ResnetBlock, self).__init__() - self.opt = opt - self.dilation = dilation - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout) - - def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout): - conv_block = [] - p = 0 - if padding_type == "reflect": - conv_block += [nn.ReflectionPad2d(self.dilation)] - elif padding_type == "replicate": - conv_block += [nn.ReplicationPad2d(self.dilation)] - elif padding_type == "zero": - p = self.dilation - else: - raise NotImplementedError("padding [%s] is not implemented" % padding_type) - - conv_block += [ - nn.Conv2d(dim, dim, kernel_size=3, padding=p, dilation=self.dilation), - norm_layer(dim), - activation, - ] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == "reflect": - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == "replicate": - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == "zero": - p = 1 - else: - raise NotImplementedError("padding [%s] is not implemented" % padding_type) - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, dilation=1), norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - out = x + self.conv_block(x) - return out - - -class Encoder(nn.Module): - def __init__(self, input_nc, output_nc, ngf=32, n_downsampling=4, norm_layer=nn.BatchNorm2d): - super(Encoder, self).__init__() - self.output_nc = output_nc - - model = [ - nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - nn.ReLU(True), - ] - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [ - nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), - nn.ReLU(True), - ] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [ - nn.ConvTranspose2d( - ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, output_padding=1 - ), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True), - ] - - model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Tanh()] - self.model = nn.Sequential(*model) - - def forward(self, input, inst): - outputs = self.model(input) - - # instance-wise average pooling - outputs_mean = outputs.clone() - inst_list = np.unique(inst.cpu().numpy().astype(int)) - for i in inst_list: - for b in range(input.size()[0]): - indices = (inst[b : b + 1] == int(i)).nonzero() # n x 4 - for j in range(self.output_nc): - output_ins = outputs[indices[:, 0] + b, indices[:, 1] + j, indices[:, 2], indices[:, 3]] - mean_feat = torch.mean(output_ins).expand_as(output_ins) - outputs_mean[ - indices[:, 0] + b, indices[:, 1] + j, indices[:, 2], indices[:, 3] - ] = mean_feat - return outputs_mean - - -def SN(module, mode=True): - if mode: - return torch.nn.utils.spectral_norm(module) - - return module - - -class NonLocalBlock2D_with_mask_Res(nn.Module): - def __init__( - self, - in_channels, - inter_channels, - mode="add", - re_norm=False, - temperature=1.0, - use_self=False, - cosin=False, - ): - super(NonLocalBlock2D_with_mask_Res, self).__init__() - - self.cosin = cosin - self.renorm = re_norm - self.in_channels = in_channels - self.inter_channels = inter_channels - - self.g = nn.Conv2d( - in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - ) - - self.W = nn.Conv2d( - in_channels=self.inter_channels, out_channels=self.in_channels, kernel_size=1, 
stride=1, padding=0 - ) - # for pytorch 0.3.1 - # nn.init.constant(self.W.weight, 0) - # nn.init.constant(self.W.bias, 0) - # for pytorch 0.4.0 - nn.init.constant_(self.W.weight, 0) - nn.init.constant_(self.W.bias, 0) - self.theta = nn.Conv2d( - in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - ) - - self.phi = nn.Conv2d( - in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - ) - - self.mode = mode - self.temperature = temperature - self.use_self = use_self - - norm_layer = get_norm_layer(norm_type="instance") - activation = nn.ReLU(True) - - model = [] - for i in range(3): - model += [ - ResnetBlock( - inter_channels, - padding_type="reflect", - activation=activation, - norm_layer=norm_layer, - opt=None, - ) - ] - self.res_block = nn.Sequential(*model) - - def forward(self, x, mask): ## The shape of mask is Batch*1*H*W - batch_size = x.size(0) - - g_x = self.g(x).view(batch_size, self.inter_channels, -1) - - g_x = g_x.permute(0, 2, 1) - - theta_x = self.theta(x).view(batch_size, self.inter_channels, -1) - - theta_x = theta_x.permute(0, 2, 1) - - phi_x = self.phi(x).view(batch_size, self.inter_channels, -1) - - if self.cosin: - theta_x = F.normalize(theta_x, dim=2) - phi_x = F.normalize(phi_x, dim=1) - - f = torch.matmul(theta_x, phi_x) - - f /= self.temperature - - f_div_C = F.softmax(f, dim=2) - - tmp = 1 - mask - mask = F.interpolate(mask, (x.size(2), x.size(3)), mode="bilinear") - mask[mask > 0] = 1.0 - mask = 1 - mask - - tmp = F.interpolate(tmp, (x.size(2), x.size(3))) - mask *= tmp - - mask_expand = mask.view(batch_size, 1, -1) - mask_expand = mask_expand.repeat(1, x.size(2) * x.size(3), 1) - - # mask = 1 - mask - # mask=F.interpolate(mask,(x.size(2),x.size(3))) - # mask_expand=mask.view(batch_size,1,-1) - # mask_expand=mask_expand.repeat(1,x.size(2)*x.size(3),1) - - if self.use_self: - mask_expand[:, range(x.size(2) * x.size(3)), range(x.size(2) * x.size(3))] = 1.0 - - # print(mask_expand.shape) - # print(f_div_C.shape) - - f_div_C = mask_expand * f_div_C - if self.renorm: - f_div_C = F.normalize(f_div_C, p=1, dim=2) - - ########################### - - y = torch.matmul(f_div_C, g_x) - - y = y.permute(0, 2, 1).contiguous() - - y = y.view(batch_size, self.inter_channels, *x.size()[2:]) - W_y = self.W(y) - - W_y = self.res_block(W_y) - - if self.mode == "combine": - full_mask = mask.repeat(1, self.inter_channels, 1, 1) - z = full_mask * x + (1 - full_mask) * W_y - return z - - -class MultiscaleDiscriminator(nn.Module): - def __init__(self, input_nc, opt, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, - use_sigmoid=False, num_D=3, getIntermFeat=False): - super(MultiscaleDiscriminator, self).__init__() - self.num_D = num_D - self.n_layers = n_layers - self.getIntermFeat = getIntermFeat - - for i in range(num_D): - netD = NLayerDiscriminator(input_nc, opt, ndf, n_layers, norm_layer, use_sigmoid, getIntermFeat) - if getIntermFeat: - for j in range(n_layers+2): - setattr(self, 'scale'+str(i)+'_layer'+str(j), getattr(netD, 'model'+str(j))) - else: - setattr(self, 'layer'+str(i), netD.model) - - self.downsample = nn.AvgPool2d(3, stride=2, padding=[1, 1], count_include_pad=False) - - def singleD_forward(self, model, input): - if self.getIntermFeat: - result = [input] - for i in range(len(model)): - result.append(model[i](result[-1])) - return result[1:] - else: - return [model(input)] - - def forward(self, input): - num_D = self.num_D - result = [] - input_downsampled = input - for i in 
range(num_D): - if self.getIntermFeat: - model = [getattr(self, 'scale'+str(num_D-1-i)+'_layer'+str(j)) for j in range(self.n_layers+2)] - else: - model = getattr(self, 'layer'+str(num_D-1-i)) - result.append(self.singleD_forward(model, input_downsampled)) - if i != (num_D-1): - input_downsampled = self.downsample(input_downsampled) - return result - -# Defines the PatchGAN discriminator with the specified arguments. -class NLayerDiscriminator(nn.Module): - def __init__(self, input_nc, opt, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_sigmoid=False, getIntermFeat=False): - super(NLayerDiscriminator, self).__init__() - self.getIntermFeat = getIntermFeat - self.n_layers = n_layers - - kw = 4 - padw = int(np.ceil((kw-1.0)/2)) - sequence = [[SN(nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),opt.use_SN), nn.LeakyReLU(0.2, True)]] - - nf = ndf - for n in range(1, n_layers): - nf_prev = nf - nf = min(nf * 2, 512) - sequence += [[ - SN(nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw),opt.use_SN), - norm_layer(nf), nn.LeakyReLU(0.2, True) - ]] - - nf_prev = nf - nf = min(nf * 2, 512) - sequence += [[ - SN(nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),opt.use_SN), - norm_layer(nf), - nn.LeakyReLU(0.2, True) - ]] - - sequence += [[SN(nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw),opt.use_SN)]] - - if use_sigmoid: - sequence += [[nn.Sigmoid()]] - - if getIntermFeat: - for n in range(len(sequence)): - setattr(self, 'model'+str(n), nn.Sequential(*sequence[n])) - else: - sequence_stream = [] - for n in range(len(sequence)): - sequence_stream += sequence[n] - self.model = nn.Sequential(*sequence_stream) - - def forward(self, input): - if self.getIntermFeat: - res = [input] - for n in range(self.n_layers+2): - model = getattr(self, 'model'+str(n)) - res.append(model(res[-1])) - return res[1:] - else: - return self.model(input) - - - -class Patch_Attention_4(nn.Module): ## While combine the feature map, use conv and mask - def __init__(self, in_channels, inter_channels, patch_size): - super(Patch_Attention_4, self).__init__() - - self.patch_size=patch_size - - - # self.g = nn.Conv2d( - # in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - # ) - - # self.W = nn.Conv2d( - # in_channels=self.inter_channels, out_channels=self.in_channels, kernel_size=1, stride=1, padding=0 - # ) - # # for pytorch 0.3.1 - # # nn.init.constant(self.W.weight, 0) - # # nn.init.constant(self.W.bias, 0) - # # for pytorch 0.4.0 - # nn.init.constant_(self.W.weight, 0) - # nn.init.constant_(self.W.bias, 0) - # self.theta = nn.Conv2d( - # in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - # ) - - # self.phi = nn.Conv2d( - # in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - # ) - - self.F_Combine=nn.Conv2d(in_channels=1025,out_channels=512,kernel_size=3,stride=1,padding=1,bias=True) - norm_layer = get_norm_layer(norm_type="instance") - activation = nn.ReLU(True) - - model = [] - for i in range(1): - model += [ - ResnetBlock( - inter_channels, - padding_type="reflect", - activation=activation, - norm_layer=norm_layer, - opt=None, - ) - ] - self.res_block = nn.Sequential(*model) - - def Hard_Compose(self, input, dim, index): - # batch index select - # input: [B,C,HW] - # dim: scalar > 0 - # index: [B, HW] - views = [input.size(0)] + [1 if i!=dim else -1 for i in range(1, len(input.size()))] - expanse = list(input.size()) - 
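-        # Expand the indices over the non-selected dimensions so torch.gather picks one entry per position along `dim`.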
expanse[0] = -1 - expanse[dim] = -1 - index = index.view(views).expand(expanse) - return torch.gather(input, dim, index) - - def forward(self, z, mask): ## The shape of mask is Batch*1*H*W - - x=self.res_block(z) - - b,c,h,w=x.shape - - ## mask resize + dilation - # tmp = 1 - mask - mask = F.interpolate(mask, (x.size(2), x.size(3)), mode="bilinear") - mask[mask > 0] = 1.0 - - # mask = 1 - mask - # tmp = F.interpolate(tmp, (x.size(2), x.size(3))) - # mask *= tmp - # mask=1-mask - ## 1: mask position 0: non-mask - - mask_unfold=F.unfold(mask, kernel_size=(self.patch_size,self.patch_size), padding=0, stride=self.patch_size) - non_mask_region=(torch.mean(mask_unfold,dim=1,keepdim=True)>0.6).float() - all_patch_num=h*w/self.patch_size/self.patch_size - non_mask_region=non_mask_region.repeat(1,int(all_patch_num),1) - - x_unfold=F.unfold(x, kernel_size=(self.patch_size,self.patch_size), padding=0, stride=self.patch_size) - y_unfold=x_unfold.permute(0,2,1) - x_unfold_normalized=F.normalize(x_unfold,dim=1) - y_unfold_normalized=F.normalize(y_unfold,dim=2) - correlation_matrix=torch.bmm(y_unfold_normalized,x_unfold_normalized) - correlation_matrix=correlation_matrix.masked_fill(non_mask_region==1.,-1e9) - correlation_matrix=F.softmax(correlation_matrix,dim=2) - - # print(correlation_matrix) - - R, max_arg=torch.max(correlation_matrix,dim=2) - - composed_unfold=self.Hard_Compose(x_unfold, 2, max_arg) - composed_fold=F.fold(composed_unfold,output_size=(h,w),kernel_size=(self.patch_size,self.patch_size),padding=0,stride=self.patch_size) - - concat_1=torch.cat((z,composed_fold,mask),dim=1) - concat_1=self.F_Combine(concat_1) - - return concat_1 - - def inference_forward(self,z,mask): ## Reduce the extra memory cost - - - x=self.res_block(z) - - b,c,h,w=x.shape - - ## mask resize + dilation - # tmp = 1 - mask - mask = F.interpolate(mask, (x.size(2), x.size(3)), mode="bilinear") - mask[mask > 0] = 1.0 - # mask = 1 - mask - # tmp = F.interpolate(tmp, (x.size(2), x.size(3))) - # mask *= tmp - # mask=1-mask - ## 1: mask position 0: non-mask - - mask_unfold=F.unfold(mask, kernel_size=(self.patch_size,self.patch_size), padding=0, stride=self.patch_size) - non_mask_region=(torch.mean(mask_unfold,dim=1,keepdim=True)>0.6).float()[0,0,:] # 1*1*all_patch_num - - all_patch_num=h*w/self.patch_size/self.patch_size - - mask_index=torch.nonzero(non_mask_region,as_tuple=True)[0] - - - if len(mask_index)==0: ## No mask patch is selected, no attention is needed - - composed_fold=x - - else: - - unmask_index=torch.nonzero(non_mask_region!=1,as_tuple=True)[0] - - x_unfold=F.unfold(x, kernel_size=(self.patch_size,self.patch_size), padding=0, stride=self.patch_size) - - Query_Patch=torch.index_select(x_unfold,2,mask_index) - Key_Patch=torch.index_select(x_unfold,2,unmask_index) - - Query_Patch=Query_Patch.permute(0,2,1) - Query_Patch_normalized=F.normalize(Query_Patch,dim=2) - Key_Patch_normalized=F.normalize(Key_Patch,dim=1) - - correlation_matrix=torch.bmm(Query_Patch_normalized,Key_Patch_normalized) - correlation_matrix=F.softmax(correlation_matrix,dim=2) - - - R, max_arg=torch.max(correlation_matrix,dim=2) - - composed_unfold=self.Hard_Compose(Key_Patch, 2, max_arg) - x_unfold[:,:,mask_index]=composed_unfold - composed_fold=F.fold(x_unfold,output_size=(h,w),kernel_size=(self.patch_size,self.patch_size),padding=0,stride=self.patch_size) - - concat_1=torch.cat((z,composed_fold,mask),dim=1) - concat_1=self.F_Combine(concat_1) - - - return concat_1 - 
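-# Patch_Attention_4 replaces feature patches with their most similar unmasked patches
-# (hard attention over unfolded patches) so that masked regions borrow content from intact
-# regions, then fuses the result with the input features through F_Combine.
-# Rough usage sketch (illustrative shapes and values, not taken from the original code):
-#   attn = Patch_Attention_4(in_channels=512, inter_channels=512, patch_size=4)
-#   out = attn(z, mask)                      # training path
-#   out = attn.inference_forward(z, mask)    # memory-saving inference path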
-############################################################################## -# Losses -############################################################################## -class GANLoss(nn.Module): - def __init__(self, use_lsgan=True, target_real_label=1.0, target_fake_label=0.0, - tensor=torch.FloatTensor): - super(GANLoss, self).__init__() - self.real_label = target_real_label - self.fake_label = target_fake_label - self.real_label_var = None - self.fake_label_var = None - self.Tensor = tensor - if use_lsgan: - self.loss = nn.MSELoss() - else: - self.loss = nn.BCELoss() - - def get_target_tensor(self, input, target_is_real): - target_tensor = None - if target_is_real: - create_label = ((self.real_label_var is None) or - (self.real_label_var.numel() != input.numel())) - if create_label: - real_tensor = self.Tensor(input.size()).fill_(self.real_label) - self.real_label_var = Variable(real_tensor, requires_grad=False) - target_tensor = self.real_label_var - else: - create_label = ((self.fake_label_var is None) or - (self.fake_label_var.numel() != input.numel())) - if create_label: - fake_tensor = self.Tensor(input.size()).fill_(self.fake_label) - self.fake_label_var = Variable(fake_tensor, requires_grad=False) - target_tensor = self.fake_label_var - return target_tensor - - def __call__(self, input, target_is_real): - if isinstance(input[0], list): - loss = 0 - for input_i in input: - pred = input_i[-1] - target_tensor = self.get_target_tensor(pred, target_is_real) - loss += self.loss(pred, target_tensor) - return loss - else: - target_tensor = self.get_target_tensor(input[-1], target_is_real) - return self.loss(input[-1], target_tensor) - - - - -####################################### VGG Loss - -from torchvision import models -class VGG19_torch(torch.nn.Module): - def __init__(self, requires_grad=False): - super(VGG19_torch, self).__init__() - vgg_pretrained_features = models.vgg19(pretrained=True).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - -class VGGLoss_torch(nn.Module): - def __init__(self, gpu_ids): - super(VGGLoss_torch, self).__init__() - self.vgg = VGG19_torch().cuda() - self.criterion = nn.L1Loss() - self.weights = [1.0/32, 1.0/16, 1.0/8, 1.0/4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach()) - return loss \ No newline at end of file diff --git a/spaces/ML701G7/taim-gan/src/data/tokenizer.py b/spaces/ML701G7/taim-gan/src/data/tokenizer.py deleted file mode 100644 index 
8ef6a40e9a92d4a1851cc3c3277d5f566ec9cef4..0000000000000000000000000000000000000000 --- a/spaces/ML701G7/taim-gan/src/data/tokenizer.py +++ /dev/null @@ -1,23 +0,0 @@ -import pickle -import re -from typing import List - - -class TAIMGANTokenizer: - def __init__(self, captions_path): - with open(captions_path, "rb") as ckpt_file: - captions = pickle.load(ckpt_file) - self.ix_to_word = captions[2] - self.word_to_ix = captions[3] - self.token_regex = r'\w+' - self.pad_token_id = self.word_to_ix[""] - self.pad_repr = "[PAD]" - - def encode(self, text: str) -> List[int]: - return [self.word_to_ix.get(word, self.pad_token_id) - for word in re.findall(self.token_regex, text.lower())] - - def decode(self, tokens: List[int]) -> str: - return ' '.join([self.ix_to_word[token] - if token != self.pad_token_id else self.pad_repr - for token in tokens]) diff --git a/spaces/Manjushri/MusicGen/CHANGELOG.md b/spaces/Manjushri/MusicGen/CHANGELOG.md deleted file mode 100644 index 6aaad6b5ee31e4685ead54c1a46d7f57b225912d..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/CHANGELOG.md +++ /dev/null @@ -1,20 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.2a] - TBD - -Improved demo, fixed top p (thanks @jnordberg). - -Compressor tanh on output to avoid clipping with some style (especially piano). -Now repeating the conditioning periodically if it is too short. - -More options when launching Gradio app locally (thanks @ashleykleynhans). - -Testing out PyTorch 2.0 memory efficient attention. - -## [0.0.1] - 2023-06-09 - -Initial release, with model evaluation only. diff --git a/spaces/Monteg/anything-v3.0/app.py b/spaces/Monteg/anything-v3.0/app.py deleted file mode 100644 index f9cadc91021a9b176af70b3df6158cffb04918dd..0000000000000000000000000000000000000000 --- a/spaces/Monteg/anything-v3.0/app.py +++ /dev/null @@ -1,276 +0,0 @@ -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil - -start_time = time.time() -is_colab = utils.is_google_colab() - -class Model: - def __init__(self, name, path="", prefix="--precision full --no-half"): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - -models = [ - Model("anything v3", "Linaqruf/anything-v3.0", "anything v3 style"), - ] - # Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), - # Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), - # Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), - # Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy ") - #Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""), - #Model("Pony Diffusion", "AstraliteHeart/pony-diffusion", ""), - #Model("Robo Diffusion", "nousr/robo-diffusion", ""), - -scheduler = DPMSolverMultistepScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - trained_betas=None, - predict_epsilon=True, - thresholding=False, - algorithm_type="dpmsolver++", - solver_type="midpoint", - lower_order_final=True, -) - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = 
"txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained(current_model.path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - -else: # download all models - print(f"{datetime.datetime.now()} Downloading vae...") - vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=torch.float16) - for model in models: - try: - print(f"{datetime.datetime.now()} Downloading {model.name} model...") - unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=torch.float16) - model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler) - model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler) - except Exception as e: - print(f"{datetime.datetime.now()} Failed to load model " + model.name + ": " + str(e)) - models.remove(model) - pipe = models[0].pipe_t2i - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - -device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - -def on_model_change(model_name): - - prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), None) + "\" is prefixed automatically" if model_name != models[0].name else "Don't forget to use the custom model prefix in the prompt!" - - return gr.update(visible = model_name == models[0].name), gr.update(placeholder=prefix) - -def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""): - - print(psutil.virtual_memory()) # print memory usage - - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - - try: - if img is not None: - return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator): - - print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = pipe.to("cpu") - pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt = neg_prompt, - # num_images_per_prompt=n_images, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def 
img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - print(f"{datetime.datetime.now()} img_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = pipe.to("cpu") - pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe( - prompt, - negative_prompt = neg_prompt, - # num_images_per_prompt=n_images, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def replace_nsfw_images(results): - - if is_colab: - return results.images[0] - - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images[0] - -css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-              <div class="finetuned-diffusion-div">
-                <div>
-                  <h1>Anything V3</h1>
-                </div>
-                <p>
-                  Demo for Anything V3
-                </p>
-                <p>This demo is slow on cpu, to use it upgrade to gpu by going to settings after duplicating this space: Duplicate Space</p>
-              </div>
- """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True) - gr.HTML("
Custom models have to be downloaded first, so give it some time.
") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - - image_out = gr.Image(height=512) - # gallery = gr.Gallery( - # label="Generated images", show_label=False, elem_id="gallery" - # ).style(grid=[1], height="auto") - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - - # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - - inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - ex = gr.Examples([ - [models[0].name, "iron man", 7.5, 50], - - ], inputs=[model_name, prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False) - - gr.HTML(""" -
-        <div>
-          <p>Model by Linaqruf</p>
-        </div>
- """) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -if not is_colab: - demo.queue(concurrency_count=1) -demo.launch(debug=is_colab, share=is_colab) \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/ilst_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/ilst_converter.py deleted file mode 100644 index 56ac54e3e30ed95159b25bee69afe39c47896a2a..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/ilst_converter.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv -import mmengine - -from mmocr.utils import dump_ocr_data - - -def collect_files(img_dir, gt_dir): - """Collect all images and their corresponding groundtruth files. - - Args: - img_dir (str): The image directory - gt_dir (str): The groundtruth directory - - Returns: - files (list): The list of tuples (img_file, groundtruth_file) - """ - assert isinstance(img_dir, str) - assert img_dir - assert isinstance(gt_dir, str) - assert gt_dir - - ann_list, imgs_list = [], [] - for img_file in os.listdir(img_dir): - ann_path = osp.join(gt_dir, img_file.split('.')[0] + '.xml') - if os.path.exists(ann_path): - ann_list.append(ann_path) - imgs_list.append(osp.join(img_dir, img_file)) - - files = list(zip(imgs_list, ann_list)) - assert len(files), f'No images found in {img_dir}' - print(f'Loaded {len(files)} images from {img_dir}') - - return files - - -def collect_annotations(files, nproc=1): - """Collect the annotation information. - - Args: - files (list): The list of tuples (image_file, groundtruth_file) - nproc (int): The number of process to collect annotations - - Returns: - images (list): The list of image information dicts - """ - assert isinstance(files, list) - assert isinstance(nproc, int) - - if nproc > 1: - images = mmengine.track_parallel_progress( - load_img_info, files, nproc=nproc) - else: - images = mmengine.track_progress(load_img_info, files) - - return images - - -def load_img_info(files): - """Load the information of one image. - - Args: - files (tuple): The tuple of (img_file, groundtruth_file) - - Returns: - img_info (dict): The dict of the img and annotation information - """ - assert isinstance(files, tuple) - - img_file, gt_file = files - assert osp.basename(gt_file).split('.')[0] == osp.basename(img_file).split( - '.')[0] - # read imgs while ignoring orientations - img = mmcv.imread(img_file, 'unchanged') - - try: - img_info = dict( - file_name=osp.join(osp.basename(img_file)), - height=img.shape[0], - width=img.shape[1], - segm_file=osp.join(osp.basename(gt_file))) - except AttributeError: - print(f'Skip broken img {img_file}') - return None - - if osp.splitext(gt_file)[1] == '.xml': - img_info = load_xml_info(gt_file, img_info) - else: - raise NotImplementedError - - return img_info - - -def load_xml_info(gt_file, img_info): - """Collect the annotation information. - - The annotation format is as the following: - - ... 
- - SMT - Unspecified - 0 - 0 - - 157 - 294 - 237 - 357 - - - - Args: - gt_file (str): The path to ground-truth - img_info (dict): The dict of the img and annotation information - - Returns: - img_info (dict): The dict of the img and annotation information - """ - obj = ET.parse(gt_file) - root = obj.getroot() - anno_info = [] - for object in root.iter('object'): - word = object.find('name').text - iscrowd = 1 if len(word) == 0 else 0 - x1 = int(object.find('bndbox').find('xmin').text) - y1 = int(object.find('bndbox').find('ymin').text) - x2 = int(object.find('bndbox').find('xmax').text) - y2 = int(object.find('bndbox').find('ymax').text) - - x = max(0, min(x1, x2)) - y = max(0, min(y1, y2)) - w, h = abs(x2 - x1), abs(y2 - y1) - bbox = [x1, y1, w, h] - segmentation = [x, y, x + w, y, x + w, y + h, x, y + h] - anno = dict( - iscrowd=iscrowd, - category_id=1, - bbox=bbox, - area=w * h, - segmentation=[segmentation]) - anno_info.append(anno) - - img_info.update(anno_info=anno_info) - - return img_info - - -def split_train_val_list(full_list, val_ratio): - """Split list by val_ratio. - - Args: - full_list (list): List to be split - val_ratio (float): Split ratio for val set - - return: - list(list, list): Train_list and val_list - """ - - n_total = len(full_list) - offset = int(n_total * val_ratio) - if n_total == 0 or offset < 1: - return [], full_list - val_list = full_list[:offset] - train_list = full_list[offset:] - return [train_list, val_list] - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training and val set of ILST ') - parser.add_argument('root_path', help='Root dir path of ILST') - parser.add_argument( - '--val-ratio', help='Split ratio for val set', default=0., type=float) - parser.add_argument( - '--nproc', default=1, type=int, help='Number of processes') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - with mmengine.Timer(print_tmpl='It takes {}s to convert ILST annotation'): - files = collect_files( - osp.join(root_path, 'imgs'), osp.join(root_path, 'annotations')) - image_infos = collect_annotations(files, nproc=args.nproc) - if args.val_ratio: - image_infos = split_train_val_list(image_infos, args.val_ratio) - splits = ['training', 'val'] - else: - image_infos = [image_infos] - splits = ['training'] - for i, split in enumerate(splits): - dump_ocr_data( - list(filter(None, image_infos[i])), - osp.join(root_path, 'instances_' + split + '.json'), 'textdet') - - -if __name__ == '__main__': - main() diff --git a/spaces/NMEX/rvc-hoyogame-v2/README.md b/spaces/NMEX/rvc-hoyogame-v2/README.md deleted file mode 100644 index ef3abf5af3d03c2921ea50f094fa35f3ce67651f..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyogame-v2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: RVC Genshin Impact -emoji: 🎤 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: true -license: mit -duplicated_from: ArkanDash/rvc-models-new ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/asr_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/asr_dataset.py deleted file mode 100644 index 63a6fcac85d73b1fce8e4d044b4209b1b67fa8ce..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/asr_dataset.py +++ 
/dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os - -import numpy as np -from fairseq.data import FairseqDataset - -from . import data_utils -from .collaters import Seq2SeqCollater - - -class AsrDataset(FairseqDataset): - """ - A dataset representing speech and corresponding transcription. - - Args: - aud_paths: (List[str]): A list of str with paths to audio files. - aud_durations_ms (List[int]): A list of int containing the durations of - audio files. - tgt (List[torch.LongTensor]): A list of LongTensors containing the indices - of target transcriptions. - tgt_dict (~fairseq.data.Dictionary): target vocabulary. - ids (List[str]): A list of utterance IDs. - speakers (List[str]): A list of speakers corresponding to utterances. - num_mel_bins (int): Number of triangular mel-frequency bins (default: 80) - frame_length (float): Frame length in milliseconds (default: 25.0) - frame_shift (float): Frame shift in milliseconds (default: 10.0) - """ - - def __init__( - self, - aud_paths, - aud_durations_ms, - tgt, - tgt_dict, - ids, - speakers, - num_mel_bins=80, - frame_length=25.0, - frame_shift=10.0, - ): - assert frame_length > 0 - assert frame_shift > 0 - assert all(x > frame_length for x in aud_durations_ms) - self.frame_sizes = [ - int(1 + (d - frame_length) / frame_shift) for d in aud_durations_ms - ] - - assert len(aud_paths) > 0 - assert len(aud_paths) == len(aud_durations_ms) - assert len(aud_paths) == len(tgt) - assert len(aud_paths) == len(ids) - assert len(aud_paths) == len(speakers) - self.aud_paths = aud_paths - self.tgt_dict = tgt_dict - self.tgt = tgt - self.ids = ids - self.speakers = speakers - self.num_mel_bins = num_mel_bins - self.frame_length = frame_length - self.frame_shift = frame_shift - - self.s2s_collater = Seq2SeqCollater( - 0, - 1, - pad_index=self.tgt_dict.pad(), - eos_index=self.tgt_dict.eos(), - move_eos_to_beginning=True, - ) - - def __getitem__(self, index): - import torchaudio - import torchaudio.compliance.kaldi as kaldi - - tgt_item = self.tgt[index] if self.tgt is not None else None - - path = self.aud_paths[index] - if not os.path.exists(path): - raise FileNotFoundError("Audio file not found: {}".format(path)) - sound, sample_rate = torchaudio.load_wav(path) - output = kaldi.fbank( - sound, - num_mel_bins=self.num_mel_bins, - frame_length=self.frame_length, - frame_shift=self.frame_shift, - ) - output_cmvn = data_utils.apply_mv_norm(output) - - return {"id": index, "data": [output_cmvn.detach(), tgt_item]} - - def __len__(self): - return len(self.aud_paths) - - def collater(self, samples): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[int]): sample indices to collate - - Returns: - dict: a mini-batch suitable for forwarding with a Model - """ - return self.s2s_collater.collate(samples) - - def num_tokens(self, index): - return self.frame_sizes[index] - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return ( - self.frame_sizes[index], - len(self.tgt[index]) if self.tgt is not None else 0, - ) - - def ordered_indices(self): - """Return an ordered list of indices. 
Batches will be constructed based - on this order.""" - return np.arange(len(self)) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py deleted file mode 100644 index 062bb82f669f63a537b6ee8df4d42d292eb2575e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py +++ /dev/null @@ -1,201 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import nltk -from misc.bleu_utils import sentence_bleu -import warnings - - -def get_target_sequences(manifest, ground_truth, to_take=1000): - import json - import pathlib - - with open(ground_truth, 'r') as fin: - original_continuations = json.loads(fin.read()) - - sequence2length = [(k, v[0]) for k, v in original_continuations.items()] - assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds - - sequence2length.sort(key=lambda x: x[1]) - to_take_sequences = set(v[0] for v in sequence2length[:to_take]) - to_take_ids = [] - - with open(manifest, 'r') as f: - f.readline() - - for i, line in enumerate(f.readlines()): - seq_id = line.split()[0] - seq_id = pathlib.Path(seq_id).name.split('__')[0] - - if seq_id in to_take_sequences: - to_take_ids.append(i) - - print(f'Took {len(to_take_ids)} ids') - return set(to_take_ids) - - -def get_args(): - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument('--asr-transcript', type=str, - help='Path to the transcript file.') - - parser.add_argument('--manifest', required=True) - parser.add_argument('--prompts-description', required=True) - - parser.add_argument('--cut-id', action='store_true', - help='Whether cut the first token (typically a seq id)') - parser.add_argument('--cut-tail', action='store_true', - help='Whether cut the last token (typically a speaker id)') - parser.add_argument('--debug', action='store_true') - - args = parser.parse_args() - - return args - - -def get_self_bleu(utterances, averaging_mode, weights): - self_bleu = [] - - for i in range(len(utterances)): - hypo = utterances[i] - rest = utterances[:i] + utterances[i+1:] - - self_bleu.append(sentence_bleu(rest, hypo, weights, - no_length_penalty=True, averaging_mode=averaging_mode)) - - return self_bleu - - -def get_self_bleu2_arithmetic(utterances): - weights = (0.5, 0.5) # equal weight for unigrams and bigrams - return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights) - - -def get_self_bleu2_geometric(utterances): - weights = (0.5, 0.5) - return get_self_bleu(utterances, averaging_mode='geometric', weights=weights) - - -def get_auto_bleu2_arithmetic(utterances): - weights = (0.5, 0.5) - return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances] - - -def get_auto_bleu2_geometric(utterances): - weights = (0.5, 0.5) - return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances] - - -def get_auto_bleu3_geometric(utterances): - weights = (1./3, 1./3, 1./3) - return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances] - - -def get_auto_bleu3_arithmetic(utterances): - weights = (1./3, 1./3, 1./3) - return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances] - - -def 
get_self_bleu3_arithmetic(utterances): - weights = (1./3, 1./3, 1./3) - return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights) - - -def get_self_bleu3_geometric(utterances): - weights = (1./3, 1./3, 1./3) - return get_self_bleu(utterances, averaging_mode='geometric', weights=weights) - - -def auto_bleu(sentence, weights, mean_mode='arithmetic'): - if len(sentence) <= 1: - return 0 - - N = len(weights) - - bleu_n = np.zeros([N]) - for n in range(N): - targ_ngrams = list(nltk.ngrams(sentence, n+1)) - for p in range(len(targ_ngrams)): - left = sentence[:p] - right = sentence[(p+n+1):] - rest_ngrams = list(nltk.ngrams(left, n+1)) + \ - list(nltk.ngrams(right, n+1)) - # compute the nb of matching ngrams - bleu_n[n] += targ_ngrams[p] in rest_ngrams - bleu_n[n] /= len(targ_ngrams) # average them to get a proportion - - weights = np.array(weights) - if mean_mode == 'arithmetic': - return (bleu_n * weights).sum() - elif mean_mode == 'geometric': - return (bleu_n ** weights).prod() - else: - raise ValueError(f'Unknown agggregation mode {mean_mode}') - - -def main(): - from multiprocessing import Pool - - args = get_args() - target_ids = get_target_sequences(args.manifest, args.prompts_description) - - with open(args.asr_transcript, 'r') as fin: - lines = fin.readlines() - - terms = [x.strip().split() for x in lines] - filtered = [] - for term in terms: - line_id = int(term[-1].split('-')[1][:-1]) - if line_id in target_ids: - filtered.append(term) - terms = filtered - - if args.cut_id: - terms = [x[1:] for x in terms] - if args.cut_tail: - terms = [x[:-1] for x in terms] - - if args.debug: - terms = terms[:10] - - tasks = [ - ('Self-BLEU2-arithmetic', get_self_bleu2_arithmetic), - ('Self-BLEU2-geometric', get_self_bleu2_geometric), - ('Auto-BLEU2-arithmetic', get_auto_bleu2_arithmetic), - ('Auto-BLEU2-geometric', get_auto_bleu2_geometric), - - ('Self-BLEU3-arithmetic', get_self_bleu3_arithmetic), - ('Self-BLEU3-geometric', get_self_bleu3_geometric), - ('Auto-BLEU3-arithmetic', get_auto_bleu3_arithmetic), - ('Auto-BLEU3-geometric', get_auto_bleu3_geometric), - ] - - n_processes = min(16, len(tasks)) - with Pool(n_processes) as pool: - metrics = pool.map(run_f, [(t[1], terms) for t in tasks]) - - for (metric_name, _), metric in zip(tasks, metrics): - metric, sem = np.mean(metric), np.std(metric) / np.sqrt(len(metric)) - - metric, sem = [ - round(100 * x, 2) for x in [metric, sem] - ] - - print(f'{metric_name} {metric} +- {sem}') - - -def run_f(task_params): - f, terms = task_params - return f(terms) - - -if __name__ == '__main__': - # NLTK produces warnings - warnings.filterwarnings("ignore") - - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/nat/levenshtein_transformer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/nat/levenshtein_transformer.py deleted file mode 100644 index d60d3c52d50b1f20957039a75622ffb95d5eea24..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/nat/levenshtein_transformer.py +++ /dev/null @@ -1,510 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.iterative_refinement_generator import DecoderOut -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import FairseqNATDecoder, FairseqNATModel, ensemble_decoder -from fairseq.models.transformer import Embedding -from fairseq.modules import TransformerDecoderLayer -from fairseq.modules.transformer_sentence_encoder import init_bert_params - -from .levenshtein_utils import ( - _apply_del_words, - _apply_ins_masks, - _apply_ins_words, - _fill, - _get_del_targets, - _get_ins_targets, - _skip, - _skip_encoder_out, -) - - -@register_model("levenshtein_transformer") -class LevenshteinTransformerModel(FairseqNATModel): - @property - def allow_length_beam(self): - return False - - @staticmethod - def add_args(parser): - FairseqNATModel.add_args(parser) - parser.add_argument( - "--early-exit", - default="6,6,6", - type=str, - help="number of decoder layers before word_del, mask_ins, word_ins", - ) - parser.add_argument( - "--no-share-discriminator", - action="store_true", - help="separate parameters for discriminator", - ) - parser.add_argument( - "--no-share-maskpredictor", - action="store_true", - help="separate parameters for mask-predictor", - ) - parser.add_argument( - "--share-discriminator-maskpredictor", - action="store_true", - help="share the parameters for both mask-predictor and discriminator", - ) - parser.add_argument( - "--sampling-for-deletion", - action="store_true", - help="instead of argmax, use sampling to predict the tokens", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = LevenshteinTransformerDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - decoder.apply(init_bert_params) - return decoder - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - - assert tgt_tokens is not None, "forward function only supports training." 
- - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # generate training labels for insertion - masked_tgt_masks, masked_tgt_tokens, mask_ins_targets = _get_ins_targets( - prev_output_tokens, tgt_tokens, self.pad, self.unk - ) - mask_ins_targets = mask_ins_targets.clamp(min=0, max=255) # for safe prediction - mask_ins_masks = prev_output_tokens[:, 1:].ne(self.pad) - - mask_ins_out, _ = self.decoder.forward_mask_ins( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_out, _ = self.decoder.forward_word_ins( - normalize=False, - prev_output_tokens=masked_tgt_tokens, - encoder_out=encoder_out, - ) - - # make online prediction - if self.decoder.sampling_for_deletion: - word_predictions = torch.multinomial( - F.softmax(word_ins_out, -1).view(-1, word_ins_out.size(-1)), 1 - ).view(word_ins_out.size(0), -1) - else: - word_predictions = F.log_softmax(word_ins_out, dim=-1).max(2)[1] - - word_predictions.masked_scatter_( - ~masked_tgt_masks, tgt_tokens[~masked_tgt_masks] - ) - - # generate training labels for deletion - word_del_targets = _get_del_targets(word_predictions, tgt_tokens, self.pad) - word_del_out, _ = self.decoder.forward_word_del( - normalize=False, - prev_output_tokens=word_predictions, - encoder_out=encoder_out, - ) - word_del_masks = word_predictions.ne(self.pad) - - return { - "mask_ins": { - "out": mask_ins_out, - "tgt": mask_ins_targets, - "mask": mask_ins_masks, - "ls": 0.01, - }, - "word_ins": { - "out": word_ins_out, - "tgt": tgt_tokens, - "mask": masked_tgt_masks, - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "word_del": { - "out": word_del_out, - "tgt": word_del_targets, - "mask": word_del_masks, - }, - } - - def forward_decoder( - self, decoder_out, encoder_out, eos_penalty=0.0, max_ratio=None, **kwargs - ): - - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - attn = decoder_out.attn - history = decoder_out.history - - bsz = output_tokens.size(0) - if max_ratio is None: - max_lens = torch.zeros_like(output_tokens).fill_(255) - else: - if not encoder_out["encoder_padding_mask"]: - max_src_len = encoder_out["encoder_out"].size(0) - src_lens = encoder_out["encoder_out"].new(bsz).fill_(max_src_len) - else: - src_lens = (~encoder_out["encoder_padding_mask"][0]).sum(1) - max_lens = (src_lens * max_ratio).clamp(min=10).long() - - # delete words - # do not delete tokens if it is - can_del_word = output_tokens.ne(self.pad).sum(1) > 2 - if can_del_word.sum() != 0: # we cannot delete, skip - word_del_score, word_del_attn = self.decoder.forward_word_del( - normalize=True, - prev_output_tokens=_skip(output_tokens, can_del_word), - encoder_out=_skip_encoder_out(self.encoder, encoder_out, can_del_word), - ) - word_del_pred = word_del_score.max(-1)[1].bool() - - _tokens, _scores, _attn = _apply_del_words( - output_tokens[can_del_word], - output_scores[can_del_word], - word_del_attn, - word_del_pred, - self.pad, - self.bos, - self.eos, - ) - output_tokens = _fill(output_tokens, can_del_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_del_word, _scores, 0) - attn = _fill(attn, can_del_word, _attn, 0.0) - - if history is not None: - history.append(output_tokens.clone()) - - # insert placeholders - can_ins_mask = output_tokens.ne(self.pad).sum(1) < max_lens - if can_ins_mask.sum() != 0: - mask_ins_score, _ = self.decoder.forward_mask_ins( - normalize=True, - prev_output_tokens=_skip(output_tokens, can_ins_mask), - 
encoder_out=_skip_encoder_out(self.encoder, encoder_out, can_ins_mask), - ) - if eos_penalty > 0.0: - mask_ins_score[:, :, 0] = mask_ins_score[:, :, 0] - eos_penalty - mask_ins_pred = mask_ins_score.max(-1)[1] - mask_ins_pred = torch.min( - mask_ins_pred, max_lens[can_ins_mask, None].expand_as(mask_ins_pred) - ) - - _tokens, _scores = _apply_ins_masks( - output_tokens[can_ins_mask], - output_scores[can_ins_mask], - mask_ins_pred, - self.pad, - self.unk, - self.eos, - ) - output_tokens = _fill(output_tokens, can_ins_mask, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_mask, _scores, 0) - - if history is not None: - history.append(output_tokens.clone()) - - # insert words - can_ins_word = output_tokens.eq(self.unk).sum(1) > 0 - if can_ins_word.sum() != 0: - word_ins_score, word_ins_attn = self.decoder.forward_word_ins( - normalize=True, - prev_output_tokens=_skip(output_tokens, can_ins_word), - encoder_out=_skip_encoder_out(self.encoder, encoder_out, can_ins_word), - ) - word_ins_score, word_ins_pred = word_ins_score.max(-1) - _tokens, _scores = _apply_ins_words( - output_tokens[can_ins_word], - output_scores[can_ins_word], - word_ins_pred, - word_ins_score, - self.unk, - ) - - output_tokens = _fill(output_tokens, can_ins_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_word, _scores, 0) - attn = _fill(attn, can_ins_word, word_ins_attn, 0.0) - - if history is not None: - history.append(output_tokens.clone()) - - # delete some unnecessary paddings - cut_off = output_tokens.ne(self.pad).sum(1).max() - output_tokens = output_tokens[:, :cut_off] - output_scores = output_scores[:, :cut_off] - attn = None if attn is None else attn[:, :cut_off, :] - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=attn, - history=history, - ) - - def initialize_output_tokens(self, encoder_out, src_tokens): - initial_output_tokens = src_tokens.new_zeros(src_tokens.size(0), 2) - initial_output_tokens[:, 0] = self.bos - initial_output_tokens[:, 1] = self.eos - - initial_output_scores = initial_output_tokens.new_zeros( - *initial_output_tokens.size() - ).type_as(encoder_out["encoder_out"][0]) - - return DecoderOut( - output_tokens=initial_output_tokens, - output_scores=initial_output_scores, - attn=None, - step=0, - max_step=0, - history=None, - ) - - -class LevenshteinTransformerDecoder(FairseqNATDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - self.dictionary = dictionary - self.bos = dictionary.bos() - self.unk = dictionary.unk() - self.eos = dictionary.eos() - self.sampling_for_deletion = getattr(args, "sampling_for_deletion", False) - self.embed_mask_ins = Embedding(256, self.output_embed_dim * 2, None) - self.embed_word_del = Embedding(2, self.output_embed_dim, None) - - # del_word, ins_mask, ins_word - self.early_exit = [int(i) for i in args.early_exit.split(",")] - assert len(self.early_exit) == 3 - - # copy layers for mask-predict/deletion - self.layers_msk = None - if getattr(args, "no_share_maskpredictor", False): - self.layers_msk = nn.ModuleList( - [ - TransformerDecoderLayer(args, no_encoder_attn) - for _ in range(self.early_exit[1]) - ] - ) - self.layers_del = None - if getattr(args, "no_share_discriminator", False): - self.layers_del = nn.ModuleList( - [ - TransformerDecoderLayer(args, no_encoder_attn) - for _ in range(self.early_exit[0]) - ] - ) - - if getattr(args, 
"share_discriminator_maskpredictor", False): - assert getattr( - args, "no_share_discriminator", False - ), "must set saperate discriminator" - self.layers_msk = self.layers_del - - def extract_features( - self, - prev_output_tokens, - encoder_out=None, - early_exit=None, - layers=None, - **unused - ): - """ - Similar to *forward* but only return features. - Inputs: - prev_output_tokens: Tensor(B, T) - encoder_out: a dictionary of hidden states and masks - - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - the LevenshteinTransformer decoder has full-attention to all generated tokens - """ - # embed positions - positions = ( - self.embed_positions(prev_output_tokens) - if self.embed_positions is not None - else None - ) - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - inner_states = [x] - - # decoder layers - decoder_padding_mask = prev_output_tokens.eq(self.padding_idx) - layers = self.layers if layers is None else layers - early_exit = len(layers) if early_exit is None else early_exit - for _, layer in enumerate(layers[:early_exit]): - x, attn, _ = layer( - x, - encoder_out["encoder_out"][0] - if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0) - else None, - encoder_out["encoder_padding_mask"][0] - if ( - encoder_out is not None - and len(encoder_out["encoder_padding_mask"]) > 0 - ) - else None, - self_attn_mask=None, - self_attn_padding_mask=decoder_padding_mask, - ) - inner_states.append(x) - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x, {"attn": attn, "inner_states": inner_states} - - @ensemble_decoder - def forward_mask_ins(self, normalize, encoder_out, prev_output_tokens, **unused): - features, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - early_exit=self.early_exit[1], - layers=self.layers_msk, - **unused - ) - features_cat = torch.cat([features[:, :-1, :], features[:, 1:, :]], 2) - decoder_out = F.linear(features_cat, self.embed_mask_ins.weight) - if normalize: - return F.log_softmax(decoder_out, -1), extra["attn"] - return decoder_out, extra["attn"] - - @ensemble_decoder - def forward_word_ins(self, normalize, encoder_out, prev_output_tokens, **unused): - features, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - early_exit=self.early_exit[2], - layers=self.layers, - **unused - ) - decoder_out = self.output_layer(features) - if normalize: - return F.log_softmax(decoder_out, -1), extra["attn"] - return decoder_out, extra["attn"] - - @ensemble_decoder - def forward_word_del(self, normalize, encoder_out, prev_output_tokens, **unused): - features, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - early_exit=self.early_exit[0], - layers=self.layers_del, - **unused - ) - decoder_out = F.linear(features, self.embed_word_del.weight) - if normalize: - return F.log_softmax(decoder_out, -1), extra["attn"] - return decoder_out, extra["attn"] - - -@register_model_architecture("levenshtein_transformer", "levenshtein_transformer") -def levenshtein_base_architecture(args): - args.encoder_embed_path = 
getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.sampling_for_deletion = getattr(args, "sampling_for_deletion", False) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.early_exit = getattr(args, "early_exit", "6,6,6") - args.no_share_discriminator = getattr(args, "no_share_discriminator", False) - args.no_share_maskpredictor = getattr(args, "no_share_maskpredictor", False) - args.share_discriminator_maskpredictor = getattr( - args, "share_discriminator_maskpredictor", False - ) - args.no_share_last_layer = getattr(args, "no_share_last_layer", False) - - -@register_model_architecture( - "levenshtein_transformer", "levenshtein_transformer_wmt_en_de" -) -def levenshtein_transformer_wmt_en_de(args): - levenshtein_base_architecture(args) - - -# similar parameters used in the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture( - "levenshtein_transformer", "levenshtein_transformer_vaswani_wmt_en_de_big" -) -def levenshtein_transformer_vaswani_wmt_en_de_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - 
levenshtein_base_architecture(args) - - -# default parameters used in tensor2tensor implementation -@register_model_architecture( - "levenshtein_transformer", "levenshtein_transformer_wmt_en_de_big" -) -def levenshtein_transformer_wmt_en_de_big_t2t(args): - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_dropout = getattr(args, "activation_dropout", 0.1) - levenshtein_transformer_vaswani_wmt_en_de_big(args) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/README.md deleted file mode 100644 index 62a005e0ec6f15af9015d335e34b45df6ed89b6c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/simultaneous_translation/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Simultaneous Translation -Examples of simultaneous translation in fairseq -- [English-to-Japanese text-to-text wait-k model](docs/enja-waitk.md) -- [English-to-Germen text-to-text monotonic multihead attention model](docs/ende-mma.md) -- [English-to-Germen speech-to-text simultaneous translation model](../speech_to_text/docs/simulst_mustc_example.md) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/docs/iwslt2021.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/docs/iwslt2021.md deleted file mode 100644 index 920ff271c2e178c7a4ca3c7c8ce57a2f28653969..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/docs/iwslt2021.md +++ /dev/null @@ -1,76 +0,0 @@ -[[Back]](..) - -# Joint Speech Text Training for the 2021 IWSLT multilingual speech translation - -This directory contains the code from paper ["FST: the FAIR Speech Translation System for the IWSLT21 Multilingual Shared Task"](https://arxiv.org/pdf/2107.06959.pdf). 
- -## Prepare Data -#### Download files -- Sentence piece model [spm.model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/spm.model) -- Dictionary [tgt_dict.txt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/dict.txt) -- Config [config.yaml](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/config.yaml) - -#### Prepare -- [Please follow the data preparation in speech-to-text](https://github.com/pytorch/fairseq/blob/main/examples/speech_to_text/docs/mtedx_example.md) - - - -## Training - -#### Download pretrained models -- [Pretrained mbart model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/mbart.pt) -- [Pretrained w2v model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/xlsr_53_56k.pt) - - -#### Training scripts - -```bash -python train.py ${MANIFEST_ROOT} \ - --save-dir ${save_dir} \ - --user-dir examples/speech_text_joint_to_text \ - --train-subset train_es_en_tedx,train_es_es_tedx,train_fr_en_tedx,train_fr_es_tedx,train_fr_fr_tedx,train_it_it_tedx,train_pt_en_tedx,train_pt_pt_tedx \ - --valid-subset valid_es_en_tedx,valid_es_es_tedx,valid_es_fr_tedx,valid_es_it_tedx,valid_es_pt_tedx,valid_fr_en_tedx,valid_fr_es_tedx,valid_fr_fr_tedx,valid_fr_pt_tedx,valid_it_en_tedx,valid_it_es_tedx,valid_it_it_tedx,valid_pt_en_tedx,valid_pt_es_tedx,valid_pt_pt_tedx \ - --config-yaml config.yaml --ddp-backend no_c10d \ - --num-workers 2 --task speech_text_joint_to_text \ - --criterion guided_label_smoothed_cross_entropy_with_accuracy \ - --label-smoothing 0.3 --guide-alpha 0.8 \ - --disable-text-guide-update-num 5000 --arch dualinputxmtransformer_base \ - --max-tokens 500000 --max-sentences 3 --max-tokens-valid 800000 \ - --max-source-positions 800000 --enc-grad-mult 2.0 \ - --attentive-cost-regularization 0.02 --optimizer adam \ - --clip-norm 1.0 --log-format simple --log-interval 200 \ - --keep-last-epochs 5 --seed 1 \ - --w2v-path ${w2v_path} \ - --load-pretrained-mbart-from ${mbart_path} \ - --max-update 1000000 --update-freq 4 \ - --skip-invalid-size-inputs-valid-test \ - --skip-encoder-projection --save-interval 1 \ - --attention-dropout 0.3 --mbart-dropout 0.3 \ - --finetune-w2v-params all --finetune-mbart-decoder-params all \ - --finetune-mbart-encoder-params all --stack-w2v-mbart-encoder \ - --drop-w2v-layers 12 --normalize \ - --lr 5e-05 --lr-scheduler inverse_sqrt --warmup-updates 5000 -``` - -## Evaluation -```bash -python ./fairseq_cli/generate.py - ${MANIFEST_ROOT} \ - --task speech_text_joint_to_text \ - --user-dir ./examples/speech_text_joint_to_text \ - --load-speech-only --gen-subset test_es_en_tedx \ - --path ${model} \ - --max-source-positions 800000 \ - --skip-invalid-size-inputs-valid-test \ - --config-yaml config.yaml \ - --infer-target-lang en \ - --max-tokens 800000 \ - --beam 5 \ - --results-path ${RESULTS_DIR} \ - --scoring sacrebleu -``` -The trained model can be downloaded [here](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/checkpoint17.pt) - -|direction|es_en|fr_en|pt_en|it_en|fr_es|pt_es|it_es|es_es|fr_fr|pt_pt|it_it| -|---|---|---|---|---|---|---|---|---|---|---|---| -|BLEU|31.62|36.93|35.07|27.12|38.87|35.57|34.13|74.59|74.64|70.84|69.76| diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lightweight_convolution.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lightweight_convolution.py deleted file mode 100644 index 
ec11a9507951c9e8f3564753841dd9c74a4900e0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lightweight_convolution.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.unfold import unfold1d - - -def LightweightConv( - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - bias=False, -): - if torch.cuda.is_available(): - try: - from fairseq.modules.lightconv_layer import LightconvLayer - - return LightconvLayer( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - bias=bias, - ) - except ImportError as e: - print(e) - return LightweightConv1dTBC( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - bias=bias, - ) - - -class LightweightConv1d(nn.Module): - """Lightweight Convolution assuming the input is BxCxT - This is just an example that explains LightConv clearer than the TBC version. - We don't use this module in the model. - - Args: - input_size: # of channels of the input and output - kernel_size: convolution channels - padding: padding - num_heads: number of heads used. The weight is of shape - `(num_heads, 1, kernel_size)` - weight_softmax: normalize the weight with softmax before the convolution - - Shape: - Input: BxCxT, i.e. (batch_size, input_size, timesteps) - Output: BxCxT, i.e. 
(batch_size, input_size, timesteps) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding=0, - num_heads=1, - weight_softmax=False, - bias=False, - weight_dropout=0.0, - ): - super().__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.num_heads = num_heads - self.padding = padding - self.weight_softmax = weight_softmax - self.weight = nn.Parameter(torch.Tensor(num_heads, 1, kernel_size)) - - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.reset_parameters() - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, input): - """ - input size: B x C x T - output size: B x C x T - """ - B, C, T = input.size() - H = self.num_heads - - weight = self.weight - if self.weight_softmax: - weight = F.softmax(weight, dim=-1) - - weight = self.weight_dropout_module(weight) - # Merge every C/H entries into the batch dimension (C = self.input_size) - # B x C x T -> (B * C/H) x H x T - # One can also expand the weight to C x 1 x K by a factor of C/H - # and do not reshape the input instead, which is slow though - input = input.view(-1, H, T) - output = F.conv1d(input, weight, padding=self.padding, groups=self.num_heads) - output = output.view(B, C, T) - if self.bias is not None: - output = output + self.bias.view(1, -1, 1) - - return output - - -@with_incremental_state -class LightweightConv1dTBC(nn.Module): - """Lightweight Convolution assuming the input is TxBxC - Args: - input_size: # of channels of the input - kernel_size: convolution channels - padding_l: padding to the left when using "same" padding - num_heads: number of heads used. The weight is of shape (num_heads, 1, kernel_size) - weight_dropout: the drop rate of the DropConnect to drop the weight - weight_softmax: normalize the weight with softmax before the convolution - bias: use bias - - Shape: - Input: TxBxC, i.e. (timesteps, batch_size, input_size) - Output: TxBxC, i.e. (timesteps, batch_size, input_size) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - bias=False, - ): - super().__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.weight_softmax = weight_softmax - - self.weight = nn.Parameter(torch.Tensor(num_heads, 1, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - - self.reset_parameters() - self.onnx_trace = False - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, x, incremental_state=None, unfold=False): - """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C - args: - x: Input of shape T x B x C, i.e. 
(timesteps, batch_size, input_size) - incremental_state: A dict to keep the state - unfold: unfold the input or not. If not, we use the matrix trick instead - """ - unfold = unfold or (incremental_state is not None) - - if unfold: - output = self._forward_unfolded(x, incremental_state) - else: - output = self._forward_expanded(x, incremental_state) - - if self.bias is not None: - output = output + self.bias.view(1, 1, -1) - return output - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def _forward_unfolded(self, x, incremental_state): - """The conventional implementation of convolutions. - Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight.view(H, K) - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, self.kernel_size, self.padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax: - weight = utils.softmax(weight, dim=1, onnx_trace=self.onnx_trace).type_as( - weight - ) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - weight = ( - weight.view(1, H, K).expand(T * B, H, K).contiguous().view(T * B * H, K, 1) - ) - - weight = self.weight_dropout_module(weight) - output = torch.bmm(x_unfold, weight) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_state): - """Turn the convolution filters into band matrices and do matrix multiplication. - This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. 
- """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight.view(H, K) - if self.weight_softmax: - weight = utils.softmax(weight, dim=1, onnx_trace=self.onnx_trace).type_as( - weight - ) - weight = weight.view(1, H, K).expand(T * B, H, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - P = self.padding_l - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided((B * H, T, K), (T * (T + K - 1), T + K, 1)).copy_( - weight - ) - weight_expanded = weight_expanded.narrow(2, P, T) - weight_expanded = self.weight_dropout_module(weight_expanded) - - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def extra_repr(self): - s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, bias={}".format( - self.input_size, - self.kernel_size, - self.padding_l, - self.num_heads, - self.weight_softmax, - self.bias is not None, - ) - if self.weight_dropout_module.p > 0.0: - s += ", weight_dropout={}".format(self.weight_dropout_module.p) - return s diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tokenizer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tokenizer.py deleted file mode 100644 index 42131f7b1d334020c3b48a6e44d4139f7c62ad28..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tokenizer.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import re - - -SPACE_NORMALIZER = re.compile(r"\s+") - - -def tokenize_line(line): - line = SPACE_NORMALIZER.sub(" ", line) - line = line.strip() - return line.split() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/spm_decode.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/spm_decode.py deleted file mode 100644 index 1c18b1d2a7d7628b7aeb6fdb6c4ab5a096e9edf8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/spm_decode.py +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from __future__ import absolute_import, division, print_function, unicode_literals - -import argparse - -import sentencepiece as spm - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model", required=True, help="sentencepiece model to use for decoding" - ) - parser.add_argument("--input", required=True, help="input file to decode") - parser.add_argument("--input_format", choices=["piece", "id"], default="piece") - args = parser.parse_args() - - sp = spm.SentencePieceProcessor() - sp.Load(args.model) - - if args.input_format == "piece": - - def decode(l): - return "".join(sp.DecodePieces(l)) - - elif args.input_format == "id": - - def decode(l): - return "".join(sp.DecodeIds(l)) - - else: - raise NotImplementedError - - def tok2int(tok): - # remap reference-side (represented as <>) to 0 - return int(tok) if tok != "<>" else 0 - - with open(args.input, "r", encoding="utf-8") as h: - for line in h: - if args.input_format == "id": - print(decode(list(map(tok2int, line.rstrip().split())))) - elif args.input_format == "piece": - print(decode(line.rstrip().split())) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/m2m_100/README.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/m2m_100/README.md deleted file mode 100644 index 02a68a5f0919a26a0468069bed46a5b1abc78941..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/m2m_100/README.md +++ /dev/null @@ -1,241 +0,0 @@ -# Beyond English-Centric Multilingual Machine Translation - -## Introduction -In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively with the best single systems of WMT. - -If you are new to using fairseq, read the following walkthrough. Otherwise, skip to the sections below. - -0. **Generation Data** - -To download the generation data, follow the below commands. Note that all datasets need to be detokenized *before* applying SPM in the data preprocessing step. If you use these evaluation datasets, please cite their associated papers. -```bash -# WMT - use sacrebleu, example here: -sacrebleu -t wmt14 -l fr-en --echo src > wmt.test.fr-en.fr -sacrebleu -t wmt14 -l fr-en --echo ref > wmt.test.fr-en.en - -# WAT -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip -unzip wat2020.my-en.zip - -# FLORES -# download from: https://github.com/facebookresearch/flores - -# TED - need to detokenize with Moses! -# from: https://github.com/neulab/word-embeddings-for-nmt -wget http://phontron.com/data/ted_talks.tar.gz - -# Autshumato -# request to download: https://repo.sadilar.org/handle/20.500.12185/397 - -# Tatoeba Challenge -# available here: https://github.com/Helsinki-NLP/Tatoeba-Challenge -``` - -1. **Training Data** - -To produce the training data, we use a combination of [CCMatrix](https://arxiv.org/abs/1911.04944) and [CCAligned](https://arxiv.org/abs/1911.06154). Check out the instructions [here](https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix) to download the raw data. - -2. **Preprocess Data** - -After downloading raw data, you will need to postprocess the data, then apply SPM, then binarize. 
Note that it is very important you run the postprocessing script, because this removes any instance of the evaluation data in the mined training data. - -```bash -# preprocess data - -# remove sentences with more than 50% punctuation -python /path/to/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py - -# deduplicate training data -paste /path/to/datadir/train.$src /path/to/datadir/train.$tgt | awk '!x[$0]++' > /path/to/datadir/train.dedup -echo "keeping $(wc -l /path/to/datadir/train.dedup) bitext out of $(wc -l /path/to/datadir/train.$src)" -cut -f1 /path/to/datadir/train.dedup > /path/to/datadir/train.$src -cut -f2 /path/to/datadir/train.dedup > /path/to/datadir/train.$tgt - -# remove all instances of evaluation data from the training data -python /path/to/fairseq/examples/m2m_100/process_data/dedup_data.py - -# frequency cleaning -wget https://dl.fbaipublicfiles.com/m2m_100/histograms.tar.gz -tar -xvzf histograms.tar.gz -python /path/to/fairseq/examples/m2m_100/process_data/clean_histogram.py --src $src --tgt $tgt --src-file /path/to/source/file --tgt-file /path/to/output/file --src-output-file source_output.$src --tgt-output-file target_output.$tgt --histograms /path/to/histograms - -# apply SPM -wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model -python /path/to/fairseq/scripts/spm_encode.py \ - --model spm.128k.model \ - --output_format=piece \ - --inputs=/path/to/input/file/here \ - --outputs=/path/to/output/file/here - -# length ratio cleaning -perl mosesdecoder/scripts/training/clean-corpus-n.perl --ratio 3 /path/to/training/data/train.spm.$src-$tgt $src $tgt /path/to/output/directory/train.spm.$src-$tgt 1 250 - -# binarize data -wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt -fairseq-preprocess \ - --source-lang $src --target-lang $tgt \ - --testpref spm.$src.$tgt \ - --thresholdsrc 0 --thresholdtgt 0 \ - --destdir data_bin \ - --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt -``` - -3. **Training Scripts** - -To reproduce the training of our models, we train with fairseq-py's multilingual translation [task](https://github.com/pytorch/fairseq/tree/main/examples/multilingual). If you are interested in model parallel training, also check out [fairscale](https://github.com/facebookresearch/fairscale). - -4. **Generation** - -To generate from our models, follow the the commands in the generation section below. 
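As a side note, the SPM step from the preprocessing block above can also be driven from Python through the `sentencepiece` package instead of the `spm_encode.py` script. The snippet below is only an illustrative sketch (the model path and example sentence are placeholders), not part of the official pipeline:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("spm.128k.model")  # downloaded as shown in the preprocessing block

sentence = "Hello world"
pieces = sp.EncodeAsPieces(sentence)   # list of subword pieces
restored = sp.DecodePieces(pieces)     # round-trips back to the original text
print(pieces, restored)
```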
- - -If you use any of the resources listed here, please cite: -```bibtex -@article{fan2020beyond, - title={Beyond English-Centric Multilingual Machine Translation}, - author={Fan, Angela and Bhosale, Shruti and Schwenk, Holger and Ma, Zhiyi and El-Kishky, Ahmed and Goyal, Siddharth and Baines, Mandeep and Celebi, Onur and Wenzek, Guillaume and Chaudhary, Vishrav and Goyal, Naman and Birch, Tom and Liptchinsky, Vitaliy and Edunov, Sergey and Grave, Edouard and Auli, Michael and Joulin, Armand}, - journal={arXiv preprint}, - year={2020} -} - -@article{schwenk2019ccmatrix, - title={Ccmatrix: Mining billions of high-quality parallel sentences on the web}, - author={Schwenk, Holger and Wenzek, Guillaume and Edunov, Sergey and Grave, Edouard and Joulin, Armand}, - journal={arXiv preprint arXiv:1911.04944}, - year={2019} -} - -@article{el2019massive, - title={A Massive Collection of Cross-Lingual Web-Document Pairs}, - author={El-Kishky, Ahmed and Chaudhary, Vishrav and Guzman, Francisco and Koehn, Philipp}, - journal={arXiv preprint arXiv:1911.06154}, - year={2019} -} -``` - - -## Trained Models - -### 418M and 1.2B Model -We include the last checkpoint for both of these models. - -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt -wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs_small_models.txt - -# 418M parameter model -wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt - -# 1.2B parameter model -wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt - -# Generation: -fairseq-generate $binarized_data_path --batch-size 32 --path $path_to_model --fixed-dictionary model_dict.128k.txt -s en -t fr --remove-bpe 'sentencepiece' --beam 5 --task translation_multi_simple_epoch --lang-pairs language_pairs_small_models.txt --decoder-langtok --encoder-langtok src --gen-subset test > gen_out -``` - -### 12B Model -12B parameter model trained on many-to-many training data for 100 languages. We include the last checkpoint, average of last 5 checkpoints, average of last 10 checkpoints. There isn't a universally best choice out of these three, but all three versions are pretty close in accuracy. You can either sweep over the 3 checkpoints on a dev test and use the best performing checkpoint for final testing. Or the last checkpoint can be a good default choice. 
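The "average of last N checkpoints" variants are produced by element-wise averaging of the saved model weights; fairseq provides `scripts/average_checkpoints.py` for this. Purely as an illustration of the idea (assuming standard fairseq checkpoints with a `"model"` key holding the state dict), the core operation looks roughly like:

```python
import torch

def average_model_states(paths):
    """Average the 'model' state dicts of several checkpoints (sketch only)."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}
```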
- -**Model Download Links** -Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs -:--|:--|:--|:--|:-- -Last Checkpoint | [12b_last_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_2_gpus.pt) | [12b_last_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt) | [12b_last_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_6_gpus.pt) | [12b_last_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_8_gpus.pt) -Average of last 5 checkpoints | [12b_avg5_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_2_gpus.pt) | [12b_avg5_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_4_gpus.pt) | [12b_avg5_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_6_gpus.pt) | [12b_avg5_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_8_gpus.pt) -Average of last 10 checkpoints | [12b_avg10_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_2_gpus.pt) | [12b_avg10_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_4_gpus.pt) | [12b_avg10_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_6_gpus.pt) | [12b_avg10_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_8_gpus.pt) - -**Generation Arguments** -Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs -:--|:--|:--|:--|:-- -`--pipeline-encoder-balance` | `[26]` | `[1,15,10]` | `[1,9,9,7]` | `[1,6,6,6,7]` -`--pipeline-encoder-devices` | `[0]` | `[0,1,0]` | `[0,1,2,0]` | `[0,4,5,1,0]` -`--pipeline-decoder-balance` | `[3,22,1]` | `[3,11,11,1]` | `[3,7,7,8,1]` | `[1,6,6,6,6,1]` -`--pipeline-decoder-devices` | `[0,1,0]` | `[0,2,3,0]` | `[0,3,4,5,0]` | `[0,2,6,7,3,0]` - - -## SentencePiece Model - -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model -``` - -## Generation with M2M-100 - -### Encode using our SentencePiece Model - -Note: Install SentencePiece from [here](https://github.com/google/sentencepiece) - -```bash -fairseq=/path/to/fairseq -cd $fairseq -sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de -sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr -wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model -for lang in de fr ; do - python scripts/spm_encode.py \ - --model spm.128k.model \ - --output_format=piece \ - --inputs=raw_input.de-fr.${lang} \ - --outputs=spm.de-fr.${lang} -done -``` - -### Binarization - -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt -fairseq-preprocess \ - --source-lang de --target-lang fr \ - --testpref spm.de-fr \ - --thresholdsrc 0 --thresholdtgt 0 \ - --destdir data_bin \ - --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt -``` - -### Generation for the 12B model - -Note that generation can currently be run using 2 32GB / 4 16GB / 6 12GB / 8 8GB GPUs, and the corresponding model checkpoints and pipeline arguments can be found in the [12B Model Section](#12b-model). -Generation on CPUs will be added in the future. 
- -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt -wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs.txt -wget https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt -fairseq-generate \ - data_bin \ - --batch-size 1 \ - --path 12b_last_chk_4_gpus.pt \ - --fixed-dictionary model_dict.128k.txt \ - -s de -t fr \ - --remove-bpe 'sentencepiece' \ - --beam 5 \ - --task translation_multi_simple_epoch \ - --lang-pairs language_pairs.txt \ - --decoder-langtok --encoder-langtok src \ - --gen-subset test \ - --fp16 \ - --dataset-impl mmap \ - --distributed-world-size 1 --distributed-no-spawn \ - --pipeline-model-parallel \ - --pipeline-chunks 1 \ - --pipeline-encoder-balance '[1,15,10]' \ - --pipeline-encoder-devices '[0,1,0]' \ - --pipeline-decoder-balance '[3,11,11,1]' \ - --pipeline-decoder-devices '[0,2,3,0]' > gen_out -``` -## Evaluation with M2M-100 - -### Tokenization - -Note: Refer to tokenizers/README.md for more details on tokenization. - -```bash -cd ${fairseq}/examples/m2m_100 -cat ${fairseq}/gen_out | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh fr > hyp -cat ${fairseq}/raw_input.de-fr.fr | sh tok.sh fr > ref -``` - -### BLEU - -```bash -sacrebleu -tok 'none' ref < hyp -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh deleted file mode 100644 index 4655936149cab212b3cfa14f306d71153729f9d7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -if [ -z $SPM_PATH ] ; -then - echo "Please install sentence piecence from https://github.com/google/sentencepiece and set SPM_PATH pointing to the installed spm_encode.py. Exitting..." - exit -fi - -ML50=${WORKDIR_ROOT}/ML50 - -mkdir -p $ML50/dedup -mkdir -p $ML50/cleaned_dedup - -python ./dedup_all.py --from-folder $ML50/raw --to-folder $ML50/dedup -python ./remove_valid_test_in_train.py --from-folder $ML50/dedup --to-folder $ML50/clean -python ./binarize.py --raw-folder $ML50/clean \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/multihead_attention.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/multihead_attention.py deleted file mode 100644 index a2516356117847b0d46d965ee942354a2ed23189..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/multihead_attention.py +++ /dev/null @@ -1,500 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
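# --- Illustrative sketch (editorial note, not part of this module) ---
# Stripped of the projections, incremental state, masking and dropout handled
# below, the core computation implemented by the MultiheadAttention class is
# scaled dot-product attention over (batch * heads, length, head_dim) tensors:
import torch

def scaled_dot_product_attention(q, k, v):
    # q: (B*H, tgt_len, head_dim); k, v: (B*H, src_len, head_dim)
    scores = torch.bmm(q, k.transpose(1, 2)) / (q.size(-1) ** 0.5)
    probs = torch.softmax(scores, dim=-1)
    return torch.bmm(probs, v)  # (B*H, tgt_len, head_dim)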
- -import math -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter - - -@with_incremental_state -class MultiheadAttention(nn.Module): - """Multi-headed attention. - - See "Attention Is All You Need" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - 
key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. - """ - if need_head_weights: - need_weights = True - - is_tpu = query.device.type == "xla" - - tgt_len, bsz, embed_dim = query.size() - src_len = tgt_len - assert embed_dim == self.embed_dim, f"query dim {embed_dim} != {self.embed_dim}" - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if key is not None: - src_len, key_bsz, _ = key.size() - if not torch.jit.is_scripting(): - assert key_bsz == bsz - assert value is not None - assert src_len, bsz == value.shape[:2] - - if ( - not self.onnx_trace - and not is_tpu # don't use PyTorch version on TPUs - and incremental_state is None - and not static_kv - # A workaround for quantization to work. Otherwise JIT compilation - # treats bias in linear module as method. - and not torch.jit.is_scripting() - ): - assert key is not None and value is not None - return F.multi_head_attention_forward( - query, - key, - value, - self.embed_dim, - self.num_heads, - torch.empty([0]), - torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), - self.bias_k, - self.bias_v, - self.add_zero_attn, - self.dropout_module.p, - self.out_proj.weight, - self.out_proj.bias, - self.training or self.dropout_module.apply_during_inference, - key_padding_mask, - need_weights, - attn_mask, - use_separate_proj_weight=True, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - ) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - 
.view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - src_len = k.size(1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - assert k.size(1) == src_len - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. 
- if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - torch.zeros(key_padding_mask.size(0), 1).type_as( - key_padding_mask - ), - ], - dim=1, - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - if src_len > prev_key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - 
device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask.float() - elif key_padding_mask is not None: - if src_len > key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = key_padding_mask.float() - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(self, attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_memory_efficient_fp16.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_memory_efficient_fp16.py deleted file mode 100644 index 2bf2f29888d6027896128930626b1aafe7f18475..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_memory_efficient_fp16.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import unittest - -import torch -from fairseq.optim.adam import FairseqAdam -from fairseq.optim.fp16_optimizer import MemoryEfficientFP16Optimizer -from omegaconf import OmegaConf - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestMemoryEfficientFP16(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_load_state_dict(self): - # define simple FP16 model - model = torch.nn.Linear(5, 5).cuda().half() - params = list(model.parameters()) - - # initialize memory efficient FP16 optimizer - # with pseudo DictConfigs - optimizer = FairseqAdam( - cfg=OmegaConf.create( - vars( - argparse.Namespace( - adam_betas="(0.9, 0.999)", - adam_eps=1e-8, - weight_decay=0.0, - lr=[0.00001], - ) - ) - ), - params=params, - ) - me_optimizer = MemoryEfficientFP16Optimizer( - cfg=OmegaConf.create( - { - "common": vars( - argparse.Namespace( - fp16_init_scale=1, - fp16_scale_window=1, - fp16_scale_tolerance=1, - threshold_loss_scale=1, - min_loss_scale=1e-4, - ) - ) - } - ), - params=params, - optimizer=optimizer, - ) - - # optimizer state is created in the first step - loss = model(torch.rand(5).cuda().half()).sum() - me_optimizer.backward(loss) - me_optimizer.step() - - # reload state - state = me_optimizer.state_dict() - me_optimizer.load_state_dict(state) - for k, v in me_optimizer.optimizer.state.items(): - self.assertTrue(k.dtype == torch.float16) - for v_i in v.values(): - if torch.is_tensor(v_i): - self.assertTrue(v_i.dtype == torch.float32) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/tests/test_sync_batchnorm.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/tests/test_sync_batchnorm.py deleted file mode 100644 index 45bb3c8cfd36d8f668e6fde756b17587eab72082..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/tests/test_sync_batchnorm.py +++ /dev/null @@ -1,111 +0,0 @@ -# -*- coding: utf-8 -*- -# File : test_sync_batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. 
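# --- Illustrative sketch (editorial note, not part of this test file) ---
# The memory-efficient FP16 optimizer exercised in the preceding test relies on
# loss scaling: the loss is multiplied by a scale factor before backward so that
# small FP16 gradients do not underflow, and the gradients are unscaled before
# the optimizer step. A bare-bones version of that idea:
import torch

model = torch.nn.Linear(5, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_scale = 128.0

loss = model(torch.rand(2, 5)).sum()
(loss * loss_scale).backward()          # scaled backward pass
for p in model.parameters():
    if p.grad is not None:
        p.grad.div_(loss_scale)         # unscale before stepping
optimizer.step()
optimizer.zero_grad()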
- -import unittest - -import torch -import torch.nn as nn -from torch.autograd import Variable - -from sync_batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, DataParallelWithCallback -from sync_batchnorm.unittest import TorchTestCase - - -def handy_var(a, unbias=True): - n = a.size(0) - asum = a.sum(dim=0) - as_sum = (a ** 2).sum(dim=0) # a square sum - sumvar = as_sum - asum * asum / n - if unbias: - return sumvar / (n - 1) - else: - return sumvar / n - - -def _find_bn(module): - for m in module.modules(): - if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, SynchronizedBatchNorm1d, SynchronizedBatchNorm2d)): - return m - - -class SyncTestCase(TorchTestCase): - def _syncParameters(self, bn1, bn2): - bn1.reset_parameters() - bn2.reset_parameters() - if bn1.affine and bn2.affine: - bn2.weight.data.copy_(bn1.weight.data) - bn2.bias.data.copy_(bn1.bias.data) - - def _checkBatchNormResult(self, bn1, bn2, input, is_train, cuda=False): - """Check the forward and backward for the customized batch normalization.""" - bn1.train(mode=is_train) - bn2.train(mode=is_train) - - if cuda: - input = input.cuda() - - self._syncParameters(_find_bn(bn1), _find_bn(bn2)) - - input1 = Variable(input, requires_grad=True) - output1 = bn1(input1) - output1.sum().backward() - input2 = Variable(input, requires_grad=True) - output2 = bn2(input2) - output2.sum().backward() - - self.assertTensorClose(input1.data, input2.data) - self.assertTensorClose(output1.data, output2.data) - self.assertTensorClose(input1.grad, input2.grad) - self.assertTensorClose(_find_bn(bn1).running_mean, _find_bn(bn2).running_mean) - self.assertTensorClose(_find_bn(bn1).running_var, _find_bn(bn2).running_var) - - def testSyncBatchNormNormalTrain(self): - bn = nn.BatchNorm1d(10) - sync_bn = SynchronizedBatchNorm1d(10) - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), True) - - def testSyncBatchNormNormalEval(self): - bn = nn.BatchNorm1d(10) - sync_bn = SynchronizedBatchNorm1d(10) - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), False) - - def testSyncBatchNormSyncTrain(self): - bn = nn.BatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - - bn.cuda() - sync_bn.cuda() - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), True, cuda=True) - - def testSyncBatchNormSyncEval(self): - bn = nn.BatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - - bn.cuda() - sync_bn.cuda() - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10), False, cuda=True) - - def testSyncBatchNorm2DSyncTrain(self): - bn = nn.BatchNorm2d(10) - sync_bn = SynchronizedBatchNorm2d(10) - sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - - bn.cuda() - sync_bn.cuda() - - self._checkBatchNormResult(bn, sync_bn, torch.rand(16, 10, 16, 16), True, cuda=True) - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/env.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/env.sh deleted file mode 100644 index f3052f0ea1672a569e7775f8c54967d730a7b5ec..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/env.sh +++ /dev/null @@ -1,8 +0,0 @@ -DIRNAME="$(dirname $0)" -DIRNAME="$(realpath ""$DIRNAME"")" - -BINDIR="$DIRNAME/.." -SRCDIR="$BINDIR/.." 
-CONFIGDIR="$SRCDIR/configs" - -export PYTHONPATH="$SRCDIR:$PYTHONPATH" diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/visualizers/base.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/visualizers/base.py deleted file mode 100644 index 675f01682ddf5e31b6cc341735378c6f3b242e49..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/visualizers/base.py +++ /dev/null @@ -1,73 +0,0 @@ -import abc -from typing import Dict, List - -import numpy as np -import torch -from skimage import color -from skimage.segmentation import mark_boundaries - -from . import colors - -COLORS, _ = colors.generate_colors(151) # 151 - max classes for semantic segmentation - - -class BaseVisualizer: - @abc.abstractmethod - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - """ - Take a batch, make an image from it and visualize - """ - raise NotImplementedError() - - -def visualize_mask_and_images(images_dict: Dict[str, np.ndarray], keys: List[str], - last_without_mask=True, rescale_keys=None, mask_only_first=None, - black_mask=False) -> np.ndarray: - mask = images_dict['mask'] > 0.5 - result = [] - for i, k in enumerate(keys): - img = images_dict[k] - img = np.transpose(img, (1, 2, 0)) - - if rescale_keys is not None and k in rescale_keys: - img = img - img.min() - img /= img.max() + 1e-5 - if len(img.shape) == 2: - img = np.expand_dims(img, 2) - - if img.shape[2] == 1: - img = np.repeat(img, 3, axis=2) - elif (img.shape[2] > 3): - img_classes = img.argmax(2) - img = color.label2rgb(img_classes, colors=COLORS) - - if mask_only_first: - need_mark_boundaries = i == 0 - else: - need_mark_boundaries = i < len(keys) - 1 or not last_without_mask - - if need_mark_boundaries: - if black_mask: - img = img * (1 - mask[0][..., None]) - img = mark_boundaries(img, - mask[0], - color=(1., 0., 0.), - outline_color=(1., 1., 1.), - mode='thick') - result.append(img) - return np.concatenate(result, axis=1) - - -def visualize_mask_and_images_batch(batch: Dict[str, torch.Tensor], keys: List[str], max_items=10, - last_without_mask=True, rescale_keys=None) -> np.ndarray: - batch = {k: tens.detach().cpu().numpy() for k, tens in batch.items() - if k in keys or k == 'mask'} - - batch_size = next(iter(batch.values())).shape[0] - items_to_vis = min(batch_size, max_items) - result = [] - for i in range(items_to_vis): - cur_dct = {k: tens[i] for k, tens in batch.items()} - result.append(visualize_mask_and_images(cur_dct, keys, last_without_mask=last_without_mask, - rescale_keys=rescale_keys)) - return np.concatenate(result, axis=0) diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/models/utils/position_encoding_layer.py b/spaces/OpenMotionLab/MotionGPT/mGPT/models/utils/position_encoding_layer.py deleted file mode 100644 index 699c860bf5d28c384390196b086d93552b2cff64..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/models/utils/position_encoding_layer.py +++ /dev/null @@ -1,30 +0,0 @@ -import numpy as np -import torch -from torch import nn - - -class PositionalEncoding(nn.Module): - - def __init__(self, d_model, dropout=0.1, max_len=5000, batch_first=False): - super().__init__() - self.batch_first = batch_first - - self.dropout = nn.Dropout(p=dropout) - - pe = torch.zeros(max_len, d_model) - position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1) - div_term = torch.exp(torch.arange( - 0, d_model, 2).float() * (-np.log(10000.0) / d_model)) - pe[:, 0::2] = 
torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0).transpose(0, 1) - - self.register_buffer("pe", pe) - - def forward(self, x): - # not used in the final model - if self.batch_first: - x = x + self.pe.permute(1, 0, 2)[:, : x.shape[1], :] - else: - x = x + self.pe[: x.shape[0], :] - return self.dropout(x) diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/data/extractions/__init__.py b/spaces/Oumar199/Fake-Real-Face-Detection/data/extractions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/__init__.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/__init__.py deleted file mode 100644 index 9665a0d63f695eab303318d824dad14041c7cde9..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -""" -Codebase for "Improved Denoising Diffusion Probabilistic Models". -""" diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/type-checks.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/type-checks.go deleted file mode 100644 index c969489dd5d34301700783186f9ef283d4f62a1a..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/type-checks.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/xpath.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/xpath.go deleted file mode 100644 index 2cbd0ee4478b18468148656d412af0a336dc42d0..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/xpath.go and /dev/null differ diff --git a/spaces/Paulraj916/paulraj916/scrapIcon.py b/spaces/Paulraj916/paulraj916/scrapIcon.py deleted file mode 100644 index a195caa53a01bbd0c4db724971b8aeecb41629c6..0000000000000000000000000000000000000000 --- a/spaces/Paulraj916/paulraj916/scrapIcon.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import requests -from bs4 import BeautifulSoup -from urllib.parse import urljoin - -class ScrapIcon: - def __init__(self, url, output_folder): - self.url = url - self.output_folder = output_folder - - def extract_and_save_icons(self): - try: - # Send an HTTP GET request to the webpage and get the HTML content - response = requests.get(self.url) - response.raise_for_status() - html_content = response.text - - # Parse the HTML content using BeautifulSoup - soup = BeautifulSoup(html_content, 'html.parser') - - # Find all icon tags - icon_tags = soup.find_all('link', {'rel': 'icon'}) - - # Extract icon URLs and store them in a list - icon_urls = [] - for icon_tag in icon_tags: - if 'href' in icon_tag.attrs: - icon_url = icon_tag['href'] - absolute_url = urljoin(self.url, icon_url) - icon_urls.append(absolute_url) - - # Create the output folder if it doesn't exist - os.makedirs(self.output_folder, exist_ok=True) - - # Download and save icons in the output folder - for icon_url in icon_urls: - icon_content = requests.get(icon_url).content - - # Get the path to the icon file - path = urljoin(self.url, icon_url).replace(self.url, '').lstrip('/') - filename = os.path.join(self.output_folder, path) - - # Create subdirectories if needed - os.makedirs(os.path.dirname(filename), 
exist_ok=True) - - # Save the icon content to the file - with open(filename, 'wb') as file: - file.write(icon_content) - - print(f"Downloaded: {icon_url}") - - print("Icons downloaded and saved successfully.") - except requests.exceptions.MissingSchema: - print(f"Skipping download from {self.url} (Invalid URL)") - except requests.exceptions.RequestException as e: - print(f"Failed to fetch content from {self.url}: {e}") - except OSError as e: - print(f"Failed to save icons: {e}") diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/git_operations.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/git_operations.py deleted file mode 100644 index 028f3b8da44c85e01d20ccc5d4a5fa72c759008b..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/git_operations.py +++ /dev/null @@ -1,26 +0,0 @@ -"""Git operations for autogpt""" -import git - -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - -CFG = Config() - - -def clone_repository(repo_url: str, clone_path: str) -> str: - """Clone a GitHub repository locally - - Args: - repo_url (str): The URL of the repository to clone - clone_path (str): The path to clone the repository to - - Returns: - str: The result of the clone operation""" - split_url = repo_url.split("//") - auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url) - safe_clone_path = path_in_workspace(clone_path) - try: - git.Repo.clone_from(auth_repo_url, safe_clone_path) - return f"""Cloned {repo_url} to {safe_clone_path}""" - except Exception as e: - return f"Error: {str(e)}" diff --git a/spaces/RajkNakka/NER-fine-tuning/app.py b/spaces/RajkNakka/NER-fine-tuning/app.py deleted file mode 100644 index 43a8b0949fbc5bdb110932a7008ec5f0bfe0babc..0000000000000000000000000000000000000000 --- a/spaces/RajkNakka/NER-fine-tuning/app.py +++ /dev/null @@ -1,13 +0,0 @@ -import gradio as gr - -description = "Named Entity Recognition (NER) with fine-tuned BERT" -title = "Checkout you own sentence" -examples = [["Colorado is a great place for outdoor adventures, friendly people and clean air"]] - -interface = gr.Interface.load("huggingface/RajkNakka/bert-finetuned-ner", - description=description, - title=title, - examples=examples -) - -interface.launch() \ No newline at end of file diff --git a/spaces/Raksama/ChatToPdf/Dockerfile b/spaces/Raksama/ChatToPdf/Dockerfile deleted file mode 100644 index 81db1db7d26cba693a3bbdf5c836b555f6a84f05..0000000000000000000000000000000000000000 --- a/spaces/Raksama/ChatToPdf/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt -RUN python3 -m pip install --no-cache-dir --upgrade pip -RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
- -CMD ["panel", "serve", "/code/LangChain_QA_Panel_App.ipynb", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "raksama-chattopdf.hf.space", "--allow-websocket-origin", "0.0.0.0:7860"] - -RUN mkdir /.cache -RUN chmod 777 /.cache -RUN mkdir .chroma -RUN chmod 777 .chroma \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/util/ssl_.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/util/ssl_.py deleted file mode 100644 index 2b45d391d4d7398e4769f45f9dd25eb55daef437..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/util/ssl_.py +++ /dev/null @@ -1,495 +0,0 @@ -from __future__ import absolute_import - -import hmac -import os -import sys -import warnings -from binascii import hexlify, unhexlify -from hashlib import md5, sha1, sha256 - -from ..exceptions import ( - InsecurePlatformWarning, - ProxySchemeUnsupported, - SNIMissingWarning, - SSLError, -) -from ..packages import six -from .url import BRACELESS_IPV6_ADDRZ_RE, IPV4_RE - -SSLContext = None -SSLTransport = None -HAS_SNI = False -IS_PYOPENSSL = False -IS_SECURETRANSPORT = False -ALPN_PROTOCOLS = ["http/1.1"] - -# Maps the length of a digest to a possible hash function producing this digest -HASHFUNC_MAP = {32: md5, 40: sha1, 64: sha256} - - -def _const_compare_digest_backport(a, b): - """ - Compare two digests of equal length in constant time. - - The digests must be of type str/bytes. - Returns True if the digests match, and False otherwise. - """ - result = abs(len(a) - len(b)) - for left, right in zip(bytearray(a), bytearray(b)): - result |= left ^ right - return result == 0 - - -_const_compare_digest = getattr(hmac, "compare_digest", _const_compare_digest_backport) - -try: # Test for SSL features - import ssl - from ssl import CERT_REQUIRED, wrap_socket -except ImportError: - pass - -try: - from ssl import HAS_SNI # Has SNI? -except ImportError: - pass - -try: - from .ssltransport import SSLTransport -except ImportError: - pass - - -try: # Platform-specific: Python 3.6 - from ssl import PROTOCOL_TLS - - PROTOCOL_SSLv23 = PROTOCOL_TLS -except ImportError: - try: - from ssl import PROTOCOL_SSLv23 as PROTOCOL_TLS - - PROTOCOL_SSLv23 = PROTOCOL_TLS - except ImportError: - PROTOCOL_SSLv23 = PROTOCOL_TLS = 2 - -try: - from ssl import PROTOCOL_TLS_CLIENT -except ImportError: - PROTOCOL_TLS_CLIENT = PROTOCOL_TLS - - -try: - from ssl import OP_NO_COMPRESSION, OP_NO_SSLv2, OP_NO_SSLv3 -except ImportError: - OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000 - OP_NO_COMPRESSION = 0x20000 - - -try: # OP_NO_TICKET was added in Python 3.6 - from ssl import OP_NO_TICKET -except ImportError: - OP_NO_TICKET = 0x4000 - - -# A secure default. -# Sources for more information on TLS ciphers: -# -# - https://wiki.mozilla.org/Security/Server_Side_TLS -# - https://www.ssllabs.com/projects/best-practices/index.html -# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/ -# -# The general intent is: -# - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE), -# - prefer ECDHE over DHE for better performance, -# - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance and -# security, -# - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common, -# - disable NULL authentication, MD5 MACs, DSS, and other -# insecure ciphers for security reasons. 
-# - NOTE: TLS 1.3 cipher suites are managed through a different interface -# not exposed by CPython (yet!) and are enabled by default if they're available. -DEFAULT_CIPHERS = ":".join( - [ - "ECDHE+AESGCM", - "ECDHE+CHACHA20", - "DHE+AESGCM", - "DHE+CHACHA20", - "ECDH+AESGCM", - "DH+AESGCM", - "ECDH+AES", - "DH+AES", - "RSA+AESGCM", - "RSA+AES", - "!aNULL", - "!eNULL", - "!MD5", - "!DSS", - ] -) - -try: - from ssl import SSLContext # Modern SSL? -except ImportError: - - class SSLContext(object): # Platform-specific: Python 2 - def __init__(self, protocol_version): - self.protocol = protocol_version - # Use default values from a real SSLContext - self.check_hostname = False - self.verify_mode = ssl.CERT_NONE - self.ca_certs = None - self.options = 0 - self.certfile = None - self.keyfile = None - self.ciphers = None - - def load_cert_chain(self, certfile, keyfile): - self.certfile = certfile - self.keyfile = keyfile - - def load_verify_locations(self, cafile=None, capath=None, cadata=None): - self.ca_certs = cafile - - if capath is not None: - raise SSLError("CA directories not supported in older Pythons") - - if cadata is not None: - raise SSLError("CA data not supported in older Pythons") - - def set_ciphers(self, cipher_suite): - self.ciphers = cipher_suite - - def wrap_socket(self, socket, server_hostname=None, server_side=False): - warnings.warn( - "A true SSLContext object is not available. This prevents " - "urllib3 from configuring SSL appropriately and may cause " - "certain SSL connections to fail. You can upgrade to a newer " - "version of Python to solve this. For more information, see " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings", - InsecurePlatformWarning, - ) - kwargs = { - "keyfile": self.keyfile, - "certfile": self.certfile, - "ca_certs": self.ca_certs, - "cert_reqs": self.verify_mode, - "ssl_version": self.protocol, - "server_side": server_side, - } - return wrap_socket(socket, ciphers=self.ciphers, **kwargs) - - -def assert_fingerprint(cert, fingerprint): - """ - Checks if given fingerprint matches the supplied certificate. - - :param cert: - Certificate as bytes object. - :param fingerprint: - Fingerprint as string of hexdigits, can be interspersed by colons. - """ - - fingerprint = fingerprint.replace(":", "").lower() - digest_length = len(fingerprint) - hashfunc = HASHFUNC_MAP.get(digest_length) - if not hashfunc: - raise SSLError("Fingerprint of invalid length: {0}".format(fingerprint)) - - # We need encode() here for py32; works on py2 and p33. - fingerprint_bytes = unhexlify(fingerprint.encode()) - - cert_digest = hashfunc(cert).digest() - - if not _const_compare_digest(cert_digest, fingerprint_bytes): - raise SSLError( - 'Fingerprints did not match. Expected "{0}", got "{1}".'.format( - fingerprint, hexlify(cert_digest) - ) - ) - - -def resolve_cert_reqs(candidate): - """ - Resolves the argument to a numeric constant, which can be passed to - the wrap_socket function/method from the ssl module. - Defaults to :data:`ssl.CERT_REQUIRED`. - If given a string it is assumed to be the name of the constant in the - :mod:`ssl` module or its abbreviation. - (So you can specify `REQUIRED` instead of `CERT_REQUIRED`. - If it's neither `None` nor a string we assume it is already the numeric - constant which can directly be passed to wrap_socket. 
- """ - if candidate is None: - return CERT_REQUIRED - - if isinstance(candidate, str): - res = getattr(ssl, candidate, None) - if res is None: - res = getattr(ssl, "CERT_" + candidate) - return res - - return candidate - - -def resolve_ssl_version(candidate): - """ - like resolve_cert_reqs - """ - if candidate is None: - return PROTOCOL_TLS - - if isinstance(candidate, str): - res = getattr(ssl, candidate, None) - if res is None: - res = getattr(ssl, "PROTOCOL_" + candidate) - return res - - return candidate - - -def create_urllib3_context( - ssl_version=None, cert_reqs=None, options=None, ciphers=None -): - """All arguments have the same meaning as ``ssl_wrap_socket``. - - By default, this function does a lot of the same work that - ``ssl.create_default_context`` does on Python 3.4+. It: - - - Disables SSLv2, SSLv3, and compression - - Sets a restricted set of server ciphers - - If you wish to enable SSLv3, you can do:: - - from pip._vendor.urllib3.util import ssl_ - context = ssl_.create_urllib3_context() - context.options &= ~ssl_.OP_NO_SSLv3 - - You can do the same to enable compression (substituting ``COMPRESSION`` - for ``SSLv3`` in the last line above). - - :param ssl_version: - The desired protocol version to use. This will default to - PROTOCOL_SSLv23 which will negotiate the highest protocol that both - the server and your installation of OpenSSL support. - :param cert_reqs: - Whether to require the certificate verification. This defaults to - ``ssl.CERT_REQUIRED``. - :param options: - Specific OpenSSL options. These default to ``ssl.OP_NO_SSLv2``, - ``ssl.OP_NO_SSLv3``, ``ssl.OP_NO_COMPRESSION``, and ``ssl.OP_NO_TICKET``. - :param ciphers: - Which cipher suites to allow the server to select. - :returns: - Constructed SSLContext object with specified options - :rtype: SSLContext - """ - # PROTOCOL_TLS is deprecated in Python 3.10 - if not ssl_version or ssl_version == PROTOCOL_TLS: - ssl_version = PROTOCOL_TLS_CLIENT - - context = SSLContext(ssl_version) - - context.set_ciphers(ciphers or DEFAULT_CIPHERS) - - # Setting the default here, as we may have no ssl module on import - cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs - - if options is None: - options = 0 - # SSLv2 is easily broken and is considered harmful and dangerous - options |= OP_NO_SSLv2 - # SSLv3 has several problems and is now dangerous - options |= OP_NO_SSLv3 - # Disable compression to prevent CRIME attacks for OpenSSL 1.0+ - # (issue #309) - options |= OP_NO_COMPRESSION - # TLSv1.2 only. Unless set explicitly, do not request tickets. - # This may save some bandwidth on wire, and although the ticket is encrypted, - # there is a risk associated with it being on wire, - # if the server is not rotating its ticketing keys properly. - options |= OP_NO_TICKET - - context.options |= options - - # Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is - # necessary for conditional client cert authentication with TLS 1.3. - # The attribute is None for OpenSSL <= 1.1.0 or does not exist in older - # versions of Python. 
We only enable on Python 3.7.4+ or if certificate - # verification is enabled to work around Python issue #37428 - # See: https://bugs.python.org/issue37428 - if (cert_reqs == ssl.CERT_REQUIRED or sys.version_info >= (3, 7, 4)) and getattr( - context, "post_handshake_auth", None - ) is not None: - context.post_handshake_auth = True - - def disable_check_hostname(): - if ( - getattr(context, "check_hostname", None) is not None - ): # Platform-specific: Python 3.2 - # We do our own verification, including fingerprints and alternative - # hostnames. So disable it here - context.check_hostname = False - - # The order of the below lines setting verify_mode and check_hostname - # matter due to safe-guards SSLContext has to prevent an SSLContext with - # check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more - # complex because we don't know whether PROTOCOL_TLS_CLIENT will be used - # or not so we don't know the initial state of the freshly created SSLContext. - if cert_reqs == ssl.CERT_REQUIRED: - context.verify_mode = cert_reqs - disable_check_hostname() - else: - disable_check_hostname() - context.verify_mode = cert_reqs - - # Enable logging of TLS session keys via defacto standard environment variable - # 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values. - if hasattr(context, "keylog_filename"): - sslkeylogfile = os.environ.get("SSLKEYLOGFILE") - if sslkeylogfile: - context.keylog_filename = sslkeylogfile - - return context - - -def ssl_wrap_socket( - sock, - keyfile=None, - certfile=None, - cert_reqs=None, - ca_certs=None, - server_hostname=None, - ssl_version=None, - ciphers=None, - ssl_context=None, - ca_cert_dir=None, - key_password=None, - ca_cert_data=None, - tls_in_tls=False, -): - """ - All arguments except for server_hostname, ssl_context, and ca_cert_dir have - the same meaning as they do when using :func:`ssl.wrap_socket`. - - :param server_hostname: - When SNI is supported, the expected hostname of the certificate - :param ssl_context: - A pre-made :class:`SSLContext` object. If none is provided, one will - be created using :func:`create_urllib3_context`. - :param ciphers: - A string of ciphers we wish the client to support. - :param ca_cert_dir: - A directory containing CA certificates in multiple separate files, as - supported by OpenSSL's -CApath flag or the capath argument to - SSLContext.load_verify_locations(). - :param key_password: - Optional password if the keyfile is encrypted. - :param ca_cert_data: - Optional string containing CA certificates in PEM format suitable for - passing as the cadata parameter to SSLContext.load_verify_locations() - :param tls_in_tls: - Use SSLTransport to wrap the existing socket. - """ - context = ssl_context - if context is None: - # Note: This branch of code and all the variables in it are no longer - # used by urllib3 itself. We should consider deprecating and removing - # this code. - context = create_urllib3_context(ssl_version, cert_reqs, ciphers=ciphers) - - if ca_certs or ca_cert_dir or ca_cert_data: - try: - context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data) - except (IOError, OSError) as e: - raise SSLError(e) - - elif ssl_context is None and hasattr(context, "load_default_certs"): - # try to load OS default certs; works well on Windows (require Python3.4+) - context.load_default_certs() - - # Attempt to detect if we get the goofy behavior of the - # keyfile being encrypted and OpenSSL asking for the - # passphrase via the terminal and instead error out. 
- if keyfile and key_password is None and _is_key_file_encrypted(keyfile): - raise SSLError("Client private key is encrypted, password is required") - - if certfile: - if key_password is None: - context.load_cert_chain(certfile, keyfile) - else: - context.load_cert_chain(certfile, keyfile, key_password) - - try: - if hasattr(context, "set_alpn_protocols"): - context.set_alpn_protocols(ALPN_PROTOCOLS) - except NotImplementedError: # Defensive: in CI, we always have set_alpn_protocols - pass - - # If we detect server_hostname is an IP address then the SNI - # extension should not be used according to RFC3546 Section 3.1 - use_sni_hostname = server_hostname and not is_ipaddress(server_hostname) - # SecureTransport uses server_hostname in certificate verification. - send_sni = (use_sni_hostname and HAS_SNI) or ( - IS_SECURETRANSPORT and server_hostname - ) - # Do not warn the user if server_hostname is an invalid SNI hostname. - if not HAS_SNI and use_sni_hostname: - warnings.warn( - "An HTTPS request has been made, but the SNI (Server Name " - "Indication) extension to TLS is not available on this platform. " - "This may cause the server to present an incorrect TLS " - "certificate, which can cause validation failures. You can upgrade to " - "a newer version of Python to solve this. For more information, see " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings", - SNIMissingWarning, - ) - - if send_sni: - ssl_sock = _ssl_wrap_socket_impl( - sock, context, tls_in_tls, server_hostname=server_hostname - ) - else: - ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls) - return ssl_sock - - -def is_ipaddress(hostname): - """Detects whether the hostname given is an IPv4 or IPv6 address. - Also detects IPv6 addresses with Zone IDs. - - :param str hostname: Hostname to examine. - :return: True if the hostname is an IP address, False otherwise. - """ - if not six.PY2 and isinstance(hostname, bytes): - # IDN A-label bytes are ASCII compatible. - hostname = hostname.decode("ascii") - return bool(IPV4_RE.match(hostname) or BRACELESS_IPV6_ADDRZ_RE.match(hostname)) - - -def _is_key_file_encrypted(key_file): - """Detects if a key file is encrypted or not.""" - with open(key_file, "r") as f: - for line in f: - # Look for Proc-Type: 4,ENCRYPTED - if "ENCRYPTED" in line: - return True - - return False - - -def _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname=None): - if tls_in_tls: - if not SSLTransport: - # Import error, ssl is not available. - raise ProxySchemeUnsupported( - "TLS in TLS requires support for the 'ssl' module" - ) - - SSLTransport._validate_ssl_context_for_tls_in_tls(ssl_context) - return SSLTransport(sock, ssl_context, server_hostname) - - if server_hostname: - return ssl_context.wrap_socket(sock, server_hostname=server_hostname) - else: - return ssl_context.wrap_socket(sock) diff --git a/spaces/Realcat/image-matching-webui/hloc/utils/read_write_model.py b/spaces/Realcat/image-matching-webui/hloc/utils/read_write_model.py deleted file mode 100644 index 65bed51606b9e3b46ed6f38dc27a8614067c4880..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/utils/read_write_model.py +++ /dev/null @@ -1,617 +0,0 @@ -# Copyright (c) 2018, ETH Zurich and UNC Chapel Hill. -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# -# * Neither the name of ETH Zurich and UNC Chapel Hill nor the names of -# its contributors may be used to endorse or promote products derived -# from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# -# Author: Johannes L. Schoenberger (jsch-at-demuc-dot-de) - -import os -import collections -import numpy as np -import struct -import argparse -import logging - -logger = logging.getLogger(__name__) - - -CameraModel = collections.namedtuple( - "CameraModel", ["model_id", "model_name", "num_params"] -) -Camera = collections.namedtuple( - "Camera", ["id", "model", "width", "height", "params"] -) -BaseImage = collections.namedtuple( - "Image", ["id", "qvec", "tvec", "camera_id", "name", "xys", "point3D_ids"] -) -Point3D = collections.namedtuple( - "Point3D", ["id", "xyz", "rgb", "error", "image_ids", "point2D_idxs"] -) - - -class Image(BaseImage): - def qvec2rotmat(self): - return qvec2rotmat(self.qvec) - - -CAMERA_MODELS = { - CameraModel(model_id=0, model_name="SIMPLE_PINHOLE", num_params=3), - CameraModel(model_id=1, model_name="PINHOLE", num_params=4), - CameraModel(model_id=2, model_name="SIMPLE_RADIAL", num_params=4), - CameraModel(model_id=3, model_name="RADIAL", num_params=5), - CameraModel(model_id=4, model_name="OPENCV", num_params=8), - CameraModel(model_id=5, model_name="OPENCV_FISHEYE", num_params=8), - CameraModel(model_id=6, model_name="FULL_OPENCV", num_params=12), - CameraModel(model_id=7, model_name="FOV", num_params=5), - CameraModel(model_id=8, model_name="SIMPLE_RADIAL_FISHEYE", num_params=4), - CameraModel(model_id=9, model_name="RADIAL_FISHEYE", num_params=5), - CameraModel(model_id=10, model_name="THIN_PRISM_FISHEYE", num_params=12), -} -CAMERA_MODEL_IDS = dict( - [(camera_model.model_id, camera_model) for camera_model in CAMERA_MODELS] -) -CAMERA_MODEL_NAMES = dict( - [(camera_model.model_name, camera_model) for camera_model in CAMERA_MODELS] -) - - -def read_next_bytes(fid, num_bytes, format_char_sequence, endian_character="<"): - """Read and unpack the next bytes from a binary file. - :param fid: - :param num_bytes: Sum of combination of {2, 4, 8}, e.g. 2, 6, 16, 30, etc. - :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q}. 
- :param endian_character: Any of {@, =, <, >, !} - :return: Tuple of read and unpacked values. - """ - data = fid.read(num_bytes) - return struct.unpack(endian_character + format_char_sequence, data) - - -def write_next_bytes(fid, data, format_char_sequence, endian_character="<"): - """pack and write to a binary file. - :param fid: - :param data: data to send, if multiple elements are sent at the same time, - they should be encapsuled either in a list or a tuple - :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q}. - should be the same length as the data list or tuple - :param endian_character: Any of {@, =, <, >, !} - """ - if isinstance(data, (list, tuple)): - bytes = struct.pack(endian_character + format_char_sequence, *data) - else: - bytes = struct.pack(endian_character + format_char_sequence, data) - fid.write(bytes) - - -def read_cameras_text(path): - """ - see: src/base/reconstruction.cc - void Reconstruction::WriteCamerasText(const std::string& path) - void Reconstruction::ReadCamerasText(const std::string& path) - """ - cameras = {} - with open(path, "r") as fid: - while True: - line = fid.readline() - if not line: - break - line = line.strip() - if len(line) > 0 and line[0] != "#": - elems = line.split() - camera_id = int(elems[0]) - model = elems[1] - width = int(elems[2]) - height = int(elems[3]) - params = np.array(tuple(map(float, elems[4:]))) - cameras[camera_id] = Camera( - id=camera_id, - model=model, - width=width, - height=height, - params=params, - ) - return cameras - - -def read_cameras_binary(path_to_model_file): - """ - see: src/base/reconstruction.cc - void Reconstruction::WriteCamerasBinary(const std::string& path) - void Reconstruction::ReadCamerasBinary(const std::string& path) - """ - cameras = {} - with open(path_to_model_file, "rb") as fid: - num_cameras = read_next_bytes(fid, 8, "Q")[0] - for _ in range(num_cameras): - camera_properties = read_next_bytes( - fid, num_bytes=24, format_char_sequence="iiQQ" - ) - camera_id = camera_properties[0] - model_id = camera_properties[1] - model_name = CAMERA_MODEL_IDS[camera_properties[1]].model_name - width = camera_properties[2] - height = camera_properties[3] - num_params = CAMERA_MODEL_IDS[model_id].num_params - params = read_next_bytes( - fid, - num_bytes=8 * num_params, - format_char_sequence="d" * num_params, - ) - cameras[camera_id] = Camera( - id=camera_id, - model=model_name, - width=width, - height=height, - params=np.array(params), - ) - assert len(cameras) == num_cameras - return cameras - - -def write_cameras_text(cameras, path): - """ - see: src/base/reconstruction.cc - void Reconstruction::WriteCamerasText(const std::string& path) - void Reconstruction::ReadCamerasText(const std::string& path) - """ - HEADER = ( - "# Camera list with one line of data per camera:\n" - + "# CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]\n" - + "# Number of cameras: {}\n".format(len(cameras)) - ) - with open(path, "w") as fid: - fid.write(HEADER) - for _, cam in cameras.items(): - to_write = [cam.id, cam.model, cam.width, cam.height, *cam.params] - line = " ".join([str(elem) for elem in to_write]) - fid.write(line + "\n") - - -def write_cameras_binary(cameras, path_to_model_file): - """ - see: src/base/reconstruction.cc - void Reconstruction::WriteCamerasBinary(const std::string& path) - void Reconstruction::ReadCamerasBinary(const std::string& path) - """ - with open(path_to_model_file, "wb") as fid: - write_next_bytes(fid, len(cameras), "Q") - for _, cam in cameras.items(): - model_id = 
CAMERA_MODEL_NAMES[cam.model].model_id - camera_properties = [cam.id, model_id, cam.width, cam.height] - write_next_bytes(fid, camera_properties, "iiQQ") - for p in cam.params: - write_next_bytes(fid, float(p), "d") - return cameras - - -def read_images_text(path): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadImagesText(const std::string& path) - void Reconstruction::WriteImagesText(const std::string& path) - """ - images = {} - with open(path, "r") as fid: - while True: - line = fid.readline() - if not line: - break - line = line.strip() - if len(line) > 0 and line[0] != "#": - elems = line.split() - image_id = int(elems[0]) - qvec = np.array(tuple(map(float, elems[1:5]))) - tvec = np.array(tuple(map(float, elems[5:8]))) - camera_id = int(elems[8]) - image_name = elems[9] - elems = fid.readline().split() - xys = np.column_stack( - [ - tuple(map(float, elems[0::3])), - tuple(map(float, elems[1::3])), - ] - ) - point3D_ids = np.array(tuple(map(int, elems[2::3]))) - images[image_id] = Image( - id=image_id, - qvec=qvec, - tvec=tvec, - camera_id=camera_id, - name=image_name, - xys=xys, - point3D_ids=point3D_ids, - ) - return images - - -def read_images_binary(path_to_model_file): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadImagesBinary(const std::string& path) - void Reconstruction::WriteImagesBinary(const std::string& path) - """ - images = {} - with open(path_to_model_file, "rb") as fid: - num_reg_images = read_next_bytes(fid, 8, "Q")[0] - for _ in range(num_reg_images): - binary_image_properties = read_next_bytes( - fid, num_bytes=64, format_char_sequence="idddddddi" - ) - image_id = binary_image_properties[0] - qvec = np.array(binary_image_properties[1:5]) - tvec = np.array(binary_image_properties[5:8]) - camera_id = binary_image_properties[8] - image_name = "" - current_char = read_next_bytes(fid, 1, "c")[0] - while current_char != b"\x00": # look for the ASCII 0 entry - image_name += current_char.decode("utf-8") - current_char = read_next_bytes(fid, 1, "c")[0] - num_points2D = read_next_bytes( - fid, num_bytes=8, format_char_sequence="Q" - )[0] - x_y_id_s = read_next_bytes( - fid, - num_bytes=24 * num_points2D, - format_char_sequence="ddq" * num_points2D, - ) - xys = np.column_stack( - [ - tuple(map(float, x_y_id_s[0::3])), - tuple(map(float, x_y_id_s[1::3])), - ] - ) - point3D_ids = np.array(tuple(map(int, x_y_id_s[2::3]))) - images[image_id] = Image( - id=image_id, - qvec=qvec, - tvec=tvec, - camera_id=camera_id, - name=image_name, - xys=xys, - point3D_ids=point3D_ids, - ) - return images - - -def write_images_text(images, path): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadImagesText(const std::string& path) - void Reconstruction::WriteImagesText(const std::string& path) - """ - if len(images) == 0: - mean_observations = 0 - else: - mean_observations = sum( - (len(img.point3D_ids) for _, img in images.items()) - ) / len(images) - HEADER = ( - "# Image list with two lines of data per image:\n" - + "# IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME\n" - + "# POINTS2D[] as (X, Y, POINT3D_ID)\n" - + "# Number of images: {}, mean observations per image: {}\n".format( - len(images), mean_observations - ) - ) - - with open(path, "w") as fid: - fid.write(HEADER) - for _, img in images.items(): - image_header = [ - img.id, - *img.qvec, - *img.tvec, - img.camera_id, - img.name, - ] - first_line = " ".join(map(str, image_header)) - fid.write(first_line + "\n") - - points_strings = [] - for xy, point3D_id in zip(img.xys, 
img.point3D_ids): - points_strings.append(" ".join(map(str, [*xy, point3D_id]))) - fid.write(" ".join(points_strings) + "\n") - - -def write_images_binary(images, path_to_model_file): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadImagesBinary(const std::string& path) - void Reconstruction::WriteImagesBinary(const std::string& path) - """ - with open(path_to_model_file, "wb") as fid: - write_next_bytes(fid, len(images), "Q") - for _, img in images.items(): - write_next_bytes(fid, img.id, "i") - write_next_bytes(fid, img.qvec.tolist(), "dddd") - write_next_bytes(fid, img.tvec.tolist(), "ddd") - write_next_bytes(fid, img.camera_id, "i") - for char in img.name: - write_next_bytes(fid, char.encode("utf-8"), "c") - write_next_bytes(fid, b"\x00", "c") - write_next_bytes(fid, len(img.point3D_ids), "Q") - for xy, p3d_id in zip(img.xys, img.point3D_ids): - write_next_bytes(fid, [*xy, p3d_id], "ddq") - - -def read_points3D_text(path): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadPoints3DText(const std::string& path) - void Reconstruction::WritePoints3DText(const std::string& path) - """ - points3D = {} - with open(path, "r") as fid: - while True: - line = fid.readline() - if not line: - break - line = line.strip() - if len(line) > 0 and line[0] != "#": - elems = line.split() - point3D_id = int(elems[0]) - xyz = np.array(tuple(map(float, elems[1:4]))) - rgb = np.array(tuple(map(int, elems[4:7]))) - error = float(elems[7]) - image_ids = np.array(tuple(map(int, elems[8::2]))) - point2D_idxs = np.array(tuple(map(int, elems[9::2]))) - points3D[point3D_id] = Point3D( - id=point3D_id, - xyz=xyz, - rgb=rgb, - error=error, - image_ids=image_ids, - point2D_idxs=point2D_idxs, - ) - return points3D - - -def read_points3D_binary(path_to_model_file): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadPoints3DBinary(const std::string& path) - void Reconstruction::WritePoints3DBinary(const std::string& path) - """ - points3D = {} - with open(path_to_model_file, "rb") as fid: - num_points = read_next_bytes(fid, 8, "Q")[0] - for _ in range(num_points): - binary_point_line_properties = read_next_bytes( - fid, num_bytes=43, format_char_sequence="QdddBBBd" - ) - point3D_id = binary_point_line_properties[0] - xyz = np.array(binary_point_line_properties[1:4]) - rgb = np.array(binary_point_line_properties[4:7]) - error = np.array(binary_point_line_properties[7]) - track_length = read_next_bytes( - fid, num_bytes=8, format_char_sequence="Q" - )[0] - track_elems = read_next_bytes( - fid, - num_bytes=8 * track_length, - format_char_sequence="ii" * track_length, - ) - image_ids = np.array(tuple(map(int, track_elems[0::2]))) - point2D_idxs = np.array(tuple(map(int, track_elems[1::2]))) - points3D[point3D_id] = Point3D( - id=point3D_id, - xyz=xyz, - rgb=rgb, - error=error, - image_ids=image_ids, - point2D_idxs=point2D_idxs, - ) - return points3D - - -def write_points3D_text(points3D, path): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadPoints3DText(const std::string& path) - void Reconstruction::WritePoints3DText(const std::string& path) - """ - if len(points3D) == 0: - mean_track_length = 0 - else: - mean_track_length = sum( - (len(pt.image_ids) for _, pt in points3D.items()) - ) / len(points3D) - HEADER = ( - "# 3D point list with one line of data per point:\n" - + "# POINT3D_ID, X, Y, Z, R, G, B, ERROR, TRACK[] as (IMAGE_ID, POINT2D_IDX)\n" - + "# Number of points: {}, mean track length: {}\n".format( - len(points3D), mean_track_length - ) - ) - - 
with open(path, "w") as fid: - fid.write(HEADER) - for _, pt in points3D.items(): - point_header = [pt.id, *pt.xyz, *pt.rgb, pt.error] - fid.write(" ".join(map(str, point_header)) + " ") - track_strings = [] - for image_id, point2D in zip(pt.image_ids, pt.point2D_idxs): - track_strings.append(" ".join(map(str, [image_id, point2D]))) - fid.write(" ".join(track_strings) + "\n") - - -def write_points3D_binary(points3D, path_to_model_file): - """ - see: src/base/reconstruction.cc - void Reconstruction::ReadPoints3DBinary(const std::string& path) - void Reconstruction::WritePoints3DBinary(const std::string& path) - """ - with open(path_to_model_file, "wb") as fid: - write_next_bytes(fid, len(points3D), "Q") - for _, pt in points3D.items(): - write_next_bytes(fid, pt.id, "Q") - write_next_bytes(fid, pt.xyz.tolist(), "ddd") - write_next_bytes(fid, pt.rgb.tolist(), "BBB") - write_next_bytes(fid, pt.error, "d") - track_length = pt.image_ids.shape[0] - write_next_bytes(fid, track_length, "Q") - for image_id, point2D_id in zip(pt.image_ids, pt.point2D_idxs): - write_next_bytes(fid, [image_id, point2D_id], "ii") - - -def detect_model_format(path, ext): - if ( - os.path.isfile(os.path.join(path, "cameras" + ext)) - and os.path.isfile(os.path.join(path, "images" + ext)) - and os.path.isfile(os.path.join(path, "points3D" + ext)) - ): - return True - - return False - - -def read_model(path, ext=""): - # try to detect the extension automatically - if ext == "": - if detect_model_format(path, ".bin"): - ext = ".bin" - elif detect_model_format(path, ".txt"): - ext = ".txt" - else: - try: - cameras, images, points3D = read_model( - os.path.join(path, "model/") - ) - logger.warning( - "This SfM file structure was deprecated in hloc v1.1" - ) - return cameras, images, points3D - except FileNotFoundError: - raise FileNotFoundError( - f"Could not find binary or text COLMAP model at {path}" - ) - - if ext == ".txt": - cameras = read_cameras_text(os.path.join(path, "cameras" + ext)) - images = read_images_text(os.path.join(path, "images" + ext)) - points3D = read_points3D_text(os.path.join(path, "points3D") + ext) - else: - cameras = read_cameras_binary(os.path.join(path, "cameras" + ext)) - images = read_images_binary(os.path.join(path, "images" + ext)) - points3D = read_points3D_binary(os.path.join(path, "points3D") + ext) - return cameras, images, points3D - - -def write_model(cameras, images, points3D, path, ext=".bin"): - if ext == ".txt": - write_cameras_text(cameras, os.path.join(path, "cameras" + ext)) - write_images_text(images, os.path.join(path, "images" + ext)) - write_points3D_text(points3D, os.path.join(path, "points3D") + ext) - else: - write_cameras_binary(cameras, os.path.join(path, "cameras" + ext)) - write_images_binary(images, os.path.join(path, "images" + ext)) - write_points3D_binary(points3D, os.path.join(path, "points3D") + ext) - return cameras, images, points3D - - -def qvec2rotmat(qvec): - return np.array( - [ - [ - 1 - 2 * qvec[2] ** 2 - 2 * qvec[3] ** 2, - 2 * qvec[1] * qvec[2] - 2 * qvec[0] * qvec[3], - 2 * qvec[3] * qvec[1] + 2 * qvec[0] * qvec[2], - ], - [ - 2 * qvec[1] * qvec[2] + 2 * qvec[0] * qvec[3], - 1 - 2 * qvec[1] ** 2 - 2 * qvec[3] ** 2, - 2 * qvec[2] * qvec[3] - 2 * qvec[0] * qvec[1], - ], - [ - 2 * qvec[3] * qvec[1] - 2 * qvec[0] * qvec[2], - 2 * qvec[2] * qvec[3] + 2 * qvec[0] * qvec[1], - 1 - 2 * qvec[1] ** 2 - 2 * qvec[2] ** 2, - ], - ] - ) - - -def rotmat2qvec(R): - Rxx, Ryx, Rzx, Rxy, Ryy, Rzy, Rxz, Ryz, Rzz = R.flat - K = ( - np.array( - [ - [Rxx - Ryy - Rzz, 0, 0, 
0],
-                [Ryx + Rxy, Ryy - Rxx - Rzz, 0, 0],
-                [Rzx + Rxz, Rzy + Ryz, Rzz - Rxx - Ryy, 0],
-                [Ryz - Rzy, Rzx - Rxz, Rxy - Ryx, Rxx + Ryy + Rzz],
-            ]
-        )
-        / 3.0
-    )
-    eigvals, eigvecs = np.linalg.eigh(K)
-    qvec = eigvecs[[3, 0, 1, 2], np.argmax(eigvals)]
-    if qvec[0] < 0:
-        qvec *= -1
-    return qvec
-
-
-def main():
-    parser = argparse.ArgumentParser(
-        description="Read and write COLMAP binary and text models"
-    )
-    parser.add_argument("--input_model", help="path to input model folder")
-    parser.add_argument(
-        "--input_format",
-        choices=[".bin", ".txt"],
-        help="input model format",
-        default="",
-    )
-    parser.add_argument("--output_model", help="path to output model folder")
-    parser.add_argument(
-        "--output_format",
-        choices=[".bin", ".txt"],
-        help="output model format",
-        default=".txt",
-    )
-    args = parser.parse_args()
-
-    cameras, images, points3D = read_model(
-        path=args.input_model, ext=args.input_format
-    )
-
-    print("num_cameras:", len(cameras))
-    print("num_images:", len(images))
-    print("num_points3D:", len(points3D))
-
-    if args.output_model is not None:
-        write_model(
-            cameras,
-            images,
-            points3D,
-            path=args.output_model,
-            ext=args.output_format,
-        )
-
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
-    def __init__(self, channels, eps=1e-5):
-        super().__init__()
-        self.channels = channels
-        self.eps = eps
-
-        self.gamma = nn.Parameter(torch.ones(channels))
-        self.beta = nn.Parameter(torch.zeros(channels))
-
-    def forward(self, 
x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - 
dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - 
channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * 
(num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/points_in_boxes.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/points_in_boxes.py deleted file mode 100644 index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/points_in_boxes.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward', - 'points_in_boxes_all_forward' -]) - - -def points_in_boxes_part(points, boxes): - """Find the box in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in - LiDAR/DEPTH coordinate, (x, y, z) is the bottom center - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M), default background = -1 - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - - box_idxs_of_pts = points.new_zeros((batch_size, num_points), - dtype=torch.int).fill_(-1) - - # If manually put the tensor 'points' or 'boxes' on a device - # which is not the current device, some temporary variables - # will be created on the current device in the cuda op, - # and the output will be incorrect. - # Therefore, we force the current device to be the same - # as the device of the tensors if it was not. - # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305 - # for the incorrect output before the fix. - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_part_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts - - -def points_in_boxes_cpu(points, boxes): - """Find all boxes in which each point is (CPU). The CPU version of - :meth:`points_in_boxes_all`. 
-
-
-    Args:
-        points (torch.Tensor): [B, M, 3], [x, y, z] in
-            LiDAR/DEPTH coordinate
-        boxes (torch.Tensor): [B, T, 7],
-            num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz],
-            (x, y, z) is the bottom center.
-
-    Returns:
-        box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
-    """
-    assert points.shape[0] == boxes.shape[0], \
-        'Points and boxes should have the same batch size, ' \
-        f'but got {points.shape[0]} and {boxes.shape[0]}'
-    assert boxes.shape[2] == 7, \
-        'boxes dimension should be 7, ' \
-        f'but got unexpected shape {boxes.shape[2]}'
-    assert points.shape[2] == 3, \
-        'points dimension should be 3, ' \
-        f'but got unexpected shape {points.shape[2]}'
-    batch_size, num_points, _ = points.shape
-    num_boxes = boxes.shape[1]
-
-    point_indices = points.new_zeros((batch_size, num_boxes, num_points),
-                                     dtype=torch.int)
-    for b in range(batch_size):
-        ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(),
-                                               points[b].float().contiguous(),
-                                               point_indices[b])
-    point_indices = point_indices.transpose(1, 2)
-
-    return point_indices
-
-
-def points_in_boxes_all(points, boxes):
-    """Find all boxes in which each point is (CUDA).
-
-    Args:
-        points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate
-        boxes (torch.Tensor): [B, T, 7],
-            num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz],
-            (x, y, z) is the bottom center.
-
-    Returns:
-        box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
-    """
-    assert boxes.shape[0] == points.shape[0], \
-        'Points and boxes should have the same batch size, ' \
-        f'but got {points.shape[0]} and {boxes.shape[0]}'
-    assert boxes.shape[2] == 7, \
-        'boxes dimension should be 7, ' \
-        f'but got unexpected shape {boxes.shape[2]}'
-    assert points.shape[2] == 3, \
-        'points dimension should be 3, ' \
-        f'but got unexpected shape {points.shape[2]}'
-    batch_size, num_points, _ = points.shape
-    num_boxes = boxes.shape[1]
-
-    box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes),
-                                       dtype=torch.int).fill_(0)
-
-    # Same reason as line 25-32
-    points_device = points.get_device()
-    assert points_device == boxes.get_device(), \
-        'Points and boxes should be put on the same device'
-    if torch.cuda.current_device() != points_device:
-        torch.cuda.set_device(points_device)
-
-    ext_module.points_in_boxes_all_forward(boxes.contiguous(),
-                                           points.contiguous(),
-                                           box_idxs_of_pts)
-
-    return box_idxs_of_pts
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/pixel_group.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/pixel_group.py
deleted file mode 100644
index 2143c75f835a467c802fc3c37ecd3ac0f85bcda4..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/pixel_group.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-import torch
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['pixel_group'])
-
-
-def pixel_group(score, mask, embedding, kernel_label, kernel_contour,
-                kernel_region_num, distance_threshold):
-    """Group pixels into text instances, which is widely used in text
-    detection methods.
-
-    Arguments:
-        score (np.array or Tensor): The foreground score with size hxw.
-        mask (np.array or Tensor): The foreground mask with size hxw.
-        embedding (np.array or Tensor): The embedding with size hxwxc to
-            distinguish instances.
-        kernel_label (np.array or Tensor): The instance kernel index with
-            size hxw. 
- kernel_contour (np.array or Tensor): The kernel contour with size hxw. - kernel_region_num (int): The instance kernel region number. - distance_threshold (float): The embedding distance threshold between - kernel and pixel in one instance. - - Returns: - pixel_assignment (List[List[float]]): The instance coordinate list. - Each element consists of averaged confidence, pixel number, and - coordinates (x_i, y_i for all pixels) in order. - """ - assert isinstance(score, (torch.Tensor, np.ndarray)) - assert isinstance(mask, (torch.Tensor, np.ndarray)) - assert isinstance(embedding, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_contour, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_region_num, int) - assert isinstance(distance_threshold, float) - - if isinstance(score, np.ndarray): - score = torch.from_numpy(score) - if isinstance(mask, np.ndarray): - mask = torch.from_numpy(mask) - if isinstance(embedding, np.ndarray): - embedding = torch.from_numpy(embedding) - if isinstance(kernel_label, np.ndarray): - kernel_label = torch.from_numpy(kernel_label) - if isinstance(kernel_contour, np.ndarray): - kernel_contour = torch.from_numpy(kernel_contour) - - if torch.__version__ == 'parrots': - label = ext_module.pixel_group( - score, - mask, - embedding, - kernel_label, - kernel_contour, - kernel_region_num=kernel_region_num, - distance_threshold=distance_threshold) - label = label.tolist() - label = label[0] - list_index = kernel_region_num - pixel_assignment = [] - for x in range(kernel_region_num): - pixel_assignment.append( - np.array( - label[list_index:list_index + int(label[x])], - dtype=np.float)) - list_index = list_index + int(label[x]) - else: - pixel_assignment = ext_module.pixel_group(score, mask, embedding, - kernel_label, kernel_contour, - kernel_region_num, - distance_threshold) - return pixel_assignment diff --git a/spaces/SIH/tree-segmentation/app.py b/spaces/SIH/tree-segmentation/app.py deleted file mode 100644 index 0366c2db8f9a9bac38692bed3b8c443d79ae3477..0000000000000000000000000000000000000000 --- a/spaces/SIH/tree-segmentation/app.py +++ /dev/null @@ -1,65 +0,0 @@ -""" -tree-segmentation -Proof of concept showing effectiveness of a fine tuned instance segmentation model for detecting trees. 
-""" -import os -import cv2 -os.system("pip install 'git+https://github.com/facebookresearch/detectron2.git'") -from transformers import DetrFeatureExtractor, DetrForSegmentation -from PIL import Image -import gradio as gr -import numpy as np -import torch -import torchvision -import detectron2 - -# import some common detectron2 utilities -import itertools -import seaborn as sns -from detectron2 import model_zoo -from detectron2.engine import DefaultPredictor -from detectron2.config import get_cfg -from detectron2.utils.visualizer import Visualizer -from detectron2.utils.visualizer import ColorMode -from detectron2.data import MetadataCatalog, DatasetCatalog -from detectron2.checkpoint import DetectionCheckpointer - -cfg = get_cfg() -cfg.merge_from_file("model_weights/treev1_cfg.yaml") -cfg.MODEL.DEVICE='cpu' -cfg.MODEL.WEIGHTS = "model_weights/treev1_best.pth" -cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2 - -def segment_image(im, confidence_threshold): - # cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.25 - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = confidence_threshold - predictor = DefaultPredictor(cfg) - im = np.array(im) - outputs = predictor(im) - v = Visualizer(im[:, :, ::-1], - scale=0.5, - instance_mode=ColorMode.SEGMENTATION - ) - print(len(outputs["instances"])," trees detected.") - out = v.draw_instance_predictions(outputs["instances"].to("cpu")) - - return Image.fromarray(out.get_image()[:, :, ::-1]) - -# gradio components - -gr_slider_confidence = gr.inputs.Slider(0,1,.1,.7, - label='Set confidence threshold % for masks') - -# gradio outputs -inputs = gr.inputs.Image(type="pil", label="Input Image") -outputs = gr.outputs.Image(type="pil", label="Output Image") - -title = "Tree Segmentation" -description = "An instance segmentation demo for identifying trees in aerial images using DETR (End-to-End Object Detection) model with MaskRCNN-101 backbone" - -# Create user interface and launch -gr.Interface(segment_image, - inputs = [inputs, gr_slider_confidence], - outputs = outputs, - title = title, - description = description).launch(debug=True) \ No newline at end of file diff --git a/spaces/Stearns/crl-demo/README.md b/spaces/Stearns/crl-demo/README.md deleted file mode 100644 index be73179f34dfc5ef5fe7d9c7c7ef34b457a70b2d..0000000000000000000000000000000000000000 --- a/spaces/Stearns/crl-demo/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Cognitive Reasoner Lite - Demo -emoji: 🧠 -colorFrom: purple -colorTo: red -sdk: docker -pinned: false -python_version: 3.9.13 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Straits/SI43-photostyle1/app.py b/spaces/Straits/SI43-photostyle1/app.py deleted file mode 100644 index 823b2931a134603934f80ed17af5723588ce03a1..0000000000000000000000000000000000000000 --- a/spaces/Straits/SI43-photostyle1/app.py +++ /dev/null @@ -1,81 +0,0 @@ -# import dependencies -from IPython.display import display, Javascript, Image -import numpy as np -import PIL -import io -import html -import time -import torch -import matplotlib.pyplot as plt -import numpy as np -from PIL import Image -from models.stmodel import STModel -from predictor import Predictor -import argparse -from glob import glob -import os -from ipywidgets import Box, Image -import gradio as gr - -def predict_gradio(image): - img_size = 512 - load_model_path = "./models/st_model_512_80k_12.pth" - styles_path = "./styles/" - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - n_styles = 
len(glob(os.path.join(styles_path, '*.jpg')))
-    st_model = STModel(n_styles)
-    if True:
-        st_model.load_state_dict(torch.load(load_model_path, map_location=device))
-    st_model = st_model.to(device)
-
-    predictor = Predictor(st_model, device, img_size)
-
-    list_gen=[]
-    for s in range(n_styles):
-        gen = predictor.eval_image(image, s)
-        list_gen.append(gen)
-    return list_gen
-
-def gradio_pls():
-    description="""
-Upload a photo and click on submit to see the 12 styles applied to your photo. \n
-Keep in mind that for compatibility reasons your photo is cropped before the neural net applies the different styles. \n
-
- - - - - - - - - - - - - - - -
-""" - iface = gr.Interface( - predict_gradio, - [ - gr.inputs.Image(type="pil", label="Image"), - ], - [ - gr.outputs.Carousel("image", label="Style"), - ], - layout="unaligned", - title="Photo Style Transfer", - description=description, - theme="grass", - allow_flagging='never' - ) - - return iface.launch(inbrowser=True, enable_queue=True, height=800, width=800) - - -if __name__ == '__main__': - gradio_pls() \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/pydevd_modify_bytecode.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/pydevd_modify_bytecode.py deleted file mode 100644 index 7e7635850be169f8ff547379ae4c787d33adb43d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/pydevd_modify_bytecode.py +++ /dev/null @@ -1,365 +0,0 @@ -from collections import namedtuple -import dis -from functools import partial -import itertools -import os.path -import sys - -from _pydevd_frame_eval.vendored import bytecode -from _pydevd_frame_eval.vendored.bytecode.instr import Instr, Label -from _pydev_bundle import pydev_log -from _pydevd_frame_eval.pydevd_frame_tracing import _pydev_stop_at_break, _pydev_needs_stop_at_break - -DEBUG = False - - -class DebugHelper(object): - - def __init__(self): - self._debug_dir = os.path.join(os.path.dirname(__file__), 'debug_info') - try: - os.makedirs(self._debug_dir) - except: - pass - self._next = partial(next, itertools.count(0)) - - def _get_filename(self, op_number=None, prefix=''): - if op_number is None: - op_number = self._next() - name = '%03d_before.txt' % op_number - else: - name = '%03d_change.txt' % op_number - - filename = os.path.join(self._debug_dir, prefix + name) - return filename, op_number - - def write_bytecode(self, b, op_number=None, prefix=''): - filename, op_number = self._get_filename(op_number, prefix) - with open(filename, 'w') as stream: - bytecode.dump_bytecode(b, stream=stream, lineno=True) - return op_number - - def write_dis(self, code_to_modify, op_number=None, prefix=''): - filename, op_number = self._get_filename(op_number, prefix) - with open(filename, 'w') as stream: - stream.write('-------- ') - stream.write('-------- ') - stream.write('id(code_to_modify): %s' % id(code_to_modify)) - stream.write('\n\n') - dis.dis(code_to_modify, file=stream) - return op_number - - -_CodeLineInfo = namedtuple('_CodeLineInfo', 'line_to_offset, first_line, last_line') - - -# Note: this method has a version in cython too (that one is usually used, this is just for tests). -def _get_code_line_info(code_obj): - line_to_offset = {} - first_line = None - last_line = None - - for offset, line in dis.findlinestarts(code_obj): - line_to_offset[line] = offset - - if line_to_offset: - first_line = min(line_to_offset) - last_line = max(line_to_offset) - return _CodeLineInfo(line_to_offset, first_line, last_line) - - -if DEBUG: - debug_helper = DebugHelper() - - -def get_instructions_to_add( - stop_at_line, - _pydev_stop_at_break=_pydev_stop_at_break, - _pydev_needs_stop_at_break=_pydev_needs_stop_at_break - ): - ''' - This is the bytecode for something as: - - if _pydev_needs_stop_at_break(): - _pydev_stop_at_break() - - but with some special handling for lines. 
- ''' - # Good reference to how things work regarding line numbers and jumps: - # https://github.com/python/cpython/blob/3.6/Objects/lnotab_notes.txt - - # Usually use a stop line -1, but if that'd be 0, using line +1 is ok too. - spurious_line = stop_at_line - 1 - if spurious_line <= 0: - spurious_line = stop_at_line + 1 - - label = Label() - return [ - # -- if _pydev_needs_stop_at_break(): - Instr("LOAD_CONST", _pydev_needs_stop_at_break, lineno=stop_at_line), - Instr("LOAD_CONST", stop_at_line, lineno=stop_at_line), - Instr("CALL_FUNCTION", 1, lineno=stop_at_line), - Instr("POP_JUMP_IF_FALSE", label, lineno=stop_at_line), - - # -- _pydev_stop_at_break() - # - # Note that this has line numbers -1 so that when the NOP just below - # is executed we have a spurious line event. - Instr("LOAD_CONST", _pydev_stop_at_break, lineno=spurious_line), - Instr("LOAD_CONST", stop_at_line, lineno=spurious_line), - Instr("CALL_FUNCTION", 1, lineno=spurious_line), - Instr("POP_TOP", lineno=spurious_line), - - # Reason for the NOP: Python will give us a 'line' trace event whenever we forward jump to - # the first instruction of a line, so, in the case where we haven't added a programmatic - # breakpoint (either because we didn't hit a breakpoint anymore or because it was already - # tracing), we don't want the spurious line event due to the line change, so, we make a jump - # to the instruction right after the NOP so that the spurious line event is NOT generated in - # this case (otherwise we'd have a line event even if the line didn't change). - Instr("NOP", lineno=stop_at_line), - label, - ] - - -class _Node(object): - - def __init__(self, data): - self.prev = None - self.next = None - self.data = data - - def append(self, data): - node = _Node(data) - - curr_next = self.next - - node.next = self.next - node.prev = self - self.next = node - - if curr_next is not None: - curr_next.prev = node - - return node - - def prepend(self, data): - node = _Node(data) - - curr_prev = self.prev - - node.prev = self.prev - node.next = self - self.prev = node - - if curr_prev is not None: - curr_prev.next = node - - return node - - -class _HelperBytecodeList(object): - ''' - A helper double-linked list to make the manipulation a bit easier (so that we don't need - to keep track of indices that change) and performant (because adding multiple items to - the middle of a regular list isn't ideal). - ''' - - def __init__(self, lst=None): - self._head = None - self._tail = None - if lst: - node = self - for item in lst: - node = node.append(item) - - def append(self, data): - if self._tail is None: - node = _Node(data) - self._head = self._tail = node - return node - else: - node = self._tail = self.tail.append(data) - return node - - @property - def head(self): - node = self._head - # Manipulating the node directly may make it unsynchronized. - while node.prev: - self._head = node = node.prev - return node - - @property - def tail(self): - node = self._tail - # Manipulating the node directly may make it unsynchronized. 
- while node.next: - self._tail = node = node.next - return node - - def __iter__(self): - node = self.head - - while node: - yield node.data - node = node.next - - -_PREDICT_TABLE = { - 'LIST_APPEND': ('JUMP_ABSOLUTE',), - 'SET_ADD': ('JUMP_ABSOLUTE',), - 'GET_ANEXT': ('LOAD_CONST',), - 'GET_AWAITABLE': ('LOAD_CONST',), - 'DICT_MERGE': ('CALL_FUNCTION_EX',), - 'MAP_ADD': ('JUMP_ABSOLUTE',), - 'COMPARE_OP': ('POP_JUMP_IF_FALSE', 'POP_JUMP_IF_TRUE',), - 'IS_OP': ('POP_JUMP_IF_FALSE', 'POP_JUMP_IF_TRUE',), - 'CONTAINS_OP': ('POP_JUMP_IF_FALSE', 'POP_JUMP_IF_TRUE',), - - # Note: there are some others with PREDICT on ceval, but they have more logic - # and it needs more experimentation to know how it behaves in the static generated - # code (and it's only an issue for us if there's actually a line change between - # those, so, we don't have to really handle all the cases, only the one where - # the line number actually changes from one instruction to the predicted one). -} - -# 3.10 optimizations include copying code branches multiple times (for instance -# if the body of a finally has a single assign statement it can copy the assign to the case -# where an exception happens and doesn't happen for optimization purposes) and as such -# we need to add the programmatic breakpoint multiple times. -TRACK_MULTIPLE_BRANCHES = sys.version_info[:2] >= (3, 10) - -# When tracking multiple branches, we try to fix the bytecodes which would be PREDICTED in the -# Python eval loop so that we don't have spurious line events that wouldn't usually be issued -# in the tracing as they're ignored due to the eval prediction (even though they're in the bytecode). -FIX_PREDICT = sys.version_info[:2] >= (3, 10) - - -def insert_pydevd_breaks( - code_to_modify, - breakpoint_lines, - code_line_info=None, - _pydev_stop_at_break=_pydev_stop_at_break, - _pydev_needs_stop_at_break=_pydev_needs_stop_at_break, - ): - """ - Inserts pydevd programmatic breaks into the code (at the given lines). - - :param breakpoint_lines: set with the lines where we should add breakpoints. - :return: tuple(boolean flag whether insertion was successful, modified code). - """ - if code_line_info is None: - code_line_info = _get_code_line_info(code_to_modify) - - if not code_line_info.line_to_offset: - return False, code_to_modify - - # Create a copy (and make sure we're dealing with a set). - breakpoint_lines = set(breakpoint_lines) - - # Note that we can even generate breakpoints on the first line of code - # now, since we generate a spurious line event -- it may be a bit pointless - # as we'll stop in the first line and we don't currently stop the tracing after the - # user resumes, but in the future, if we do that, this would be a nice - # improvement. - # if code_to_modify.co_firstlineno in breakpoint_lines: - # return False, code_to_modify - - for line in breakpoint_lines: - if line <= 0: - # The first line is line 1, so, a break at line 0 is not valid. 
- pydev_log.info('Trying to add breakpoint in invalid line: %s', line) - return False, code_to_modify - - try: - b = bytecode.Bytecode.from_code(code_to_modify) - - if DEBUG: - op_number_bytecode = debug_helper.write_bytecode(b, prefix='bytecode.') - - helper_list = _HelperBytecodeList(b) - - modified_breakpoint_lines = breakpoint_lines.copy() - - curr_node = helper_list.head - added_breaks_in_lines = set() - last_lineno = None - while curr_node is not None: - instruction = curr_node.data - instruction_lineno = getattr(instruction, 'lineno', None) - curr_name = getattr(instruction, 'name', None) - - if FIX_PREDICT: - predict_targets = _PREDICT_TABLE.get(curr_name) - if predict_targets: - # Odd case: the next instruction may have a line number but it doesn't really - # appear in the tracing due to the PREDICT() in ceval, so, fix the bytecode so - # that it does things the way that ceval actually interprets it. - # See: https://mail.python.org/archives/list/python-dev@python.org/thread/CP2PTFCMTK57KM3M3DLJNWGO66R5RVPB/ - next_instruction = curr_node.next.data - next_name = getattr(next_instruction, 'name', None) - if next_name in predict_targets: - next_instruction_lineno = getattr(next_instruction, 'lineno', None) - if next_instruction_lineno: - next_instruction.lineno = None - - if instruction_lineno is not None: - if TRACK_MULTIPLE_BRANCHES: - if last_lineno is None: - last_lineno = instruction_lineno - else: - if last_lineno == instruction_lineno: - # If the previous is a label, someone may jump into it, so, we need to add - # the break even if it's in the same line. - if curr_node.prev.data.__class__ != Label: - # Skip adding this as the line is still the same. - curr_node = curr_node.next - continue - last_lineno = instruction_lineno - else: - if instruction_lineno in added_breaks_in_lines: - curr_node = curr_node.next - continue - - if instruction_lineno in modified_breakpoint_lines: - added_breaks_in_lines.add(instruction_lineno) - if curr_node.prev is not None and curr_node.prev.data.__class__ == Label \ - and curr_name == 'POP_TOP': - - # If we have a SETUP_FINALLY where the target is a POP_TOP, we can't change - # the target to be the breakpoint instruction (this can crash the interpreter). 
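-                        # In that case, append the programmatic break right after the POP_TOP
-                        # (instead of prepending it as in the else branch below), so that the
-                        # label keeps targeting the original POP_TOP instruction.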
- - for new_instruction in get_instructions_to_add( - instruction_lineno, - _pydev_stop_at_break=_pydev_stop_at_break, - _pydev_needs_stop_at_break=_pydev_needs_stop_at_break, - ): - curr_node = curr_node.append(new_instruction) - - else: - for new_instruction in get_instructions_to_add( - instruction_lineno, - _pydev_stop_at_break=_pydev_stop_at_break, - _pydev_needs_stop_at_break=_pydev_needs_stop_at_break, - ): - curr_node.prepend(new_instruction) - - curr_node = curr_node.next - - b[:] = helper_list - - if DEBUG: - debug_helper.write_bytecode(b, op_number_bytecode, prefix='bytecode.') - - new_code = b.to_code() - - except: - pydev_log.exception('Error inserting pydevd breaks.') - return False, code_to_modify - - if DEBUG: - op_number = debug_helper.write_dis(code_to_modify) - debug_helper.write_dis(new_code, op_number) - - return True, new_code - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_peephole_opt.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_peephole_opt.py deleted file mode 100644 index 387a7829f9e2877f29fa374810f6a7d8f3453db1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_peephole_opt.py +++ /dev/null @@ -1,985 +0,0 @@ - -import pytest -from tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON -from tests_python.debug_constants import TEST_CYTHON -pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6') -import sys -import unittest -from _pydevd_frame_eval.vendored.bytecode import Label, Instr, Compare, Bytecode, ControlFlowGraph -from _pydevd_frame_eval.vendored.bytecode import peephole_opt -from _pydevd_frame_eval.vendored.bytecode.tests import TestCase, dump_bytecode -from unittest import mock - - -class Tests(TestCase): - - maxDiff = 80 * 100 - - def optimize_blocks(self, code): - if isinstance(code, Bytecode): - code = ControlFlowGraph.from_bytecode(code) - optimizer = peephole_opt.PeepholeOptimizer() - optimizer.optimize_cfg(code) - return code - - def check(self, code, *expected): - if isinstance(code, Bytecode): - code = ControlFlowGraph.from_bytecode(code) - optimizer = peephole_opt.PeepholeOptimizer() - optimizer.optimize_cfg(code) - code = code.to_bytecode() - - try: - self.assertEqual(code, expected) - except AssertionError: - print("Optimized code:") - dump_bytecode(code) - - print("Expected code:") - for instr in expected: - print(instr) - - raise - - def check_dont_optimize(self, code): - code = ControlFlowGraph.from_bytecode(code) - noopt = code.to_bytecode() - - optim = self.optimize_blocks(code) - optim = optim.to_bytecode() - self.assertEqual(optim, noopt) - - def test_unary_op(self): - def check_unary_op(op, value, result): - code = Bytecode( - [Instr("LOAD_CONST", value), Instr(op), Instr("STORE_NAME", "x")] - ) - self.check(code, Instr("LOAD_CONST", result), Instr("STORE_NAME", "x")) - - check_unary_op("UNARY_POSITIVE", 2, 2) - check_unary_op("UNARY_NEGATIVE", 3, -3) - check_unary_op("UNARY_INVERT", 5, -6) - - def test_binary_op(self): - def check_bin_op(left, op, right, result): - code = Bytecode( - [ - Instr("LOAD_CONST", left), - Instr("LOAD_CONST", right), - Instr(op), - Instr("STORE_NAME", "x"), - ] - ) - self.check(code, Instr("LOAD_CONST", result), 
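-                # The two LOAD_CONSTs plus the binary op should be folded into a single
-                # LOAD_CONST holding the precomputed result.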
Instr("STORE_NAME", "x")) - - check_bin_op(10, "BINARY_ADD", 20, 30) - check_bin_op(5, "BINARY_SUBTRACT", 1, 4) - check_bin_op(5, "BINARY_MULTIPLY", 3, 15) - check_bin_op(10, "BINARY_TRUE_DIVIDE", 3, 10 / 3) - check_bin_op(10, "BINARY_FLOOR_DIVIDE", 3, 3) - check_bin_op(10, "BINARY_MODULO", 3, 1) - check_bin_op(2, "BINARY_POWER", 8, 256) - check_bin_op(1, "BINARY_LSHIFT", 3, 8) - check_bin_op(16, "BINARY_RSHIFT", 3, 2) - check_bin_op(10, "BINARY_AND", 3, 2) - check_bin_op(2, "BINARY_OR", 3, 3) - check_bin_op(2, "BINARY_XOR", 3, 1) - - def test_combined_unary_bin_ops(self): - # x = 1 + 3 + 7 - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 3), - Instr("BINARY_ADD"), - Instr("LOAD_CONST", 7), - Instr("BINARY_ADD"), - Instr("STORE_NAME", "x"), - ] - ) - self.check(code, Instr("LOAD_CONST", 11), Instr("STORE_NAME", "x")) - - # x = ~(~(5)) - code = Bytecode( - [ - Instr("LOAD_CONST", 5), - Instr("UNARY_INVERT"), - Instr("UNARY_INVERT"), - Instr("STORE_NAME", "x"), - ] - ) - self.check(code, Instr("LOAD_CONST", 5), Instr("STORE_NAME", "x")) - - # "events = [(0, 'call'), (1, 'line'), (-(3), 'call')]" - code = Bytecode( - [ - Instr("LOAD_CONST", 0), - Instr("LOAD_CONST", "call"), - Instr("BUILD_TUPLE", 2), - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", "line"), - Instr("BUILD_TUPLE", 2), - Instr("LOAD_CONST", 3), - Instr("UNARY_NEGATIVE"), - Instr("LOAD_CONST", "call"), - Instr("BUILD_TUPLE", 2), - Instr("BUILD_LIST", 3), - Instr("STORE_NAME", "events"), - ] - ) - self.check( - code, - Instr("LOAD_CONST", (0, "call")), - Instr("LOAD_CONST", (1, "line")), - Instr("LOAD_CONST", (-3, "call")), - Instr("BUILD_LIST", 3), - Instr("STORE_NAME", "events"), - ) - - # 'x = (1,) + (0,) * 8' - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("BUILD_TUPLE", 1), - Instr("LOAD_CONST", 0), - Instr("BUILD_TUPLE", 1), - Instr("LOAD_CONST", 8), - Instr("BINARY_MULTIPLY"), - Instr("BINARY_ADD"), - Instr("STORE_NAME", "x"), - ] - ) - zeros = (0,) * 8 - result = (1,) + zeros - self.check(code, Instr("LOAD_CONST", result), Instr("STORE_NAME", "x")) - - def test_max_size(self): - max_size = 3 - with mock.patch.object(peephole_opt, "MAX_SIZE", max_size): - # optimized binary operation: size <= maximum size - # - # (9,) * size - size = max_size - result = (9,) * size - code = Bytecode( - [ - Instr("LOAD_CONST", 9), - Instr("BUILD_TUPLE", 1), - Instr("LOAD_CONST", size), - Instr("BINARY_MULTIPLY"), - Instr("STORE_NAME", "x"), - ] - ) - self.check(code, Instr("LOAD_CONST", result), Instr("STORE_NAME", "x")) - - # don't optimize binary operation: size > maximum size - # - # x = (9,) * size - size = max_size + 1 - code = Bytecode( - [ - Instr("LOAD_CONST", 9), - Instr("BUILD_TUPLE", 1), - Instr("LOAD_CONST", size), - Instr("BINARY_MULTIPLY"), - Instr("STORE_NAME", "x"), - ] - ) - self.check( - code, - Instr("LOAD_CONST", (9,)), - Instr("LOAD_CONST", size), - Instr("BINARY_MULTIPLY"), - Instr("STORE_NAME", "x"), - ) - - def test_bin_op_dont_optimize(self): - # 1 / 0 - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 0), - Instr("BINARY_TRUE_DIVIDE"), - Instr("POP_TOP"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - self.check_dont_optimize(code) - - # 1 // 0 - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 0), - Instr("BINARY_FLOOR_DIVIDE"), - Instr("POP_TOP"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - self.check_dont_optimize(code) - - # 1 % 0 - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 0), 
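-                # Folding 1 % 0 at optimization time would raise ZeroDivisionError, so the
-                # optimizer must leave the expression alone and let it fail at runtime.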
- Instr("BINARY_MODULO"), - Instr("POP_TOP"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - self.check_dont_optimize(code) - - # 1 % 1j - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 1j), - Instr("BINARY_MODULO"), - Instr("POP_TOP"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - self.check_dont_optimize(code) - - def test_build_tuple(self): - # x = (1, 2, 3) - code = Bytecode( - [ - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 2), - Instr("LOAD_CONST", 3), - Instr("BUILD_TUPLE", 3), - Instr("STORE_NAME", "x"), - ] - ) - self.check(code, Instr("LOAD_CONST", (1, 2, 3)), Instr("STORE_NAME", "x")) - - def test_build_list(self): - # test = x in [1, 2, 3] - code = Bytecode( - [ - Instr("LOAD_NAME", "x"), - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 2), - Instr("LOAD_CONST", 3), - Instr("BUILD_LIST", 3), - Instr("COMPARE_OP", Compare.IN), - Instr("STORE_NAME", "test"), - ] - ) - - self.check( - code, - Instr("LOAD_NAME", "x"), - Instr("LOAD_CONST", (1, 2, 3)), - Instr("COMPARE_OP", Compare.IN), - Instr("STORE_NAME", "test"), - ) - - def test_build_list_unpack_seq(self): - for build_list in ("BUILD_TUPLE", "BUILD_LIST"): - # x, = [a] - code = Bytecode( - [ - Instr("LOAD_NAME", "a"), - Instr(build_list, 1), - Instr("UNPACK_SEQUENCE", 1), - Instr("STORE_NAME", "x"), - ] - ) - self.check(code, Instr("LOAD_NAME", "a"), Instr("STORE_NAME", "x")) - - # x, y = [a, b] - code = Bytecode( - [ - Instr("LOAD_NAME", "a"), - Instr("LOAD_NAME", "b"), - Instr(build_list, 2), - Instr("UNPACK_SEQUENCE", 2), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - ] - ) - self.check( - code, - Instr("LOAD_NAME", "a"), - Instr("LOAD_NAME", "b"), - Instr("ROT_TWO"), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - ) - - # x, y, z = [a, b, c] - code = Bytecode( - [ - Instr("LOAD_NAME", "a"), - Instr("LOAD_NAME", "b"), - Instr("LOAD_NAME", "c"), - Instr(build_list, 3), - Instr("UNPACK_SEQUENCE", 3), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - Instr("STORE_NAME", "z"), - ] - ) - self.check( - code, - Instr("LOAD_NAME", "a"), - Instr("LOAD_NAME", "b"), - Instr("LOAD_NAME", "c"), - Instr("ROT_THREE"), - Instr("ROT_TWO"), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - Instr("STORE_NAME", "z"), - ) - - def test_build_tuple_unpack_seq_const(self): - # x, y = (3, 4) - code = Bytecode( - [ - Instr("LOAD_CONST", 3), - Instr("LOAD_CONST", 4), - Instr("BUILD_TUPLE", 2), - Instr("UNPACK_SEQUENCE", 2), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - ] - ) - self.check( - code, - Instr("LOAD_CONST", (3, 4)), - Instr("UNPACK_SEQUENCE", 2), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - ) - - def test_build_list_unpack_seq_const(self): - # x, y, z = [3, 4, 5] - code = Bytecode( - [ - Instr("LOAD_CONST", 3), - Instr("LOAD_CONST", 4), - Instr("LOAD_CONST", 5), - Instr("BUILD_LIST", 3), - Instr("UNPACK_SEQUENCE", 3), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - Instr("STORE_NAME", "z"), - ] - ) - self.check( - code, - Instr("LOAD_CONST", 5), - Instr("LOAD_CONST", 4), - Instr("LOAD_CONST", 3), - Instr("STORE_NAME", "x"), - Instr("STORE_NAME", "y"), - Instr("STORE_NAME", "z"), - ) - - def test_build_set(self): - # test = x in {1, 2, 3} - code = Bytecode( - [ - Instr("LOAD_NAME", "x"), - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", 2), - Instr("LOAD_CONST", 3), - Instr("BUILD_SET", 3), - Instr("COMPARE_OP", Compare.IN), - Instr("STORE_NAME", "test"), - ] - ) - - self.check( - code, - Instr("LOAD_NAME", "x"), - 
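-            # The set literal used for the membership test is expected to be folded into a
-            # frozenset constant (a plain set is mutable, so only the immutable frozenset
-            # is safe to reuse as a constant).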
Instr("LOAD_CONST", frozenset((1, 2, 3))), - Instr("COMPARE_OP", Compare.IN), - Instr("STORE_NAME", "test"), - ) - - def test_compare_op_unary_not(self): - for op, not_op in ( - (Compare.IN, Compare.NOT_IN), # in => not in - (Compare.NOT_IN, Compare.IN), # not in => in - (Compare.IS, Compare.IS_NOT), # is => is not - (Compare.IS_NOT, Compare.IS), # is not => is - ): - code = Bytecode( - [ - Instr("LOAD_NAME", "a"), - Instr("LOAD_NAME", "b"), - Instr("COMPARE_OP", op), - Instr("UNARY_NOT"), - Instr("STORE_NAME", "x"), - ] - ) - self.check( - code, - Instr("LOAD_NAME", "a"), - Instr("LOAD_NAME", "b"), - Instr("COMPARE_OP", not_op), - Instr("STORE_NAME", "x"), - ) - - # don't optimize: - # x = not (a and b is True) - label_instr5 = Label() - code = Bytecode( - [ - Instr("LOAD_NAME", "a"), - Instr("JUMP_IF_FALSE_OR_POP", label_instr5), - Instr("LOAD_NAME", "b"), - Instr("LOAD_CONST", True), - Instr("COMPARE_OP", Compare.IS), - label_instr5, - Instr("UNARY_NOT"), - Instr("STORE_NAME", "x"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - self.check_dont_optimize(code) - - def test_dont_optimize(self): - # x = 3 < 5 - code = Bytecode( - [ - Instr("LOAD_CONST", 3), - Instr("LOAD_CONST", 5), - Instr("COMPARE_OP", Compare.LT), - Instr("STORE_NAME", "x"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - self.check_dont_optimize(code) - - # x = (10, 20, 30)[1:] - code = Bytecode( - [ - Instr("LOAD_CONST", (10, 20, 30)), - Instr("LOAD_CONST", 1), - Instr("LOAD_CONST", None), - Instr("BUILD_SLICE", 2), - Instr("BINARY_SUBSCR"), - Instr("STORE_NAME", "x"), - ] - ) - self.check_dont_optimize(code) - - def test_optimize_code_obj(self): - # Test optimize() method with a code object - # - # x = 3 + 5 => x = 8 - noopt = Bytecode( - [ - Instr("LOAD_CONST", 3), - Instr("LOAD_CONST", 5), - Instr("BINARY_ADD"), - Instr("STORE_NAME", "x"), - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - noopt = noopt.to_code() - - optimizer = peephole_opt.PeepholeOptimizer() - optim = optimizer.optimize(noopt) - - code = Bytecode.from_code(optim) - self.assertEqual( - code, - [ - Instr("LOAD_CONST", 8, lineno=1), - Instr("STORE_NAME", "x", lineno=1), - Instr("LOAD_CONST", None, lineno=1), - Instr("RETURN_VALUE", lineno=1), - ], - ) - - def test_return_value(self): - # return+return: remove second return - # - # def func(): - # return 4 - # return 5 - code = Bytecode( - [ - Instr("LOAD_CONST", 4, lineno=2), - Instr("RETURN_VALUE", lineno=2), - Instr("LOAD_CONST", 5, lineno=3), - Instr("RETURN_VALUE", lineno=3), - ] - ) - code = ControlFlowGraph.from_bytecode(code) - self.check( - code, Instr("LOAD_CONST", 4, lineno=2), Instr("RETURN_VALUE", lineno=2) - ) - - # return+return + return+return: remove second and fourth return - # - # def func(): - # return 4 - # return 5 - # return 6 - # return 7 - code = Bytecode( - [ - Instr("LOAD_CONST", 4, lineno=2), - Instr("RETURN_VALUE", lineno=2), - Instr("LOAD_CONST", 5, lineno=3), - Instr("RETURN_VALUE", lineno=3), - Instr("LOAD_CONST", 6, lineno=4), - Instr("RETURN_VALUE", lineno=4), - Instr("LOAD_CONST", 7, lineno=5), - Instr("RETURN_VALUE", lineno=5), - ] - ) - code = ControlFlowGraph.from_bytecode(code) - self.check( - code, Instr("LOAD_CONST", 4, lineno=2), Instr("RETURN_VALUE", lineno=2) - ) - - # return + JUMP_ABSOLUTE: remove JUMP_ABSOLUTE - # while 1: - # return 7 - if sys.version_info < (3, 8): - setup_loop = Label() - return_label = Label() - code = Bytecode( - [ - setup_loop, - Instr("SETUP_LOOP", return_label, lineno=2), - 
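-                    # Everything between RETURN_VALUE and the loop exit is unreachable, so the
-                    # optimizer is expected to drop the JUMP_ABSOLUTE and POP_BLOCK below.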
Instr("LOAD_CONST", 7, lineno=3), - Instr("RETURN_VALUE", lineno=3), - Instr("JUMP_ABSOLUTE", setup_loop, lineno=3), - Instr("POP_BLOCK", lineno=3), - return_label, - Instr("LOAD_CONST", None, lineno=3), - Instr("RETURN_VALUE", lineno=3), - ] - ) - code = ControlFlowGraph.from_bytecode(code) - - end_loop = Label() - self.check( - code, - Instr("SETUP_LOOP", end_loop, lineno=2), - Instr("LOAD_CONST", 7, lineno=3), - Instr("RETURN_VALUE", lineno=3), - end_loop, - Instr("LOAD_CONST", None, lineno=3), - Instr("RETURN_VALUE", lineno=3), - ) - else: - setup_loop = Label() - return_label = Label() - code = Bytecode( - [ - setup_loop, - Instr("LOAD_CONST", 7, lineno=3), - Instr("RETURN_VALUE", lineno=3), - Instr("JUMP_ABSOLUTE", setup_loop, lineno=3), - Instr("LOAD_CONST", None, lineno=3), - Instr("RETURN_VALUE", lineno=3), - ] - ) - code = ControlFlowGraph.from_bytecode(code) - - self.check( - code, Instr("LOAD_CONST", 7, lineno=3), Instr("RETURN_VALUE", lineno=3) - ) - - def test_not_jump_if_false(self): - # Replace UNARY_NOT+POP_JUMP_IF_FALSE with POP_JUMP_IF_TRUE - # - # if not x: - # y = 9 - label = Label() - code = Bytecode( - [ - Instr("LOAD_NAME", "x"), - Instr("UNARY_NOT"), - Instr("POP_JUMP_IF_FALSE", label), - Instr("LOAD_CONST", 9), - Instr("STORE_NAME", "y"), - label, - ] - ) - - code = self.optimize_blocks(code) - label = Label() - self.check( - code, - Instr("LOAD_NAME", "x"), - Instr("POP_JUMP_IF_TRUE", label), - Instr("LOAD_CONST", 9), - Instr("STORE_NAME", "y"), - label, - ) - - def test_unconditional_jump_to_return(self): - # def func(): - # if test: - # if test2: - # x = 10 - # else: - # x = 20 - # else: - # x = 30 - - label_instr11 = Label() - label_instr14 = Label() - label_instr7 = Label() - code = Bytecode( - [ - Instr("LOAD_GLOBAL", "test", lineno=2), - Instr("POP_JUMP_IF_FALSE", label_instr11, lineno=2), - Instr("LOAD_GLOBAL", "test2", lineno=3), - Instr("POP_JUMP_IF_FALSE", label_instr7, lineno=3), - Instr("LOAD_CONST", 10, lineno=4), - Instr("STORE_FAST", "x", lineno=4), - Instr("JUMP_ABSOLUTE", label_instr14, lineno=4), - label_instr7, - Instr("LOAD_CONST", 20, lineno=6), - Instr("STORE_FAST", "x", lineno=6), - Instr("JUMP_FORWARD", label_instr14, lineno=6), - label_instr11, - Instr("LOAD_CONST", 30, lineno=8), - Instr("STORE_FAST", "x", lineno=8), - label_instr14, - Instr("LOAD_CONST", None, lineno=8), - Instr("RETURN_VALUE", lineno=8), - ] - ) - - label1 = Label() - label3 = Label() - label4 = Label() - self.check( - code, - Instr("LOAD_GLOBAL", "test", lineno=2), - Instr("POP_JUMP_IF_FALSE", label3, lineno=2), - Instr("LOAD_GLOBAL", "test2", lineno=3), - Instr("POP_JUMP_IF_FALSE", label1, lineno=3), - Instr("LOAD_CONST", 10, lineno=4), - Instr("STORE_FAST", "x", lineno=4), - Instr("JUMP_ABSOLUTE", label4, lineno=4), - label1, - Instr("LOAD_CONST", 20, lineno=6), - Instr("STORE_FAST", "x", lineno=6), - Instr("JUMP_FORWARD", label4, lineno=6), - label3, - Instr("LOAD_CONST", 30, lineno=8), - Instr("STORE_FAST", "x", lineno=8), - label4, - Instr("LOAD_CONST", None, lineno=8), - Instr("RETURN_VALUE", lineno=8), - ) - - def test_unconditional_jumps(self): - # def func(): - # if x: - # if y: - # func() - label_instr7 = Label() - code = Bytecode( - [ - Instr("LOAD_GLOBAL", "x", lineno=2), - Instr("POP_JUMP_IF_FALSE", label_instr7, lineno=2), - Instr("LOAD_GLOBAL", "y", lineno=3), - Instr("POP_JUMP_IF_FALSE", label_instr7, lineno=3), - Instr("LOAD_GLOBAL", "func", lineno=4), - Instr("CALL_FUNCTION", 0, lineno=4), - Instr("POP_TOP", lineno=4), - label_instr7, - 
Instr("LOAD_CONST", None, lineno=4), - Instr("RETURN_VALUE", lineno=4), - ] - ) - - label_return = Label() - self.check( - code, - Instr("LOAD_GLOBAL", "x", lineno=2), - Instr("POP_JUMP_IF_FALSE", label_return, lineno=2), - Instr("LOAD_GLOBAL", "y", lineno=3), - Instr("POP_JUMP_IF_FALSE", label_return, lineno=3), - Instr("LOAD_GLOBAL", "func", lineno=4), - Instr("CALL_FUNCTION", 0, lineno=4), - Instr("POP_TOP", lineno=4), - label_return, - Instr("LOAD_CONST", None, lineno=4), - Instr("RETURN_VALUE", lineno=4), - ) - - def test_jump_to_return(self): - # def func(condition): - # return 'yes' if condition else 'no' - label_instr4 = Label() - label_instr6 = Label() - code = Bytecode( - [ - Instr("LOAD_FAST", "condition"), - Instr("POP_JUMP_IF_FALSE", label_instr4), - Instr("LOAD_CONST", "yes"), - Instr("JUMP_FORWARD", label_instr6), - label_instr4, - Instr("LOAD_CONST", "no"), - label_instr6, - Instr("RETURN_VALUE"), - ] - ) - - label = Label() - self.check( - code, - Instr("LOAD_FAST", "condition"), - Instr("POP_JUMP_IF_FALSE", label), - Instr("LOAD_CONST", "yes"), - Instr("RETURN_VALUE"), - label, - Instr("LOAD_CONST", "no"), - Instr("RETURN_VALUE"), - ) - - def test_jump_if_true_to_jump_if_false(self): - # Replace JUMP_IF_TRUE_OR_POP jumping to POP_JUMP_IF_FALSE - # with POP_JUMP_IF_TRUE - # - # if x or y: - # z = 1 - - label_instr3 = Label() - label_instr7 = Label() - code = Bytecode( - [ - Instr("LOAD_NAME", "x"), - Instr("JUMP_IF_TRUE_OR_POP", label_instr3), - Instr("LOAD_NAME", "y"), - label_instr3, - Instr("POP_JUMP_IF_FALSE", label_instr7), - Instr("LOAD_CONST", 1), - Instr("STORE_NAME", "z"), - label_instr7, - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ] - ) - - label_instr4 = Label() - label_instr7 = Label() - self.check( - code, - Instr("LOAD_NAME", "x"), - Instr("POP_JUMP_IF_TRUE", label_instr4), - Instr("LOAD_NAME", "y"), - Instr("POP_JUMP_IF_FALSE", label_instr7), - label_instr4, - Instr("LOAD_CONST", 1), - Instr("STORE_NAME", "z"), - label_instr7, - Instr("LOAD_CONST", None), - Instr("RETURN_VALUE"), - ) - - def test_jump_if_false_to_jump_if_false(self): - # Replace JUMP_IF_FALSE_OR_POP jumping to POP_JUMP_IF_FALSE