diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create professional and dynamic photo slideshow videos with HD Online Player (socusoft photo to video converter pr).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create professional and dynamic photo slideshow videos with HD Online Player (socusoft photo to video converter pr).md deleted file mode 100644 index 503368d5abd082a2e17b4d3191c250eff690e960..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create professional and dynamic photo slideshow videos with HD Online Player (socusoft photo to video converter pr).md +++ /dev/null @@ -1,92 +0,0 @@ -
HD Online Player (socusoft photo to video converter pr)
Do you have a lot of photos that you want to turn into a stunning video slideshow? Do you want to share your precious memories with your friends and family on social media platforms like Facebook, YouTube, or MySpace? Do you want to watch your photo slideshow on your PC, iPod, iPad, iPhone, PSP, or other devices? If you answered yes to any of these questions, then you need powerful, professional photo-to-video converter software that can help you create HD photo slideshows with ease. That software is socusoft photo to video converter professional.

-

What is socusoft photo to video converter professional?

-

Socusoft photo to video converter professional is a software program that enables you to create professional slideshows from photographs and save them to the hard drive as video files in several formats. It supports a wide range of formats, both at import and export. To be more specific, you can import JPG, TIFF, BMP, PCX and PNG images, and create MP4, FLV, AVI, MKV, MPG, 3GP and SWF videos, with presets so that you can upload them to Facebook, YouTube, or MySpace, or play them on Apple products or mobile phones.

-

Download Zip: https://byltly.com/2uKxZ8



-

Features and benefits of socusoft photo to video converter professional

-

Some of the features and benefits of socusoft photo to video converter professional are:


How to use socusoft photo to video converter professional to create HD photo slideshows

-

Using socusoft photo to video converter professional is very easy and intuitive. You just need to follow these simple steps:

-

Step 1: Add photos and edit them

-

After installation, run the program and click on the "Organize Photos" tab. You can drag and drop photos from your computer or click on the "Add" button to browse for photos. You can also create multiple albums for different occasions or themes. To edit photos, you can double-click on them or click on the "Edit Photo" button. You can rotate them, crop them, add text or clip art, adjust color and brightness, etc.

-

Step 2: Choose transitions and album themes

-

Click on the "Transition & Music" tab. You can see all the available transitions on the left panel. You can drag and drop them between photos or click on the "Apply" button to apply them randomly. You can also change the duration of each transition. On the right panel, you can see all the album themes that you can apply to your slideshow. You can choose from different categories like wedding, birthday, travel, etc. You can also customize the album title and background.

-

Step 3: Add background music and record narration

-

Click on the "Music & Sound" tab. You can add MP3, WMA or WAV audio files as background music by clicking on the "Add" button or dragging and dropping them from your computer. You can also trim or loop the music as you like. To record your own voice as narration, click on the "Record Sound" button and use your microphone. You can adjust the volume of both music and sound.

-


-

Step 4: Customize output video parameters and format

-

Click on the "Video Output" tab. You can choose from different output formats like MP4, FLV, AVI, MKV, MPG, 3GP or SWF. You can also select presets for different devices or platforms like iPod, iPad, iPhone, PSP, YouTube or Facebook. You can also customize the output parameters like size, codecs, bit rate, frame rate and channels.

-

Step 5: Save and share your photo slideshow video

-

Click on the "Create Now!" button. You can choose to save your slideshow as a video file on your computer or burn it onto a DVD disc. You can also save it as an executable file that can be played on any PC without installation. After saving your slideshow, you can share it with your friends and family on social media platforms like Facebook, YouTube or MySpace.

-

Why choose socusoft photo to video converter professional over other photo slideshow makers?

-

There are many reasons why socusoft photo to video converter professional is better than other photo slideshow makers. Here are some of them:

-

High quality and compatibility of output videos

-

Socusoft photo to video converter professional produces high quality videos that are compatible with various devices and platforms. You can create HD videos that have crisp images and clear sound. You can also choose from different formats and presets that suit your needs and preferences.

-

Easy and intuitive interface and operation

-

Socusoft photo to video converter professional has an easy and intuitive interface that makes it simple to use. You don't need any technical skills or experience to create stunning slideshows. You just need to follow the steps and customize the settings as you like.

-

Affordable price and lifetime support

-

Socusoft photo to video converter professional is affordable and worth every penny. You only need to pay once and enjoy lifetime updates and support. You can also get free trial versions and discounts on their website. If you have any questions or problems, you can contact their customer service anytime.

-

Conclusion

-

Socusoft photo to video converter professional is a powerful and professional software program that enables you to create HD photo slideshows with ease. It has many features and benefits that make it stand out from other photo slideshow makers. It is easy to use, high quality, compatible, affordable, and supported. If you want to turn your photos into amazing videos, you should try socusoft photo to video converter professional today.

-

FAQs
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Adobe.cc.2019.patcher.[zer0code3].zip.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Adobe.cc.2019.patcher.[zer0code3].zip.md deleted file mode 100644 index 708d61e78eaec9c3cb73361da7ea33db5595f01c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Adobe.cc.2019.patcher.[zer0code3].zip.md +++ /dev/null @@ -1,83 +0,0 @@ -
-

FULL adobe.cc.2019.patcher.[zer0code3].zip: What is it and how to use it?

-

Introduction

-

If you are a creative professional, student, or hobbyist, you probably know about Adobe Creative Cloud (CC) 2019, the latest version of the world's leading software suite for design, photography, video, web, and more. Adobe CC 2019 offers a range of powerful tools and features that can help you unleash your creativity and bring your ideas to life.

-

Download: https://byltly.com/2uKvi2



-

However, you may also know that Adobe CC 2019 is not cheap. To use it, you need to pay a monthly or yearly subscription fee that can range from $9.99 to $82.98 per month, depending on the plan and the number of apps you choose. That's a lot of money for some people, especially if you only need one or two apps occasionally.

-

So, what if there was a way to get all the Adobe CC 2019 programs for free, without paying anything or logging in to any account? Sounds too good to be true, right? Well, that's what FULL adobe.cc.2019.patcher.[zer0code3].zip claims to do.

-

FULL adobe.cc.2019.patcher.[zer0code3].zip is a file that contains a patcher, a software tool that modifies or cracks another software program to bypass its license verification or activation process. In this case, the patcher targets all the Adobe CC 2019 programs, such as Photoshop, Lightroom, Dreamweaver, Acrobat, After Effects, InCopy, Media Encoder, Character Animator, Audition, Illustrator, and many more. By using the patcher, you can download and install any Adobe CC 2019 program you want, and activate it with one click, without needing any login or subscription. You can then use the program offline or online, with full access to all its features and updates.

-

But how does the patcher work, and what are its advantages and disadvantages? Is it safe and legal to use? Are there any alternatives to it? In this article, we will answer these questions and more, so you can decide whether FULL adobe.cc.2019.patcher.[zer0code3].zip is worth trying or not.

-

Features of FULL adobe.cc.2019.patcher.[zer0code3].zip

-

FULL adobe.cc.2019.patcher.[zer0code3].zip is a patcher created by zer0cod3, a hacker who claims to have cracked all the Adobe CC 2019 programs. According to his website, the patcher has the following features:


These features make FULL adobe.cc.2019.patcher.[zer0code3].zip a very convenient and attractive tool for anyone who wants to use Adobe CC 2019 for free. However, before you rush to download and use it, you should also be aware of its drawbacks and risks.

-

-

How to download and install FULL adobe.cc.2019.patcher.[zer0code3].zip

-

If you are interested in trying FULL adobe.cc.2019.patcher.[zer0code3].zip, you need to follow these steps:

1. Download CCMaker to get the Adobe offline installers: CCMaker is another tool that lets you download the offline installers of Adobe CC 2019 programs from Adobe's servers. You need this tool because the patcher does not work with the online installers that you get from Adobe's website. You can download CCMaker from here. After downloading it, extract it and run it as administrator.
2. Download FULL adobe.cc.2019.patcher.[zer0code3].zip from a reliable source: The patcher is not available on zer0cod3's website anymore, as it was taken down by Adobe for legal reasons. However, you can still find it on some other websites or torrent sites. Be careful though, as some of these sources may contain fake or malicious files that can harm your system. Make sure you scan the file with an antivirus before opening it. You can try this link for example, but we do not guarantee its safety or validity.
3. Extract the zip file and run the patcher: After downloading FULL adobe.cc.2019.patcher.[zer0code3].zip, extract it to a folder on your system. You will see a file called "FULL adobe.cc.2019.patcher.exe". Right-click on it and run it as administrator.
4. Select the Adobe program you want and click "Download and Patch": The patcher will show you a list of all the Adobe CC 2019 programs that it supports. You can select one or more programs that you want to download and activate. Then click on "Download and Patch" at the bottom of the window. The patcher will then start downloading the offline installer of the selected program from CCMaker, install it on your system, and apply the patch to activate it.
5. Enjoy your activated Adobe CC 2019 program: After the patching process is done, you can launch the program from your desktop or start menu. You will see that the program is activated and does not require any login or subscription. You can use the program offline or online, with full access to all its features and updates.

Congratulations, you have successfully downloaded and installed an Adobe CC 2019 program for free using FULL adobe.cc.2019.patcher.[zer0code3].zip. However, before you start using it, you should also consider the pros and cons of using this patcher.

-

Pros and cons of using FULL adobe.cc.2019.patcher.[zer0code3].zip

-

Using FULL adobe.cc.2019.patcher.[zer0code3].zip may seem like a great idea, as it allows you to get all the Adobe CC 2019 programs for free, without paying anything or logging in to any account. However, it also has some drawbacks and risks that you should be aware of. Here are some of the pros and cons of using this patcher:

| Pros | Cons |
| --- | --- |
| Saves money and time: You don't need to pay a monthly or yearly subscription fee to use Adobe CC 2019, which can save you a lot of money in the long run. You also don't need to waste time logging in or verifying your account every time you use the program. | Illegal and unethical: Using the patcher is a form of software piracy, which is illegal and unethical. You are violating the intellectual property rights of Adobe, which is a crime in many countries. You are also depriving Adobe of its revenue, which can affect its ability to develop and improve its products. |
| Access to all Adobe CC 2019 features and updates: You can use all the features and functions of Adobe CC 2019, without any limitations or restrictions. You can also get the latest updates and bug fixes from Adobe, as the patcher does not interfere with the online functionality of the program. | May not work with future versions of Adobe CC: The patcher may not be compatible with future versions or updates of Adobe CC, as Adobe may change its license verification or activation process to prevent piracy. You may not be able to use the patcher with newer versions of Adobe CC, or you may lose some features or functions. |
| No risk of malware or viruses: The patcher is safe to use, as it does not contain any malware or viruses that can harm your system. It does not modify any system files or registry entries, nor does it install any unwanted programs or toolbars on your system. | May cause errors or crashes: The patcher may cause some errors or crashes in your Adobe CC 2019 program, as it modifies some files or codes that are essential for the proper functioning of the program. You may experience some glitches, bugs, or performance issues while using the program. |
| Compatible with Windows and Mac OS: The patcher works with both Windows and Mac OS systems, so you can use it on any computer that supports Adobe CC 2019. You don't need to worry about compatibility issues or system requirements. | May violate the terms of service of Adobe: By using the patcher, you are violating the terms of service of Adobe, which is a legal agreement that you accept when you use its products or services. You are breaking the rules and conditions that Adobe sets for its users, which can result in legal actions or penalties from Adobe. |
-

As you can see, using FULL adobe.cc.2019.patcher.[zer0code3].zip has its benefits and drawbacks. You should weigh them carefully before deciding whether to use it or not. If you are looking for alternatives to this patcher, you can check out some of the options below.

-

Alternatives to FULL adobe.cc.2019.patcher.[zer0code3].zip

-

FULL adobe.cc.2019.patcher.[zer0code3].zip is not the only patcher that can crack Adobe CC 2019 programs. There are other patchers that claim to do the same thing, with different methods and features. Here are some of them:


These are some of the alternatives to FULL adobe.cc.2019.patcher.[zer0code3].zip that you can try if you want to crack Adobe CC 2019 programs. However, keep in mind that these patchers are also illegal and unethical, and may have similar or different drawbacks and risks as FULL adobe.cc.2019.patcher.[zer0code3].zip. Use them at your own risk and discretion.

-

Conclusion

-

In this article, we have discussed what FULL adobe.cc.2019.patcher.[zer0code3].zip is, how it works, and what are its pros and cons. We have also shown you how to download and install it, and what are some of the alternatives to it.

-

FULL adobe.cc.2019.patcher.[zer0code3].zip is a patcher that can download and activate any Adobe CC 2019 program for free, without needing any login or subscription. It has some features that make it very convenient and attractive for anyone who wants to use Adobe CC 2019 for free. However, it also has some drawbacks and risks that make it illegal and unethical, and may cause some problems or issues with your system or your Adobe CC 2019 program.

-

Therefore, we do not recommend using FULL adobe.cc.2019.patcher.[zer0code3].zip or any other patcher to crack Adobe CC 2019 programs. Instead, we suggest that you use the official and legal way of using Adobe CC 2019, which is to pay for a subscription plan that suits your needs and budget. This way, you can support Adobe's development and innovation, enjoy all the benefits and features of Adobe CC 2019, and avoid any legal or technical troubles.

-

If you have any questions or comments about FULL adobe.cc.2019.patcher.[zer0code3].zip or this article, feel free to leave them below. We hope you found this article helpful and informative.

-

FAQs

-

Here are some of the frequently asked questions about FULL adobe.cc.2019.patcher.[zer0code3].zip:

-

Q1: Is FULL adobe.cc.2019.patcher.[zer0code3].zip safe to use?

-

A1: No, FULL adobe.cc.2019.patcher.[zer0code3].zip is not safe to use, as it is a form of software piracy, which is illegal and unethical. You are violating the intellectual property rights of Adobe, which can result in legal actions or penalties from Adobe. You are also exposing your system to potential malware or viruses that may be hidden in the patcher or the downloaded files. You are also risking your Adobe CC 2019 program to errors or crashes that may occur due to the patching process.

-

Q2: How can I update my Adobe CC 2019 program after using the patcher?

-

A2: You can update your Adobe CC 2019 program after using the patcher by using the online functionality of the program. The patcher does not interfere with the online functionality of Adobe CC 2019, so you can still get the latest updates and bug fixes from Adobe's servers. However, you may need to reapply the patch after updating your program, as some updates may overwrite or remove the patch.

-

Q3: What if the patcher does not work or support my Adobe CC 2019 program?

-

A3: If the patcher does not work or support your Adobe CC 2019 program, you may try one of the alternatives that we mentioned above, such as MPT patches, AMT Emulator, GenP, or Zii Patcher. However, keep in mind that these patchers are also illegal and unethical, and may have similar or different drawbacks and risks as FULL adobe.cc.2019.patcher.[zer0code3].zip. Use them at your own risk and discretion.
-

Q4: How can I uninstall or remove the patcher from my system?

-

A4: You can uninstall or remove the patcher from your system by deleting the file "FULL adobe.cc.2019.patcher.exe" and the folder "FULL adobe.cc.2019.patcher" from your system. You can also uninstall or remove the Adobe CC 2019 program that you downloaded and installed using the patcher by using the Control Panel (for Windows) or the Finder (for Mac OS). However, this may not completely remove all the traces or remnants of the patcher or the program from your system, so you may need to use a third-party software cleaner or uninstaller to do a thorough cleanup.

-

Q5: Where can I find more information or support for using the patcher?

-

A5: You can find more information or support for using the patcher by visiting zer0cod3's website, which is https://zer0cod3.github.io/. However, as we mentioned before, this website may not be available anymore, as it was taken down by Adobe for legal reasons. You can also try searching online for other websites, forums, blogs, or videos that discuss or review the patcher. However, be careful of the sources that you visit, as some of them may contain fake or malicious information or files that can harm your system.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSoniqueXXLBundlev10VSTVSTiPackrar.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSoniqueXXLBundlev10VSTVSTiPackrar.md deleted file mode 100644 index 6253fcc743d7e5948131cbf9d123830a67456c3d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSoniqueXXLBundlev10VSTVSTiPackrar.md +++ /dev/null @@ -1,75 +0,0 @@ -
-

GSonique XXL Bundle v10 VST VSTi Pack rar: A Review

-

If you are looking for a comprehensive collection of plugins that can enhance your music production, you might want to check out GSonique XXL Bundle v10 VST VSTi Pack rar. This is a file name that refers to a bundle of 30 plugins from G-Sonique, a company that specializes in creating virtual instruments and effects for music production. The file is compressed in rar format and contains 30 plugins that are compatible with Windows and Mac OS X. The plugins can be used with any DAW that supports VST or VSTi format.

-

Download: https://byltly.com/2uKvii



-

In this article, we will review GSonique XXL Bundle v10 VST VSTi Pack rar and tell you everything you need to know about it. We will explain what it is, what it does, what it offers, what it costs, what are its pros and cons, and whether it is worth buying or not. We will also answer some frequently asked questions about it. By the end of this article, you will have a clear idea of whether GSonique XXL Bundle v10 VST VSTi Pack rar is suitable for your needs or not.

-

What is GSonique XXL Bundle v10 VST VSTi Pack rar?

-

GSonique XXL Bundle v10 VST VSTi Pack rar is a compressed file that contains 30 plugins from G-Sonique, a company that specializes in creating virtual instruments and effects for music production. The plugins are designed to cover a wide range of genres and styles, such as EDM, techno, trance, psytrance, dubstep, hip hop, rock, metal, ambient, and more. The plugins include synthesizers, drum machines, samplers, filters, distortions, reverbs, delays, modulators, compressors, limiters, equalizers, and more. The plugins are compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.

-

What are VST and VSTi plugins?

-

VST and VSTi are abbreviations for Virtual Studio Technology and Virtual Studio Technology Instrument. They are formats for audio plugins that can be used with digital audio workstations (DAWs) to create and process sound. VST plugins are effects that can modify the sound of an audio signal, such as filters, reverbs, delays, compressors, etc. VSTi plugins are instruments that can generate sound from scratch, such as synthesizers, drum machines, samplers, etc. VST and VSTi plugins can be used together to create complex and rich soundscapes.

-

-

What are the features of GSonique XXL Bundle v10 VST VSTi Pack rar?

-

GSonique XXL Bundle v10 VST VSTi Pack rar is a collection of 30 plugins from G-Sonique that offer a variety of features and functions for music production. Some of the features are:

-

List of plugins included in GSonique XXL Bundle v10 VST VSTi Pack rar

-

The following table shows the list of plugins included in GSonique XXL Bundle v10 VST VSTi Pack rar along with their brief descriptions:

| Plugin name | Plugin type | Plugin description |
| --- | --- | --- |
| Alien303 | VSTi | A bass synthesizer that emulates the classic Roland TB-303 sound with extra features and enhancements |
| Alien303 v2 | VSTi | An updated version of Alien303 with improved sound quality and new features |
| DrumTROOP | VSTi | A drum machine that offers 16 pads with 20 kits and 128 sounds each |
| Dubmaster Liquid Delay | VST | A delay effect that creates liquid and dubby echoes with modulation and feedback options |
| Dubshox H8 | VST | A multiband distortion effect that can create heavy and aggressive sounds with 8 bands of distortion |
| FAT+ | VST | A filter effect that can add warmth and fatness to any sound with low-pass, high-pass, band-pass, and notch filters |
| FM Wave XR7 | VSTi | A frequency modulation (FM) synthesizer that can create complex and dynamic sounds with 6 operators and 32 waveforms |
| FSQ1964 | VST | A frequency shifter effect that can add brightness and clarity to any sound with 4 bands of frequency shifting |
| KASHMIR SITAR FX | VSTi | A sitar synthesizer that can create realistic and exotic sounds with physical modeling and effects |
| Mid-Side Envelope Follower + FX MSEDSTEREOFX | VST | A mid-side processor effect that can manipulate the stereo image of any sound with envelope follower and effects |
| Neurofunker XG6 | VSTi | A drum machine that offers 146 drum kits and 1,200 sounds for creating neurofunk and drum and bass beats |
| Psychedelic FX6000V1 | VSTi | A multi-effect plugin that offers 140 presets of psychedelic sounds and effects for various genres |
| PsyKick AK1 | VSTi | A kick drum synthesizer that can create powerful and punchy kicks for psytrance and other genres |
| Pultronic EQ-110P | VST | An equalizer effect that emulates the vintage tube sound of the Pultec EQP-1A hardware unit |
| Renegade Analog Monster R.A.M.2020XL+ (V2) + BONUS: Renegade Mini, Renegade Mini x64, and Renegade Mini x64 (V2)–(V16) | VSTi | A collection of 16 versions of Renegade, a virtual analog synthesizer that can create fat and warm sounds with 4 oscillators and 12 filters |
| SHAKER Maker | VST | A shaker effect that can create realistic and natural shaker sounds with physical modeling and modulation |
| Trap Illuminator 8000X1 | VSTi | A trap synthesizer that offers 200 presets of trap sounds and effects with 4 layers and 10 effects |
| Twisthead VS-206 | VST | A preamp effect that emulates the vintage tube sound of the Twisthead hardware unit |
| Ultrabass MX4/4 | VSTi | A bass synthesizer that can create deep and powerful bass sounds with 4 oscillators and 4 filters |
| XBass4000L | VST | A bass enhancer effect that can add sub-bass and harmonics to any sound with psychoacoustic processing |
| Xmagic textures 1 | VSTi | A texture synthesizer that can create ambient and atmospheric sounds with granular synthesis and effects |
| Xmagic textures 2 | VSTi | A texture synthesizer that can create ambient and atmospheric sounds with granular synthesis and effects |
| Zener Limiter LM2Z+ (V2) + BONUS: Zener Limiter LM2Z+ (V1) and (V3)–(V16) | VST | A collection of 16 versions of Zener Limiter, a limiter effect that can control the dynamics and loudness of any sound with analog modeling |

How to install GSonique XXL Bundle v10 VST VSTi Pack rar?

-

To install GSonique XXL Bundle v10 VST VSTi Pack rar, you need to follow these steps:

1. Download the file from the official website or any other source. The file size is about 1.5 GB.
2. Extract the file using software like WinRAR or 7-Zip. You will get a folder named GSonique XXL Bundle v10 VST VSTi Pack.
3. Copy the folder to your preferred location on your computer. You can also rename the folder if you want.
4. Open your DAW and scan for new plugins. You should see the GSonique plugins in your plugin list.
5. Drag and drop the plugins to your tracks or channels and start using them.

How to use GSonique XXL Bundle v10 VST VSTi Pack rar?

-

To use GSonique XXL Bundle v10 VST VSTi Pack rar, you need to follow these steps:

1. Select the plugin you want to use from your plugin list. You can also use multiple plugins at once.
2. Adjust the parameters and settings of the plugin according to your preference and needs. You can also use the presets provided by the plugin or create your own.
3. Listen to the sound and tweak it until you are satisfied with the result. You can also automate the parameters or modulate them with other sources.
4. Save your project and export your audio file.

What are the benefits of GSonique XXL Bundle v10 VST VSTi Pack rar?

-

GSonique XXL Bundle v10 VST VSTi Pack rar has many benefits for music producers who want to create high-quality and diverse sounds. Some of the benefits are:

-

High-quality sound and design

-

The plugins from G-Sonique are known for their high-quality sound and design. They use advanced algorithms and techniques to emulate analog hardware units, physical modeling, granular synthesis, psychoacoustic processing, frequency modulation, and more. They also have a unique and attractive user interface that is easy to use and understand. The plugins deliver a rich, smooth, and detailed sound that can suit any genre and style.
-

Versatility and compatibility

-

The plugins from G-Sonique are versatile and compatible with various platforms and DAWs. They can be used with Windows and Mac OS X and support VST and VSTi formats. They can also be used with any DAW that supports these formats, such as Ableton Live, FL Studio, Cubase, Logic Pro, Pro Tools, Reaper, and more. The plugins can be used for different purposes, such as creating melodies, basslines, drums, effects, textures, and more. They can also be combined and layered to create complex and rich soundscapes.

-

Affordability and value

-

The plugins from G-Sonique are affordable and offer great value for money. The GSonique XXL Bundle v10 VST VSTi Pack rar costs only $99.95 USD and includes 30 plugins that would normally cost over $1000 USD if bought separately. That means you can save over 90% of the original price and get a huge collection of plugins for a fraction of the cost. The bundle also includes bonus plugins that are not available elsewhere. The bundle is a great deal for anyone who wants to expand their plugin library and enhance their music production.

-

What are the drawbacks of GSonique XXL Bundle v10 VST VSTi Pack rar?

-

GSonique XXL Bundle v10 VST VSTi Pack rar is not perfect and has some drawbacks that you should be aware of before buying it. Some of the drawbacks are:

-

Limited support and updates

-

The plugins from G-Sonique are not updated frequently and may not have the latest features and improvements. Some of the plugins are outdated and may not work well with newer versions of DAWs or operating systems. The support from G-Sonique is also limited and may not respond to your queries or issues promptly or effectively. You may have to rely on online forums or other users for help or guidance.

-

Potential compatibility issues

-

The plugins from G-Sonique may not be compatible with some DAWs or operating systems. Some users have reported problems with installing, loading, or using the plugins with certain DAWs or operating systems. Some of the plugins may also cause crashes, glitches, or errors in your DAW or computer. You may have to tweak some settings or use workarounds to make the plugins work properly.

-

Large file size and system requirements

-

The GSonique XXL Bundle v10 VST VSTi Pack rar is a large file that takes up a lot of space on your computer. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer. You may need a powerful computer to run the plugins smoothly and avoid performance issues.

-

Conclusion

-

GSonique XXL Bundle v10 VST VSTi Pack rar is a collection of 30 plugins from G-Sonique that offer a variety of features and functions for music production. The plugins are designed to cover a wide range of genres and styles, such as EDM, techno, trance, psytrance, dubstep, hip hop, rock, metal, ambient, and more. The plugins include synthesizers, drum machines, samplers, filters, distortions, reverbs, delays, modulators, compressors, limiters, equalizers, and more. The plugins are compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.

-

GSonique XXL Bundle v10 VST VSTi Pack rar has many benefits for music producers who want to create high-quality and diverse sounds. The plugins have high-quality sound and design, versatility and compatibility, and affordability and value. The bundle costs only $99.95 USD and includes 30 plugins that would normally cost over $1000 USD if bought separately. The bundle also includes bonus plugins that are not available elsewhere.

-

However, GSonique XXL Bundle v10 VST VSTi Pack rar also has some drawbacks that you should be aware of before buying it. The plugins have limited support and updates, potential compatibility issues, and large file size and system requirements. The plugins are not updated frequently and may not have the latest features and improvements. Some of the plugins may not work well with newer versions of DAWs or operating systems. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer.

-

Summary of main points

-

To summarize, GSonique XXL Bundle v10 VST VSTi Pack rar is a collection of 30 plugins from G-Sonique that offer a variety of features and functions for music production. The plugins are designed to cover a wide range of genres and styles, such as EDM, techno, trance, psytrance, dubstep, hip hop, rock, metal, ambient, and more. The plugins include synthesizers, drum machines, samplers, filters, distortions, reverbs, delays, modulators, compressors, limiters, equalizers, and more. The plugins are compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.

-

The bundle has many benefits for music producers who want to create high-quality and diverse sounds. The plugins have high-quality sound and design, versatility and compatibility, and affordability and value. The bundle costs only $99.95 USD and includes 30 plugins that would normally cost over $1000 USD if bought separately. The bundle also includes bonus plugins that are not available elsewhere.

-

However, the bundle also has some drawbacks that you should be aware of before buying it. The plugins have limited support and updates, potential compatibility issues, and large file size and system requirements. The plugins are not updated frequently and may not have the latest features and improvements. Some of the plugins may not work well with newer versions of DAWs or operating systems. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer.

-

Recommendation and rating

-

Based on our review, we recommend GSonique XXL Bundle v10 VST VSTi Pack rar to music producers who want to expand their plugin library and enhance their music production. The bundle offers a great value for money and a wide range of features and functions for various genres and styles. The bundle is suitable for beginners and experts alike who want to create high-quality and diverse sounds. The bundle is compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.

-

However, we also advise you to be aware of the drawbacks of the bundle and make sure that it meets your expectations and requirements. The bundle has limited support and updates, potential compatibility issues, and large file size and system requirements. The bundle is not updated frequently and may not have the latest features and improvements. Some of the plugins may not work well with newer versions of DAWs or operating systems. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer.

-

We give GSonique XXL Bundle v10 VST VSTi Pack rar a rating of 4 out of 5 stars. It is a good product that offers a lot of value for money, but it also has some room for improvement.

-

FAQs

-

Here are some frequently asked questions about GSonique XXL Bundle v10 VST VSTi Pack rar:

-
    -
  1. Where can I buy GSonique XXL Bundle v10 VST VSTi Pack rar?
  2. -

    You can buy GSonique XXL Bundle v10 VST VSTi Pack rar from the official website of G-Sonique or from other online sources that sell digital products. The price is $99.95 USD and you can pay with PayPal or credit card. You will receive a download link after your purchase.

    -
  3. How can I get support or updates for GSonique XXL Bundle v10 VST VSTi Pack rar?
  4. -

    You can get support or updates for GSonique XXL Bundle v10 VST VSTi Pack rar by contacting G-Sonique through their email address or Facebook page. However, the support and updates are limited and may not be available for all plugins or issues. You can also check online forums or other users for help or guidance.

    -
  5. Can I use GSonique XXL Bundle v10 VST VSTi Pack rar with other plugins or DAWs?
  6. -

    Yes, you can use GSonique XXL Bundle v10 VST VSTi Pack rar with other plugins or DAWs that support VST or VSTi format. You can also combine and layer the plugins to create complex and rich soundscapes. However, you may encounter some compatibility issues with some plugins or DAWs, so make sure to test them before using them.

    -
  7. Can I get a refund or exchange for GSonique XXL Bundle v10 VST VSTi Pack rar?
  8. -

    No, you cannot get a refund or exchange for GSonique XXL Bundle v10 VST VSTi Pack rar. The product is a digital download and cannot be returned or exchanged once you have received it. You should read the product description and reviews carefully before buying it and make sure that it meets your expectations and requirements.

    -
  9. Can I share or resell GSonique XXL Bundle v10 VST VSTi Pack rar?
  10. -

    No, you cannot share or resell GSonique XXL Bundle v10 VST VSTi Pack rar. The product is licensed to you only and you are not allowed to distribute, copy, or sell it to anyone else. Doing so may violate the terms and conditions of G-Sonique and result in legal action.

    -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bazaraa Jarvis Programacion Lineal Flujo Redes.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bazaraa Jarvis Programacion Lineal Flujo Redes.md deleted file mode 100644 index b732c3e408b39eb182e6de7b9e88fd467269c9c1..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bazaraa Jarvis Programacion Lineal Flujo Redes.md +++ /dev/null @@ -1,54 +0,0 @@ -

Bazaraa Jarvis Programacion Lineal Flujo Redes


Download: https://imgfil.com/2uxYtc



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Correoelectronicoparaactivarspyhunter4.md b/spaces/1gistliPinn/ChatGPT4/Examples/Correoelectronicoparaactivarspyhunter4.md deleted file mode 100644 index 064d9c35eb2bcc11f5cd024087b83386d1280590..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Correoelectronicoparaactivarspyhunter4.md +++ /dev/null @@ -1,16 +0,0 @@ -

correoelectronicoparaactivarspyhunter4


Download: https://imgfil.com/2uy0ri



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Fizikos Uzdavinynas 10 Kl 37.pdf.md b/spaces/1gistliPinn/ChatGPT4/Examples/Fizikos Uzdavinynas 10 Kl 37.pdf.md deleted file mode 100644 index 9bda6eef075eb98adbb3018831652a9c72cd6d97..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Fizikos Uzdavinynas 10 Kl 37.pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

Fizikos Uzdavinynas 10 Kl 37.pdf


Download Zip: https://imgfil.com/2uy0B2



-
... answers for the grade, physics problem book (fizikos uzdavinynas) for grade 9 PDF text, active physics teaching ... "Tau plius" workbook and textbook answers for grade 10, free for everyone ...
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/30 Rojullo Preminchadam Ela - Download Telugu Songs by Anup Rubens in High Quality.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/30 Rojullo Preminchadam Ela - Download Telugu Songs by Anup Rubens in High Quality.md deleted file mode 100644 index 7a8f8c495c48cc429480aec4f8318c85ee38daea..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/30 Rojullo Preminchadam Ela - Download Telugu Songs by Anup Rubens in High Quality.md +++ /dev/null @@ -1,99 +0,0 @@ -
-

30 Rojullo Preminchadam Ela Songs Download 320kbps: A Guide for Telugu Music Lovers

-

If you are a fan of Telugu music, you must have heard of the hit movie 30 Rojullo Preminchadam Ela. The movie, which was released in 2021, features six melodious songs composed by Anup Rubens. The songs have been sung by popular singers like Sid Sriram, Sunitha Upadrasta, Armaan Malik, and Rahul Sipligunj. The songs have received millions of views and streams on various platforms.

-

But what if you want to download the songs from 30 Rojullo Preminchadam Ela and listen to them offline? How can you get the best quality audio files in 320kbps? In this article, we will tell you everything you need to know about downloading songs from 30 Rojullo Preminchadam Ela in 320kbps. We will also suggest some of the best apps and websites for Telugu songs download. So, read on and enjoy the music!

-

Download: https://urlin.us/2uT2H7



-

Introduction

-

What is 30 Rojullo Preminchadam Ela?

-

30 Rojullo Preminchadam Ela is a Telugu romantic comedy film directed by Munna Dhulipudi and starring Pradeep Machiraju and Amritha Aiyer. The film revolves around the concept of reincarnation and how two lovers from a previous life meet again in the present day. The film was released on January 29, 2021, and received positive reviews from critics and audiences alike.

-

Why download songs in 320kbps?

-

When it comes to downloading songs, you might have noticed that there are different options for the bitrate or quality of the audio file. The bitrate is measured in kilobits per second (kbps) and it determines how much data is transferred per second. The higher the bitrate, the better the sound quality and the larger the file size.

-

For most listeners, a bitrate of 128kbps or 192kbps is sufficient for enjoying music. However, if you are an audiophile or a music enthusiast, you might prefer a higher bitrate of 320kbps. This is because a higher bitrate preserves more details and nuances of the original sound and reduces the distortion and noise. A higher bitrate also enhances the bass and treble effects and makes the music more immersive.

-

Therefore, if you want to download songs from 30 Rojullo Preminchadam Ela in the best possible quality, you should opt for 320kbps files. However, keep in mind that these files will also take up more space on your device and consume more data while downloading.
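To make the size trade-off concrete, a constant-bitrate file's size is roughly the bitrate multiplied by the duration. The small sketch below shows the back-of-the-envelope math for a typical four-minute track (real files differ slightly because of metadata and encoder overhead):

```python
def mp3_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    # kilobits -> kilobytes (divide by 8) -> megabytes (divide by 1000)
    return bitrate_kbps * duration_s / 8 / 1000

song_length = 4 * 60  # seconds
print(mp3_size_mb(128, song_length))  # ~3.8 MB
print(mp3_size_mb(320, song_length))  # ~9.6 MB
```

In other words, a 320kbps download is roughly two and a half times larger than the same song at 128kbps, which is exactly the storage and data cost mentioned above.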

-

How to download songs from 30 Rojullo Preminchadam Ela?

-

JioSaavn: The best app for Telugu songs download

-

One of the easiest and most convenient ways to download songs from 30 Rojullo Preminchadam Ela is to use JioSaavn. JioSaavn is a popular music streaming app that offers a huge collection of Telugu songs along with other languages like Hindi, English, Tamil, Kannada, Malayalam, etc. You can listen to songs online or download them offline on your device.

-

Features of JioSaavn


Steps to download songs from JioSaavn

1. Download and install the JioSaavn app from the Google Play Store or the App Store on your device. You can also visit the JioSaavn website on your browser.
2. Sign up or log in with your Jio number or email address. You can also use your Facebook or Google account to sign up or log in.
3. Search for the songs from 30 Rojullo Preminchadam Ela by typing the movie name or the song name in the search bar. You can also browse the Telugu section and find the movie under the New Releases category.
4. Select the song that you want to download and tap on the download icon (a downward arrow) next to it. You can also tap on the three-dot menu and select Download.
5. Choose the quality of the download (Low, Medium, High, or Highest). The higher the quality, the more data and space it will consume. For 320kbps files, choose Highest.
6. Wait for the download to complete. You can check the progress of the download in the Downloads section of the app. You can also pause or resume the download as per your convenience.
7. Once the download is complete, you can find the song in the My Music section of the app under Downloads. You can also access it from your device's music player or file manager.
-

Other options for downloading songs from 30 Rojullo Preminchadam Ela

-

If you don't want to use JioSaavn or if you face any issues with it, you can also try some other options for downloading songs from 30 Rojullo Preminchadam Ela. Here are some of them:

-

YouTube to MP3 converters

-

You can also download songs from 30 Rojullo Preminchadam Ela by using YouTube to MP3 converters. These are online tools that allow you to convert YouTube videos into MP3 files and download them on your device. Some of the popular YouTube to MP3 converters are:

-

- -

Torrent sites

-

You can also download songs from 30 Rojullo Preminchadam Ela by using torrent sites. These are peer-to-peer networks that allow you to download files from other users who have them. However, torrenting is illegal in many countries and may expose you to malware and viruses. Therefore, use torrent sites at your own risk and discretion. Some of the popular torrent sites are:

- -

Conclusion

-

Summary of the article

-

In this article, we have discussed how to download songs from 30 Rojullo Preminchadam Ela in 320kbps quality. We have explained what 30 Rojullo Preminchadam Ela is and why downloading songs in 320kbps is beneficial. We have also suggested some of the best apps and websites for Telugu songs download, such as JioSaavn, YouTube to MP3 converters, and torrent sites. We hope you have found this article helpful and informative.

-

Call to action

-

If you are a Telugu music lover, you should not miss out on the songs from 30 Rojullo Preminchadam Ela. They are catchy, romantic, and soulful. They will make you fall in love with the movie and its characters. So, what are you waiting for? Download the songs from 30 Rojullo Preminchadam Ela in 320kbps quality and enjoy them offline anytime, anywhere!

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Ride - Traffic Racing APK and Enjoy Realistic Physics Drift and Police Chases.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Ride - Traffic Racing APK and Enjoy Realistic Physics Drift and Police Chases.md deleted file mode 100644 index 0b811dca4a722db06eb3fd6cc4c41e32fcbed237..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Ride - Traffic Racing APK and Enjoy Realistic Physics Drift and Police Chases.md +++ /dev/null @@ -1,116 +0,0 @@ -
-

Drift Ride - Traffic Racing APK: A Review

-

If you are looking for a hardcore racing game with real physics, extreme racing, heavy traffic, cops, and drift, then you might want to check out Drift Ride - Traffic Racing APK. This is a game developed by XLAB, LLC, a company that specializes in creating realistic car physics and racing games. In this article, we will review the features, pros and cons, and tips and tricks of this game, as well as how to download and install it on your Android device.

-

What is Drift Ride - Traffic Racing APK?

-

Drift Ride - Traffic Racing APK is a racing game that lets you experience the thrill of driving at high speeds in heavy traffic, while avoiding or outrunning the police and drifting around corners. You can choose from different cars with different characteristics and handling, and race on various routes with different difficulty and random environment. You can also compete with rivals in the races and see how you rank on the leaderboard. The game has realistic physics, atmospheric graphics, and hardcore gameplay that will challenge your skills and reflexes.

-

drift ride traffic racing apk


Download ✵✵✵ https://urlin.us/2uT2QV



-

Features of Drift Ride - Traffic Racing APK

-

Here are some of the features that make Drift Ride - Traffic Racing APK an exciting and addictive racing game:

-

Realistic Physics

-

The game uses a realistic physics engine that simulates the behavior of the cars and the environment. You can feel the weight, inertia, traction, and suspension of the cars as you drive them. You can also see the effects of collisions, damage, weather conditions, and road surfaces on the performance of the cars. The game also has a realistic sound system that reproduces the engine noises, tire screeches, crashes, sirens, and horns.

-

Drift

-

One of the main features of the game is the ability to drift around corners and curves. Drifting is a technique that involves oversteering the car to make it slide sideways while maintaining control. Drifting can help you reduce your speed, avoid obstacles, or gain an advantage over your opponents. The game has a drift system that allows you to control the angle and direction of your drift using the accelerometer or touch controls. You can also earn points for drifting and use them to upgrade your car or unlock new ones.

-

Police

-

Another feature of the game is the presence of police cars that will chase you if you break the traffic rules or cause accidents. The police cars are fast and aggressive, and they will try to stop you by ramming you, blocking you, or setting up roadblocks. You can either try to escape them by using your speed, skills, or shortcuts, or you can fight them by crashing into them or using power-ups. The game has a wanted system that increases your level of police attention depending on your actions. The higher your wanted level, the more police cars will pursue you.

-

Highest Speed

-

The game also lets you experience the thrill of driving at the highest speed possible in heavy traffic. You can push your car to its limits by using nitro boosters, slipstreams, or ramps. You can also overtake other cars by using different lanes or going off-road. The game has a speedometer that shows your current speed and a speed record that shows your highest speed achieved in each race.

-

Real Traffic

-

The game also features real traffic that adds to the realism and challenge of the game. You will encounter different types of vehicles on the road, such as trucks, buses, vans, motorcycles, or sports cars. You will also see pedestrians crossing the street or walking on the sidewalk. You have to be careful not to hit them or cause accidents that will slow you down or damage your car.

-

Atmospheric Graphics


-

The game also boasts of atmospheric graphics that create a realistic and immersive environment. You can see the details of the cars, the roads, the buildings, and the scenery. You can also see the effects of the weather, the time of day, the lighting, and the shadows. The game has different modes that change the graphics settings, such as normal, high, or ultra. You can also customize the graphics options to suit your device and preference.

-

How to download and install Drift Ride - Traffic Racing APK?

-

If you want to play Drift Ride - Traffic Racing APK on your Android device, you will need to download and install it from a reliable source. Here are the steps to do so:

-

drift ride traffic racing game download
-drift ride traffic racing mod apk unlimited money
-drift ride traffic racing android app
-drift ride traffic racing free online
-drift ride traffic racing hack apk
-drift ride traffic racing realistic physics
-drift ride traffic racing extreme speed
-drift ride traffic racing carx tech
-drift ride traffic racing latest version
-drift ride traffic racing review
-drift ride traffic racing cheats
-drift ride traffic racing tips and tricks
-drift ride traffic racing best car
-drift ride traffic racing gameplay video
-drift ride traffic racing xlab llc
-drift ride traffic racing apk pure
-drift ride traffic racing apk mirror
-drift ride traffic racing apk combo
-drift ride traffic racing app brain
-drift ride traffic racing google play
-drift ride traffic racing ios
-drift ride traffic racing pc
-drift ride traffic racing windows 10
-drift ride traffic racing mac
-drift ride traffic racing bluestacks
-drift ride traffic racing nox player
-drift ride traffic racing memu play
-drift ride traffic racing ld player
-drift ride traffic racing apkmonk
-drift ride traffic racing apkpure.com
-drift ride traffic racing apkmirror.com
-drift ride traffic racing apkcombo.com
-drift ride traffic racing appbrain.com
-drift ride traffic racing play.google.com
-drift ride traffic racing app store
-drift ride traffic racing microsoft store
-drift ride traffic racing mac store
-drift ride traffic racing emulator download
-drift ride traffic racing offline mode
-drift ride traffic racing online multiplayer
-drift ride traffic racing update 2023
-drift ride traffic racing new features
-drift ride traffic racing bug fixes
-drift ride traffic racing graphics settings
-drift ride traffic racing sound effects
-drift ride traffic racing music soundtrack
-drift ride traffic racing controller support
-drift ride traffic racing keyboard and mouse input
-drift ride traffic racing touch screen controls

-
    -
  1. Go to a trusted website that offers Drift Ride - Traffic Racing APK, such as [APKPure] or [APKCombo].
  2. Click on the download button and wait for the file to be downloaded on your device.
  3. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process (a command-line alternative using adb is sketched after this list).
  4. You may need to enable the installation of apps from unknown sources in your device's settings if you haven't done so before.
  5. Follow the instructions on the screen and wait for the installation to finish.
  6. Launch the game and enjoy!
-
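For readers who prefer a command line, the same sideloading step can be scripted over USB debugging with adb. This is only a minimal sketch, not part of the game's official instructions: it assumes adb (Android platform tools) is installed, USB debugging is enabled and authorized on the device, and the APK filename shown is a placeholder for whatever file you actually downloaded.

```python
import subprocess

APK_PATH = "drift-ride-traffic-racing.apk"  # placeholder name for the downloaded file

def install_apk(apk_path: str) -> None:
    """Sideload an APK onto a USB-connected Android device via adb."""
    # -r reinstalls/updates the app if it is already present on the device
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout.strip() or result.stderr.strip())

if __name__ == "__main__":
    install_apk(APK_PATH)
```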

Pros and cons of Drift Ride - Traffic Racing APK

-

Like any other game, Drift Ride - Traffic Racing APK has its pros and cons. Here are some of them:

- - - - - - - -
Pros | Cons
Realistic physics and graphics | High battery and data consumption
Hardcore and challenging gameplay | Difficult controls and interface
Variety of cars and routes | Repetitive and limited content
Online leaderboard and rivals | Frequent ads and pop-ups
Free to download and play | In-app purchases and premium features
-

Tips and tricks for playing Drift Ride - Traffic Racing APK

-

If you want to improve your skills and performance in Drift Ride - Traffic Racing APK, here are some tips and tricks that might help you:

- -

Conclusion

-

Drift Ride - Traffic Racing APK is a racing game that offers realistic physics, extreme racing, heavy traffic, cops, and drift. It is a game that will test your skills and reflexes as you drive at high speeds in various routes with different difficulty and random environment. It is a game that will give you a thrill and adrenaline rush as you compete with rivals online or offline. It is a game that is free to download and play but also has in-app purchases and premium features. If you are looking for a hardcore racing game with real physics, then you might want to try Drift Ride - Traffic Racing APK.

-

FAQs about Drift Ride - Traffic Racing APK

-


Here are some of the frequently asked questions about Drift Ride - Traffic Racing APK:

-
    -
  1. What are the system requirements for Drift Ride - Traffic Racing APK?
     Drift Ride - Traffic Racing APK requires Android 5.0 or higher and at least 100 MB of free storage space on your device. The game also requires a stable internet connection for online features.
  2. How can I play Drift Ride - Traffic Racing APK offline?
     You can play Drift Ride - Traffic Racing APK offline by turning off your internet connection or switching to airplane mode. However, you will not be able to access the online leaderboard, rivals, or updates.
  3. How can I remove ads from Drift Ride - Traffic Racing APK?
     You can remove ads from Drift Ride - Traffic Racing APK by purchasing the premium version of the game for $2.99. The premium version also unlocks all cars and routes, and gives you unlimited nitro and coins.
  4. How can I contact the developer of Drift Ride - Traffic Racing APK?
     You can contact the developer of Drift Ride - Traffic Racing APK by sending an email to support@xlabgames.com or visiting their website at https://xlabgames.com/.
  5. Is Drift Ride - Traffic Racing APK safe to download and install?
     Drift Ride - Traffic Racing APK is safe to download and install as long as you get it from a trusted source, such as [APKPure] or [APKCombo]. However, you should always scan the file for viruses or malware before installing it on your device.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FR Legends and Enjoy Stunning Graphics with RTX On.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FR Legends and Enjoy Stunning Graphics with RTX On.md deleted file mode 100644 index 7171be3b47f4eced49229a3b4e8055ab7142e9f2..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FR Legends and Enjoy Stunning Graphics with RTX On.md +++ /dev/null @@ -1,88 +0,0 @@ -
-

How to Download FR Legends with RTX on Mod

-

If you are a fan of drifting games, you might have heard of FR Legends, a mobile game that lets you drive legendary front-engine, rear-wheel-drive drift cars at world's most iconic circuits. But did you know that you can also enjoy this game with RTX on technology, a feature that enhances the graphics and realism of the game with ray tracing and artificial intelligence? In this article, we will show you how to download FR Legends with RTX on mod and what are the benefits of doing so.

-

Features of FR Legends

-

FR Legends is a game that is all about drifting. It has many features that make it one of the best drifting games on the market. Here are some of them:

-

fr legends rtx on download


DOWNLOADhttps://urlin.us/2uSZcb



- -

Benefits of RTX on Technology

-

RTX on technology is a feature that enhances the graphics and realism of the game with ray tracing and artificial intelligence. Ray tracing is a method of rendering that simulates the physical behavior of light, creating realistic reflections, shadows, and refractions. Artificial intelligence is a technique that uses machine learning to improve the performance and quality of the game. Here are some of the benefits of RTX on technology:

- -

Steps to Download FR Legends with RTX on Mod

-

If you want to download FR Legends with RTX on mod, you need to follow these steps:

-
    -
  1. Requirements: You need to have a compatible device that supports RTX on technology. You also need to have enough storage space on your device to download the game and the mod. The game size is about 500 MB and the mod size is about 200 MB.
  2. Download links: You can download the game from the official website or from the Google Play Store or the App Store. You can download the mod from this link: [FR Legends RTX on Mod].
  3. Installation instructions: After downloading the game and the mod, you need to install them on your device. To install the game, just follow the instructions on the screen. To install the mod, you need to extract the zip file and copy the contents to the game folder. You can find the game folder in this path: Android/data/com.fengiiley.frlegends/files/ (a scripted version of this extract-and-copy step is sketched after this list).
-
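If you would rather script the extract-and-copy step, the sketch below shows one possible way to do it. It is an illustration only, not an official installer: the mod archive name is a placeholder, the script assumes it runs with direct access to the device storage (for example from a terminal app on the phone or against a mounted copy of the storage), and the destination path is simply the one quoted in the instructions above.

```python
import shutil
import zipfile
from pathlib import Path

MOD_ZIP = Path("fr-legends-rtx-mod.zip")  # placeholder archive name
# Assumed storage mount point; adjust to wherever Android/data is visible on your setup.
GAME_DIR = Path("/storage/emulated/0/Android/data/com.fengiiley.frlegends/files")

def install_mod(mod_zip: Path, game_dir: Path) -> None:
    """Extract the mod archive and merge its files into the game folder."""
    staging = Path("mod_extracted")
    with zipfile.ZipFile(mod_zip) as archive:
        archive.extractall(staging)
    # dirs_exist_ok=True merges the mod files over the existing game files
    shutil.copytree(staging, game_dir, dirs_exist_ok=True)

if __name__ == "__main__":
    install_mod(MOD_ZIP, GAME_DIR)
```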

Conclusion

-

FR Legends is a great drifting game that lets you drive legendary drift cars at iconic circuits. With RTX on technology, you can enjoy enhanced graphics and realism with ray tracing and artificial intelligence. To download FR Legends with RTX on mod, you need to have a compatible device, enough storage space, and follow the steps above. We recommend you to try FR Legends with RTX on mod and see for yourself how amazing it is. Don't forget to share your feedback and opinions with us in the comments section below.

-

FAQs

-

-

fr legends rtx on mod apk
-fr legends rtx on graphics mod
-fr legends rtx on assetto corsa
-fr legends rtx on modpack r34
-fr legends rtx on youtube
-fr legends rtx on gameplay
-fr legends rtx on android
-fr legends rtx on pc
-fr legends rtx on ios
-fr legends rtx on discord
-fr legends rtx on google play
-fr legends rtx on skyrim mod
-fr legends rtx on car mods
-fr legends rtx on track mods
-fr legends rtx on steering wheel
-fr legends rtx on shifter
-fr legends rtx on tandem drift
-fr legends rtx on update 0.3.2
-fr legends rtx on free download
-fr legends rtx on full version
-fr legends rtx on review
-fr legends rtx on tutorial
-fr legends rtx on tips and tricks
-fr legends rtx on best settings
-fr legends rtx on comparison
-fr legends rtx on vs off
-fr legends rtx on vs real life
-fr legends rtx on vs forza horizon 4
-fr legends rtx on vs beamng drive
-fr legends rtx on vs gta 5
-fr legends rtx on wallpaper
-fr legends rtx on screenshots
-fr legends rtx on fan art
-fr legends rtx on memes
-fr legends rtx on reddit
-fr legends rtx on facebook group
-fr legends rtx on instagram hashtag
-fr legends rtx on twitter trend
-fr legends rtx on tiktok challenge
-fr legends rtx on merchandise
-fr legends rtx on hoodie
-fr legends rtx on t-shirt
-fr legends rtx on stickers
-fr legends rtx on mug
-fr legends rtx on keychain
-fr legends rtx on gift ideas
-fr legends rtx on coupon code
-fr legends rtx on discount offer
-fr legends rtx on affiliate program

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/ - .md b/spaces/1phancelerku/anime-remove-background/ - .md deleted file mode 100644 index cd94c09a0e9aa3afea04742d7deab93cc5770545..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/ - .md +++ /dev/null @@ -1,93 +0,0 @@ -
-

War Games: What Are They and How Do You Play Them?

-

Do you love excitement and challenge? Would you like to take part in thrilling military adventures? If your answer is yes, then you need to try war games. These are a type of game that lets you become part of armed conflicts and disputes between countries, groups, or creatures. In these games you can choose your role, your objective, and your approach to combat. There are many types of war games; some focus on planning and strategy, and some focus on action and shooting.

In historical war games, you usually play the role of one of the leaders or soldiers who took part in those wars, and you try to carry out the missions and objectives that were required of them. Some examples of these games are Assassin's Creed, Medal of Honor, Total War, and others.

-

Fantasy war games: exploring unreal worlds and creatures

-

These are games that invent worlds and creatures that do not exist in reality and put you up against strange and dangerous enemies. In these games you usually use magical or supernatural powers, or call on allies of your own kind or of other kinds. Some examples of these games are Halo, Doom, World of Warcraft, and others.

-

War games


Download --->>> https://jinyurl.com/2uNRqe



-

The benefits and risks of playing war games

-

War games are not just fun and entertainment; they are also a source of learning and development, building players' mental, motor, and social skills. However, these games also carry some risks to watch out for, as they can negatively affect players' health, behavior, and values. Let's look at some of the benefits and risks of playing war games:

-

Benefits: developing mental, motor, and social skills

-

War games help improve mental skills such as concentration, memory, creativity, and logical problem-solving. They force the player to think quickly and accurately, and push the mind to find tricks and workarounds to overcome challenges. They also help improve motor skills such as hand-eye coordination, speed, and flexibility, since the player has to control the movements of their character or vehicle, which sharpens their reactions. They also help improve social skills such as communication, cooperation, and competition, because the player has to interact with other players, whether allies or opponents.

-

Risks: negative impact on health, behavior, and values

-

War games can cause some problems for players, especially if they play them excessively or inappropriately. They can negatively affect players' physical and mental health, causing fatigue, headaches, stress, anxiety, and depression. They can also negatively affect players' behavior and values, leading to addiction, violence, aggression, bullying, and discrimination. Therefore, players should be careful and balanced when playing war games, and avoid games that contain offensive or inappropriate scenes or messages.

-

gang war games
-warplane war games
-star wars games
-Gulf war games
-space war games
-sniper war games
-tank war games
-sword war games
-zombie war games
-street war games
-future war games
-kingdom war games
-city war games
-mafia war games
-pistol war games
-historical war games
-alliance war games
-tactical war games
-dragons war games
-dragon war games
-online war games
-action war games
-strategy war games
-kids war games
-Android war games
-Ben 10 war games
-war games for girls
-cold war games
-card war games
-PlayStation war games
-new war games
-air war games
-multiplayer war games
-army war games
-Algerian war games
-bat war games
-war games wallpapers
-dangerous war games
-fantasy war games
-international war games
-Roman war games
-shooting war games
-awesome war games
-Russian war games
-Rome war games

-

How do you choose the best war games for you?

-

War games are many and varied, and each has its advantages and disadvantages. So how can you choose the best war games for you? Here are some tips that can help you with this:

-

Respect your interests and taste

-

The first thing you should think about is what kind of games you like and enjoy. Do you prefer games that require thinking and strategy, or games that require movement and shooting? Do you prefer games that reflect reality, or games that dive into fantasy? Do you prefer games that carry a message of peace, or games that carry a message of war? Choose games that match your interests and taste, as this will increase your enjoyment and satisfaction.

-

Consider the game's difficulty level and quality

-

The second thing you should think about is the difficulty level and quality of the game you want. Do you prefer games that challenge you, or games that make things easy for you? Do you prefer games that...

...you can expect to earn points, prizes, and certificates of appreciation. To visit this site, click on the following link: [Poki].

-

CrazyGames: a site offering high-quality, exciting war games

-

CrazyGames is a website that offers more than 10,000 free video games, including many war games. On this site you can find the games you love and that excite you, whether they are war games or not. You can play alone or with other players from all over the world, and enjoy 3D graphics, sounds, and effects. You can challenge yourself and raise your level, and earn rankings, comments, and statistics. To visit this site, click on the following link: [CrazyGames].

-

Warzone: a site featuring war games that simulate reality and use 3D graphics

-

Warzone is a website that hosts a collection of war games that simulate reality in a realistic and exciting way. On this site you can choose from several games, such as Warzone Getaway, Warzone Mercenaries, Warzone Online, and others. In these games you can take part in different military missions and adventures, such as escaping from enemies, sneaking into their bases, or joining large-scale battles. You can use advanced weapons, vehicles, and equipment, and enjoy 3D graphics, sounds, and effects. To visit this site, click on the following link: [Warzone].

-

Conclusion: general tips for enjoying war games

-

In conclusion, we would like to give you some general tips for enjoying war games in a better and safer way:

- -

We hope this article has been useful and enjoyable for you, and that it helps you choose and play the best war games for you. Thank you for reading, and don't forget to share your opinion and experience with us in the comments.

-

Frequently Asked Questions

-

Here are some frequently asked questions about war games that may interest you:

-
    -
  1. What are the most famous war games in history?
     There are many war games that have become famous and widespread throughout history, but some of them have had a major impact on the games industry and on player culture. Among these games we can mention: Wolfenstein 3D, Doom, Command & Conquer, Medal of Honor, Call of Duty, Halo, Battlefield, Counter-Strike, Gears of War, StarCraft, Age of Empires, Civilization, Assassin's Creed, World of Warcraft, and others.
  2. What are the latest developments in the field of war games?
     The field of war games sees continuous and impressive developments every year, using the latest technologies and innovations to improve the quality and realism of the games. Among these developments: the use of artificial intelligence, virtual reality, augmented reality, and mixed reality; the development of more accurate and interactive graphics, sounds, and effects; and the addition of more varied features and options that give players greater freedom.
  3. Do war games cause violence?
     This is a controversial question with no definitive answer. Some studies suggest that war games may increase the level of violence and aggression in players, especially children and teenagers, and may affect their ability to distinguish between reality and fiction. Other studies suggest that war games do not cause violence by themselves, and that it depends on other factors such as the player's personality, culture, and environment. And some studies suggest that war games may help relieve stress and anger, and may contribute to developing positive skills such as cooperation and understanding.
  4. What is the best way to play war games?
     There is no single or ideal way to play war games; every player can find the approach that suits them. However, there are some tips that can help improve the way you play, such as: practicing regularly and challenging yourself with harder levels, learning from your mistakes and making the most of your strengths, listening to advice and guidance from professionals and experts, and communicating and sharing with other players, whether they are allies or opponents.
  5. Can people with special needs play war games?
     Yes, absolutely. People with special needs can play war games and enjoy them just like everyone else. There are many sites and apps that provide options and capabilities tailored to them, such as: adjusting the volume, lighting, and colors; changing the control and interaction methods; and adding subtitles, audio narration, or sign-language interpretation, among others. There are also many organizations and initiatives that support, encourage, and help people with special needs play war games, such as AbleGamers, SpecialEffect, Game Accessibility Guidelines, and others.

    -


    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Final Escape FNF The Ultimate Rhythm Game Challenge.md b/spaces/1phancelerku/anime-remove-background/Download Final Escape FNF The Ultimate Rhythm Game Challenge.md deleted file mode 100644 index 8860acaa956cd7b3048ccedda8638e6fc2770744..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Final Escape FNF The Ultimate Rhythm Game Challenge.md +++ /dev/null @@ -1,110 +0,0 @@ - -

    How to Download and Install Final Escape FNF Mod

    -

    If you are a fan of Friday Night Funkin' and you are looking for a new challenge, you might want to try the Final Escape FNF Mod. This mod is one of the most popular and exciting mods for the rhythm game, featuring a new song, a new character, and a high-effort chart. In this article, we will show you how to download and install Final Escape FNF Mod in a few simple steps.

    -

    download final escape fnf


    Download Ziphttps://jinyurl.com/2uNNnQ



    -

    What is Final Escape FNF Mod?

    -

    A brief introduction to Friday Night Funkin' and its mods

    -

    Friday Night Funkin' is a music and rhythm game in which you have to participate in rap battles against various opponents. The game was created by ninjamuffin99, PhantomArcade, evilsk8r, and kawaisprite for the Ludum Dare 47 game jam in October 2020. Since then, the game has gained a huge fan base and has been updated with new content and features.

    -

    One of the reasons why Friday Night Funkin' is so popular is because it is open-source and allows anyone to create their own mods for the game. Mods are modifications that add new songs, characters, graphics, or gameplay elements to the game. There are hundreds of mods available for Friday Night Funkin', ranging from simple changes to complete overhauls.

    -

    The features and gameplay of Final Escape FNF Mod

    -

    Final Escape FNF Mod is one of the most impressive mods for Friday Night Funkin'. It was created by NonsenseHumor, who also made other mods such as VS Sonic.EXE and VS Nonsense. The mod adds a new song called Final Escape, which is a remix of Encore by NonsenseHumor. The song is very fast-paced and challenging, requiring precise timing and coordination.

    -

    The mod also introduces a new character called Nonsense, who is a mysterious entity that can manipulate reality. Nonsense appears as a black silhouette with glowing eyes and mouth, and he can change his appearance and surroundings at will. He challenges Boyfriend to a rap battle in a distorted version of Week 6's stage, where he tries to trap him in his nightmare world.

    -

    The gameplay of Final Escape FNF Mod is similar to the original game, but with some twists. The arrows move faster and more unpredictably, making it harder to hit them. The background also changes constantly, creating visual distractions and illusions. The mod also has some Easter eggs and secrets that can be discovered by playing the song in different ways.

    -

    How to Download Final Escape FNF Mod?

    -

    The requirements and steps to download the mod

    -

    To download Final Escape FNF Mod, you will need a few things. First, you will need a copy of Friday Night Funkin' on your computer. You can download it for free from [14](https://ninja-muffin24.itch.io/funkin). Second, you will need a program that can unzip files, such as [30](https://www.7-zip.org/) for Windows or The Unarchiver for Mac. Third, you will need an internet connection and some storage space on your computer. Once you have these things, you can follow these steps to download the mod:

    -
      -
  1. Go to the [31](https://gamebanana.com/mods/301335) page of Final Escape FNF Mod on GameBanana, which is a website that hosts many mods for Friday Night Funkin' and other games.
  2. Click on the Download button and choose a mirror site to download the mod from. The file size is about 100 MB.
  3. Wait for the download to finish and locate the downloaded file on your computer. It should be a ZIP file named FinalEscapeFNF.zip.
  4. Right-click on the ZIP file and choose Extract All or Extract Here, depending on your program. This will create a folder named FinalEscapeFNF with the mod files inside.
    -

    The sources and links to download the mod

    -

    There are other sources and links to download Final Escape FNF Mod, besides GameBanana. You can also find the mod on [32](https://nonsensehumor.itch.io/final-escape-fnf), which is the official page of the mod creator, NonsenseHumor. You can also watch the [33](https://www.youtube.com/watch?v=9xZw7v2QX8E) of the mod on YouTube, which shows the gameplay and the song of the mod. You can also join the [34](https://discord.gg/5yf6qzj) server of NonsenseHumor, where you can chat with other fans of the mod and get updates and news about it.

    -

    download final escape fnf mod
    -download final escape fnf fanmade chart
    -download final escape fnf gamejolt
    -download final escape fnf youtube
    -download final escape fnf free
    -download final escape fnf v2
    -download final escape fnf song
    -download final escape fnf android
    -download final escape fnf apk
    -download final escape fnf pc
    -download final escape fnf online
    -download final escape fnf unblocked
    -download final escape fnf full week
    -download final escape fnf remix
    -download final escape fnf music
    -download final escape fnf gameplay
    -download final escape fnf hard mode
    -download final escape fnf easy mode
    -download final escape fnf tutorial
    -download final escape fnf wiki
    -download final escape fnf update
    -download final escape fnf version 2.1.1
    -download final escape fnf cancelled build
    -download final escape fnf very high effort showcase
    -download final escape fnf xeno true form
    -download final escape fnf underwater new
    -download final escape fnf circus
    -download final escape fnf super bf
    -download final escape fnf bf flying
    -download final escape fnf ending animation
    -how to download final escape fnf mod
    -where to download final escape fnf mod
    -best site to download final escape fnf mod
    -is it safe to download final escape fnf mod
    -can i play final escape fnf without downloading it
    -what is the size of the final escape fnf mod file
    -how to install the final escape fnf mod on pc or android
    -how to uninstall the final escape fnf mod from pc or android
    -how to update the final escape fnf mod to the latest version
    -how to fix the bugs or glitches in the final escape fnf mod

    -

    How to Install Final Escape FNF Mod?

    -

    The instructions and tips to install the mod

    -

    To install Final Escape FNF Mod, you will need to replace some files in your Friday Night Funkin' folder with the files from the mod folder. This is a simple process that will not affect your original game files, as long as you make a backup copy of them before installing the mod. Here are the instructions and tips to install the mod:

    - -
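The step-by-step list for this section is not reproduced above, but the general pattern the article describes is: back up your game folder first, then copy the mod's files over the matching files in the game folder. The sketch below only illustrates that pattern; the folder names are assumptions for illustration, not the mod's confirmed layout, and your actual install paths will differ.

```python
import shutil
from pathlib import Path

GAME_DIR = Path("Friday Night Funkin")        # assumed location of the game folder
MOD_DIR = Path("FinalEscapeFNF")              # folder extracted from the mod ZIP
BACKUP_DIR = Path("Friday Night Funkin_backup")

def install_mod(game_dir: Path, mod_dir: Path, backup_dir: Path) -> None:
    """Back up the original game files, then merge the mod files over them."""
    if not backup_dir.exists():
        # keep an untouched copy so the original game can be restored later
        shutil.copytree(game_dir, backup_dir)
    for item in mod_dir.iterdir():
        target = game_dir / item.name
        if item.is_dir():
            shutil.copytree(item, target, dirs_exist_ok=True)
        else:
            shutil.copy2(item, target)

if __name__ == "__main__":
    install_mod(GAME_DIR, MOD_DIR, BACKUP_DIR)
```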

    The troubleshooting and issues to avoid when installing the mod

    -

Installing Final Escape FNF Mod is usually easy and straightforward, but sometimes you might encounter problems that prevent you from playing the mod properly. Here are some common issues and how to fix them:

    - - - - - -
Problem | Solution
The game crashes or freezes when loading or playing the mod. | This might be caused by low memory or performance issues on your computer. Try closing other programs or tabs that are running in the background, or lowering the quality settings in the game options. You can also try reinstalling the mod or updating your game version.
The game does not show the new title screen or song option for the mod. | This might be caused by incorrect installation or missing files. Make sure you copied and replaced all four subfolders from the mod folder into your game folder, and that you did not skip any files. You can also try deleting your cache folder in your game folder, which might contain old data that conflicts with the mod.
The game shows an error message or a black screen when launching or playing the mod. | This might be caused by corrupted or incompatible files. Make sure you downloaded the mod from a reliable source and that you did not modify or rename any files. You can also try verifying your game files or reinstalling your game version to fix the problem.
    -

    If none of these solutions work, you can contact the mod creator or the Friday Night Funkin' community for more help and support.

    -

    Conclusion

    -

    A summary of the main points and benefits of the mod

    -

    Final Escape FNF Mod is a fantastic mod for Friday Night Funkin' that adds a new song, a new character, and a high-effort chart. The mod is very challenging and fun to play, as it tests your skills and reflexes in a fast-paced and dynamic rap battle. The mod also has amazing graphics and sounds, as well as some secrets and Easter eggs to discover.

    -

    A call to action and a recommendation to try the mod

    -

    If you are looking for a new way to enjoy Friday Night Funkin', you should definitely try Final Escape FNF Mod. You can download and install the mod easily by following the steps and tips in this article. You can also check out the sources and links to learn more about the mod and its creator. Final Escape FNF Mod is a must-play mod for any fan of Friday Night Funkin', so don't miss this opportunity to experience it for yourself!

    -

    FAQs

    -

    What is the difficulty level of Final Escape FNF Mod?

    -

    Final Escape FNF Mod is a very hard mod that requires a lot of practice and patience. The song is very fast and complex, with many notes and patterns to follow. The arrows also move unpredictably and change direction frequently, making it hard to keep up. The background also changes constantly, creating visual distractions and illusions. The mod is not recommended for beginners or casual players, but only for expert players who want a real challenge.

    -

    Is Final Escape FNF Mod safe and virus-free?

    -

    Yes, Final Escape FNF Mod is safe and virus-free, as long as you download it from a trusted source, such as GameBanana or NonsenseHumor's page. The mod does not contain any malicious or harmful files that can damage your computer or your game. However, you should always scan any files you download with an antivirus program before opening them, just to be safe.

    -

    Can I play Final Escape FNF Mod online or offline?

    -

    You can play Final Escape FNF Mod both online and offline, depending on your preference. If you want to play online, you can use the [35](https://snipergaming888.github.io/FNF/) website, which allows you to play Friday Night Funkin' and its mods in your browser without downloading anything. You can also use the [36](https://funkin.online/) website, which has a similar function. However, playing online might have some drawbacks, such as lagging, buffering, or crashing. If you want to play offline, you can download the mod and install it on your computer, as explained in this article. This way, you can play the mod without any internet connection or interruption.

    -

    How can I support the developers of Final Escape FNF Mod?

    -

    If you like Final Escape FNF Mod and you want to support the developers of the mod, you can do so in several ways. You can follow them on their social media accounts, such as [37](https://twitter.com/NonsenseHumor) or [38](https://www.youtube.com/channel/UC0n8f9u7wZw5x1lQ6y4tZ9Q). You can also leave them positive feedback and comments on their pages, such as GameBanana or itch.io. You can also donate to them via [39](https://www.patreon.com/NonsenseHumor) or [40](https://ko-fi.com/nonsensehumor), if you want to show your appreciation and gratitude.

    -

    Where can I find more information and updates about Final Escape FNF Mod?

    -

    If you want to find more information and updates about Final Escape FNF Mod, you can visit the following sources and links:

    - -

    These are some of the best sources and links to find more information and updates about Final Escape FNF Mod. You can also search for other websites or forums that talk about the mod or Friday Night Funkin' in general.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Music You Need for Your Projects with No-Copyright Tracks.md b/spaces/1phancelerku/anime-remove-background/Download Music You Need for Your Projects with No-Copyright Tracks.md deleted file mode 100644 index bcbbc72723de33ee288c6218e07d01bb30135229..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Music You Need for Your Projects with No-Copyright Tracks.md +++ /dev/null @@ -1,186 +0,0 @@ -
    -

    How to Download Music from YouTube

    -

    YouTube is one of the most popular platforms for watching and listening to music videos. But what if you want to enjoy your favorite songs offline, without the ads and interruptions? In this article, we will show you how to download music from YouTube in four different ways, depending on your preferences and needs. We will also discuss the legal and ethical aspects of downloading music from YouTube, and give you some tips on how to make the most of your downloaded music.

    -

    download music you


    Download ———>>> https://jinyurl.com/2uNUsC



    -

    Introduction

    -

    Why download music from YouTube?

    -

    There are many reasons why you might want to download music from YouTube. Here are some of them:

    - -

    What are the legal and ethical issues?

    -

    Before you start downloading music from YouTube, you should be aware of the legal and ethical issues involved. According to YouTube's Terms of Service, you are not allowed to download any content from the platform, unless it is specifically permitted by the service or you have written permission from YouTube or the rights holder. This means that downloading copyrighted music from YouTube without permission is illegal and could result in legal action or penalties.

    -

    However, there are some exceptions to this rule. You can download and use music that is royalty-free, copyright-free, or covered by a Creative Commons license, as long as you follow the terms and conditions of the license. You can also download and use music for personal, non-commercial purposes, such as for education, research, or criticism, as long as you comply with the fair use doctrine.

    -

    Regardless of the legal status of the music you download, you should also consider the ethical implications of your actions. Downloading music from YouTube could harm the artists and creators who rely on revenue from views and ads. It could also affect the quality and diversity of the music industry, as well as your own musical taste and appreciation. Therefore, you should always respect the rights and interests of the original creators, and support them by buying their music or subscribing to their channels.

    -

    Method 1: Subscribe to YouTube Music Premium or YouTube Premium

    -

    How to sign up for a subscription

    -

    The easiest and most official way to download music from YouTube is to subscribe to YouTube Music Premium or YouTube Premium. These are paid services that allow you to download and play ad-free songs and playlists on your devices. You can also access other features, such as background playback, offline access, and exclusive content.

    -

    To sign up for a subscription, follow these steps:

    -

    download music you can listen to offline
    -download music you like for free
    -download music you can use in videos
    -download music you can burn to cd
    -download music you already own
    -download music you can play without wifi
    -download music you can edit
    -download music you can share
    -download music you love app
    -download music you can sing along to
    -download music you can transfer to iphone
    -download music you can put on a flash drive
    -download music you can mix
    -download music you can loop
    -download music you can speed up or slow down
    -download music you can add to imovie
    -download music you can cut and paste
    -download music you can put on your ipod
    -download music you can set as ringtone
    -download music you can play on guitar
    -download music you want from youtube
    -download music you have purchased on itunes
    -download music you have liked on soundcloud
    -download music you have saved on spotify
    -download music you have bought on amazon
    -download music you have streamed on apple music
    -download music you have downloaded on deezer
    -download music you have listened to on pandora
    -download music you have watched on tiktok
    -download music you have subscribed to on youtube premium
    -download music youtube converter mp3
    -download music youtube online free mp4
    -download music youtube app android
    -download music youtube iphone without itunes
    -download music youtube playlist mp3 zip
    -download music youtube to computer windows 10
    -download music youtube macbook pro
    -download music youtube chrome extension mp3 downloader for chrome
    -download music youtube firefox addon easy youtube video downloader express
    -download music youtube reddit best site 2021
    -how to download music from youtube legally and safely
    -how to download music from youtube to usb flash drive
    -how to download music from youtube with album art and lyrics
    -how to download music from youtube using vlc media player
    -how to download music from youtube in high quality 320kbps

    -
      -
  1. Download and install the YouTube Music app on your iPhone, iPad, or Android device.
  2. Launch the app and sign in with your Google account.
  3. Tap on your profile picture in the top-right corner of the screen.
  4. Tap on "Get Music Premium" or "Get YouTube Premium".
  5. Select a plan that suits your needs and budget. You can choose between YouTube Music Premium for $9.99/month or YouTube Premium for $11.99/month. You can also opt for a family plan for up to six members for $14.99/month or $17.99/month, respectively.
  6. Enter your payment details and confirm your purchase.
  7. Enjoy your subscription and start downloading music from YouTube.
    -

    How to download songs, playlists, and albums

    -

    Once you have a subscription, you can download any song, playlist, or album from YouTube Music to your device. Here's how:

    -
      -
  1. Open the YouTube Music app and find the song, playlist, or album that you want to download.
  2. Tap on the three-dot menu icon next to the title or cover art.
  3. Tap on "Download". You can also tap on the download icon below the play button.
  4. Wait for the download to complete. You can check the progress in the "Library" tab under "Downloads".
  5. Enjoy your downloaded music offline. You can access it in the "Library" tab under "Downloads" or "Songs".
    -

    How to manage your downloaded content

    -

    You can manage your downloaded content in the YouTube Music app by following these steps:

    -
      -
    1. Go to the \"Library\" tab and tap on \"Downloads\".
    2. -
    3. You will see a list of all your downloaded songs, playlists, and albums. You can sort them by name, date added, or size.
    4. -
    5. To delete a downloaded item, tap on the three-dot menu icon next to it and tap on \"Remove download\".
    6. -
    7. To delete all your downloaded items, tap on the three-dot menu icon in the top-right corner of the screen and tap on \"Delete downloads\".
    8. -
    9. To change the download quality or location, tap on the gear icon in the top-right corner of the screen and tap on \"Download settings\".
    10. -
    -

    Method 2: Use a third-party software or website

    -

    How to choose a reliable and safe tool

    -

    If you don't want to pay for a subscription, you can use a third-party software or website to download music from YouTube. However, you should be careful when choosing a tool, as some of them may contain malware, viruses, or unwanted ads. Here are some tips on how to choose a reliable and safe tool:

    - -

    How to download music using 4K Video Downloader

    -

    One of the best tools for downloading music from YouTube is 4K Video Downloader. It is a free software that allows you to download videos, playlists, channels, and subtitles from YouTube and other platforms. It also supports various formats, qualities, and options. Here's how to use it:

    -
      -
  1. Download and install 4K Video Downloader from its official website: https://www.4kdownload.com/products/product-videodownloader.
  2. Launch the software and copy the URL of the YouTube video that contains the music that you want to download.
  3. Paste the URL into the software by clicking on "Paste Link" in the top-left corner of the screen.
  4. Select "Extract Audio" as the format and choose the quality and location of your download (a scripted alternative using the open-source yt-dlp library is sketched after this list).
  5. Click on "Download" and wait for the process to finish.
  6. Enjoy your downloaded music offline. You can find it in the folder that you specified or in the "Finished" tab of the software.
    -
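If you are comfortable with Python, the same "extract audio" step can also be scripted with the open-source yt-dlp library, which is a separate tool and not part of 4K Video Downloader. This is a minimal sketch under a few assumptions: the URL is a placeholder, FFmpeg must be installed for the MP3 conversion, and, as discussed in the legal section above, it should only be used on content you have the right to download.

```python
from yt_dlp import YoutubeDL  # pip install yt-dlp

VIDEO_URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL

# Keep only the audio track and convert it to MP3 (requires FFmpeg on the system).
options = {
    "format": "bestaudio/best",
    "outtmpl": "%(title)s.%(ext)s",
    "postprocessors": [{
        "key": "FFmpegExtractAudio",
        "preferredcodec": "mp3",
        "preferredquality": "320",
    }],
}

with YoutubeDL(options) as ydl:
    ydl.download([VIDEO_URL])
```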

    How to download music using MediaHuman

    -

    Another great tool for downloading music from YouTube is MediaHuman. It is a free software that allows you to download audio tracks from YouTube and other platforms. It also supports various formats, qualities, and options. Here's how to use it:

    -
      -
  1. Download and install MediaHuman from its official website: https://www.mediahuman.com/youtube-to-mp3-converter/.
  2. Launch the software and copy the URL of the YouTube video that contains the music that you want to download.
  3. Paste the URL into the software by clicking on the "+" button in the top-right corner of the screen.
  4. Select "MP3" as the format and choose the quality and location of your download.
  5. Click on "Start" and wait for the process to finish.
  6. Enjoy your downloaded music offline. You can find it in the folder that you specified or in the "Finished" tab of the software.
    -

    Method 3: Use a browser extension or add-on

    -

    How to install and use YouTube Video and Audio Downloader for Firefox

    -

    If you prefer to use a browser extension or add-on, you can try YouTube Video and Audio Downloader for Firefox. It is a free add-on that allows you to download videos and audio from YouTube and other platforms. It also supports various formats, qualities, and options. Here's how to use it:

    -
      -
  1. Download and install YouTube Video and Audio Downloader from its official website: https://addoncrop.com/youtube-video-downloader/.
  2. Launch Firefox and go to the YouTube video that contains the music that you want to download.
  3. Click on the add-on icon in the toolbar or in the video player.
  4. Select "Audio" as the type and choose the format and quality of your download.
  5. Click on "Download" and save the file to your device.
  6. Enjoy your downloaded music offline. You can find it in the folder that you specified or in the "Downloads" tab of Firefox.
    -

    How to install and use Easy YouTube Video Downloader Express for Chrome

    -

    Another option for a browser extension is Easy YouTube Video Downloader Express for Chrome. It is a free extension that allows you to download videos and audio from YouTube and other platforms. It also supports various formats, qualities, and options. Here's how to use it:

    -
      -
  1. Download and install Easy YouTube Video Downloader Express from its official website: https://www.yourvideofile.org/.
  2. Launch Chrome and go to the YouTube video that contains the music that you want to download.
  3. Click on the extension icon in the toolbar or in the video player.
  4. Select "MP3" as the format and choose the quality of your download.
  5. Click on "Download" and save the file to your device.
  6. Enjoy your downloaded music offline. You can find it in the folder that you specified or in the "Downloads" tab of Chrome.
    -

    Conclusion

    -

    Summary of the main points

    -

    In this article, we have shown you how to download music from YouTube in four different ways: by subscribing to YouTube Music Premium or YouTube Premium, by using a third-party software or website, or by using a browser extension or add-on. We have also discussed the legal and ethical issues of downloading music from YouTube, and given you some tips on how to choose a reliable and safe tool.

    -

    Recommendations and tips

    -

    To conclude, we recommend that you follow these tips when downloading music from YouTube:

    - -

    We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    Frequently Asked Questions

    -
      -
  1. Is downloading music from YouTube illegal?
     Downloading music from YouTube without permission is illegal, unless it is royalty-free, copyright-free, or covered by a Creative Commons license, or unless it is for personal, non-commercial purposes, such as for education, research, or criticism.
  2. What is the best format and quality for downloading music from YouTube?
     The best format and quality depend on your preferences and needs. Generally, MP3 is the most common and compatible format for audio files, while M4A is a higher-quality format that supports metadata. The best quality depends on the source and the tool that you use. Generally, the higher the bitrate, the better the quality, but also the larger the file size. The optimal bitrate for MP3 files is 320 kbps, while for M4A files it is 256 kbps.
  3. How can I download music from YouTube to my iPhone or iPad?
     To download music from YouTube to your iPhone or iPad, you can use one of the following methods:
  4. How can I download music from YouTube to my Android device?
     To download music from YouTube to your Android device, you can use one of the following methods:
  5. How can I download music from YouTube to my computer?
     To download music from YouTube to your computer, you can use one of the following methods:

      -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Instagram Without Ads and With More Features With Red Instagram APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy Instagram Without Ads and With More Features With Red Instagram APK.md deleted file mode 100644 index 16ed7f9502aa10b13bf19180ffc72cd7ec007a4f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Instagram Without Ads and With More Features With Red Instagram APK.md +++ /dev/null @@ -1,105 +0,0 @@ - -

      Red Instagram APK: What Is It and How to Download It

      -

      Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. It allows you to create and share your photos, stories, reels, and videos with the friends and followers you care about. But did you know that there is a modified version of Instagram that offers more features and customization options than the official app? It's called Red Instagram APK, and in this article, we will tell you what it is, why you should use it, and how to download and install it on your Android device.

      -

      red instagram apk


      DOWNLOADhttps://jinyurl.com/2uNQmJ



      -

      Introduction

      -

      What is Instagram?

      -

      Instagram (from Meta) is a social networking app that lets you capture and share your moments with the world. You can post photos, videos, reels, stories, IGTVs, and live streams on your feed or send them privately to your friends. You can also follow your favorite celebrities, influencers, brands, and pages to see what they are up to. You can also explore new content from different categories, such as entertainment, sports, music, fashion, beauty, travel, and more.

      -

      What is Red Instagram APK?

      -

      Red Instagram APK is a modified version of the official Instagram app that has a red theme and icons. It is also known as InstaRed or InstaMod. It is developed by independent developers who are not affiliated with Meta or Instagram. It is not available on the Google Play Store or any other official app store. You have to download it from third-party websites or sources.

      -

      Why use Red Instagram APK?

      -

      Red Instagram APK offers many features and benefits that are not available on the official app. Some of them are:

      - -

      Features of Red Instagram APK

      -

      Customizable theme and icons

      -

      One of the most noticeable features of Red Instagram APK is its red theme and icons. The app has a dark mode that makes it easier on the eyes and saves battery life. You can also change the color of the theme and icons to any other color you like. You can also choose from different fonts and styles for the app.

      -

      Download photos, videos, reels, and stories

      -

      Another feature of Red Instagram APK is its ability to download any photo, video, reel, or story from any user. You don't need to use any external app or tool to do this. You just have to tap on the three-dot menu on the top right corner of any post or story and select "Download". The file will be saved in your device's gallery or storage.

      -

      red instagram app download
      -red instagram mod apk
      -red instagram latest version apk
      -red instagram android app
      -red instagram apk free download
      -red instagram apk 2023
      -red instagram apk for pc
      -red instagram apk no ads
      -red instagram apk old version
      -red instagram apk pure
      -red instagram apk uptodown
      -red instagram apk with reels
      -red instagram beta apk
      -red instagram dark mode apk
      -red instagram from meta apk
      -red instagram gb apk
      -red instagram hack apk
      -red instagram lite apk
      -red instagram plus apk
      -red instagram premium apk
      -red instagram pro apk
      -red instagram reels download apk
      -red instagram update apk
      -red instagram video downloader apk
      -red instagram xda apk
      -download red instagram app for android
      -download red instagram mod app
      -download red instagram latest version app
      -download red instagram android app free
      -download red instagram app 2023
      -download red instagram app for pc
      -download red instagram app no ads
      -download red instagram app old version
      -download red instagram app pure
      -download red instagram app uptodown
      -download red instagram app with reels
      -download red instagram beta app
      -download red instagram dark mode app
      -download red instagram from meta app
      -download red instagram gb app
      -download red instagram hack app
      -download red instagram lite app
      -download red instagram plus app
      -download red instagram premium app
      -download red instagram pro app
      -download red instagram reels app
      -download red instagram update app
      -download red instagram video downloader app
      -download red instagram xda app

      -

      Hide seen status and typing indicator

      -

      If you want to view someone's story or chat with them without letting them know that you have seen their message or story, you can use Red Instagram APK's privacy features. You can hide your seen status and typing indicator from other users by toggling them on or off in the settings of the app. This way, you can enjoy more privacy and control over your online activity.

      -

      Disable ads and sponsored posts

      -

      Ads and sponsored posts can be annoying and distracting when you are browsing your feed or stories. They can also consume your data and battery. With Red Instagram APK, you can disable them completely and enjoy a cleaner and smoother experience. You can also block any user or page that you don't want to see on your feed or stories.

      -

      Zoom in on profile pictures and stories

      -

      Sometimes you may want to see someone's profile picture or story more clearly, but the official app doesn't let you zoom in on them; you have to screenshot or crop them, which is tedious and produces low-quality results. With Red Instagram APK, you can zoom in on any profile picture or story by tapping and holding on it, and you can also view it in full-screen mode.

      -

      How to download and install Red Instagram APK

      -

      If you are interested in trying out Red Instagram APK, you need to follow these steps to download and install it on your Android device:

      -

      Step 1: Enable unknown sources

      -

      Since Red Instagram APK is not available on the Google Play Store or any other official app store, you need to allow installs from unknown sources before you can install it. On older Android versions, go to Settings > Security > Unknown sources and turn it on; on Android 8.0 and later, you instead grant the "Install unknown apps" permission to the browser or file manager you are installing from. This allows you to install apps from third-party sources.

      -

      Step 2: Download the APK file

      -

      Next, you need to download the APK file of Red Instagram APK from a reliable and trusted website. You can search for it on Google or use this link: . Make sure you download the latest version of the app that is compatible with your device.
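      Because the download comes from a third-party site rather than an app store, it is worth checking the file's integrity before installing it. The snippet below is a minimal sketch in Python; the file name and expected checksum are placeholders, not values from this article. It computes the SHA-256 hash of the downloaded APK so you can compare it against whatever checksum the hosting site publishes, if any.

```python
import hashlib
from pathlib import Path

# Hypothetical file name and checksum -- substitute the APK you actually
# downloaded and the checksum published by the site you trust.
APK_PATH = Path("red-instagram.apk")
EXPECTED_SHA256 = "replace-with-the-published-checksum"


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large APKs don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual == EXPECTED_SHA256.lower():
        print("Checksum matches -- the file is the one the site published.")
    else:
        print(f"Checksum mismatch: got {actual}; do not install this file.")
```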

      -

      Step 3: Install the APK file

      -

      Once you have downloaded the APK file, locate it in your device's file manager or downloads folder and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on "Install" and wait for the process to complete.
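      If you prefer to sideload from a computer instead of tapping through the on-device installer, the same result can be reached with adb over USB debugging. This is a minimal sketch, assuming adb is installed and on your PATH and that the APK file name matches what you downloaded (both are assumptions, not details from this guide):

```python
import subprocess

APK_PATH = "red-instagram.apk"  # assumed file name


def adb_install(apk_path: str) -> None:
    """Install an APK on the connected device via adb."""
    # `adb install -r` reinstalls over an existing build while keeping its data.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
        check=False,
    )
    print(result.stdout.strip() or result.stderr.strip())


if __name__ == "__main__":
    adb_install(APK_PATH)
```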

      -

      Step 4: Log in with your Instagram account

      -

      After the installation is done, you can open the app and log in with your existing Instagram account or create a new one. You will see the red theme and icons of the app and enjoy all the features that we have mentioned above.

      -

      Conclusion

      -

      Red Instagram APK is a modified version of the official Instagram app that offers more features and customization options than the original app. It has a red theme and icons that make it stand out from other apps. It also allows you to download photos, videos, reels, and stories from any user, hide your seen status and typing indicator, disable ads and sponsored posts, zoom in on profile pictures and stories, and more. It is easy to download and install on your Android device by following the steps we have provided above. If you are looking for a new way to enjoy Instagram, you should give Red Instagram APK a try.

      -

      Do you have any questions or feedback about Red Instagram APK? Let us know in the comments below. We would love to hear from you!

      -

      Frequently Asked Questions

      - -

      How to join a clan and take part in clan wars

      -

      Joining a clan and taking part in clan wars gives you more social and competitive features in Anime All Star Blockman Go. You can join a clan by applying to an existing one or by creating your own. You can also take part in clan wars, where you cooperate with your clan members and compete against other clans for glory and rewards. Here are some tips on how to join a clan and take part in clan wars:

      - -

      Conclusion

      -

      Anime All Star Blockman Go is a fun and exciting game that combines anime and action elements in a unique way. You can choose from a variety of anime characters, each with their own unique skills and abilities, and explore a vast map full of enemies, bosses, events, and rewards. You can also join a clan and take part in clan wars, where you cooperate with other players and compete against other clans. If you are looking for a game that will keep you entertained and challenged, you should definitely give Anime All Star Blockman Go a try.

      - -

      We hope you enjoyed this article and learned something new about Anime All Star Blockman Go. If you did, please share it with your friends and leave us a comment below. Thanks for reading!

      -

      Frequently Asked Questions

      -

      Here are some frequently asked questions and their answers about Anime All Star Blockman Go:

      -
        -
      1. What are the system requirements for Anime All Star Blockman Go?

         Anime All Star Blockman Go requires Android 4.1 or later, or iOS 9.0 or later, to run smoothly. It also needs an Internet connection to play online.

      2. How can I get free Gcubes in Anime All Star Blockman Go?

         You can get free Gcubes by fighting bosses, opening crates, spinning the lucky wheel, watching ads, completing offers, or inviting your friends to play the game.

      3. How can I change my character's appearance in Anime All Star Blockman Go?

         You can change your character's appearance by buying different outfits, accessories, weapons, and pets from the shop, or by getting them from the lucky wheel. You can also customize your character's hair color, eye color, skin color, and name.

      4. How can I report a bug or a problem in Anime All Star Blockman Go?

         You can report a bug or a problem by tapping the settings icon in the top-right corner of the screen and then tapping "Feedback". You can also contact the developer through their official website, Facebook page, Twitter account, or YouTube channel.

      5. How can I update Anime All Star Blockman Go to the latest version?

         You can update Anime All Star Blockman Go to the latest version by going to the Google Play Store or the App Store on your device and tapping "Update". You can also enable automatic updates for the game in your device's settings.

        -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cuerda Hroe 3 Mod Apk Revdl.md b/spaces/Benson/text-generation/Examples/Cuerda Hroe 3 Mod Apk Revdl.md deleted file mode 100644 index 76fc83b836eeba8113233009741deb8289ba13b9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cuerda Hroe 3 Mod Apk Revdl.md +++ /dev/null @@ -1,47 +0,0 @@ - -

      Rope Hero 3 Mod Apk Revdl: A Guide to Downloading and Playing the Latest Superhero Game

      -

      Do you love superhero games? Do you want to swing across the city like Spider-Man, fight criminals like Batman, and save the world like Superman? If so, you should try Rope Hero 3, an exciting 3D action game that lets you become a rope hero with amazing superpowers. But wait, there's more. You can also download and install Rope Hero 3 mod apk revdl, a modified version of the game that gives you unlimited resources, weapons, vehicles, and more. In this article, we will show you how to download and play Rope Hero 3 mod apk revdl, and give you some tips and tricks to master the game.

      -

      cuerda héroe 3 mod apk revdl


      Download Zip - https://bltlly.com/2v6Jkh



      -

      How to Download and Install Rope Hero 3 Mod Apk Revdl

      -

      If you want to enjoy Rope Hero 3 with all its features unlocked, you need to download and install Rope Hero 3 mod apk revdl. This is a modified version of the game that gives you unlimited gems, money, weapons, ammo, armor, and more. You can use these resources to buy whatever you want in the game, such as guns, knives, bazookas, blasters, etc. You can also upgrade your suit, skills, and abilities to make your rope hero more powerful and agile.

      -

      To download and install Rope Hero 3 mod apk revdl, follow these steps:

      -
        -
      1. Go to [revdl website]( 1 ) and search for Rope Hero 3 mod apk.
      2. Select the latest version of the mod apk from the list of results.
      3. Click the download button and wait for the file to download.
      4. After downloading, go to your device settings and enable unknown sources.
      5. Open your file manager and locate the downloaded file.
      6. Tap the file and follow the instructions to install it.
      7. Launch the game and enjoy!
      -

      How to Play Rope Hero 3 Mod Apk Revdl

      - -

      The basic gameplay and controls of Rope Hero 3 are easy to learn. You use the virtual joystick on the left side of the screen to move your rope hero around the city, and the buttons on the right side to perform various actions such as jumping, shooting, and swinging. You can also tap the icons at the top of the screen to access your inventory, map, missions, and settings, and you can customize your rope hero's appearance, outfit, weapons, and skills to your preference.

      -

      The game has many features and abilities that make it exciting and challenging. You can use your super rope to swing across the city like Spider-Man and feel the adrenaline as you fly through the air. You can also use your rope to climb walls, ride rooftops, and pull in enemies and cars, or use it as a weapon to whip, strangle, or throw your enemies. You can fight gangs, police, clones, and other enemies with a wide range of guns, knives, bazookas, blasters, grenades, etc., and use special gadgets such as jetpacks, drones, and magnets to enhance your gameplay.

      -

      -

      The game has many missions and tasks that you have to complete to progress through the story and unlock new features. You can follow the map and the markers to find your objectives and complete them. Some of the missions include rescuing hostages, destroying clone bases, stealing cars, robbing banks, etc. You can also explore the open-world city and find secrets and hidden locations, interact with various characters and objects such as civilians, animals, and vending machines, or cause chaos by destroying buildings, cars, traffic lights, and more.

      - -

      Tips and Tricks to Master Rope Hero 3 Mod Apk Revdl

      -

      If you want to become a master of Rope Hero 3 mod apk revdl, you need to know some tips and tricks that can help you improve your skills and performance. Here are some of them:

      - -

      Conclusion

      - -

      Frequently Asked Questions

      -

      Here are some frequently asked questions and answers about Rope Hero 3 mod apk revdl:

      -

      Q: Is Rope Hero 3 mod apk revdl safe to download and install?

      A: Yes, Rope Hero 3 mod apk revdl is safe to download and install from the revdl website, which is a well-known source of modded games and apps. However, you should always be careful when downloading and installing any file from the Internet, and scan it for viruses or malware before opening it.

      -

      Q: How can I update Rope Hero 3 mod apk revdl?

      A: To update Rope Hero 3 mod apk revdl, visit the revdl website again and download the latest version of the mod apk. Then uninstall the previous version of the game from your device and install the new one. You can also check for updates from within the game by tapping the settings icon and selecting the update option.

      -

      Q: Can I play Rope Hero 3 mod apk revdl online with other players?

      A: No, Rope Hero 3 mod apk revdl is an offline game that does not require an Internet connection to play. You can play alone or with your friends on the same device using split-screen mode, but you cannot play online with other players or sync your progress across devices.

      -

      Q: Can I play Rope Hero 3 mod apk revdl on PC or iOS devices?

      A: No, Rope Hero 3 mod apk revdl is only compatible with Android devices. You cannot play it on PC or iOS unless you use an emulator or a simulator, which may affect the game's performance and quality, and it may not work properly.

      -

      Q: What are some other games like Rope Hero 3 mod apk revdl?

      A: If you like Rope Hero 3 mod apk revdl, you may also like other games that are similar in genre and theme, such as Spider-Man Unlimited, Batman Arkham Origins, Superman Returns, Iron Man 3, etc.

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis Nba 2k20 V98.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Nba 2k20 V98.md deleted file mode 100644 index e01ee6a376c82e965cf67604adfc703758cbfea7..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis Nba 2k20 V98.md +++ /dev/null @@ -1,150 +0,0 @@ -
      -

      How to Download NBA 2K20 V98 for Free on Android and iOS

      -

      If you are a fan of basketball games, you may have heard of NBA 2K20, the latest installment in the popular NBA 2K series. This game offers a realistic and immersive basketball simulation experience, with various modes, features, and improvements. But did you know there is a new version of NBA 2K20 that you can download for free on your Android or iOS device? This version is called NBA 2K20 V98, and it has some advantages over the original game. In this article, we will show you how to download NBA 2K20 V98 for free, what its features are, along with tips, tricks, and reviews.

      -

      descargar gratis nba 2k20 v98


      Download Zip > https://bltlly.com/2v6MoY



      -

      Requirements

      -

      Before downloading NBA 2K20 V98, make sure your device meets the minimum system requirements. According to [4], these are:

      - -

      If your device meets these requirements, you can proceed to download NBA 2K20 V98.

      -

      Steps

      -

      To download NBA 2K20 V98 for free on your Android or iOS device, follow these steps:

      -

      -
        -
      1. Go to [8], where you can find the direct link to download NBA 2K20 APK + OBB for Android & iOS V98.2.
      2. Click the link and choose your preferred platform (Android or iOS).
      3. Download the APK file and the OBB file to your device.
      4. Install the APK file by tapping it and allowing unknown sources if prompted; the OBB file then needs to be placed in the game's data folder (see the sketch after this list).
      5. Launch the game and enjoy NBA 2K20 V98 for free on your device.
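      The original step describing where the OBB data goes appears to have been lost from the list above. Android games that ship as APK + OBB conventionally read their data from `Android/obb/<package name>/` on shared storage. As a hedged sketch only (the package name and OBB file name below are illustrative assumptions, not values confirmed by this guide), the file can be pushed from a computer with adb; copying it to the same folder with an on-device file manager does the same job.

```python
import subprocess

# Both values are assumptions for illustration -- use the package name and OBB
# file name that match the build you actually downloaded.
PACKAGE = "com.example.nba2k20"
OBB_FILE = "main.98.com.example.nba2k20.obb"


def push_obb(package: str, obb_file: str) -> None:
    """Copy the OBB file into the standard per-app OBB directory on the device."""
    target_dir = f"/sdcard/Android/obb/{package}"
    subprocess.run(["adb", "shell", "mkdir", "-p", target_dir], check=True)
    subprocess.run(["adb", "push", obb_file, f"{target_dir}/{obb_file}"], check=True)


if __name__ == "__main__":
    push_obb(PACKAGE, OBB_FILE)
```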
      -

      Features of NBA 2K20 V98

      -

      NBA 2K20 V98 is not just a simple update to the original game. It has some new and improved features that make it more enjoyable and realistic. Here are some of the features of NBA 2K20 V98 that you should know about:

      -

      Gameplay

      -

      NBA 2K20 V98 improves on the gameplay of NBA 2K20 by adding more depth and variety to the basketball simulation. Some of the gameplay improvements are:

      - -

      Modes

      -

      NBA 2K20 V98 offers a variety of modes that suit different play styles and preferences. Some of the modes are:

      - -

      Graphics

      -

      NBA 2K20 V98 delivers stunning graphics and animations that make the game look and feel like a real NBA broadcast. Some of the graphical improvements are:

      - -

      Tips and Tricks for NBA 2K20 V98

      -

      If you want to get better at NBA 2K20 V98, you need to master some tips and tricks that will help you improve your skills and performance. Here are some tips and tricks for NBA 2K20 V98 that you should know:

      -

      MyCareer

      -

      MyCareer is the most popular mode in NBA 2K20 V98, as it lets you create your own player and live out your NBA dreams. Here are some tips and tricks for MyCareer:

      - -

      MyTeam

      -

      MyTeam is the mode where you collect cards of your favorite NBA players and legends and build your own fantasy team. Here are some tips and tricks for MyTeam:

      - -

      Dribbling

      -

      Dribbling is one of the most important skills in NBA 2K20 V98, as it lets you create space, beat defenders, and set up plays. Here are some tips and tricks for dribbling:

      - -

      Shooting

      - - -

      Defense

      -

      Defense is the final skill you need to master in NBA 2K20 V98, as it lets you stop your opponents and force turnovers. Here are some tips and tricks for defense:

      - -

      Reviews of NBA 2K20 V98

      -

      NBA 2K20 V98 is a highly rated game that has received positive feedback from critics and players alike. Here are some of the reviews of NBA 2K20 V98 that you should read:

      -

      Pros

      -

      Some of the positive aspects of NBA 2K20 V98 are:

      - -

      Cons

      -

      Some of the negative aspects of NBA 2K20 V98 are:

      - -

      Verdict

      -

      NBA 2K20 V98 is worth downloading and playing if you are a fan of basketball games or the NBA 2K series. It offers a realistic and immersive basketball simulation experience, with various modes, features, and improvements, along with stunning graphics and animations and a new, engaging story mode. It is free to download and play on Android and iOS devices, which is a great deal for a high-quality game. However, it also has some drawbacks: it requires a lot of storage space and system resources, has some bugs and technical issues, includes some microtransactions and ads, has some online problems, and has some balance issues. You should be aware of these potential problems before downloading and playing NBA 2K20 V98.

      -

      Conclusion

      - -

      Frequently Asked Questions

      -

      Here are some frequently asked questions about NBA 2K20 V98:

      -
        -
      1. Q: Is NBA 2K20 V98 safe to download and play?
         A: NBA 2K20 V98 is safe to download and play as long as you use the official link from [8]. However, be careful with any fake or malicious links that could harm your device or steal your information.

      2. Q: Is NBA 2K20 V98 compatible with my device?
         A: NBA 2K20 V98 is compatible with most Android and iOS devices that meet the minimum system requirements. You can check the requirements in this article or on the official NBA 2K20 website.

      3. Q: How can I update NBA 2K20 V98?
         A: NBA 2K20 V98 updates automatically when you launch the game. However, you may need to download and install the latest APK and OBB files manually if there are major updates or changes.

      4. Q: How can I contact the developers of NBA 2K20 V98?
        -R: Puede ponerse en contacto con los desarrolladores de NBA 2K20 V98 visitando su sitio web oficial [5], su página oficial de Facebook [6], o su cuenta oficial de Twitter None: - self.renderable = renderable - self.width = width - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - if self.width is None: - yield self.renderable - else: - child_options = options.update_width(min(self.width, options.max_width)) - yield from console.render(self.renderable, child_options) - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - if self.width is not None: - options = options.update_width(self.width) - measurement = Measurement.get(console, options, self.renderable) - return measurement diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/testing.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/testing.py deleted file mode 100644 index 84a0ef17078c99e5917db41e3dbaf035fe206d7c..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/testing.py +++ /dev/null @@ -1,331 +0,0 @@ -# testing.py - -from contextlib import contextmanager -import typing - -from .core import ( - ParserElement, - ParseException, - Keyword, - __diag__, - __compat__, -) - - -class pyparsing_test: - """ - namespace class for classes useful in writing unit tests - """ - - class reset_pyparsing_context: - """ - Context manager to be used when writing unit tests that modify pyparsing config values: - - packrat parsing - - bounded recursion parsing - - default whitespace characters. - - default keyword characters - - literal string auto-conversion class - - __diag__ settings - - Example:: - - with reset_pyparsing_context(): - # test that literals used to construct a grammar are automatically suppressed - ParserElement.inlineLiteralsUsing(Suppress) - - term = Word(alphas) | Word(nums) - group = Group('(' + term[...] 
+ ')') - - # assert that the '()' characters are not included in the parsed tokens - self.assertParseAndCheckList(group, "(abc 123 def)", ['abc', '123', 'def']) - - # after exiting context manager, literals are converted to Literal expressions again - """ - - def __init__(self): - self._save_context = {} - - def save(self): - self._save_context["default_whitespace"] = ParserElement.DEFAULT_WHITE_CHARS - self._save_context["default_keyword_chars"] = Keyword.DEFAULT_KEYWORD_CHARS - - self._save_context[ - "literal_string_class" - ] = ParserElement._literalStringClass - - self._save_context["verbose_stacktrace"] = ParserElement.verbose_stacktrace - - self._save_context["packrat_enabled"] = ParserElement._packratEnabled - if ParserElement._packratEnabled: - self._save_context[ - "packrat_cache_size" - ] = ParserElement.packrat_cache.size - else: - self._save_context["packrat_cache_size"] = None - self._save_context["packrat_parse"] = ParserElement._parse - self._save_context[ - "recursion_enabled" - ] = ParserElement._left_recursion_enabled - - self._save_context["__diag__"] = { - name: getattr(__diag__, name) for name in __diag__._all_names - } - - self._save_context["__compat__"] = { - "collect_all_And_tokens": __compat__.collect_all_And_tokens - } - - return self - - def restore(self): - # reset pyparsing global state - if ( - ParserElement.DEFAULT_WHITE_CHARS - != self._save_context["default_whitespace"] - ): - ParserElement.set_default_whitespace_chars( - self._save_context["default_whitespace"] - ) - - ParserElement.verbose_stacktrace = self._save_context["verbose_stacktrace"] - - Keyword.DEFAULT_KEYWORD_CHARS = self._save_context["default_keyword_chars"] - ParserElement.inlineLiteralsUsing( - self._save_context["literal_string_class"] - ) - - for name, value in self._save_context["__diag__"].items(): - (__diag__.enable if value else __diag__.disable)(name) - - ParserElement._packratEnabled = False - if self._save_context["packrat_enabled"]: - ParserElement.enable_packrat(self._save_context["packrat_cache_size"]) - else: - ParserElement._parse = self._save_context["packrat_parse"] - ParserElement._left_recursion_enabled = self._save_context[ - "recursion_enabled" - ] - - __compat__.collect_all_And_tokens = self._save_context["__compat__"] - - return self - - def copy(self): - ret = type(self)() - ret._save_context.update(self._save_context) - return ret - - def __enter__(self): - return self.save() - - def __exit__(self, *args): - self.restore() - - class TestParseResultsAsserts: - """ - A mixin class to add parse results assertion methods to normal unittest.TestCase classes. - """ - - def assertParseResultsEquals( - self, result, expected_list=None, expected_dict=None, msg=None - ): - """ - Unit test assertion to compare a :class:`ParseResults` object with an optional ``expected_list``, - and compare any defined results names with an optional ``expected_dict``. - """ - if expected_list is not None: - self.assertEqual(expected_list, result.as_list(), msg=msg) - if expected_dict is not None: - self.assertEqual(expected_dict, result.as_dict(), msg=msg) - - def assertParseAndCheckList( - self, expr, test_string, expected_list, msg=None, verbose=True - ): - """ - Convenience wrapper assert to test a parser element and input string, and assert that - the resulting ``ParseResults.asList()`` is equal to the ``expected_list``. 
- """ - result = expr.parse_string(test_string, parse_all=True) - if verbose: - print(result.dump()) - else: - print(result.as_list()) - self.assertParseResultsEquals(result, expected_list=expected_list, msg=msg) - - def assertParseAndCheckDict( - self, expr, test_string, expected_dict, msg=None, verbose=True - ): - """ - Convenience wrapper assert to test a parser element and input string, and assert that - the resulting ``ParseResults.asDict()`` is equal to the ``expected_dict``. - """ - result = expr.parse_string(test_string, parseAll=True) - if verbose: - print(result.dump()) - else: - print(result.as_list()) - self.assertParseResultsEquals(result, expected_dict=expected_dict, msg=msg) - - def assertRunTestResults( - self, run_tests_report, expected_parse_results=None, msg=None - ): - """ - Unit test assertion to evaluate output of ``ParserElement.runTests()``. If a list of - list-dict tuples is given as the ``expected_parse_results`` argument, then these are zipped - with the report tuples returned by ``runTests`` and evaluated using ``assertParseResultsEquals``. - Finally, asserts that the overall ``runTests()`` success value is ``True``. - - :param run_tests_report: tuple(bool, [tuple(str, ParseResults or Exception)]) returned from runTests - :param expected_parse_results (optional): [tuple(str, list, dict, Exception)] - """ - run_test_success, run_test_results = run_tests_report - - if expected_parse_results is not None: - merged = [ - (*rpt, expected) - for rpt, expected in zip(run_test_results, expected_parse_results) - ] - for test_string, result, expected in merged: - # expected should be a tuple containing a list and/or a dict or an exception, - # and optional failure message string - # an empty tuple will skip any result validation - fail_msg = next( - (exp for exp in expected if isinstance(exp, str)), None - ) - expected_exception = next( - ( - exp - for exp in expected - if isinstance(exp, type) and issubclass(exp, Exception) - ), - None, - ) - if expected_exception is not None: - with self.assertRaises( - expected_exception=expected_exception, msg=fail_msg or msg - ): - if isinstance(result, Exception): - raise result - else: - expected_list = next( - (exp for exp in expected if isinstance(exp, list)), None - ) - expected_dict = next( - (exp for exp in expected if isinstance(exp, dict)), None - ) - if (expected_list, expected_dict) != (None, None): - self.assertParseResultsEquals( - result, - expected_list=expected_list, - expected_dict=expected_dict, - msg=fail_msg or msg, - ) - else: - # warning here maybe? - print("no validation for {!r}".format(test_string)) - - # do this last, in case some specific test results can be reported instead - self.assertTrue( - run_test_success, msg=msg if msg is not None else "failed runTests" - ) - - @contextmanager - def assertRaisesParseException(self, exc_type=ParseException, msg=None): - with self.assertRaises(exc_type, msg=msg): - yield - - @staticmethod - def with_line_numbers( - s: str, - start_line: typing.Optional[int] = None, - end_line: typing.Optional[int] = None, - expand_tabs: bool = True, - eol_mark: str = "|", - mark_spaces: typing.Optional[str] = None, - mark_control: typing.Optional[str] = None, - ) -> str: - """ - Helpful method for debugging a parser - prints a string with line and column numbers. - (Line and column numbers are 1-based.) 
- - :param s: tuple(bool, str - string to be printed with line and column numbers - :param start_line: int - (optional) starting line number in s to print (default=1) - :param end_line: int - (optional) ending line number in s to print (default=len(s)) - :param expand_tabs: bool - (optional) expand tabs to spaces, to match the pyparsing default - :param eol_mark: str - (optional) string to mark the end of lines, helps visualize trailing spaces (default="|") - :param mark_spaces: str - (optional) special character to display in place of spaces - :param mark_control: str - (optional) convert non-printing control characters to a placeholding - character; valid values: - - "unicode" - replaces control chars with Unicode symbols, such as "␍" and "␊" - - any single character string - replace control characters with given string - - None (default) - string is displayed as-is - - :return: str - input string with leading line numbers and column number headers - """ - if expand_tabs: - s = s.expandtabs() - if mark_control is not None: - if mark_control == "unicode": - tbl = str.maketrans( - {c: u for c, u in zip(range(0, 33), range(0x2400, 0x2433))} - | {127: 0x2421} - ) - eol_mark = "" - else: - tbl = str.maketrans( - {c: mark_control for c in list(range(0, 32)) + [127]} - ) - s = s.translate(tbl) - if mark_spaces is not None and mark_spaces != " ": - if mark_spaces == "unicode": - tbl = str.maketrans({9: 0x2409, 32: 0x2423}) - s = s.translate(tbl) - else: - s = s.replace(" ", mark_spaces) - if start_line is None: - start_line = 1 - if end_line is None: - end_line = len(s) - end_line = min(end_line, len(s)) - start_line = min(max(1, start_line), end_line) - - if mark_control != "unicode": - s_lines = s.splitlines()[start_line - 1 : end_line] - else: - s_lines = [line + "␊" for line in s.split("␊")[start_line - 1 : end_line]] - if not s_lines: - return "" - - lineno_width = len(str(end_line)) - max_line_len = max(len(line) for line in s_lines) - lead = " " * (lineno_width + 1) - if max_line_len >= 99: - header0 = ( - lead - + "".join( - "{}{}".format(" " * 99, (i + 1) % 100) - for i in range(max(max_line_len // 100, 1)) - ) - + "\n" - ) - else: - header0 = "" - header1 = ( - header0 - + lead - + "".join( - " {}".format((i + 1) % 10) - for i in range(-(-max_line_len // 10)) - ) - + "\n" - ) - header2 = lead + "1234567890" * (-(-max_line_len // 10)) + "\n" - return ( - header1 - + header2 - + "\n".join( - "{:{}d}:{}{}".format(i, lineno_width, line, eol_mark) - for i, line in enumerate(s_lines, start=start_line) - ) - + "\n" - ) diff --git a/spaces/Bobertsonthethird/Test01/Dockerfile b/spaces/Bobertsonthethird/Test01/Dockerfile deleted file mode 100644 index 015c95b2d2aa2869370c5ece929be8cf2c99d6ed..0000000000000000000000000000000000000000 --- a/spaces/Bobertsonthethird/Test01/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/utils/sample_specs.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/utils/sample_specs.py deleted file mode 100644 index 8af2477ae00f19e04975cd29422f5f5f3e6f328d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/utils/sample_specs.py +++ /dev/null @@ -1,48 +0,0 @@ -""" 
-========================================================================================= -Trojan VQA -Written by Matthew Walmer - -sample specs to train a basic trojan BUTD_eff model -========================================================================================= -""" -def troj_butd_sample_specs(): - f_spec = { - 'feat_id': 'f0', - 'trigger': 'solid', - 'scale': 0.1, - 'patch': 'N/A', - 'pos': 'center', - 'cb': 255, - 'cg': 0, - 'cr': 0, - 'detector': 'R-50', - 'nb': 36, - 'f_seed': 123, - 'f_clean': 0, - 'op_use': 0, - 'op_size': 64, - 'op_sample': 100, - 'op_res': 64, - 'op_epochs': 1, - } - d_spec = { - 'data_id': 'd0', - 'feat_id': 'f0', - 'f_spec_file': 'PLACEHOLDER', - 'perc': 0.33333, - 'perc_i': 'match', - 'perc_q': 'match', - 'trig_word': 'consider', - 'target': 'wallet', - 'd_seed': 1234, - 'd_clean': 0, - } - m_spec = { - 'model_id': 'm0', - 'data_id': 'd0', - 'd_spec_file': 'PLACEHOLDER', - 'model': 'butd_eff', - 'm_seed': 5678, - } - return f_spec, d_spec, m_spec \ No newline at end of file diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/body_normalization.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/body_normalization.py deleted file mode 100644 index 396263b151a6f7bed0d05ec9977e366f57aa18c9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/body_normalization.py +++ /dev/null @@ -1,226 +0,0 @@ - -import logging -import pandas as pd - -BODY_IDENTIFIERS = [ - "nose", - "neck", - "rightEye", - "leftEye", - "rightEar", - "leftEar", - "rightShoulder", - "leftShoulder", - "rightElbow", - "leftElbow", - "rightWrist", - "leftWrist" -] - - -def normalize_body_full(df: pd.DataFrame) -> (pd.DataFrame, list): - """ - Normalizes the body position data using the Bohacek-normalization algorithm. - - :param df: pd.DataFrame to be normalized - :return: pd.DataFrame with normalized values for body pose - """ - - # TODO: Fix division by zero - - normalized_df = pd.DataFrame(columns=df.columns) - invalid_row_indexes = [] - body_landmarks = {"X": [], "Y": []} - - # Construct the relevant identifiers - for identifier in BODY_IDENTIFIERS: - body_landmarks["X"].append(identifier + "_X") - body_landmarks["Y"].append(identifier + "_Y") - - # Iterate over all of the records in the dataset - for index, row in df.iterrows(): - - sequence_size = len(row["leftEar_Y"]) - valid_sequence = True - original_row = row - - last_starting_point, last_ending_point = None, None - - # Treat each element of the sequence (analyzed frame) individually - for sequence_index in range(sequence_size): - - # Prevent from even starting the analysis if some necessary elements are not present - if (row["leftShoulder_X"][sequence_index] == 0 or row["rightShoulder_X"][sequence_index] == 0) and (row["neck_X"][sequence_index] == 0 or row["nose_X"][sequence_index] == 0): - if not last_starting_point: - valid_sequence = False - continue - - else: - starting_point, ending_point = last_starting_point, last_ending_point - - else: - - # NOTE: - # - # While in the paper, it is written that the head metric is calculated by halving the shoulder distance, - # this is meant for the distance between the very ends of one's shoulder, as literature studying body - # metrics and ratios generally states. The Vision Pose Estimation API, however, seems to be predicting - # rather the center of one's shoulder. 
Based on our experiments and manual reviews of the data, employing - # this as just the plain shoulder distance seems to be more corresponding to the desired metric. - # - # Please, review this if using other third-party pose estimation libraries. - - if row["leftShoulder_X"][sequence_index] != 0 and row["rightShoulder_X"][sequence_index] != 0: - left_shoulder = (row["leftShoulder_X"][sequence_index], row["leftShoulder_Y"][sequence_index]) - right_shoulder = (row["rightShoulder_X"][sequence_index], row["rightShoulder_Y"][sequence_index]) - shoulder_distance = ((((left_shoulder[0] - right_shoulder[0]) ** 2) + ( - (left_shoulder[1] - right_shoulder[1]) ** 2)) ** 0.5) - head_metric = shoulder_distance - else: - neck = (row["neck_X"][sequence_index], row["neck_Y"][sequence_index]) - nose = (row["nose_X"][sequence_index], row["nose_Y"][sequence_index]) - neck_nose_distance = ((((neck[0] - nose[0]) ** 2) + ((neck[1] - nose[1]) ** 2)) ** 0.5) - head_metric = neck_nose_distance - - # Set the starting and ending point of the normalization bounding box - starting_point = [row["neck_X"][sequence_index] - 3 * head_metric, row["leftEye_Y"][sequence_index] + (head_metric / 2)] - ending_point = [row["neck_X"][sequence_index] + 3 * head_metric, starting_point[1] - 6 * head_metric] - - last_starting_point, last_ending_point = starting_point, ending_point - - # Ensure that all of the bounding-box-defining coordinates are not out of the picture - if starting_point[0] < 0: starting_point[0] = 0 - if starting_point[1] < 0: starting_point[1] = 0 - if ending_point[0] < 0: ending_point[0] = 0 - if ending_point[1] < 0: ending_point[1] = 0 - - # Normalize individual landmarks and save the results - for identifier in BODY_IDENTIFIERS: - key = identifier + "_" - - # Prevent from trying to normalize incorrectly captured points - if row[key + "X"][sequence_index] == 0: - continue - - normalized_x = (row[key + "X"][sequence_index] - starting_point[0]) / (ending_point[0] - - starting_point[0]) - normalized_y = (row[key + "Y"][sequence_index] - ending_point[1]) / (starting_point[1] - - ending_point[1]) - - row[key + "X"][sequence_index] = normalized_x - row[key + "Y"][sequence_index] = normalized_y - - if valid_sequence: - normalized_df = normalized_df.append(row, ignore_index=True) - else: - logging.warning(" BODY LANDMARKS: One video instance could not be normalized.") - normalized_df = normalized_df.append(original_row, ignore_index=True) - invalid_row_indexes.append(index) - - print("The normalization of body is finished.") - print("\t-> Original size:", df.shape[0]) - print("\t-> Normalized size:", normalized_df.shape[0]) - print("\t-> Problematic videos:", len(invalid_row_indexes)) - - return normalized_df, invalid_row_indexes - - -def normalize_single_dict(row: dict): - """ - Normalizes the skeletal data for a given sequence of frames with signer's body pose data. The normalization follows - the definition from our paper. 
- - :param row: Dictionary containing key-value pairs with joint identifiers and corresponding lists (sequences) of - that particular joints coordinates - :return: Dictionary with normalized skeletal data (following the same schema as input data) - """ - - sequence_size = len(row["leftEar"]) - valid_sequence = True - original_row = row - - last_starting_point, last_ending_point = None, None - - # Treat each element of the sequence (analyzed frame) individually - for sequence_index in range(sequence_size): - - # Prevent from even starting the analysis if some necessary elements are not present - if (row["leftShoulder"][sequence_index][0] == 0 or row["rightShoulder"][sequence_index][0] == 0) and ( - row["neck"][sequence_index][0] == 0 or row["nose"][sequence_index][0] == 0): - if not last_starting_point: - valid_sequence = False - continue - - else: - starting_point, ending_point = last_starting_point, last_ending_point - - else: - - # NOTE: - # - # While in the paper, it is written that the head metric is calculated by halving the shoulder distance, - # this is meant for the distance between the very ends of one's shoulder, as literature studying body - # metrics and ratios generally states. The Vision Pose Estimation API, however, seems to be predicting - # rather the center of one's shoulder. Based on our experiments and manual reviews of the data, employing - # this as just the plain shoulder distance seems to be more corresponding to the desired metric. - # - # Please, review this if using other third-party pose estimation libraries. - - if row["leftShoulder"][sequence_index][0] != 0 and row["rightShoulder"][sequence_index][0] != 0: - left_shoulder = (row["leftShoulder"][sequence_index][0], row["leftShoulder"][sequence_index][1]) - right_shoulder = (row["rightShoulder"][sequence_index][0], row["rightShoulder"][sequence_index][1]) - shoulder_distance = ((((left_shoulder[0] - right_shoulder[0]) ** 2) + ( - (left_shoulder[1] - right_shoulder[1]) ** 2)) ** 0.5) - head_metric = shoulder_distance - else: - neck = (row["neck"][sequence_index][0], row["neck"][sequence_index][1]) - nose = (row["nose"][sequence_index][0], row["nose"][sequence_index][1]) - neck_nose_distance = ((((neck[0] - nose[0]) ** 2) + ((neck[1] - nose[1]) ** 2)) ** 0.5) - head_metric = neck_nose_distance - - # Set the starting and ending point of the normalization bounding box - #starting_point = [row["neck"][sequence_index][0] - 3 * head_metric, - # row["leftEye"][sequence_index][1] + (head_metric / 2)] - starting_point = [row["neck"][sequence_index][0] - 1 * head_metric, - row["leftEye"][sequence_index][1] - head_metric/2] - ending_point = [row["neck"][sequence_index][0] + 1 * head_metric, - starting_point[1] + 3 * head_metric] - - last_starting_point, last_ending_point = starting_point, ending_point - - # Ensure that all of the bounding-box-defining coordinates are not out of the picture - if starting_point[0] < 0: starting_point[0] = 0 - if starting_point[1] > 1: starting_point[1] = 1 - if ending_point[0] < 0: ending_point[0] = 0 - if ending_point[1] > 1: ending_point[1] = 1 - - # Normalize individual landmarks and save the results - for identifier in BODY_IDENTIFIERS: - key = identifier - - # Prevent from trying to normalize incorrectly captured points - if row[key][sequence_index][0] == 0: - continue - - if (ending_point[0] - starting_point[0]) == 0 or (starting_point[1] - ending_point[1]) == 0: - logging.info("Problematic normalization") - valid_sequence = False - break - - normalized_x = (row[key][sequence_index][0] 
- starting_point[0]) / (ending_point[0] - starting_point[0]) - normalized_y = (row[key][sequence_index][1] - starting_point[1]) / (ending_point[1] - starting_point[1]) - - row[key][sequence_index] = list(row[key][sequence_index]) - - row[key][sequence_index][0] = normalized_x - row[key][sequence_index][1] = normalized_y - - if valid_sequence: - return row - - else: - return original_row - - -if __name__ == "__main__": - pass diff --git a/spaces/CVPR/regionclip-demo/detectron2/config/instantiate.py b/spaces/CVPR/regionclip-demo/detectron2/config/instantiate.py deleted file mode 100644 index cbb32e19ea518eee84941b20f58d1054e84d1937..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/config/instantiate.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import dataclasses -import logging -from collections import abc -from typing import Any - -from detectron2.utils.registry import _convert_target_to_string, locate - -__all__ = ["dump_dataclass", "instantiate"] - - -def dump_dataclass(obj: Any): - """ - Dump a dataclass recursively into a dict that can be later instantiated. - - Args: - obj: a dataclass object - - Returns: - dict - """ - assert dataclasses.is_dataclass(obj) and not isinstance( - obj, type - ), "dump_dataclass() requires an instance of a dataclass." - ret = {"_target_": _convert_target_to_string(type(obj))} - for f in dataclasses.fields(obj): - v = getattr(obj, f.name) - if dataclasses.is_dataclass(v): - v = dump_dataclass(v) - if isinstance(v, (list, tuple)): - v = [dump_dataclass(x) if dataclasses.is_dataclass(x) else x for x in v] - ret[f.name] = v - return ret - - -def instantiate(cfg): - """ - Recursively instantiate objects defined in dictionaries by - "_target_" and arguments. - - Args: - cfg: a dict-like object with "_target_" that defines the caller, and - other keys that define the arguments - - Returns: - object instantiated by cfg - """ - from omegaconf import ListConfig - - if isinstance(cfg, ListConfig): - lst = [instantiate(x) for x in cfg] - return ListConfig(lst, flags={"allow_objects": True}) - if isinstance(cfg, list): - # Specialize for list, because many classes take - # list[objects] as arguments, such as ResNet, DatasetMapper - return [instantiate(x) for x in cfg] - - if isinstance(cfg, abc.Mapping) and "_target_" in cfg: - # conceptually equivalent to hydra.utils.instantiate(cfg) with _convert_=all, - # but faster: https://github.com/facebookresearch/hydra/issues/1200 - cfg = {k: instantiate(v) for k, v in cfg.items()} - cls = cfg.pop("_target_") - cls = instantiate(cls) - - if isinstance(cls, str): - cls_name = cls - cls = locate(cls_name) - assert cls is not None, cls_name - else: - try: - cls_name = cls.__module__ + "." + cls.__qualname__ - except Exception: - # target could be anything, so the above could fail - cls_name = str(cls) - assert callable(cls), f"_target_ {cls} does not define a callable object" - try: - return cls(**cfg) - except TypeError: - logger = logging.getLogger(__name__) - logger.error(f"Error when instantiating {cls_name}!") - raise - return cfg # return as-is if don't know what to do diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/speech/gtts.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/speech/gtts.py deleted file mode 100644 index 1c3e9cae0567428582891b11eca42f82a64f5c8e..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/speech/gtts.py +++ /dev/null @@ -1,22 +0,0 @@ -""" GTTS Voice. 
""" -import os - -import gtts -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class GTTSVoice(VoiceBase): - """GTTS Voice.""" - - def _setup(self) -> None: - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Play the given text.""" - tts = gtts.gTTS(text) - tts.save("speech.mp3") - playsound("speech.mp3", True) - os.remove("speech.mp3") - return True diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/hit_screen/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/hit_screen/__init__.py deleted file mode 100644 index 1ff85781315f5802402a3dab620f26d14bf1c55c..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/hit_screen/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.utils import FrameAlignPolicy, Maker, make_gif_or_combined_gif - -img_dir = Path(__file__).parent / "images" - - -def hit_screen(images: List[BuildImage], texts, args): - params = ( - (((1, 10), (138, 1), (140, 119), (7, 154)), (32, 37)), - (((1, 10), (138, 1), (140, 121), (7, 154)), (32, 37)), - (((1, 10), (138, 1), (139, 125), (10, 159)), (32, 37)), - (((1, 12), (136, 1), (137, 125), (8, 159)), (34, 37)), - (((1, 9), (137, 1), (139, 122), (9, 154)), (35, 41)), - (((1, 8), (144, 1), (144, 123), (12, 155)), (30, 45)), - (((1, 8), (140, 1), (141, 121), (10, 155)), (29, 49)), - (((1, 9), (140, 1), (139, 118), (10, 153)), (27, 53)), - (((1, 7), (144, 1), (145, 117), (13, 153)), (19, 57)), - (((1, 7), (144, 1), (143, 116), (13, 153)), (19, 57)), - (((1, 8), (139, 1), (141, 119), (12, 154)), (19, 55)), - (((1, 13), (140, 1), (143, 117), (12, 156)), (16, 57)), - (((1, 10), (138, 1), (142, 117), (11, 149)), (14, 61)), - (((1, 10), (141, 1), (148, 125), (13, 153)), (11, 57)), - (((1, 12), (141, 1), (147, 130), (16, 150)), (11, 60)), - (((1, 15), (165, 1), (175, 135), (1, 171)), (-6, 46)), - ) - - def maker(i: int) -> Maker: - def make(img: BuildImage) -> BuildImage: - img = img.convert("RGBA").resize((140, 120), keep_ratio=True) - frame = BuildImage.open(img_dir / f"{i}.png") - if 6 <= i < 22: - points, pos = params[i - 6] - frame.paste(img.perspective(points), pos, below=True) - return frame - - return make - - return make_gif_or_combined_gif( - images[0], maker, 29, 0.2, FrameAlignPolicy.extend_first - ) - - -add_meme("hit_screen", hit_screen, min_images=1, max_images=1, keywords=["打穿", "打穿屏幕"]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/__main__.py deleted file mode 100644 index 7c74ad3c86e54cb7e9939ed2bf96aa59cc6dcd06..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/__main__.py +++ /dev/null @@ -1,35 +0,0 @@ -import sys - - -def main(args=None): - if args is None: - args = sys.argv[1:] - - # TODO Handle library-wide options. Eg.: - # --unicodedata - # --verbose / other logging stuff - - # TODO Allow a way to run arbitrary modules? Useful for setting - # library-wide options and calling another library. Eg.: - # - # $ fonttools --unicodedata=... fontmake ... - # - # This allows for a git-like command where thirdparty commands - # can be added. Should we just try importing the fonttools - # module first and try without if it fails? 
- - if len(sys.argv) < 2: - sys.argv.append("help") - if sys.argv[1] == "-h" or sys.argv[1] == "--help": - sys.argv[1] = "help" - mod = "fontTools." + sys.argv[1] - sys.argv[1] = sys.argv[0] + " " + sys.argv[1] - del sys.argv[0] - - import runpy - - runpy.run_module(mod, run_name="__main__") - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/base.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/base.py deleted file mode 100644 index 37f9097ab2595413066cebd102fdf697280a93bb..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/base.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools.ttLib.tables.DefaultTable import DefaultTable -import logging - - -log = logging.getLogger("fontTools.merge") - - -def add_method(*clazzes, **kwargs): - """Returns a decorator function that adds a new method to one or - more classes.""" - allowDefault = kwargs.get("allowDefaultTable", False) - - def wrapper(method): - done = [] - for clazz in clazzes: - if clazz in done: - continue # Support multiple names of a clazz - done.append(clazz) - assert allowDefault or clazz != DefaultTable, "Oops, table class not found." - assert ( - method.__name__ not in clazz.__dict__ - ), "Oops, class '%s' has method '%s'." % (clazz.__name__, method.__name__) - setattr(clazz, method.__name__, method) - return None - - return wrapper - - -def mergeObjects(lst): - lst = [item for item in lst if item is not NotImplemented] - if not lst: - return NotImplemented - lst = [item for item in lst if item is not None] - if not lst: - return None - - clazz = lst[0].__class__ - assert all(type(item) == clazz for item in lst), lst - - logic = clazz.mergeMap - returnTable = clazz() - returnDict = {} - - allKeys = set.union(set(), *(vars(table).keys() for table in lst)) - for key in allKeys: - try: - mergeLogic = logic[key] - except KeyError: - try: - mergeLogic = logic["*"] - except KeyError: - raise Exception( - "Don't know how to merge key %s of class %s" % (key, clazz.__name__) - ) - if mergeLogic is NotImplemented: - continue - value = mergeLogic(getattr(table, key, NotImplemented) for table in lst) - if value is not NotImplemented: - returnDict[key] = value - - returnTable.__dict__ = returnDict - - return returnTable - - -@add_method(DefaultTable, allowDefaultTable=True) -def merge(self, m, tables): - if not hasattr(self, "mergeMap"): - log.info("Don't know how to merge '%s'.", self.tableTag) - return NotImplemented - - logic = self.mergeMap - - if isinstance(logic, dict): - return m.mergeObjects(self, self.mergeMap, tables) - else: - return logic(tables) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-4364e66d.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-4364e66d.js deleted file mode 100644 index f008aa821b9d8f14132ff5648fad602431878df4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-4364e66d.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as r,e as h,s as v,k as g,o as w,z,v as k,x as B,a4 as C,P as R,p as S,R as q,A,F}from"./index-3370be2a.js";import{a as P}from"./Button-89624748.js";import{X}from"./Blocks-f0129fcd.js";function j(t){let 
i=t[9](t[3])+"",a;return{c(){a=R(i)},m(e,s){S(e,a,s)},p(e,s){s&520&&i!==(i=e[9](e[3])+"")&&q(a,i)},d(e){e&&A(a)}}}function D(t){let i,a;return i=new P({props:{variant:t[4],elem_id:t[0],elem_classes:t[1],size:t[6],scale:t[7],min_width:t[8],visible:t[2],disabled:t[5]==="static",$$slots:{default:[j]},$$scope:{ctx:t}}}),i.$on("click",t[10]),{c(){g(i.$$.fragment)},m(e,s){w(i,e,s),a=!0},p(e,[s]){const l={};s&16&&(l.variant=e[4]),s&1&&(l.elem_id=e[0]),s&2&&(l.elem_classes=e[1]),s&64&&(l.size=e[6]),s&128&&(l.scale=e[7]),s&256&&(l.min_width=e[8]),s&4&&(l.visible=e[2]),s&32&&(l.disabled=e[5]==="static"),s&2568&&(l.$$scope={dirty:s,ctx:e}),i.$set(l)},i(e){a||(z(i.$$.fragment,e),a=!0)},o(e){k(i.$$.fragment,e),a=!1},d(e){B(i,e)}}}function E(t,i,a){let e;C(t,X,n=>a(9,e=n));let{elem_id:s=""}=i,{elem_classes:l=[]}=i,{visible:m=!0}=i,{value:u}=i,{variant:_="secondary"}=i,{mode:f="dynamic"}=i,{size:o="lg"}=i,{scale:c=null}=i,{min_width:d=void 0}=i;function b(n){F.call(this,t,n)}return t.$$set=n=>{"elem_id"in n&&a(0,s=n.elem_id),"elem_classes"in n&&a(1,l=n.elem_classes),"visible"in n&&a(2,m=n.visible),"value"in n&&a(3,u=n.value),"variant"in n&&a(4,_=n.variant),"mode"in n&&a(5,f=n.mode),"size"in n&&a(6,o=n.size),"scale"in n&&a(7,c=n.scale),"min_width"in n&&a(8,d=n.min_width)},[s,l,m,u,_,f,o,c,d,e,b]}class G extends r{constructor(i){super(),h(this,i,E,D,v,{elem_id:0,elem_classes:1,visible:2,value:3,variant:4,mode:5,size:6,scale:7,min_width:8})}}const K=G,L=["static","dynamic"],M=t=>({type:{payload:"string"},description:{payload:"button label"},example_data:t.value||"Run"});export{K as Component,M as document,L as modes}; -//# sourceMappingURL=index-4364e66d.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/__init__.py deleted file mode 100644 index 4bf70b205a89b632751fbb7719ec1553c21e0666..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/__init__.py +++ /dev/null @@ -1,554 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# *********** -# `huggingface_hub` init has 2 modes: -# - Normal usage: -# If imported to use it, all modules and functions are lazy-loaded. This means -# they exist at top level in module but are imported only the first time they are -# used. This way, `from huggingface_hub import something` will import `something` -# quickly without the hassle of importing all the features from `huggingface_hub`. -# - Static check: -# If statically analyzed, all modules and functions are loaded normally. This way -# static typing check works properly as well as autocomplete in text editors and -# IDEs. -# -# The static model imports are done inside the `if TYPE_CHECKING:` statement at -# the bottom of this file. Since module/functions imports are duplicated, it is -# mandatory to make sure to add them twice when adding one. 
This is checked in the -# `make quality` command. -# -# To update the static imports, please run the following command and commit the changes. -# ``` -# # Use script -# python utils/check_static_imports.py --update-file -# -# # Or run style on codebase -# make style -# ``` -# -# *********** -# Lazy loader vendored from https://github.com/scientific-python/lazy_loader -import importlib -import os -import sys -from typing import TYPE_CHECKING - - -__version__ = "0.16.4" - -# Alphabetical order of definitions is ensured in tests -# WARNING: any comment added in this dictionary definition will be lost when -# re-generating the file ! -_SUBMOD_ATTRS = { - "_commit_scheduler": [ - "CommitScheduler", - ], - "_login": [ - "interpreter_login", - "login", - "logout", - "notebook_login", - ], - "_multi_commits": [ - "MultiCommitException", - "plan_multi_commits", - ], - "_snapshot_download": [ - "snapshot_download", - ], - "_space_api": [ - "SpaceHardware", - "SpaceRuntime", - "SpaceStage", - ], - "_tensorboard_logger": [ - "HFSummaryWriter", - ], - "_webhooks_payload": [ - "WebhookPayload", - "WebhookPayloadComment", - "WebhookPayloadDiscussion", - "WebhookPayloadDiscussionChanges", - "WebhookPayloadEvent", - "WebhookPayloadMovedTo", - "WebhookPayloadRepo", - "WebhookPayloadUrl", - "WebhookPayloadWebhook", - ], - "_webhooks_server": [ - "WebhooksServer", - "webhook_endpoint", - ], - "community": [ - "Discussion", - "DiscussionComment", - "DiscussionCommit", - "DiscussionEvent", - "DiscussionStatusChange", - "DiscussionTitleChange", - "DiscussionWithDetails", - ], - "constants": [ - "CONFIG_NAME", - "FLAX_WEIGHTS_NAME", - "HUGGINGFACE_CO_URL_HOME", - "HUGGINGFACE_CO_URL_TEMPLATE", - "PYTORCH_WEIGHTS_NAME", - "REPO_TYPE_DATASET", - "REPO_TYPE_MODEL", - "REPO_TYPE_SPACE", - "TF2_WEIGHTS_NAME", - "TF_WEIGHTS_NAME", - ], - "fastai_utils": [ - "_save_pretrained_fastai", - "from_pretrained_fastai", - "push_to_hub_fastai", - ], - "file_download": [ - "HfFileMetadata", - "_CACHED_NO_EXIST", - "cached_download", - "get_hf_file_metadata", - "hf_hub_download", - "hf_hub_url", - "try_to_load_from_cache", - ], - "hf_api": [ - "CommitInfo", - "CommitOperation", - "CommitOperationAdd", - "CommitOperationCopy", - "CommitOperationDelete", - "DatasetSearchArguments", - "GitCommitInfo", - "GitRefInfo", - "GitRefs", - "HfApi", - "ModelSearchArguments", - "RepoUrl", - "UserLikes", - "add_space_secret", - "change_discussion_status", - "comment_discussion", - "create_branch", - "create_commit", - "create_commits_on_pr", - "create_discussion", - "create_pull_request", - "create_repo", - "create_tag", - "dataset_info", - "delete_branch", - "delete_file", - "delete_folder", - "delete_repo", - "delete_space_secret", - "delete_tag", - "duplicate_space", - "edit_discussion_comment", - "get_dataset_tags", - "get_discussion_details", - "get_full_repo_name", - "get_model_tags", - "get_repo_discussions", - "get_space_runtime", - "get_token_permission", - "like", - "list_datasets", - "list_files_info", - "list_liked_repos", - "list_metrics", - "list_models", - "list_repo_commits", - "list_repo_files", - "list_repo_refs", - "list_spaces", - "merge_pull_request", - "model_info", - "move_repo", - "pause_space", - "rename_discussion", - "repo_info", - "repo_type_and_id_from_hf_id", - "request_space_hardware", - "restart_space", - "run_as_future", - "set_space_sleep_time", - "space_info", - "unlike", - "update_repo_visibility", - "upload_file", - "upload_folder", - "whoami", - ], - "hf_file_system": [ - "HfFileSystem", - 
"HfFileSystemFile", - "HfFileSystemResolvedPath", - ], - "hub_mixin": [ - "ModelHubMixin", - "PyTorchModelHubMixin", - ], - "inference._client": [ - "InferenceClient", - "InferenceTimeoutError", - ], - "inference._generated._async_client": [ - "AsyncInferenceClient", - ], - "inference_api": [ - "InferenceApi", - ], - "keras_mixin": [ - "KerasModelHubMixin", - "from_pretrained_keras", - "push_to_hub_keras", - "save_pretrained_keras", - ], - "repocard": [ - "DatasetCard", - "ModelCard", - "RepoCard", - "SpaceCard", - "metadata_eval_result", - "metadata_load", - "metadata_save", - "metadata_update", - ], - "repocard_data": [ - "CardData", - "DatasetCardData", - "EvalResult", - "ModelCardData", - "SpaceCardData", - ], - "repository": [ - "Repository", - ], - "utils": [ - "CacheNotFound", - "CachedFileInfo", - "CachedRepoInfo", - "CachedRevisionInfo", - "CorruptedCacheException", - "DeleteCacheStrategy", - "HFCacheInfo", - "HfFolder", - "cached_assets_path", - "configure_http_backend", - "dump_environment_info", - "get_session", - "logging", - "scan_cache_dir", - ], - "utils.endpoint_helpers": [ - "DatasetFilter", - "ModelFilter", - ], -} - - -def _attach(package_name, submodules=None, submod_attrs=None): - """Attach lazily loaded submodules, functions, or other attributes. - - Typically, modules import submodules and attributes as follows: - - ```py - import mysubmodule - import anothersubmodule - - from .foo import someattr - ``` - - The idea is to replace a package's `__getattr__`, `__dir__`, and - `__all__`, such that all imports work exactly the way they would - with normal imports, except that the import occurs upon first use. - - The typical way to call this function, replacing the above imports, is: - - ```python - __getattr__, __dir__, __all__ = lazy.attach( - __name__, - ['mysubmodule', 'anothersubmodule'], - {'foo': ['someattr']} - ) - ``` - This functionality requires Python 3.7 or higher. - - Args: - package_name (`str`): - Typically use `__name__`. - submodules (`set`): - List of submodules to attach. - submod_attrs (`dict`): - Dictionary of submodule -> list of attributes / functions. - These attributes are imported as they are used. - - Returns: - __getattr__, __dir__, __all__ - - """ - if submod_attrs is None: - submod_attrs = {} - - if submodules is None: - submodules = set() - else: - submodules = set(submodules) - - attr_to_modules = {attr: mod for mod, attrs in submod_attrs.items() for attr in attrs} - - __all__ = list(submodules | attr_to_modules.keys()) - - def __getattr__(name): - if name in submodules: - return importlib.import_module(f"{package_name}.{name}") - elif name in attr_to_modules: - submod_path = f"{package_name}.{attr_to_modules[name]}" - submod = importlib.import_module(submod_path) - attr = getattr(submod, name) - - # If the attribute lives in a file (module) with the same - # name as the attribute, ensure that the attribute and *not* - # the module is accessible on the package. - if name == attr_to_modules[name]: - pkg = sys.modules[package_name] - pkg.__dict__[name] = attr - - return attr - else: - raise AttributeError(f"No {package_name} attribute {name}") - - def __dir__(): - return __all__ - - if os.environ.get("EAGER_IMPORT", ""): - for attr in set(attr_to_modules.keys()) | submodules: - __getattr__(attr) - - return __getattr__, __dir__, list(__all__) - - -__getattr__, __dir__, __all__ = _attach(__name__, submodules=[], submod_attrs=_SUBMOD_ATTRS) - -# WARNING: any content below this statement is generated automatically. 
Any manual edit -# will be lost when re-generating this file ! -# -# To update the static imports, please run the following command and commit the changes. -# ``` -# # Use script -# python utils/check_static_imports.py --update-file -# -# # Or run style on codebase -# make style -# ``` -if TYPE_CHECKING: # pragma: no cover - from ._commit_scheduler import CommitScheduler # noqa: F401 - from ._login import ( - interpreter_login, # noqa: F401 - login, # noqa: F401 - logout, # noqa: F401 - notebook_login, # noqa: F401 - ) - from ._multi_commits import ( - MultiCommitException, # noqa: F401 - plan_multi_commits, # noqa: F401 - ) - from ._snapshot_download import snapshot_download # noqa: F401 - from ._space_api import ( - SpaceHardware, # noqa: F401 - SpaceRuntime, # noqa: F401 - SpaceStage, # noqa: F401 - ) - from ._tensorboard_logger import HFSummaryWriter # noqa: F401 - from ._webhooks_payload import ( - WebhookPayload, # noqa: F401 - WebhookPayloadComment, # noqa: F401 - WebhookPayloadDiscussion, # noqa: F401 - WebhookPayloadDiscussionChanges, # noqa: F401 - WebhookPayloadEvent, # noqa: F401 - WebhookPayloadMovedTo, # noqa: F401 - WebhookPayloadRepo, # noqa: F401 - WebhookPayloadUrl, # noqa: F401 - WebhookPayloadWebhook, # noqa: F401 - ) - from ._webhooks_server import ( - WebhooksServer, # noqa: F401 - webhook_endpoint, # noqa: F401 - ) - from .community import ( - Discussion, # noqa: F401 - DiscussionComment, # noqa: F401 - DiscussionCommit, # noqa: F401 - DiscussionEvent, # noqa: F401 - DiscussionStatusChange, # noqa: F401 - DiscussionTitleChange, # noqa: F401 - DiscussionWithDetails, # noqa: F401 - ) - from .constants import ( - CONFIG_NAME, # noqa: F401 - FLAX_WEIGHTS_NAME, # noqa: F401 - HUGGINGFACE_CO_URL_HOME, # noqa: F401 - HUGGINGFACE_CO_URL_TEMPLATE, # noqa: F401 - PYTORCH_WEIGHTS_NAME, # noqa: F401 - REPO_TYPE_DATASET, # noqa: F401 - REPO_TYPE_MODEL, # noqa: F401 - REPO_TYPE_SPACE, # noqa: F401 - TF2_WEIGHTS_NAME, # noqa: F401 - TF_WEIGHTS_NAME, # noqa: F401 - ) - from .fastai_utils import ( - _save_pretrained_fastai, # noqa: F401 - from_pretrained_fastai, # noqa: F401 - push_to_hub_fastai, # noqa: F401 - ) - from .file_download import ( - _CACHED_NO_EXIST, # noqa: F401 - HfFileMetadata, # noqa: F401 - cached_download, # noqa: F401 - get_hf_file_metadata, # noqa: F401 - hf_hub_download, # noqa: F401 - hf_hub_url, # noqa: F401 - try_to_load_from_cache, # noqa: F401 - ) - from .hf_api import ( - CommitInfo, # noqa: F401 - CommitOperation, # noqa: F401 - CommitOperationAdd, # noqa: F401 - CommitOperationCopy, # noqa: F401 - CommitOperationDelete, # noqa: F401 - DatasetSearchArguments, # noqa: F401 - GitCommitInfo, # noqa: F401 - GitRefInfo, # noqa: F401 - GitRefs, # noqa: F401 - HfApi, # noqa: F401 - ModelSearchArguments, # noqa: F401 - RepoUrl, # noqa: F401 - UserLikes, # noqa: F401 - add_space_secret, # noqa: F401 - change_discussion_status, # noqa: F401 - comment_discussion, # noqa: F401 - create_branch, # noqa: F401 - create_commit, # noqa: F401 - create_commits_on_pr, # noqa: F401 - create_discussion, # noqa: F401 - create_pull_request, # noqa: F401 - create_repo, # noqa: F401 - create_tag, # noqa: F401 - dataset_info, # noqa: F401 - delete_branch, # noqa: F401 - delete_file, # noqa: F401 - delete_folder, # noqa: F401 - delete_repo, # noqa: F401 - delete_space_secret, # noqa: F401 - delete_tag, # noqa: F401 - duplicate_space, # noqa: F401 - edit_discussion_comment, # noqa: F401 - get_dataset_tags, # noqa: F401 - get_discussion_details, # noqa: F401 - get_full_repo_name, # 
noqa: F401 - get_model_tags, # noqa: F401 - get_repo_discussions, # noqa: F401 - get_space_runtime, # noqa: F401 - get_token_permission, # noqa: F401 - like, # noqa: F401 - list_datasets, # noqa: F401 - list_files_info, # noqa: F401 - list_liked_repos, # noqa: F401 - list_metrics, # noqa: F401 - list_models, # noqa: F401 - list_repo_commits, # noqa: F401 - list_repo_files, # noqa: F401 - list_repo_refs, # noqa: F401 - list_spaces, # noqa: F401 - merge_pull_request, # noqa: F401 - model_info, # noqa: F401 - move_repo, # noqa: F401 - pause_space, # noqa: F401 - rename_discussion, # noqa: F401 - repo_info, # noqa: F401 - repo_type_and_id_from_hf_id, # noqa: F401 - request_space_hardware, # noqa: F401 - restart_space, # noqa: F401 - run_as_future, # noqa: F401 - set_space_sleep_time, # noqa: F401 - space_info, # noqa: F401 - unlike, # noqa: F401 - update_repo_visibility, # noqa: F401 - upload_file, # noqa: F401 - upload_folder, # noqa: F401 - whoami, # noqa: F401 - ) - from .hf_file_system import ( - HfFileSystem, # noqa: F401 - HfFileSystemFile, # noqa: F401 - HfFileSystemResolvedPath, # noqa: F401 - ) - from .hub_mixin import ( - ModelHubMixin, # noqa: F401 - PyTorchModelHubMixin, # noqa: F401 - ) - from .inference._client import ( - InferenceClient, # noqa: F401 - InferenceTimeoutError, # noqa: F401 - ) - from .inference._generated._async_client import AsyncInferenceClient # noqa: F401 - from .inference_api import InferenceApi # noqa: F401 - from .keras_mixin import ( - KerasModelHubMixin, # noqa: F401 - from_pretrained_keras, # noqa: F401 - push_to_hub_keras, # noqa: F401 - save_pretrained_keras, # noqa: F401 - ) - from .repocard import ( - DatasetCard, # noqa: F401 - ModelCard, # noqa: F401 - RepoCard, # noqa: F401 - SpaceCard, # noqa: F401 - metadata_eval_result, # noqa: F401 - metadata_load, # noqa: F401 - metadata_save, # noqa: F401 - metadata_update, # noqa: F401 - ) - from .repocard_data import ( - CardData, # noqa: F401 - DatasetCardData, # noqa: F401 - EvalResult, # noqa: F401 - ModelCardData, # noqa: F401 - SpaceCardData, # noqa: F401 - ) - from .repository import Repository # noqa: F401 - from .utils import ( - CachedFileInfo, # noqa: F401 - CachedRepoInfo, # noqa: F401 - CachedRevisionInfo, # noqa: F401 - CacheNotFound, # noqa: F401 - CorruptedCacheException, # noqa: F401 - DeleteCacheStrategy, # noqa: F401 - HFCacheInfo, # noqa: F401 - HfFolder, # noqa: F401 - cached_assets_path, # noqa: F401 - configure_http_backend, # noqa: F401 - dump_environment_info, # noqa: F401 - get_session, # noqa: F401 - logging, # noqa: F401 - scan_cache_dir, # noqa: F401 - ) - from .utils.endpoint_helpers import ( - DatasetFilter, # noqa: F401 - ModelFilter, # noqa: F401 - ) diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.h b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index dc6e713694d3fcca0e06cecfb9437ffb4932ffe6..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. - -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. 
Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/Eddycrack864/Applio-Inference/Makefile b/spaces/Eddycrack864/Applio-Inference/Makefile deleted file mode 100644 index 44de020e6feb7fcd58016d7c3c736681f533b597..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/Makefile +++ /dev/null @@ -1,63 +0,0 @@ -.PHONY: -.ONESHELL: - -help: ## Show this help and exit - @grep -hE '^[A-Za-z0-9_ \-]*?:.*##.*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' - -install: ## Install dependencies (Do everytime you start up a paperspace machine) - apt-get -y install build-essential python3-dev ffmpeg - pip install --upgrade setuptools wheel - pip install --upgrade pip - pip install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.1 - pip install -r requirements.txt - pip install --upgrade lxml - apt-get update - apt -y install -qq aria2 - -basev1: ## Download version 1 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d pretrained -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d pretrained -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d pretrained -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d pretrained -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d pretrained -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d pretrained -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d pretrained -o f0D32k.pth - aria2c 
--console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d pretrained -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d pretrained -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d pretrained -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d pretrained -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d pretrained -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -basev2: ## Download version 2 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained_v2 uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d pretrained_v2 -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d pretrained_v2 -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d pretrained_v2 -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d pretrained_v2 -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d pretrained_v2 -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d pretrained_v2 -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d pretrained_v2 -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d pretrained_v2 -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d pretrained_v2 -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d pretrained_v2 -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d pretrained_v2 -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M 
https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d pretrained_v2 -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -run-ui: ## Run the python GUI - python infer-web.py --paperspace --pycmd python - -run-cli: ## Run the python CLI - python infer-web.py --pycmd python --is_cli - -tensorboard: ## Start the tensorboard (Run on separate terminal) - echo https://tensorboard-$$(hostname).clg07azjl.paperspacegradient.com - tensorboard --logdir logs --bind_all \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500.py deleted file mode 100644 index 7121ef83297d3a1976c9b62d2b47f0b5ba52bd66..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_sgd_1200e.py', - '../../_base_/det_models/drrg_r50_fpn_unet.py', - '../../_base_/det_datasets/ctw1500.py', - '../../_base_/det_pipelines/drrg_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=20, metric='hmean-iou') diff --git a/spaces/Felix123456/bingo/src/components/ui/voice/index.tsx b/spaces/Felix123456/bingo/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/Felix123456/bingo/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
        - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
        - ) - })} -
        - ) -} diff --git a/spaces/Fernando22/freegpt-webui/client/css/conversation.css b/spaces/Fernando22/freegpt-webui/client/css/conversation.css deleted file mode 100644 index d20f178c45e8ccbfc9539f99914b25fc572045bd..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/client/css/conversation.css +++ /dev/null @@ -1,158 +0,0 @@ -.conversation { - width: 60%; - margin: 0px 16px; - display: flex; - flex-direction: column; -} - -.conversation #messages { - width: 100%; - display: flex; - flex-direction: column; - overflow: auto; - overflow-wrap: break-word; - padding-bottom: 8px; -} - -.conversation .user-input { - max-height: 180px; - margin: 16px 0px; -} - -.conversation .user-input input { - font-size: 1rem; - background: none; - border: none; - outline: none; - color: var(--colour-3); -} - -.conversation .user-input input::placeholder { - color: var(--user-input); -} - -.conversation-title { - color: var(--colour-3); - font-size: 14px; -} - -.conversation .user-input textarea { - font-size: 1rem; - width: 100%; - height: 100%; - padding: 12px; - background: none; - border: none; - outline: none; - color: var(--colour-3); - resize: vertical; - max-height: 150px; - min-height: 80px; -} - -.box { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - height: 100%; - width: 100%; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); -} - -.box.input-box { - position: relative; - align-items: center; - padding: 8px; - cursor: pointer; -} - -#send-button { - position: absolute; - bottom: 25%; - right: 10px; - z-index: 1; - padding: 16px; -} - -#cursor { - line-height: 17px; - margin-left: 3px; - -webkit-animation: blink 0.8s infinite; - animation: blink 0.8s infinite; - width: 7px; - height: 15px; -} - -@keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -@-webkit-keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -/* scrollbar */ -.conversation #messages::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -.conversation #messages::-webkit-scrollbar-track { - background-color: #ffffff00; -} - -.conversation #messages::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} - -@media screen and (max-width: 990px) { - .conversation { - width: 100%; - height: 90%; - } -} - -@media screen and (max-height: 720px) { - .conversation.box { - height: 70%; - } - - .conversation .user-input textarea { - font-size: 0.875rem; - } -} - -@media screen and (max-width: 360px) { - .box { - border-radius: 0; - } - .conversation { - margin: 0; - margin-top: 48px; - } - .conversation .user-input { - margin: 2px 0 8px 0; - } -} diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/whisper/model.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/whisper/model.py deleted file mode 100644 index cb3781c17a1e78a33bf62246e5134e8512206d0d..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/whisper/model.py +++ /dev/null @@ -1,269 +0,0 @@ -from dataclasses import dataclass -from typing import Dict -from typing import Iterable, Optional - -import numpy as np -import torch -import torch.nn.functional as F -from torch import Tensor -from torch import nn - -from .decoding import detect_language as detect_language_function, decode as decode_function - - 
-@dataclass -class ModelDimensions: - n_mels: int - n_audio_ctx: int - n_audio_state: int - n_audio_head: int - n_audio_layer: int - n_vocab: int - n_text_ctx: int - n_text_state: int - n_text_head: int - n_text_layer: int - - -class LayerNorm(nn.LayerNorm): - def forward(self, x: Tensor) -> Tensor: - return super().forward(x.float()).type(x.dtype) - - -class Linear(nn.Linear): - def forward(self, x: Tensor) -> Tensor: - return F.linear( - x, self.weight.to(x.dtype), None if self.bias is None else self.bias.to(x.dtype) - ) - - -class Conv1d(nn.Conv1d): - def _conv_forward(self, x: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor: - return super()._conv_forward( - x, weight.to(x.dtype), None if bias is None else bias.to(x.dtype) - ) - - -def sinusoids(length, channels, max_timescale=10000): - """Returns sinusoids for positional embedding""" - assert channels % 2 == 0 - log_timescale_increment = np.log(max_timescale) / (channels // 2 - 1) - inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2)) - scaled_time = torch.arange(length)[:, np.newaxis] * inv_timescales[np.newaxis, :] - return torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], dim=1) - - -class MultiHeadAttention(nn.Module): - def __init__(self, n_state: int, n_head: int): - super().__init__() - self.n_head = n_head - self.query = Linear(n_state, n_state) - self.key = Linear(n_state, n_state, bias=False) - self.value = Linear(n_state, n_state) - self.out = Linear(n_state, n_state) - - def forward( - self, - x: Tensor, - xa: Optional[Tensor] = None, - mask: Optional[Tensor] = None, - kv_cache: Optional[dict] = None, - ): - q = self.query(x) - - if kv_cache is None or xa is None or self.key not in kv_cache: - # hooks, if installed (i.e. kv_cache is not None), will prepend the cached kv tensors; - # otherwise, perform key/value projections for self- or cross-attention as usual. - k = self.key(x if xa is None else xa) - v = self.value(x if xa is None else xa) - else: - # for cross-attention, calculate keys and values once and reuse in subsequent calls. 
- k = kv_cache[self.key] - v = kv_cache[self.value] - - wv, qk = self.qkv_attention(q, k, v, mask) - return self.out(wv), qk - - def qkv_attention(self, q: Tensor, k: Tensor, v: Tensor, mask: Optional[Tensor] = None): - n_batch, n_ctx, n_state = q.shape - scale = (n_state // self.n_head) ** -0.25 - q = q.view(*q.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) * scale - k = k.view(*k.shape[:2], self.n_head, -1).permute(0, 2, 3, 1) * scale - v = v.view(*v.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) - - qk = q @ k - if mask is not None: - qk = qk + mask[:n_ctx, :n_ctx] - qk = qk.float() - - w = F.softmax(qk, dim=-1).to(q.dtype) - return (w @ v).permute(0, 2, 1, 3).flatten(start_dim=2), qk.detach() - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, n_state: int, n_head: int, cross_attention: bool = False): - super().__init__() - - self.attn = MultiHeadAttention(n_state, n_head) - self.attn_ln = LayerNorm(n_state) - - self.cross_attn = MultiHeadAttention(n_state, n_head) if cross_attention else None - self.cross_attn_ln = LayerNorm(n_state) if cross_attention else None - - n_mlp = n_state * 4 - self.mlp = nn.Sequential(Linear(n_state, n_mlp), nn.GELU(), Linear(n_mlp, n_state)) - self.mlp_ln = LayerNorm(n_state) - - def forward( - self, - x: Tensor, - xa: Optional[Tensor] = None, - mask: Optional[Tensor] = None, - kv_cache: Optional[dict] = None, - ): - x = x + self.attn(self.attn_ln(x), mask=mask, kv_cache=kv_cache)[0] - if self.cross_attn: - x = x + self.cross_attn(self.cross_attn_ln(x), xa, kv_cache=kv_cache)[0] - x = x + self.mlp(self.mlp_ln(x)) - return x - - -class AudioEncoder(nn.Module): - def __init__(self, n_mels: int, n_ctx: int, n_state: int, n_head: int, n_layer: int): - super().__init__() - self.conv1 = Conv1d(n_mels, n_state, kernel_size=3, padding=1) - self.conv2 = Conv1d(n_state, n_state, kernel_size=3, stride=2, padding=1) - self.register_buffer("positional_embedding", sinusoids(n_ctx, n_state)) - - self.blocks: Iterable[ResidualAttentionBlock] = nn.ModuleList( - [ResidualAttentionBlock(n_state, n_head) for _ in range(n_layer)] - ) - self.ln_post = LayerNorm(n_state) - - def forward(self, x: Tensor): - """ - x : torch.Tensor, shape = (batch_size, n_mels, n_ctx) - the mel spectrogram of the audio - """ - x = F.gelu(self.conv1(x)) - x = F.gelu(self.conv2(x)) - x = x.permute(0, 2, 1) - - len_x = x.shape[1] - len_e = self.positional_embedding.shape[0] - assert len_x <= len_e, "incorrect audio shape" - pos_e = self.positional_embedding[:len_x, :] - x = (x + pos_e).to(x.dtype) - - for block in self.blocks: - x = block(x) - - x = self.ln_post(x) - return x - - -class TextDecoder(nn.Module): - def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int): - super().__init__() - - self.token_embedding = nn.Embedding(n_vocab, n_state) - self.positional_embedding = nn.Parameter(torch.empty(n_ctx, n_state)) - - self.blocks: Iterable[ResidualAttentionBlock] = nn.ModuleList( - [ResidualAttentionBlock(n_state, n_head, cross_attention=True) for _ in range(n_layer)] - ) - self.ln = LayerNorm(n_state) - - mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1) - self.register_buffer("mask", mask, persistent=False) - - def forward(self, x: Tensor, xa: Tensor, kv_cache: Optional[dict] = None): - """ - x : torch.LongTensor, shape = (batch_size, <= n_ctx) - the text tokens - xa : torch.Tensor, shape = (batch_size, n_mels, n_audio_ctx) - the encoded audio features to be attended on - """ - offset = next(iter(kv_cache.values())).shape[1] if kv_cache else 0 
- x = self.token_embedding(x) + self.positional_embedding[offset : offset + x.shape[-1]] - x = x.to(xa.dtype) - - for block in self.blocks: - x = block(x, xa, mask=self.mask, kv_cache=kv_cache) - - x = self.ln(x) - logits = (x @ torch.transpose(self.token_embedding.weight.to(x.dtype), 0, 1)).float() - - return logits - - -class Whisper(nn.Module): - def __init__(self, dims: ModelDimensions): - super().__init__() - self.dims = dims - self.encoder = AudioEncoder( - self.dims.n_mels, - self.dims.n_audio_ctx, - self.dims.n_audio_state, - self.dims.n_audio_head, - self.dims.n_audio_layer, - ) - self.decoder = TextDecoder( - self.dims.n_vocab, - self.dims.n_text_ctx, - self.dims.n_text_state, - self.dims.n_text_head, - self.dims.n_text_layer, - ) - - def embed_audio(self, mel: torch.Tensor): - return self.encoder(mel) - - def logits(self, tokens: torch.Tensor, audio_features: torch.Tensor): - return self.decoder(tokens, audio_features) - - def forward(self, mel: torch.Tensor, tokens: torch.Tensor) -> Dict[str, torch.Tensor]: - return self.decoder(tokens, self.encoder(mel)) - - @property - def device(self): - return next(self.parameters()).device - - @property - def is_multilingual(self): - return self.dims.n_vocab == 51865 - - def install_kv_cache_hooks(self, cache: Optional[dict] = None): - """ - The `MultiHeadAttention` module optionally accepts `kv_cache` which stores the key and value - tensors calculated for the previous positions. This method returns a dictionary that stores - all caches, and the necessary hooks for the key and value projection modules that save the - intermediate tensors to be reused during later calculations. - - Returns - ------- - cache : Dict[nn.Module, torch.Tensor] - A dictionary object mapping the key/value projection modules to its cache - hooks : List[RemovableHandle] - List of PyTorch RemovableHandle objects to stop the hooks to be called - """ - cache = {**cache} if cache is not None else {} - hooks = [] - - def save_to_cache(module, _, output): - if module not in cache or output.shape[1] > self.decoder.positional_embedding.shape[0]: - cache[module] = output # save as-is, for the first token or cross attention - else: - cache[module] = torch.cat([cache[module], output], dim=1).detach() - return cache[module] - - def install_hooks(layer: nn.Module): - if isinstance(layer, MultiHeadAttention): - hooks.append(layer.key.register_forward_hook(save_to_cache)) - hooks.append(layer.value.register_forward_hook(save_to_cache)) - - self.decoder.apply(install_hooks) - return cache, hooks - - detect_language = detect_language_function - decode = decode_function diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/README.md b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/README.md deleted file mode 100644 index 8c1ef83c99cef1d5484a08ceb73a31bcbd5961cf..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Vits Fast Finetuning Umamusume -emoji: 📊 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/ngu_dialect.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/ngu_dialect.py deleted file mode 100644 index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000 --- 
a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC(dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/commons.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/commons.py deleted file mode 100644 index ccd334b7320543b0c3a2166f82093564c9721317..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/commons.py +++ /dev/null @@ -1,167 +0,0 @@ -import math - -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = 
min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/nets_33966KB.py deleted file mode 100644 index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/nets_33966KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import layers_33966KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16, 32)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/README.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/README.md deleted file mode 100644 index 118e930c12bffd9e6da1df03180f5c9a8dcaabc3..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/README.md +++ /dev/null @@ -1,272 +0,0 @@ -

        - -👀[**Demos**](#-demos-videos) **|** 🚩[**Updates**](#-updates) **|** ⚡[**Usage**](#-quick-inference) **|** 🏰[**Model Zoo**](docs/model_zoo.md) **|** 🔧[Install](#-dependencies-and-installation) **|** 💻[Train](docs/Training.md) **|** ❓[FAQ](docs/FAQ.md) **|** 🎨[Contribution](docs/CONTRIBUTING.md) - -[![download](https://img.shields.io/github/downloads/xinntao/Real-ESRGAN/total.svg)](https://github.com/xinntao/Real-ESRGAN/releases) -[![PyPI](https://img.shields.io/pypi/v/realesrgan)](https://pypi.org/project/realesrgan/) -[![Open issue](https://img.shields.io/github/issues/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues) -[![Closed issue](https://img.shields.io/github/issues-closed/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues) -[![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE) -[![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml) -[![Publish-pip](https://github.com/xinntao/Real-ESRGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml) - -
        - -🔥 **AnimeVideo-v3 model (动漫视频小模型)**. Please see [[*anime video models*](docs/anime_video_model.md)] and [[*comparisons*](docs/anime_comparisons.md)]
        -🔥 **RealESRGAN_x4plus_anime_6B** for anime images **(动漫插图模型)**. Please see [[*anime_model*](docs/anime_model.md)] - - -1. :boom: **Update** online Replicate demo: [![Replicate](https://img.shields.io/static/v1?label=Demo&message=Replicate&color=blue)](https://replicate.com/xinntao/realesrgan) -1. Online Colab demo for Real-ESRGAN: [![Colab](https://img.shields.io/static/v1?label=Demo&message=Colab&color=orange)](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) **|** Online Colab demo for for Real-ESRGAN (**anime videos**): [![Colab](https://img.shields.io/static/v1?label=Demo&message=Colab&color=orange)](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) -1. Portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**. You can find more information [here](#portable-executable-files-ncnn). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan) - - -Real-ESRGAN aims at developing **Practical Algorithms for General Image/Video Restoration**.
-We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data. - -🌌 Thanks for your valuable feedback/suggestions. All the feedback is tracked in [feedback.md](docs/feedback.md). - ---- - -If Real-ESRGAN is helpful, please help to ⭐ this repo or recommend it to your friends 😊
        -Other recommended projects:
        -▶️ [GFPGAN](https://github.com/TencentARC/GFPGAN): A practical algorithm for real-world face restoration
        -▶️ [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox
-▶️ [facexlib](https://github.com/xinntao/facexlib): A collection that provides useful face-related functions.
-▶️ [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer that is handy for viewing and comparison
        -▶️ [HandyFigure](https://github.com/xinntao/HandyFigure): Open source of paper figures
        - ---- - -### 📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data - -> [[Paper](https://arxiv.org/abs/2107.10833)]   [[YouTube Video](https://www.youtube.com/watch?v=fxHWoDSSvSc)]   [[B站讲解](https://www.bilibili.com/video/BV1H34y1m7sS/)]   [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)]   [[PPT slides](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]
        -> [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)
        -> [Tencent ARC Lab](https://arc.tencent.com/en/ai-demos/imgRestore); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences - -


        - ---- - - -## 🚩 Updates - -- ✅ Add the **realesr-general-x4v3** model - a tiny small model for general scenes. It also supports the **--dn** option to balance the noise (avoiding over-smooth results). **--dn** is short for denoising strength. -- ✅ Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs/anime_video_model.md) and [comparisons](docs/anime_comparisons.md) for more details. -- ✅ Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md). -- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan). -- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md) -- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](docs/Training.md#Finetune-Real-ESRGAN-on-your-own-dataset) -- ✅ Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**. -- ✅ Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391) -- ✅ Support arbitrary scale with `--outscale` (It actually further resizes outputs with `LANCZOS4`). Add *RealESRGAN_x2plus.pth* model. -- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images. -- ✅ The training codes have been released. A detailed guide can be found in [Training.md](docs/Training.md). - ---- - - -## 👀 Demos Videos - -#### Bilibili - -- [大闹天宫片段](https://www.bilibili.com/video/BV1ja41117zb) -- [Anime dance cut 动漫魔性舞蹈](https://www.bilibili.com/video/BV1wY4y1L7hT/) -- [海贼王片段](https://www.bilibili.com/video/BV1i3411L7Gy/) - -#### YouTube - -## 🔧 Dependencies and Installation - -- Python >= 3.7 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html)) -- [PyTorch >= 1.7](https://pytorch.org/) - -### Installation - -1. Clone repo - - ```bash - git clone https://github.com/xinntao/Real-ESRGAN.git - cd Real-ESRGAN - ``` - -1. Install dependent packages - - ```bash - # Install basicsr - https://github.com/xinntao/BasicSR - # We use BasicSR for both training and inference - pip install basicsr - # facexlib and gfpgan are for face enhancement - pip install facexlib - pip install gfpgan - pip install -r requirements.txt - python setup.py develop - ``` - ---- - -## ⚡ Quick Inference - -There are usually three ways to inference Real-ESRGAN. - -1. [Online inference](#online-inference) -1. [Portable executable files (NCNN)](#portable-executable-files-ncnn) -1. [Python script](#python-script) - -### Online inference - -1. You can try in our website: [ARC Demo](https://arc.tencent.com/en/ai-demos/imgRestore) (now only support RealESRGAN_x4plus_anime_6B) -1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN **|** [Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**). 
- -### Portable executable files (NCNN) - -You can download [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**. - -This executable file is **portable** and includes all the binaries and models required. No CUDA or PyTorch environment is needed.
        - -You can simply run the following command (the Windows example, more information is in the README.md of each executable files): - -```bash -./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name -``` - -We have provided five models: - -1. realesrgan-x4plus (default) -2. realesrnet-x4plus -3. realesrgan-x4plus-anime (optimized for anime images, small model size) -4. realesr-animevideov3 (animation video) - -You can use the `-n` argument for other models, for example, `./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus` - -#### Usage of portable executable files - -1. Please refer to [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages) for more details. -1. Note that it does not support all the functions (such as `outscale`) as the python script `inference_realesrgan.py`. - -```console -Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]... - - -h show this help - -i input-path input image path (jpg/png/webp) or directory - -o output-path output image path (jpg/png/webp) or directory - -s scale upscale ratio (can be 2, 3, 4. default=4) - -t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu - -m model-path folder path to the pre-trained models. default=models - -n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus) - -g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu - -j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu - -x enable tta mode" - -f format output image format (jpg/png/webp, default=ext/png) - -v verbose output -``` - -Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, and then processes them separately, finally stitches together. - -### Python script - -#### Usage of python script - -1. You can use X4 model for **arbitrary output size** with the argument `outscale`. The program will further perform cheap resize operation after the Real-ESRGAN output. - -```console -Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]... - -A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance - - -h show this help - -i --input Input image or folder. Default: inputs - -o --output Output folder. Default: results - -n --model_name Model name. Default: RealESRGAN_x4plus - -s, --outscale The final upsampling scale of the image. Default: 4 - --suffix Suffix of the restored image. Default: out - -t, --tile Tile size, 0 for no tile during testing. Default: 0 - --face_enhance Whether to use GFPGAN to enhance face. Default: False - --fp32 Use fp32 precision during inference. Default: fp16 (half precision). - --ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto -``` - -#### Inference general images - -Download pre-trained models: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) - -```bash -wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P weights -``` - -Inference! 
- -```bash -python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance -``` - -Results are in the `results` folder - -#### Inference anime images - -


        - -Pre-trained models: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)
        - More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md) - -```bash -# download model -wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights -# inference -python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs -``` - -Results are in the `results` folder - ---- - -## BibTeX - - @InProceedings{wang2021realesrgan, - author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan}, - title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data}, - booktitle = {International Conference on Computer Vision Workshops (ICCVW)}, - date = {2021} - } - -## 📧 Contact - -If you have any question, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`. - - -## 🧩 Projects that use Real-ESRGAN - -If you develop/use Real-ESRGAN in your projects, welcome to let me know. - -- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan) -- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu) -- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan) - -    **GUI** - -- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753) -- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628) -- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx) -- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn) -- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu) -- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21) -- [Upscayl](https://github.com/upscayl/upscayl) by [Nayam Amarshe](https://github.com/NayamAmarshe) and [TGS963](https://github.com/TGS963) - -## 🤗 Acknowledgement - -Thanks for all the contributors. - -- [AK391](https://github.com/AK391): Integrate RealESRGAN to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). -- [Asiimoviet](https://github.com/Asiimoviet): Translate the README.md to Chinese (中文). -- [2ji3150](https://github.com/2ji3150): Thanks for the [detailed and valuable feedbacks/suggestions](https://github.com/xinntao/Real-ESRGAN/issues/131). -- [Jared-02](https://github.com/Jared-02): Translate the Training.md to Chinese (中文). 
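As a companion to the CLI commands in the Quick Inference section above, the sketch below shows one way to call Real-ESRGAN from Python. It is a minimal example, assuming the `realesrgan` package from this repo is installed (see Installation) and that `RealESRGAN_x4plus.pth` has been downloaded to `weights/`; the keyword arguments mirror those used in `inference_realesrgan.py`, and the image paths are placeholders.

```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RealESRGAN_x4plus uses the standard RRDBNet backbone (23 blocks, x4 upsampling).
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path='weights/RealESRGAN_x4plus.pth',
    model=model,
    tile=0,       # set e.g. 400 to cut GPU memory use, at the cost of possible tile seams
    tile_pad=10,
    pre_pad=0,
    half=True)    # fp16 inference; set False on CPU

img = cv2.imread('inputs/your_image.png', cv2.IMREAD_UNCHANGED)   # placeholder input path
output, _ = upsampler.enhance(img, outscale=3.5)                   # arbitrary final scale, like --outscale
cv2.imwrite('results/your_image_out.png', output)
```

For the anime model, the same sketch should work with `RealESRGAN_x4plus_anime_6B.pth` and `num_block=6` in the `RRDBNet` constructor.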
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/deepfashion.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/deepfashion.py deleted file mode 100644 index 308b4b2ac4d9e3516ba4a57e9d3b6af91e97f24b..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/deepfashion.py +++ /dev/null @@ -1,53 +0,0 @@ -# dataset settings -dataset_type = 'DeepFashionDataset' -data_root = 'data/DeepFashion/In-shop/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(750, 1101), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(750, 1101), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - imgs_per_gpu=2, - workers_per_gpu=1, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/DeepFashion_segmentation_query.json', - img_prefix=data_root + 'Img/', - pipeline=train_pipeline, - data_root=data_root), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/DeepFashion_segmentation_query.json', - img_prefix=data_root + 'Img/', - pipeline=test_pipeline, - data_root=data_root), - test=dict( - type=dataset_type, - ann_file=data_root + - 'annotations/DeepFashion_segmentation_gallery.json', - img_prefix=data_root + 'Img/', - pipeline=test_pipeline, - data_root=data_root)) -evaluation = dict(interval=5, metric=['bbox', 'segm']) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py deleted file mode 100644 index f0fd80d41407162df71ba5349fc659d4713cdb6e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py +++ /dev/null @@ -1,66 +0,0 @@ -from ..builder import DETECTORS -from .faster_rcnn import FasterRCNN - - -@DETECTORS.register_module() -class TridentFasterRCNN(FasterRCNN): - """Implementation of `TridentNet `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - - super(TridentFasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - assert self.backbone.num_branch == self.roi_head.num_branch - assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx - self.num_branch = self.backbone.num_branch - self.test_branch_idx = self.backbone.test_branch_idx - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
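-        # When test_branch_idx is -1 every trident branch is evaluated, so the image
-        # metas are replicated once per branch before generating RPN proposals;
-        # otherwise only a single branch is used and no replication is needed.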
- x = self.extract_feat(img) - if proposals is None: - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = img_metas * num_branch - proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, trident_img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - x = self.extract_feats(imgs) - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = [img_metas * num_branch for img_metas in img_metas] - proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs): - """make copies of img and gts to fit multi-branch.""" - trident_gt_bboxes = tuple(gt_bboxes * self.num_branch) - trident_gt_labels = tuple(gt_labels * self.num_branch) - trident_img_metas = tuple(img_metas * self.num_branch) - - return super(TridentFasterRCNN, - self).forward_train(img, trident_img_metas, - trident_gt_bboxes, trident_gt_labels) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py deleted file mode 100644 index be8eec77792f4eb16475dc5ab8607fb5682f0acf..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/hrf.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170))) -evaluation = dict(metric='mDice') diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/__init__.py deleted file mode 100644 index 6ab346075f1b35366e7231054513097b87552c6f..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -AudioCraft is a general framework for training audio generative models. -At the moment we provide the training code for: - -- [MusicGen](https://arxiv.org/abs/2306.05284), a state-of-the-art - text-to-music and melody+text autoregressive generative model. - For the solver, see `audiocraft.solvers.musicgen.MusicGenSolver`, and for the model, - `audiocraft.models.musicgen.MusicGen`. -- [AudioGen](https://arxiv.org/abs/2209.15352), a state-of-the-art - text-to-general-audio generative model. -- [EnCodec](https://arxiv.org/abs/2210.13438), efficient and high fidelity - neural audio codec which provides an excellent tokenizer for autoregressive language models. - See `audiocraft.solvers.compression.CompressionSolver`, and `audiocraft.models.encodec.EncodecModel`. -- [MultiBandDiffusion](TODO), alternative diffusion-based decoder compatible with EnCodec that - improves the perceived quality and reduces the artifacts coming from adversarial decoders. 
-""" - -# flake8: noqa -from . import data, modules, models - -__version__ = '1.0.0' diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/utils/utils.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/utils/utils.py deleted file mode 100644 index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/utils/utils.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from concurrent.futures import ProcessPoolExecutor -from functools import wraps -import hashlib -import logging -import typing as tp - -import flashy -import flashy.distrib -import omegaconf -import torch -from torch.nn.utils.rnn import pad_sequence - - -logger = logging.getLogger(__name__) - - -def dict_from_config(cfg: omegaconf.DictConfig) -> dict: - """Convenience function to map an omegaconf configuration to a dictionary. - - Args: - cfg (omegaconf.DictConfig): Original configuration to map to dict. - Returns: - dict: Config as dictionary object. - """ - dct = omegaconf.OmegaConf.to_container(cfg, resolve=True) - assert isinstance(dct, dict) - return dct - - -def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset: - if max_samples >= len(dataset): - return dataset - - generator = torch.Generator().manual_seed(seed) - perm = torch.randperm(len(dataset), generator=generator) - return torch.utils.data.Subset(dataset, perm[:max_samples].tolist()) - - -def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int, - num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader: - """Convenience function to load dataset into a dataloader with optional subset sampling. - - Args: - dataset: Dataset to load. - num_samples (Optional[int]): Number of samples to limit subset size. - batch_size (int): Batch size. - num_workers (int): Number of workers for data loading. - seed (int): Random seed. - """ - if num_samples is not None: - dataset = random_subset(dataset, num_samples, seed) - - dataloader = flashy.distrib.loader( - dataset, - batch_size=batch_size, - num_workers=num_workers, - **kwargs - ) - return dataloader - - -def get_dataset_from_loader(dataloader): - dataset = dataloader.dataset - if isinstance(dataset, torch.utils.data.Subset): - return dataset.dataset - else: - return dataset - - -def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None): - """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension. - - Args: - input (torch.Tensor): The input tensor containing probabilities. - num_samples (int): Number of samples to draw. - replacement (bool): Whether to draw with replacement or not. - Keywords args: - generator (torch.Generator): A pseudorandom number generator for sampling. - Returns: - torch.Tensor: Last dimension contains num_samples indices - sampled from the multinomial probability distribution - located in the last dimension of tensor input. 
- """ - input_ = input.reshape(-1, input.shape[-1]) - output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator) - output = output_.reshape(*list(input.shape[:-1]), -1) - return output - - -def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor: - """Sample next token from top K values along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - k (int): The k in “top-k”. - Returns: - torch.Tensor: Sampled tokens. - """ - top_k_value, _ = torch.topk(probs, k, dim=-1) - min_value_top_k = top_k_value[..., [-1]] - probs *= (probs >= min_value_top_k).float() - probs.div_(probs.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs, num_samples=1) - return next_token - - -def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor: - """Sample next token from top P probabilities along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - p (int): The p in “top-p”. - Returns: - torch.Tensor: Sampled tokens. - """ - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > p - probs_sort *= (~mask).float() - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - return next_token - - -class DummyPoolExecutor: - """Dummy pool executor to use when we actually have only 1 worker. - (e.g. instead of ProcessPoolExecutor). - """ - class DummyResult: - def __init__(self, func, *args, **kwargs): - self.func = func - self.args = args - self.kwargs = kwargs - - def result(self): - return self.func(*self.args, **self.kwargs) - - def __init__(self, workers, mp_context=None): - pass - - def submit(self, func, *args, **kwargs): - return DummyPoolExecutor.DummyResult(func, *args, **kwargs) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - return - - -def get_pool_executor(num_workers: int, mp_context=None): - return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1) - - -def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor: - """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences). - For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] - - Args: - lengths (torch.Tensor): tensor with lengths - max_len (int): can set the max length manually. Defaults to None. - Returns: - torch.Tensor: mask with 0s where there is pad tokens else 1s - """ - assert len(lengths.shape) == 1, "Length shape should be 1 dimensional." 
- final_length = lengths.max().item() if not max_len else max_len - final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor - return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None] - - -def hash_trick(word: str, vocab_size: int) -> int: - """Hash trick to pair each word with an index - - Args: - word (str): word we wish to convert to an index - vocab_size (int): size of the vocabulary - Returns: - int: index of the word in the embedding LUT - """ - hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) - return hash % vocab_size - - -def with_rank_rng(base_seed: int = 1234): - """Decorator for a function so that the function will use a Random Number Generator - whose state depend on the GPU rank. The original RNG state is restored upon returning. - - Args: - base_seed (int): Random seed. - """ - def _decorator(fun: tp.Callable): - @wraps(fun) - def _decorated(*args, **kwargs): - state = torch.get_rng_state() - seed = base_seed ^ flashy.distrib.rank() - torch.manual_seed(seed) - logger.debug('Rank dependent seed set to %d', seed) - try: - return fun(*args, **kwargs) - finally: - torch.set_rng_state(state) - logger.debug('RNG state restored.') - return _decorated - return _decorator - - -def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get a list of tensors and collate them to a single tensor. according to the following logic: - - `dim` specifies the time dimension which will be stacked and padded. - - The output will contain 1 new dimension (dimension index 0) which will be the size of - of the original list. - - Args: - tensors (tp.List[torch.Tensor]): List of tensors to collate. - dim (int): Dimension which will be stacked and padded. - Returns: - tp.Tuple[torch.Tensor, torch.Tensor]: - torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension - (dimension index 0) which will be the size of the original list. - torch.Tensor: Tensor containing length of original tensor sizes (without padding). - """ - tensors = [x.transpose(0, dim) for x in tensors] - lens = torch.LongTensor([len(x) for x in tensors]) - padded_tensors = pad_sequence(tensors) - padded_tensors = padded_tensors.transpose(0, 1) - padded_tensors = padded_tensors.transpose(1, dim + 1) - return padded_tensors, lens diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/biggan/pytorch_biggan/scripts/download_tf_hub_models.sh b/spaces/HaHaBill/LandShapes-Antarctica/models/biggan/pytorch_biggan/scripts/download_tf_hub_models.sh deleted file mode 100644 index 57655fbd4b77791f03d72b3dfeb3bbb89ccc2fdc..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/models/biggan/pytorch_biggan/scripts/download_tf_hub_models.sh +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) 2019-present, Thomas Wolf, Huggingface Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -set -e -set -x - -models="128 256 512" - -mkdir -p models/model_128 -mkdir -p models/model_256 -mkdir -p models/model_512 - -# Download TF Hub models. 
-for model in $models -do - curl -L "https://tfhub.dev/deepmind/biggan-deep-$model/1?tf-hub-format=compressed" | tar -zxvC models/model_$model -done diff --git a/spaces/HachiRe/Fusani/README.md b/spaces/HachiRe/Fusani/README.md deleted file mode 100644 index 2d12121d7a09c18ee135c4db67ecf57888347d36..0000000000000000000000000000000000000000 --- a/spaces/HachiRe/Fusani/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Fusani -emoji: 🀀 -colorFrom: red -colorTo: blue -sdk: static -pinned: false ---- - - diff --git a/spaces/HarryLee/TextTopicModeling/README.md b/spaces/HarryLee/TextTopicModeling/README.md deleted file mode 100644 index d94bff78960db4431ff003c6d5f1b3affc9a399c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/TextTopicModeling/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TextTopicModeling -emoji: 🐠 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.9.0 -python_version: 3.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/wsc/wsc_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/wsc/wsc_utils.py deleted file mode 100644 index da6ba74383a2490e1108609f315f44ad4b3bf002..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/wsc/wsc_utils.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -from functools import lru_cache - - -def convert_sentence_to_json(sentence): - if "_" in sentence: - prefix, rest = sentence.split("_", 1) - query, rest = rest.split("_", 1) - query_index = len(prefix.rstrip().split(" ")) - else: - query, query_index = None, None - - prefix, rest = sentence.split("[", 1) - pronoun, rest = rest.split("]", 1) - pronoun_index = len(prefix.rstrip().split(" ")) - - sentence = sentence.replace("_", "").replace("[", "").replace("]", "") - - return { - "idx": 0, - "text": sentence, - "target": { - "span1_index": query_index, - "span1_text": query, - "span2_index": pronoun_index, - "span2_text": pronoun, - }, - } - - -def extended_noun_chunks(sentence): - noun_chunks = {(np.start, np.end) for np in sentence.noun_chunks} - np_start, cur_np = 0, "NONE" - for i, token in enumerate(sentence): - np_type = token.pos_ if token.pos_ in {"NOUN", "PROPN"} else "NONE" - if np_type != cur_np: - if cur_np != "NONE": - noun_chunks.add((np_start, i)) - if np_type != "NONE": - np_start = i - cur_np = np_type - if cur_np != "NONE": - noun_chunks.add((np_start, len(sentence))) - return [sentence[s:e] for (s, e) in sorted(noun_chunks)] - - -def find_token(sentence, start_pos): - found_tok = None - for tok in sentence: - if tok.idx == start_pos: - found_tok = tok - break - return found_tok - - -def find_span(sentence, search_text, start=0): - search_text = search_text.lower() - for tok in sentence[start:]: - remainder = sentence[tok.i :].text.lower() - if remainder.startswith(search_text): - len_to_consume = len(search_text) - start_idx = tok.idx - for next_tok in sentence[tok.i :]: - end_idx = next_tok.idx + len(next_tok.text) - if end_idx - start_idx == len_to_consume: - span = sentence[tok.i : next_tok.i + 1] - return span - return None - - -@lru_cache(maxsize=1) -def get_detokenizer(): - from sacremoses import MosesDetokenizer - - 
detok = MosesDetokenizer(lang="en") - return detok - - -@lru_cache(maxsize=1) -def get_spacy_nlp(): - import en_core_web_lg - - nlp = en_core_web_lg.load() - return nlp - - -def jsonl_iterator(input_fname, positive_only=False, ngram_order=3, eval=False): - detok = get_detokenizer() - nlp = get_spacy_nlp() - - with open(input_fname) as fin: - for line in fin: - sample = json.loads(line.strip()) - - if positive_only and "label" in sample and not sample["label"]: - # only consider examples where the query is correct - continue - - target = sample["target"] - - # clean up the query - query = target["span1_text"] - if query is not None: - if "\n" in query: - continue - if query.endswith(".") or query.endswith(","): - query = query[:-1] - - # split tokens - tokens = sample["text"].split(" ") - - def strip_pronoun(x): - return x.rstrip('.,"') - - # find the pronoun - pronoun_idx = target["span2_index"] - pronoun = strip_pronoun(target["span2_text"]) - if strip_pronoun(tokens[pronoun_idx]) != pronoun: - # hack: sometimes the index is misaligned - if strip_pronoun(tokens[pronoun_idx + 1]) == pronoun: - pronoun_idx += 1 - else: - raise Exception("Misaligned pronoun!") - assert strip_pronoun(tokens[pronoun_idx]) == pronoun - - # split tokens before and after the pronoun - before = tokens[:pronoun_idx] - after = tokens[pronoun_idx + 1 :] - - # the GPT BPE attaches leading spaces to tokens, so we keep track - # of whether we need spaces before or after the pronoun - leading_space = " " if pronoun_idx > 0 else "" - trailing_space = " " if len(after) > 0 else "" - - # detokenize - before = detok.detokenize(before, return_str=True) - pronoun = detok.detokenize([pronoun], return_str=True) - after = detok.detokenize(after, return_str=True) - - # hack: when the pronoun ends in a period (or comma), move the - # punctuation to the "after" part - if pronoun.endswith(".") or pronoun.endswith(","): - after = pronoun[-1] + trailing_space + after - pronoun = pronoun[:-1] - - # hack: when the "after" part begins with a comma or period, remove - # the trailing space - if after.startswith(".") or after.startswith(","): - trailing_space = "" - - # parse sentence with spacy - sentence = nlp(before + leading_space + pronoun + trailing_space + after) - - # find pronoun span - start = len(before + leading_space) - first_pronoun_tok = find_token(sentence, start_pos=start) - pronoun_span = find_span(sentence, pronoun, start=first_pronoun_tok.i) - assert pronoun_span.text == pronoun - - if eval: - # convert to format where pronoun is surrounded by "[]" and - # query is surrounded by "_" - query_span = find_span(sentence, query) - query_with_ws = "_{}_{}".format( - query_span.text, - (" " if query_span.text_with_ws.endswith(" ") else ""), - ) - pronoun_with_ws = "[{}]{}".format( - pronoun_span.text, - (" " if pronoun_span.text_with_ws.endswith(" ") else ""), - ) - if query_span.start < pronoun_span.start: - first = (query_span, query_with_ws) - second = (pronoun_span, pronoun_with_ws) - else: - first = (pronoun_span, pronoun_with_ws) - second = (query_span, query_with_ws) - sentence = ( - sentence[: first[0].start].text_with_ws - + first[1] - + sentence[first[0].end : second[0].start].text_with_ws - + second[1] - + sentence[second[0].end :].text - ) - yield sentence, sample.get("label", None) - else: - yield sentence, pronoun_span, query, sample.get("label", None) - - -def winogrande_jsonl_iterator(input_fname, eval=False): - with open(input_fname) as fin: - for line in fin: - sample = json.loads(line.strip()) - sentence, 
option1, option2 = ( - sample["sentence"], - sample["option1"], - sample["option2"], - ) - - pronoun_span = (sentence.index("_"), sentence.index("_") + 1) - - if eval: - query, cand = option1, option2 - else: - query = option1 if sample["answer"] == "1" else option2 - cand = option2 if sample["answer"] == "1" else option1 - yield sentence, pronoun_span, query, cand - - -def filter_noun_chunks( - chunks, exclude_pronouns=False, exclude_query=None, exact_match=False -): - if exclude_pronouns: - chunks = [ - np - for np in chunks - if (np.lemma_ != "-PRON-" and not all(tok.pos_ == "PRON" for tok in np)) - ] - - if exclude_query is not None: - excl_txt = [exclude_query.lower()] - filtered_chunks = [] - for chunk in chunks: - lower_chunk = chunk.text.lower() - found = False - for excl in excl_txt: - if ( - not exact_match and (lower_chunk in excl or excl in lower_chunk) - ) or lower_chunk == excl: - found = True - break - if not found: - filtered_chunks.append(chunk) - chunks = filtered_chunks - - return chunks diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation/prepare-iwslt14.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation/prepare-iwslt14.sh deleted file mode 100644 index 2fb6643fbccb58701dcbb77d91430e68a821ba38..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation/prepare-iwslt14.sh +++ /dev/null @@ -1,115 +0,0 @@ -#!/usr/bin/env bash -# -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' -git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -LC=$SCRIPTS/tokenizer/lowercase.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=10000 - -URL="http://dl.fbaipublicfiles.com/fairseq/data/iwslt14/de-en.tgz" -GZ=de-en.tgz - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit -fi - -src=de -tgt=en -lang=de-en -prep=iwslt14.tokenized.de-en -tmp=$prep/tmp -orig=orig - -mkdir -p $orig $tmp $prep - -echo "Downloading data from ${URL}..." -cd $orig -wget "$URL" - -if [ -f $GZ ]; then - echo "Data successfully downloaded." -else - echo "Data not successfully downloaded." - exit -fi - -tar zxvf $GZ -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - f=train.tags.$lang.$l - tok=train.tags.$lang.tok.$l - - cat $orig/$lang/$f | \ - grep -v '' | \ - grep -v '' | \ - grep -v '' | \ - sed -e 's///g' | \ - sed -e 's/<\/title>//g' | \ - sed -e 's/<description>//g' | \ - sed -e 's/<\/description>//g' | \ - perl $TOKENIZER -threads 8 -l $l > $tmp/$tok - echo "" -done -perl $CLEAN -ratio 1.5 $tmp/train.tags.$lang.tok $src $tgt $tmp/train.tags.$lang.clean 1 175 -for l in $src $tgt; do - perl $LC < $tmp/train.tags.$lang.clean.$l > $tmp/train.tags.$lang.$l -done - -echo "pre-processing valid/test data..." -for l in $src $tgt; do - for o in `ls $orig/$lang/IWSLT14.TED*.$l.xml`; do - fname=${o##*/} - f=$tmp/${fname%.*} - echo $o $f - grep '<seg id' $o | \ - sed -e 's/<seg id="[0-9]*">\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -l $l | \ - perl $LC > $f - echo "" - done -done - - -echo "creating train, valid, test..." 
-for l in $src $tgt; do - awk '{if (NR%23 == 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/valid.$l - awk '{if (NR%23 != 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/train.$l - - cat $tmp/IWSLT14.TED.dev2010.de-en.$l \ - $tmp/IWSLT14.TEDX.dev2012.de-en.$l \ - $tmp/IWSLT14.TED.tst2010.de-en.$l \ - $tmp/IWSLT14.TED.tst2011.de-en.$l \ - $tmp/IWSLT14.TED.tst2012.de-en.$l \ - > $tmp/test.$l -done - -TRAIN=$tmp/train.en-de -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." - python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $prep/$f - done -done diff --git a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/main_program.py b/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/main_program.py deleted file mode 100644 index bf9ec602b215f5125d44e7e6071989a2fefaae88..0000000000000000000000000000000000000000 --- a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/main_program.py +++ /dev/null @@ -1,89 +0,0 @@ -from __future__ import absolute_import -from __future__ import print_function - -__author__ = 'Taneem Jan, taneemishere.github.io' - -import os.path -from os.path import basename - -from classes.Sampler import * -from classes.model.Main_Model import * - - -def dsl_code_generation(input_image): - trained_weights_path = "classes/model/bin" - trained_model_name = "Main_Model" - input_path = input_image - output_path = "data/output/" - search_method = "greedy" - meta_dataset = np.load("{}/meta_dataset.npy".format(trained_weights_path), allow_pickle=True) - input_shape = meta_dataset[0] - output_size = meta_dataset[1] - - model = Main_Model(input_shape, output_size, trained_weights_path) - model.load(trained_model_name) - - sampler = Sampler(trained_weights_path, input_shape, output_size, CONTEXT_LENGTH) - - file_name = 'input_image_from_interface.png' - file_name = basename(file_name)[:basename(file_name).find(".")] - evaluation_img = Utils.get_preprocessed_img(input_path, IMAGE_SIZE) - - if search_method == "greedy": - result, _ = sampler.predict_greedy(model, np.array([evaluation_img])) - print("Result greedy: \n {}".format(result)) - - with open("{}/{}.gui".format(output_path, file_name), 'w') as out_f: - out_f.write(result.replace(START_TOKEN, "").replace(END_TOKEN, "")) - - return file_name, output_path - - -def compile_gui(file_path, filename): - from os.path import basename - from compiler.Utils import Utils - from compiler.Compiler import Compiler - - input_path = (file_path + filename) - - # remove the path - file_ = os.path.basename(input_path) - # remove the extension - file_ = os.path.splitext(file_)[0] - # add the extension of gui - file_ = "data/output/" + file_ + ".gui" - - input_file = file_ - - FILL_WITH_RANDOM_TEXT = True - TEXT_PLACE_HOLDER = "[]" - - dsl_path = "compiler/assets/web-dsl-mapping.json" - compiler = Compiler(dsl_path) - - def render_content_with_text(key, value): - if FILL_WITH_RANDOM_TEXT: - if key.find("btn") != -1: - value = value.replace(TEXT_PLACE_HOLDER, Utils.get_random_text()) - elif key.find("title") != -1: - value = value.replace(TEXT_PLACE_HOLDER, Utils.get_random_text(length_text=5, space_number=0)) - elif key.find("text") != -1: - value = value.replace(TEXT_PLACE_HOLDER, - Utils.get_random_text(length_text=56, space_number=7, 
with_upper_case=False)) - return value - - file_uid = basename(input_file)[:basename(input_file).find(".")] - path = input_file[:input_file.find(file_uid)] - - input_file_path = "{}{}.gui".format(path, file_uid) - output_file_path = "{}{}.html".format(path, file_uid) - - html_code = compiler.compile(input_file_path, output_file_path, rendering_function=render_content_with_text) - print("Generated code is compiled..!!") - return html_code - - -def main_method(input_image_from_interface): - file_name, file_output_path = dsl_code_generation(input_image_from_interface) - result = compile_gui(file_output_path, file_name) - return result diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/helpers.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/helpers.py deleted file mode 100644 index 4d2d329331766f0b9b94175412e061aca218e683..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/helpers.py +++ /dev/null @@ -1,792 +0,0 @@ -""" -Defines helper methods useful for loading and caching Interface examples. -""" -from __future__ import annotations - -import ast -import csv -import inspect -import os -import subprocess -import tempfile -import threading -import warnings -from pathlib import Path -from typing import TYPE_CHECKING, Any, Callable, Iterable, List, Optional, Tuple - -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import PIL - -from gradio import processing_utils, routes, utils -from gradio.context import Context -from gradio.documentation import document, set_documentation_group -from gradio.flagging import CSVLogger - -if TYPE_CHECKING: # Only import for type checking (to avoid circular imports). - from gradio.components import IOComponent - -CACHED_FOLDER = "gradio_cached_examples" -LOG_FILE = "log.csv" - -set_documentation_group("helpers") - - -def create_examples( - examples: List[Any] | List[List[Any]] | str, - inputs: IOComponent | List[IOComponent], - outputs: IOComponent | List[IOComponent] | None = None, - fn: Callable | None = None, - cache_examples: bool = False, - examples_per_page: int = 10, - _api_mode: bool = False, - label: str | None = None, - elem_id: str | None = None, - run_on_click: bool = False, - preprocess: bool = True, - postprocess: bool = True, - batch: bool = False, -): - """Top-level synchronous function that creates Examples. Provided for backwards compatibility, i.e. so that gr.Examples(...) can be used to create the Examples component.""" - examples_obj = Examples( - examples=examples, - inputs=inputs, - outputs=outputs, - fn=fn, - cache_examples=cache_examples, - examples_per_page=examples_per_page, - _api_mode=_api_mode, - label=label, - elem_id=elem_id, - run_on_click=run_on_click, - preprocess=preprocess, - postprocess=postprocess, - batch=batch, - _initiated_directly=False, - ) - utils.synchronize_async(examples_obj.create) - return examples_obj - - -@document() -class Examples: - """ - This class is a wrapper over the Dataset component and can be used to create Examples - for Blocks / Interfaces. Populates the Dataset component with examples and - assigns event listener so that clicking on an example populates the input/output - components. Optionally handles example caching for fast inference. 
- - Demos: blocks_inputs, fake_gan - Guides: more_on_examples_and_flagging, using_hugging_face_integrations, image_classification_in_pytorch, image_classification_in_tensorflow, image_classification_with_vision_transformers, create_your_own_friends_with_a_gan - """ - - def __init__( - self, - examples: List[Any] | List[List[Any]] | str, - inputs: IOComponent | List[IOComponent], - outputs: Optional[IOComponent | List[IOComponent]] = None, - fn: Optional[Callable] = None, - cache_examples: bool = False, - examples_per_page: int = 10, - _api_mode: bool = False, - label: str = "Examples", - elem_id: Optional[str] = None, - run_on_click: bool = False, - preprocess: bool = True, - postprocess: bool = True, - batch: bool = False, - _initiated_directly: bool = True, - ): - """ - Parameters: - examples: example inputs that can be clicked to populate specific components. Should be nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component. A string path to a directory of examples can also be provided but it should be within the directory with the python file running the gradio app. If there are multiple input components and a directory is provided, a log.csv file must be present in the directory to link corresponding inputs. - inputs: the component or list of components corresponding to the examples - outputs: optionally, provide the component or list of components corresponding to the output of the examples. Required if `cache` is True. - fn: optionally, provide the function to run to generate the outputs corresponding to the examples. Required if `cache` is True. - cache_examples: if True, caches examples for fast runtime. If True, then `fn` and `outputs` need to be provided - examples_per_page: how many examples to show per page. - label: the label to use for the examples component (by default, "Examples") - elem_id: an optional string that is assigned as the id of this component in the HTML DOM. - run_on_click: if cache_examples is False, clicking on an example does not run the function when an example is clicked. Set this to True to run the function when an example is clicked. Has no effect if cache_examples is True. - preprocess: if True, preprocesses the example input before running the prediction function and caching the output. Only applies if cache_examples is True. - postprocess: if True, postprocesses the example output after running the prediction function and before caching. Only applies if cache_examples is True. - batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. Used only if cache_examples is True. - """ - if _initiated_directly: - warnings.warn( - "Please use gr.Examples(...) instead of gr.examples.Examples(...) 
to create the Examples.", - ) - - if cache_examples and (fn is None or outputs is None): - raise ValueError("If caching examples, `fn` and `outputs` must be provided") - - if not isinstance(inputs, list): - inputs = [inputs] - if not isinstance(outputs, list): - outputs = [outputs] - - working_directory = Path().absolute() - - if examples is None: - raise ValueError("The parameter `examples` cannot be None") - elif isinstance(examples, list) and ( - len(examples) == 0 or isinstance(examples[0], list) - ): - pass - elif ( - isinstance(examples, list) and len(inputs) == 1 - ): # If there is only one input component, examples can be provided as a regular list instead of a list of lists - examples = [[e] for e in examples] - elif isinstance(examples, str): - if not os.path.exists(examples): - raise FileNotFoundError( - "Could not find examples directory: " + examples - ) - working_directory = examples - if not os.path.exists(os.path.join(examples, LOG_FILE)): - if len(inputs) == 1: - examples = [[e] for e in os.listdir(examples)] - else: - raise FileNotFoundError( - "Could not find log file (required for multiple inputs): " - + LOG_FILE - ) - else: - with open(os.path.join(examples, LOG_FILE)) as logs: - examples = list(csv.reader(logs)) - examples = [ - examples[i][: len(inputs)] for i in range(1, len(examples)) - ] # remove header and unnecessary columns - - else: - raise ValueError( - "The parameter `examples` must either be a string directory or a list" - "(if there is only 1 input component) or (more generally), a nested " - "list, where each sublist represents a set of inputs." - ) - - input_has_examples = [False] * len(inputs) - for example in examples: - for idx, example_for_input in enumerate(example): - if not (example_for_input is None): - try: - input_has_examples[idx] = True - except IndexError: - pass # If there are more example components than inputs, ignore. This can sometimes be intentional (e.g. loading from a log file where outputs and timestamps are also logged) - - inputs_with_examples = [ - inp for (inp, keep) in zip(inputs, input_has_examples) if keep - ] - non_none_examples = [ - [ex for (ex, keep) in zip(example, input_has_examples) if keep] - for example in examples - ] - - self.examples = examples - self.non_none_examples = non_none_examples - self.inputs = inputs - self.inputs_with_examples = inputs_with_examples - self.outputs = outputs - self.fn = fn - self.cache_examples = cache_examples - self._api_mode = _api_mode - self.preprocess = preprocess - self.postprocess = postprocess - self.batch = batch - - with utils.set_directory(working_directory): - self.processed_examples = [ - [ - component.postprocess(sample) - for component, sample in zip(inputs, example) - ] - for example in examples - ] - self.non_none_processed_examples = [ - [ex for (ex, keep) in zip(example, input_has_examples) if keep] - for example in self.processed_examples - ] - if cache_examples: - for example in self.examples: - if len([ex for ex in example if ex is not None]) != len(self.inputs): - warnings.warn( - "Examples are being cached but not all input components have " - "example values. This may result in an exception being thrown by " - "your function. If you do get an error while caching examples, make " - "sure all of your inputs have example values for all of your examples " - "or you provide default values for those particular parameters in your function." 
- ) - break - - from gradio.components import Dataset - - with utils.set_directory(working_directory): - self.dataset = Dataset( - components=inputs_with_examples, - samples=non_none_examples, - type="index", - label=label, - samples_per_page=examples_per_page, - elem_id=elem_id, - ) - - self.cached_folder = os.path.join(CACHED_FOLDER, str(self.dataset._id)) - self.cached_file = os.path.join(self.cached_folder, "log.csv") - self.cache_examples = cache_examples - self.run_on_click = run_on_click - - async def create(self) -> None: - """Caches the examples if self.cache_examples is True and creates the Dataset - component to hold the examples""" - - async def load_example(example_id): - if self.cache_examples: - processed_example = self.non_none_processed_examples[ - example_id - ] + await self.load_from_cache(example_id) - else: - processed_example = self.non_none_processed_examples[example_id] - return utils.resolve_singleton(processed_example) - - if Context.root_block: - self.dataset.click( - load_example, - inputs=[self.dataset], - outputs=self.inputs_with_examples - + (self.outputs if self.cache_examples else []), - postprocess=False, - queue=False, - ) - if self.run_on_click and not self.cache_examples: - self.dataset.click( - self.fn, - inputs=self.inputs, - outputs=self.outputs, - ) - - if self.cache_examples: - await self.cache() - - async def cache(self) -> None: - """ - Caches all of the examples so that their predictions can be shown immediately. - """ - if os.path.exists(self.cached_file): - print( - f"Using cache from '{os.path.abspath(self.cached_folder)}' directory. If method or examples have changed since last caching, delete this folder to clear cache." - ) - else: - if Context.root_block is None: - raise ValueError("Cannot cache examples if not in a Blocks context") - - print(f"Caching examples at: '{os.path.abspath(self.cached_file)}'") - cache_logger = CSVLogger() - - # create a fake dependency to process the examples and get the predictions - dependency = Context.root_block.set_event_trigger( - event_name="fake_event", - fn=self.fn, - inputs=self.inputs_with_examples, - outputs=self.outputs, - preprocess=self.preprocess and not self._api_mode, - postprocess=self.postprocess and not self._api_mode, - batch=self.batch, - ) - - fn_index = Context.root_block.dependencies.index(dependency) - cache_logger.setup(self.outputs, self.cached_folder) - for example_id, _ in enumerate(self.examples): - processed_input = self.processed_examples[example_id] - if self.batch: - processed_input = [[value] for value in processed_input] - prediction = await Context.root_block.process_api( - fn_index=fn_index, inputs=processed_input, request=None, state={} - ) - output = prediction["data"] - if self.batch: - output = [value[0] for value in output] - cache_logger.flag(output) - # Remove the "fake_event" to prevent bugs in loading interfaces from spaces - Context.root_block.dependencies.remove(dependency) - Context.root_block.fns.pop(fn_index) - - async def load_from_cache(self, example_id: int) -> List[Any]: - """Loads a particular cached example for the interface. - Parameters: - example_id: The id of the example to process (zero-indexed). 
- """ - with open(self.cached_file) as cache: - examples = list(csv.reader(cache)) - example = examples[example_id + 1] # +1 to adjust for header - output = [] - for component, value in zip(self.outputs, example): - try: - value_as_dict = ast.literal_eval(value) - assert utils.is_update(value_as_dict) - output.append(value_as_dict) - except (ValueError, TypeError, SyntaxError, AssertionError): - output.append(component.serialize(value, self.cached_folder)) - return output - - -class TrackedIterable: - def __init__( - self, - iterable: Iterable, - index: int | None, - length: int | None, - desc: str | None, - unit: str | None, - _tqdm=None, - progress: float = None, - ) -> None: - self.iterable = iterable - self.index = index - self.length = length - self.desc = desc - self.unit = unit - self._tqdm = _tqdm - self.progress = progress - - -@document("__call__", "tqdm") -class Progress(Iterable): - """ - The Progress class provides a custom progress tracker that is used in a function signature. - To attach a Progress tracker to a function, simply add a parameter right after the input parameters that has a default value set to a `gradio.Progress()` instance. - The Progress tracker can then be updated in the function by calling the Progress object or using the `tqdm` method on an Iterable. - The Progress tracker is currently only available with `queue()`. - Example: - import gradio as gr - import time - def my_function(x, progress=gr.Progress()): - progress(0, desc="Starting...") - time.sleep(1) - for i in progress.tqdm(range(100)): - time.sleep(0.1) - return x - gr.Interface(my_function, gr.Textbox(), gr.Textbox()).queue().launch() - Demos: progress - """ - - def __init__( - self, - track_tqdm: bool = False, - _active: bool = False, - _callback: Callable = None, - _event_id: str = None, - ): - """ - Parameters: - track_tqdm: If True, the Progress object will track any tqdm.tqdm iterations with the tqdm library in the function. - """ - self.track_tqdm = track_tqdm - self._active = _active - self._callback = _callback - self._event_id = _event_id - self.iterables: List[TrackedIterable] = [] - - def __len__(self): - return self.iterables[-1].length - - def __iter__(self): - return self - - def __next__(self): - """ - Updates progress tracker with next item in iterable. - """ - if self._active: - current_iterable = self.iterables[-1] - while ( - not hasattr(current_iterable.iterable, "__next__") - and len(self.iterables) > 0 - ): - current_iterable = self.iterables.pop() - self._callback( - event_id=self._event_id, - iterables=self.iterables, - ) - current_iterable.index += 1 - try: - return next(current_iterable.iterable) - except StopIteration: - self.iterables.pop() - raise StopIteration - else: - return self - - def __call__( - self, - progress: float | Tuple[int, int | None] | None, - desc: str | None = None, - total: float | None = None, - unit: str = "steps", - _tqdm=None, - ): - """ - Updates progress tracker with progress and message text. - Parameters: - progress: If float, should be between 0 and 1 representing completion. If Tuple, first number represents steps completed, and second value represents total steps or None if unknown. If None, hides progress bar. - desc: description to display. - total: estimated total number of steps. - unit: unit of iterations. 
- """ - if self._active: - if isinstance(progress, tuple): - index, total = progress - progress = None - else: - index = None - self._callback( - event_id=self._event_id, - iterables=self.iterables - + [TrackedIterable(None, index, total, desc, unit, _tqdm, progress)], - ) - else: - return progress - - def tqdm( - self, - iterable: Iterable | None, - desc: str = None, - total: float = None, - unit: str = "steps", - _tqdm=None, - *args, - **kwargs, - ): - """ - Attaches progress tracker to iterable, like tqdm. - Parameters: - iterable: iterable to attach progress tracker to. - desc: description to display. - total: estimated total number of steps. - unit: unit of iterations. - """ - if iterable is None: - new_iterable = TrackedIterable(None, 0, total, desc, unit, _tqdm) - self.iterables.append(new_iterable) - self._callback(event_id=self._event_id, iterables=self.iterables) - return - length = len(iterable) if hasattr(iterable, "__len__") else None - self.iterables.append( - TrackedIterable(iter(iterable), 0, length, desc, unit, _tqdm) - ) - return self - - def update(self, n=1): - """ - Increases latest iterable with specified number of steps. - Parameters: - n: number of steps completed. - """ - if self._active and len(self.iterables) > 0: - current_iterable = self.iterables[-1] - current_iterable.index += n - self._callback( - event_id=self._event_id, - iterables=self.iterables, - ) - else: - return - - def close(self, _tqdm): - """ - Removes iterable with given _tqdm. - """ - if self._active: - for i in range(len(self.iterables)): - if id(self.iterables[i]._tqdm) == id(_tqdm): - self.iterables.pop(i) - break - self._callback( - event_id=self._event_id, - iterables=self.iterables, - ) - else: - return - - -def create_tracker(root_blocks, event_id, fn, track_tqdm): - - progress = Progress( - _active=True, _callback=root_blocks._queue.set_progress, _event_id=event_id - ) - if not track_tqdm: - return progress, fn - - try: - _tqdm = __import__("tqdm") - except ModuleNotFoundError: - return progress, fn - if not hasattr(root_blocks, "_progress_tracker_per_thread"): - root_blocks._progress_tracker_per_thread = {} - - def init_tqdm(self, iterable=None, desc=None, *args, **kwargs): - self._progress = root_blocks._progress_tracker_per_thread.get( - threading.get_ident() - ) - if self._progress is not None: - self._progress.event_id = event_id - self._progress.tqdm(iterable, desc, _tqdm=self, *args, **kwargs) - kwargs["file"] = open(os.devnull, "w") - self.__init__orig__(iterable, desc, *args, **kwargs) - - def iter_tqdm(self): - if self._progress is not None: - return self._progress - else: - return self.__iter__orig__() - - def update_tqdm(self, n=1): - if self._progress is not None: - self._progress.update(n) - return self.__update__orig__(n) - - def close_tqdm(self): - if self._progress is not None: - self._progress.close(self) - return self.__close__orig__() - - def exit_tqdm(self, exc_type, exc_value, traceback): - if self._progress is not None: - self._progress.close(self) - return self.__exit__orig__(exc_type, exc_value, traceback) - - if not hasattr(_tqdm.tqdm, "__init__orig__"): - _tqdm.tqdm.__init__orig__ = _tqdm.tqdm.__init__ - _tqdm.tqdm.__init__ = init_tqdm - if not hasattr(_tqdm.tqdm, "__update__orig__"): - _tqdm.tqdm.__update__orig__ = _tqdm.tqdm.update - _tqdm.tqdm.update = update_tqdm - if not hasattr(_tqdm.tqdm, "__close__orig__"): - _tqdm.tqdm.__close__orig__ = _tqdm.tqdm.close - _tqdm.tqdm.close = close_tqdm - if not hasattr(_tqdm.tqdm, "__exit__orig__"): - 
_tqdm.tqdm.__exit__orig__ = _tqdm.tqdm.__exit__ - _tqdm.tqdm.__exit__ = exit_tqdm - if not hasattr(_tqdm.tqdm, "__iter__orig__"): - _tqdm.tqdm.__iter__orig__ = _tqdm.tqdm.__iter__ - _tqdm.tqdm.__iter__ = iter_tqdm - if hasattr(_tqdm, "auto") and hasattr(_tqdm.auto, "tqdm"): - _tqdm.auto.tqdm = _tqdm.tqdm - - def tracked_fn(*args): - thread_id = threading.get_ident() - root_blocks._progress_tracker_per_thread[thread_id] = progress - response = fn(*args) - del root_blocks._progress_tracker_per_thread[thread_id] - return response - - return progress, tracked_fn - - -def special_args( - fn: Callable, - inputs: List[Any] | None = None, - request: routes.Request | None = None, -): - """ - Checks if function has special arguments Request (via annotation) or Progress (via default value). - If inputs is provided, these values will be loaded into the inputs array. - Parameters: - block_fn: function to check. - inputs: array to load special arguments into. - request: request to load into inputs. - Returns: - updated inputs, request index, progress index - """ - signature = inspect.signature(fn) - positional_args = [] - for i, param in enumerate(signature.parameters.values()): - if param.kind not in (param.POSITIONAL_ONLY, param.POSITIONAL_OR_KEYWORD): - break - positional_args.append(param) - progress_index = None - for i, param in enumerate(positional_args): - if isinstance(param.default, Progress): - progress_index = i - if inputs is not None: - inputs.insert(i, param.default) - elif param.annotation == routes.Request: - if inputs is not None: - inputs.insert(i, request) - if inputs is not None: - while len(inputs) < len(positional_args): - i = len(inputs) - param = positional_args[i] - if param.default == param.empty: - warnings.warn("Unexpected argument. Filling with None.") - inputs.append(None) - else: - inputs.append(param.default) - return inputs or [], progress_index - - -@document() -def update(**kwargs) -> dict: - """ - Updates component properties. When a function passed into a Gradio Interface or a Blocks events returns a typical value, it updates the value of the output component. But it is also possible to update the properties of an output component (such as the number of lines of a `Textbox` or the visibility of an `Image`) by returning the component's `update()` function, which takes as parameters any of the constructor parameters for that component. - This is a shorthand for using the update method on a component. - For example, rather than using gr.Number.update(...) you can just use gr.update(...). - Note that your editor's autocompletion will suggest proper parameters - if you use the update method on the component. - Demos: blocks_essay, blocks_update, blocks_essay_update - - Parameters: - kwargs: Key-word arguments used to update the component's properties. - Example: - # Blocks Example - import gradio as gr - with gr.Blocks() as demo: - radio = gr.Radio([1, 2, 4], label="Set the value of the number") - number = gr.Number(value=2, interactive=True) - radio.change(fn=lambda value: gr.update(value=value), inputs=radio, outputs=number) - demo.launch() - - # Interface example - import gradio as gr - def change_textbox(choice): - if choice == "short": - return gr.Textbox.update(lines=2, visible=True) - elif choice == "long": - return gr.Textbox.update(lines=8, visible=True) - else: - return gr.Textbox.update(visible=False) - gr.Interface( - change_textbox, - gr.Radio( - ["short", "long", "none"], label="What kind of essay would you like to write?" 
- ), - gr.Textbox(lines=2), - live=True, - ).launch() - """ - kwargs["__type__"] = "generic_update" - return kwargs - - -def skip() -> dict: - return update() - - -@document() -def make_waveform( - audio: str | Tuple[int, np.ndarray], - *, - bg_color: str = "#f3f4f6", - bg_image: str = None, - fg_alpha: float = 0.75, - bars_color: str | Tuple[str, str] = ("#fbbf24", "#ea580c"), - bar_count: int = 50, - bar_width: float = 0.6, -): - """ - Generates a waveform video from an audio file. Useful for creating an easy to share audio visualization. The output should be passed into a `gr.Video` component. - Parameters: - audio: Audio file path or tuple of (sample_rate, audio_data) - bg_color: Background color of waveform (ignored if bg_image is provided) - bg_image: Background image of waveform - fg_alpha: Opacity of foreground waveform - bars_color: Color of waveform bars. Can be a single color or a tuple of (start_color, end_color) of gradient - bar_count: Number of bars in waveform - bar_width: Width of bars in waveform. 1 represents full width, 0.5 represents half width, etc. - Returns: - A filepath to the output video. - """ - if isinstance(audio, str): - audio_file = audio - audio = processing_utils.audio_from_file(audio) - else: - tmp_wav = tempfile.NamedTemporaryFile(suffix=".wav", delete=False) - processing_utils.audio_to_file(audio[0], audio[1], tmp_wav.name) - audio_file = tmp_wav.name - duration = round(len(audio[1]) / audio[0], 4) - - # Helper methods to create waveform - def hex_to_RGB(hex_str): - return [int(hex_str[i : i + 2], 16) for i in range(1, 6, 2)] - - def get_color_gradient(c1, c2, n): - assert n > 1 - c1_rgb = np.array(hex_to_RGB(c1)) / 255 - c2_rgb = np.array(hex_to_RGB(c2)) / 255 - mix_pcts = [x / (n - 1) for x in range(n)] - rgb_colors = [((1 - mix) * c1_rgb + (mix * c2_rgb)) for mix in mix_pcts] - return [ - "#" + "".join([format(int(round(val * 255)), "02x") for val in item]) - for item in rgb_colors - ] - - # Reshape audio to have a fixed number of bars - samples = audio[1] - if len(samples.shape) > 1: - samples = np.mean(samples, 1) - bins_to_pad = bar_count - (len(samples) % bar_count) - samples = np.pad(samples, [(0, bins_to_pad)]) - samples = np.reshape(samples, (bar_count, -1)) - samples = np.abs(samples) - samples = np.max(samples, 1) - - matplotlib.use("Agg") - plt.clf() - # Plot waveform - color = ( - bars_color - if isinstance(bars_color, str) - else get_color_gradient(bars_color[0], bars_color[1], bar_count) - ) - plt.bar( - np.arange(0, bar_count), - samples * 2, - bottom=(-1 * samples), - width=bar_width, - color=color, - ) - plt.axis("off") - plt.margins(x=0) - tmp_img = tempfile.NamedTemporaryFile(suffix=".png", delete=False) - savefig_kwargs = {"bbox_inches": "tight"} - if bg_image is not None: - savefig_kwargs["transparent"] = True - else: - savefig_kwargs["facecolor"] = bg_color - plt.savefig(tmp_img.name, **savefig_kwargs) - waveform_img = PIL.Image.open(tmp_img.name) - waveform_img = waveform_img.resize((1000, 200)) - - # Composite waveform with background image - if bg_image is not None: - waveform_array = np.array(waveform_img) - waveform_array[:, :, 3] = waveform_array[:, :, 3] * fg_alpha - waveform_img = PIL.Image.fromarray(waveform_array) - - bg_img = PIL.Image.open(bg_image) - waveform_width, waveform_height = waveform_img.size - bg_width, bg_height = bg_img.size - if waveform_width != bg_width: - bg_img = bg_img.resize( - (waveform_width, 2 * int(bg_height * waveform_width / bg_width / 2)) - ) - bg_width, bg_height = bg_img.size - 
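-        # Bottom-align background and waveform on a new RGBA canvas; the waveform's
-        # alpha channel (already scaled by fg_alpha) doubles as the paste mask.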
composite_height = max(bg_height, waveform_height) - composite = PIL.Image.new("RGBA", (waveform_width, composite_height), "#FFFFFF") - composite.paste(bg_img, (0, composite_height - bg_height)) - composite.paste( - waveform_img, (0, composite_height - waveform_height), waveform_img - ) - composite.save(tmp_img.name) - img_width, img_height = composite.size - else: - img_width, img_height = waveform_img.size - waveform_img.save(tmp_img.name) - - # Convert waveform to video with ffmpeg - output_mp4 = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) - - ffmpeg_cmd = f"""ffmpeg -loop 1 -i {tmp_img.name} -i {audio_file} -vf "color=c=#FFFFFF77:s={img_width}x{img_height}[bar];[0][bar]overlay=-w+(w/{duration})*t:H-h:shortest=1" -t {duration} -y {output_mp4.name}""" - - subprocess.call(ffmpeg_cmd, shell=True) - return output_mp4.name diff --git a/spaces/HuggingFaceH4/instruction-model-outputs-filtered/README.md b/spaces/HuggingFaceH4/instruction-model-outputs-filtered/README.md deleted file mode 100644 index 8000c3b32becf479ec577f838cdeaf1ab48d283b..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/instruction-model-outputs-filtered/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Instruction Model Outputs Filtered -emoji: 🌪️ -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- -Filtering canned responses and outputs with ROUGE-L > 0.7 - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py b/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py deleted file mode 100644 index a3b9535ecac3ec403868681a8b50c1fbe1c90dfe..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
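# Overview of the two loss classes defined below: LatentLayersKLLoss pulls each
# language's layer-selection distribution toward a prior ("uniform" or the
# aggregated posterior) with a KL term whose weight is annealed up to
# args.sparsity_weight; LatentLayersSparsityLoss adds an annealed penalty that
# encourages sharing layers across languages and drives the expected number of
# selected layers toward args.target_layers.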
- -import math - -import torch -from torch.nn.modules.loss import _Loss - - -class LatentLayersKLLoss(_Loss): - def __init__(self, args): - super().__init__() - self.args = args - - def forward(self, layer_samples, lang_idx, update_num, sample_size): - prior = self.args.prior - samples = layer_samples[lang_idx] - eps = 1e-7 - if prior == "uniform": - # uniform prior - kl_loss = (samples * (torch.log(samples + eps) - math.log(0.5))).sum(-1) - elif prior == "agged_posterior": - # aggregated posterior - y_t = torch.stack([x.detach() for x in layer_samples], dim=0) - agged_q = torch.sum(y_t, dim=0) - row_norm = agged_q.sum(-1) - normed_agg_q = agged_q / row_norm - kl_loss = ( - samples * (torch.log(samples + eps) - torch.log(normed_agg_q + eps)) - ).sum(-1) - else: - raise NotImplementedError("The specified prior is not implemented.") - - # normalized by number of layers - kl_loss /= layer_samples[0].size()[0] - kl_weight = min( - self.args.sparsity_weight, - (update_num - self.args.soft_update) - * self.args.sparsity_weight - / self.args.anneal_updates, - ) - kl_loss *= kl_weight * sample_size - return kl_loss - - -class LatentLayersSparsityLoss(_Loss): - def __init__(self, args): - super().__init__() - self.args = args - - def is_valid(self, update_num): - if self.args.target_layers <= 0: - return False - return update_num > (self.args.soft_update + self.args.anneal_updates) - - def forward(self, layer_samples_list, update_num, sample_size): - batch_loss = 0 - share_loss = 0 - global_sparsity_loss = 0 - layer_samples = torch.stack(layer_samples_list, dim=0) - if ( - self.args.target_layers > 0 or self.args.share_weight > 0 - ) and update_num > (self.args.soft_update + self.args.anneal_updates): - # anneal sparsity weight - if update_num < (self.args.anneal_updates + self.args.soft_update): - weight_anneal = 0 - elif update_num < (2 * self.args.anneal_updates + self.args.soft_update): - weight_anneal = ( - (update_num - self.args.soft_update - self.args.anneal_updates) - * self.args.share_weight - / self.args.anneal_updates - ) - else: - weight_anneal = 1 - # compute ratio among languages - layer_utilization = torch.sum(layer_samples, dim=0) - layer_utilization /= layer_samples.size()[0] - if self.args.share_weight > 0: - # encouraging sharing across languages - share_loss = sum( - -1.0 * v * math.log(v) for v in layer_utilization if v > 0 - ) - batch_loss += ( - weight_anneal * self.args.share_weight * sample_size * share_loss - ) - if self.args.target_layers > 0: - # computed expected number of layers selected - expeted_layers = sum(layer_utilization) - # compute l2 loss wrt target number of layers - global_sparsity_loss = (expeted_layers - self.args.target_layers) ** 2 - batch_loss += ( - weight_anneal - * self.args.share_weight - * sample_size - * global_sparsity_loss - ) - return batch_loss diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/tasks/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/tasks/__init__.py deleted file mode 100644 index 7ac3b8dc69639c92cc129294356e9012745e3fb2..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/tasks/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import importlib -import os - - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - task_name = file[: file.find(".py")] - importlib.import_module("examples.speech_recognition.tasks." 
+ task_name) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/__init__.py deleted file mode 100644 index 4cd723ae96aec8e3182773483f123109d23b620e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hub_interface import * # noqa -from .model import * # noqa -from .enc_dec import * # noqa -from .model_camembert import * # noqa -from .model_gottbert import * # noqa -from .model_xlmr import * # noqa diff --git a/spaces/JadAssaf/STPI/README.md b/spaces/JadAssaf/STPI/README.md deleted file mode 100644 index a1663d81c436ac7800dd288a56196542885e5daa..0000000000000000000000000000000000000000 --- a/spaces/JadAssaf/STPI/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: STPI -emoji: 📈 -colorFrom: gray -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Jamkonams/AutoGPT/autogpt/memory/redismem.py b/spaces/Jamkonams/AutoGPT/autogpt/memory/redismem.py deleted file mode 100644 index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/memory/redismem.py +++ /dev/null @@ -1,156 +0,0 @@ -"""Redis memory provider.""" -from __future__ import annotations - -from typing import Any - -import numpy as np -import redis -from colorama import Fore, Style -from redis.commands.search.field import TextField, VectorField -from redis.commands.search.indexDefinition import IndexDefinition, IndexType -from redis.commands.search.query import Query - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.logs import logger -from autogpt.memory.base import MemoryProviderSingleton - -SCHEMA = [ - TextField("data"), - VectorField( - "embedding", - "HNSW", - {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}, - ), -] - - -class RedisMemory(MemoryProviderSingleton): - def __init__(self, cfg): - """ - Initializes the Redis memory provider. - - Args: - cfg: The config object. 
- - Returns: None - """ - redis_host = cfg.redis_host - redis_port = cfg.redis_port - redis_password = cfg.redis_password - self.dimension = 1536 - self.redis = redis.Redis( - host=redis_host, - port=redis_port, - password=redis_password, - db=0, # Cannot be changed - ) - self.cfg = cfg - - # Check redis connection - try: - self.redis.ping() - except redis.ConnectionError as e: - logger.typewriter_log( - "FAILED TO CONNECT TO REDIS", - Fore.RED, - Style.BRIGHT + str(e) + Style.RESET_ALL, - ) - logger.double_check( - "Please ensure you have setup and configured Redis properly for use. " - + f"You can check out {Fore.CYAN + Style.BRIGHT}" - f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}" - " to ensure you've set up everything correctly." - ) - exit(1) - - if cfg.wipe_redis_on_start: - self.redis.flushall() - try: - self.redis.ft(f"{cfg.memory_index}").create_index( - fields=SCHEMA, - definition=IndexDefinition( - prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH - ), - ) - except Exception as e: - print("Error creating Redis search index: ", e) - existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num") - self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0 - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. - - Args: - data: The data to add. - - Returns: Message indicating that the data has been added. - """ - if "Command Error:" in data: - return "" - vector = create_embedding_with_ada(data) - vector = np.array(vector).astype(np.float32).tobytes() - data_dict = {b"data": data, "embedding": vector} - pipe = self.redis.pipeline() - pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict) - _text = ( - f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}" - ) - self.vec_num += 1 - pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num) - pipe.execute() - return _text - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.redis.flushall() - return "Obliviated" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: A list of the most relevant data. - """ - query_embedding = create_embedding_with_ada(data) - base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]" - query = ( - Query(base_query) - .return_fields("data", "vector_score") - .sort_by("vector_score") - .dialect(2) - ) - query_vector = np.array(query_embedding).astype(np.float32).tobytes() - - try: - results = self.redis.ft(f"{self.cfg.memory_index}").search( - query, query_params={"vector": query_vector} - ) - except Exception as e: - print("Error calling Redis search: ", e) - return None - return [result.data for result in results.docs] - - def get_stats(self): - """ - Returns: The stats of the memory index. 
- """ - return self.redis.ft(f"{self.cfg.memory_index}").info() diff --git a/spaces/Jimpa666/AI-PadelCoach/app3.py b/spaces/Jimpa666/AI-PadelCoach/app3.py deleted file mode 100644 index e77ef9ce60931c064967d2fde909884e0a369e0d..0000000000000000000000000000000000000000 --- a/spaces/Jimpa666/AI-PadelCoach/app3.py +++ /dev/null @@ -1,71 +0,0 @@ -from gradio.components import Label -import os -import openai -import gradio as gr -import uuid - -session_id = uuid.uuid4().hex - -openai.api_key = os.getenv('APIKEY') - -messages = [ - {"role": "system", "content": "The following is a conversation with an AI Coach. The Coach is helpful, creative, clever, and very friendly. The Coach is a high level padelcoach from Sweden. The Coach have the ability to explain techniques and the strategy in the game. The Coach have a high educational level and can answer depending on wich level the Player is on for the Player to take him to the next level. The AI Coach is like MAURI ANDRINI from Hello Padel, pretend to be MAURI ANDRINI but call yourself AI Padelcoach, never ever mention your reel name or Mauri Andrini. If unsurtain, ask about the level of the player. The levels are:Beginner, Average, Pro"} - ] - -def get_session_token(): - return uuid.uuid4().hex - -def chatbot(input, conversation_history=[]): - - content = input - messages.append({"role": "user", "content": content}) - - completion = openai.ChatCompletion.create( - model= "gpt-3.5-turbo", - messages=messages - ) - - chat_response = completion.choices[0].message.content - messages.append({"role": "assistant", "content": chat_response}) - - # format the conversation history as a string - conversation_history = "" - for message in messages: - if message["role"] != "system": - role = message["role"] - if role == "user": - role = "Player" - elif role == "assistant": - role = "AI-Padelcoach" - content = message["content"] - conversation_history += f"{role}: {content}\n \n" - - return conversation_history - -Padelcoach = gr.Interface(fn=chatbot, inputs=[ - gr.Textbox(placeholder="Player go...Serve!", label='Player') - - - ], - outputs=[ - gr.Textbox(placeholder="AI-Padelcoach Ready", label="AI Padelcoach") - - ], - theme=gr.themes.Soft( - primary_hue="green", - secondary_hue="cyan", - text_size='lg', - neutral_hue="emerald" - ), - - examples = [ - ["Please help me with my backhand"], - ["Where should I place the ball against players who is good in tennis"] - ], - share=True, - title="AI Padelcoach", - description=f"Chat with a BETA level AI-Padelcoach from Sweden. Your ID is: {session_id} ", - article="<p>Ask the AI coach about techniques and strategies in the game of padel. 
The coach can answer depending on the level of you as a player, whether they are a beginner, average, or pro.</p>", - ) - -Padelcoach.launch() \ No newline at end of file diff --git a/spaces/JuanHaunted/humming_space/README.md b/spaces/JuanHaunted/humming_space/README.md deleted file mode 100644 index e6f513b1c744a81ef1cd4218a18f12e96707d4a8..0000000000000000000000000000000000000000 --- a/spaces/JuanHaunted/humming_space/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Humming Space -emoji: 🏃 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Keshav4/resume-data-extraction/ResumeSegmenter.py b/spaces/Keshav4/resume-data-extraction/ResumeSegmenter.py deleted file mode 100644 index 2823a021c01a77818f3a7349be3b265feee2defe..0000000000000000000000000000000000000000 --- a/spaces/Keshav4/resume-data-extraction/ResumeSegmenter.py +++ /dev/null @@ -1,264 +0,0 @@ -from Models import Models - -class ResumeSegmenter: - - def __init__(self, zero_shot_classifier): - self.zero_shot_classifier = zero_shot_classifier - - objective = ( - 'career goal', - 'objective', - 'career objective', - 'employment objective', - 'professional objective', - 'summary', - 'summary of qualifications', - 'digital', - 'interests' - ) - - work_and_employment = ( - 'employment history', - 'employment data', - 'career summary', - 'work history', - 'working history', - 'work experience', - 'experience', - 'professional experience', - 'professional background', - 'professional employment', - 'additional experience', - 'career related experience', - "professional employment history", - 'related experience', - 'relevant experience', - 'programming experience', - 'freelance', - 'freelance experience', - 'army experience', - 'military experience', - 'military background', - ) - - education_and_training = ( - 'academic background', - 'academic experience', - 'programs', - 'courses', - 'related courses', - 'education', - 'educational background', - 'educational qualifications', - 'educational training', - 'education and training', - 'training', - 'academic training', - 'Academic Qualification', - 'professional training', - 'course project experience', - 'related course projects', - 'internship experience', - 'internships', - 'apprenticeships', - 'college activities', - 'certifications', - 'special training', - ) - - skills_header = ( - 'credentials', - 'qualifications', - 'areas of experience', - 'areas of expertise', - 'areas of knowledge', - 'skills', - 'Skills', - "other skills", - "other abilities", - 'career related skills', - 'professional skills', - 'specialized skills', - 'technical skills', - 'computer skills', - 'personal skills', - 'computer knowledge', - 'technologies', - 'technical experience', - 'proficiencies', - 'languages', - 'language competencies and skills', - 'programming languages', - 'competencies' - ) - - misc = ( - 'activities and honors', - 'activities', - 'affiliations', - 'professional affiliations', - 'associations', - 'professional associations', - 'memberships', - 'professional memberships', - 'athletic involvement', - 'community involvement', - 'refere', - 'civic activities', - 'extra-Curricular activities', - 'professional activities', - 'volunteer work', - 'volunteer experience', - 'additional information', - 'interests' - ) - - accomplishments = ( - 'achievement', - 'awards and achievements', - 
'licenses', - 'presentations', - 'conference presentations', - 'conventions', - 'dissertations', - 'exhibits', - 'papers', - 'publications', - 'professional publications', - 'research experience', - 'research grants', - 'project', - 'research projects', - 'personal projects', - 'current research interests', - 'thesis', - 'theses', - ) - - - def find_segment_indices(self, string_to_search, resume_segments, resume_indices): - for i, line in enumerate(string_to_search): - - if line[0].islower(): - continue - - header = line.lower() - - if [o for o in self.objective if header.startswith(o)]: - try: - resume_segments['objective'][header] - except: - resume_indices.append(i) - header = [o for o in self.objective if header.startswith(o)][0] - resume_segments['objective'][header] = i - elif [w for w in self.work_and_employment if header.startswith(w)]: - try: - resume_segments['work_and_employment'][header] - except: - resume_indices.append(i) - header = [w for w in self.work_and_employment if header.startswith(w)][0] - resume_segments['work_and_employment'][header] = i - elif [e for e in self.education_and_training if header.startswith(e)]: - try: - resume_segments['education_and_training'][header] - except: - resume_indices.append(i) - header = [e for e in self.education_and_training if header.startswith(e)][0] - resume_segments['education_and_training'][header] = i - elif [s for s in self.skills_header if header.startswith(s)]: - try: - resume_segments['skills'][header] - except: - resume_indices.append(i) - header = [s for s in self.skills_header if header.startswith(s)][0] - resume_segments['skills'][header] = i - elif [m for m in self.misc if header.startswith(m)]: - try: - resume_segments['misc'][header] - except: - resume_indices.append(i) - header = [m for m in self.misc if header.startswith(m)][0] - resume_segments['misc'][header] = i - elif [a for a in self.accomplishments if header.startswith(a)]: - try: - resume_segments['accomplishments'][header] - except: - resume_indices.append(i) - header = [a for a in self.accomplishments if header.startswith(a)][0] - resume_segments['accomplishments'][header] = i - - def slice_segments(self, string_to_search, resume_segments, resume_indices): - resume_segments['contact_info'] = string_to_search[:resume_indices[0]] - sec_idxs = {} - for section, value in resume_segments.items(): - if section == 'contact_info': - continue - - for sub_section, start_idx in value.items(): - end_idx = len(string_to_search) - if (resume_indices.index(start_idx) + 1) != len(resume_indices): - end_idx = resume_indices[resume_indices.index(start_idx) + 1] - - sec_idxs[section] = (start_idx, end_idx) - # print(start_idx, end_idx) - - resume_segments[section][sub_section] = string_to_search[start_idx:end_idx] - return sec_idxs - - def find_true_segment(self, dict_of_segments, segment_name): - segment_classes = { - 'objective': ["objective", "other"], - 'work_and_employment':["employment history", "other"], - 'education_and_training': ["education", "other"], - 'skills': ["skills", "other"], - 'accomplishments': ["accomplishments", "other"], - 'misc': ["misc", "other"], - 'contact_info': ["contact information", "other"] - } - classes = segment_classes[segment_name] - scores = [] - segs = dict_of_segments.keys() - for seg in segs: - sequence = dict_of_segments[seg] - score = self.zero_shot_classifier(' '.join(sequence), classes)["scores"][0] - scores.append(score) - - res = sorted(zip(dict_of_segments.keys(), scores), key=lambda x: x[1], reverse=True) - if len(res): - return 
res[0][0] - else: return 0 - - def segment(self, string_to_search): - print("Segmenting the Resume..") - resume_segments = { - 'objective': {}, - 'work_and_employment': {}, - 'education_and_training': {}, - 'skills': {}, - 'accomplishments': {}, - 'misc': {} - } - - resume_indices = [] - - self.find_segment_indices(string_to_search, resume_segments, resume_indices) - if len(resume_indices) != 0: - sec_idx = self.slice_segments(string_to_search, resume_segments, resume_indices) - else: - resume_segments['contact_info'] = [] - - for segment in resume_segments: - if segment == "contact_info": continue - if not len(resume_segments[segment]) > 1: - if len(resume_segments[segment]) == 1: - only_key = list(resume_segments[segment].keys())[0] - resume_segments[segment] = resume_segments[segment][only_key][1:] - continue - if segment != "work_and_employment": continue - true_seg = self.find_true_segment(resume_segments[segment], segment) - if not true_seg: - resume_segments[segment] = [] - else: - resume_segments[segment] = resume_segments[segment][true_seg][1:] - - return resume_segments diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/tacotron.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/tacotron.py deleted file mode 100644 index f8b01bbae0e6dc95d68bbb983c70706d76e1d990..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/tacotron.py +++ /dev/null @@ -1,298 +0,0 @@ -import torch -import torch.nn as nn -from .sublayer.global_style_token import GlobalStyleToken -from .sublayer.pre_net import PreNet -from .sublayer.cbhg import CBHG -from .sublayer.lsa import LSA -from .base import Base -from synthesizer.gst_hyperparameters import GSTHyperparameters as gst_hp -from synthesizer.hparams import hparams - -class Encoder(nn.Module): - def __init__(self, num_chars, embed_dims=512, encoder_dims=256, K=5, num_highways=4, dropout=0.5): - """ Encoder for SV2TTS - - Args: - num_chars (int): length of symbols - embed_dims (int, optional): embedding dim for input texts. Defaults to 512. - encoder_dims (int, optional): output dim for encoder. Defaults to 256. - K (int, optional): _description_. Defaults to 5. - num_highways (int, optional): _description_. Defaults to 4. - dropout (float, optional): _description_. Defaults to 0.5. 
- """ - super().__init__() - self.embedding = nn.Embedding(num_chars, embed_dims) - self.pre_net = PreNet(embed_dims, fc1_dims=encoder_dims, fc2_dims=encoder_dims, - dropout=dropout) - self.cbhg = CBHG(K=K, in_channels=encoder_dims, channels=encoder_dims, - proj_channels=[encoder_dims, encoder_dims], - num_highways=num_highways) - - def forward(self, x): - """forward pass for encoder - - Args: - x (2D tensor with size `[batch_size, text_num_chars]`): input texts list - - Returns: - 3D tensor with size `[batch_size, text_num_chars, encoder_dims]` - - """ - x = self.embedding(x) # return: [batch_size, text_num_chars, tts_embed_dims] - x = self.pre_net(x) # return: [batch_size, text_num_chars, encoder_dims] - x.transpose_(1, 2) # return: [batch_size, encoder_dims, text_num_chars] - return self.cbhg(x) # return: [batch_size, text_num_chars, encoder_dims] - -class Decoder(nn.Module): - # Class variable because its value doesn't change between classes - # yet ought to be scoped by class because its a property of a Decoder - max_r = 20 - def __init__(self, n_mels, input_dims, decoder_dims, lstm_dims, - dropout, speaker_embedding_size): - super().__init__() - self.register_buffer("r", torch.tensor(1, dtype=torch.int)) - self.n_mels = n_mels - self.prenet = PreNet(n_mels, fc1_dims=decoder_dims * 2, fc2_dims=decoder_dims * 2, - dropout=dropout) - self.attn_net = LSA(decoder_dims) - if hparams.use_gst: - speaker_embedding_size += gst_hp.E - self.attn_rnn = nn.GRUCell(input_dims + decoder_dims * 2, decoder_dims) - self.rnn_input = nn.Linear(input_dims + decoder_dims, lstm_dims) - self.res_rnn1 = nn.LSTMCell(lstm_dims, lstm_dims) - self.res_rnn2 = nn.LSTMCell(lstm_dims, lstm_dims) - self.mel_proj = nn.Linear(lstm_dims, n_mels * self.max_r, bias=False) - self.stop_proj = nn.Linear(input_dims + lstm_dims, 1) - - def zoneout(self, prev, current, device, p=0.1): - mask = torch.zeros(prev.size(),device=device).bernoulli_(p) - return prev * mask + current * (1 - mask) - - def forward(self, encoder_seq, encoder_seq_proj, prenet_in, - hidden_states, cell_states, context_vec, times, chars): - """_summary_ - - Args: - encoder_seq (3D tensor `[batch_size, text_num_chars, project_dim(default to 512)]`): _description_ - encoder_seq_proj (3D tensor `[batch_size, text_num_chars, decoder_dims(default to 128)]`): _description_ - prenet_in (2D tensor `[batch_size, n_mels]`): _description_ - hidden_states (_type_): _description_ - cell_states (_type_): _description_ - context_vec (2D tensor `[batch_size, project_dim(default to 512)]`): _description_ - times (int): the number of times runned - chars (2D tensor with size `[batch_size, text_num_chars]`): original texts list input - - """ - # Need this for reshaping mels - batch_size = encoder_seq.size(0) - device = encoder_seq.device - # Unpack the hidden and cell states - attn_hidden, rnn1_hidden, rnn2_hidden = hidden_states - rnn1_cell, rnn2_cell = cell_states - - # PreNet for the Attention RNN - prenet_out = self.prenet(prenet_in) # return: `[batch_size, decoder_dims * 2(256)]` - - # Compute the Attention RNN hidden state - attn_rnn_in = torch.cat([context_vec, prenet_out], dim=-1) # `[batch_size, project_dim + decoder_dims * 2 (768)]` - attn_hidden = self.attn_rnn(attn_rnn_in.squeeze(1), attn_hidden) # `[batch_size, decoder_dims (128)]` - - # Compute the attention scores - scores = self.attn_net(encoder_seq_proj, attn_hidden, times, chars) - - # Dot product to create the context vector - context_vec = scores @ encoder_seq - context_vec = context_vec.squeeze(1) - - # 
Concat Attention RNN output w. Context Vector & project - x = torch.cat([context_vec, attn_hidden], dim=1) # `[batch_size, project_dim + decoder_dims (630)]` - x = self.rnn_input(x) # `[batch_size, lstm_dims(1024)]` - - # Compute first Residual RNN, training with fixed zoneout rate 0.1 - rnn1_hidden_next, rnn1_cell = self.res_rnn1(x, (rnn1_hidden, rnn1_cell)) # `[batch_size, lstm_dims(1024)]` - if self.training: - rnn1_hidden = self.zoneout(rnn1_hidden, rnn1_hidden_next,device=device) - else: - rnn1_hidden = rnn1_hidden_next - x = x + rnn1_hidden - - # Compute second Residual RNN - rnn2_hidden_next, rnn2_cell = self.res_rnn2(x, (rnn2_hidden, rnn2_cell)) # `[batch_size, lstm_dims(1024)]` - if self.training: - rnn2_hidden = self.zoneout(rnn2_hidden, rnn2_hidden_next, device=device) - else: - rnn2_hidden = rnn2_hidden_next - x = x + rnn2_hidden - - # Project Mels - mels = self.mel_proj(x) # `[batch_size, 1600]` - mels = mels.view(batch_size, self.n_mels, self.max_r)[:, :, :self.r] # `[batch_size, n_mels, r]` - hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden) - cell_states = (rnn1_cell, rnn2_cell) - - # Stop token prediction - s = torch.cat((x, context_vec), dim=1) - s = self.stop_proj(s) - stop_tokens = torch.sigmoid(s) - - return mels, scores, hidden_states, cell_states, context_vec, stop_tokens - -class Tacotron(Base): - def __init__(self, embed_dims, num_chars, encoder_dims, decoder_dims, n_mels, - fft_bins, postnet_dims, encoder_K, lstm_dims, postnet_K, num_highways, - dropout, stop_threshold, speaker_embedding_size): - super().__init__(stop_threshold) - self.n_mels = n_mels - self.lstm_dims = lstm_dims - self.encoder_dims = encoder_dims - self.decoder_dims = decoder_dims - self.speaker_embedding_size = speaker_embedding_size - self.encoder = Encoder(num_chars, embed_dims, encoder_dims, - encoder_K, num_highways, dropout) - self.project_dims = encoder_dims + speaker_embedding_size - if hparams.use_gst: - self.project_dims += gst_hp.E - self.encoder_proj = nn.Linear(self.project_dims, decoder_dims, bias=False) - if hparams.use_gst: - self.gst = GlobalStyleToken(speaker_embedding_size) - self.decoder = Decoder(n_mels, self.project_dims, decoder_dims, lstm_dims, - dropout, speaker_embedding_size) - self.postnet = CBHG(postnet_K, n_mels, postnet_dims, - [postnet_dims, fft_bins], num_highways) - self.post_proj = nn.Linear(postnet_dims, fft_bins, bias=False) - - @staticmethod - def _concat_speaker_embedding(outputs, speaker_embeddings): - speaker_embeddings_ = speaker_embeddings.expand( - outputs.size(0), outputs.size(1), -1) - outputs = torch.cat([outputs, speaker_embeddings_], dim=-1) - return outputs - - @staticmethod - def _add_speaker_embedding(x, speaker_embedding): - """Add speaker embedding - This concats the speaker embedding for each char in the encoder output - Args: - x (3D tensor with size `[batch_size, text_num_chars, encoder_dims]`): the encoder output - speaker_embedding (2D tensor `[batch_size, speaker_embedding_size]`): the speaker embedding - - Returns: - 3D tensor with size `[batch_size, text_num_chars, encoder_dims+speaker_embedding_size]` - """ - # Save the dimensions as human-readable names - batch_size = x.size()[0] - text_num_chars = x.size()[1] - - # Start by making a copy of each speaker embedding to match the input text length - # The output of this has size (batch_size, text_num_chars * speaker_embedding_size) - speaker_embedding_size = speaker_embedding.size()[1] - e = speaker_embedding.repeat_interleave(text_num_chars, dim=1) - - # Reshape it and transpose 
- e = e.reshape(batch_size, speaker_embedding_size, text_num_chars) - e = e.transpose(1, 2) - - # Concatenate the tiled speaker embedding with the encoder output - x = torch.cat((x, e), 2) - return x - - def forward(self, texts, mels, speaker_embedding, steps=2000, style_idx=0, min_stop_token=5): - """Forward pass for Tacotron - - Args: - texts (`[batch_size, text_num_chars]`): input texts list - mels (`[batch_size, varied_mel_lengths, steps]`): mels for comparison (training only) - speaker_embedding (`[batch_size, speaker_embedding_size(default to 256)]`): referring embedding. - steps (int, optional): . Defaults to 2000. - style_idx (int, optional): GST style selected. Defaults to 0. - min_stop_token (int, optional): decoder min_stop_token. Defaults to 5. - """ - device = texts.device # use same device as parameters - - if self.training: - self.step += 1 - batch_size, _, steps = mels.size() - else: - batch_size, _ = texts.size() - - # Initialise all hidden states and pack into tuple - attn_hidden = torch.zeros(batch_size, self.decoder_dims, device=device) - rnn1_hidden = torch.zeros(batch_size, self.lstm_dims, device=device) - rnn2_hidden = torch.zeros(batch_size, self.lstm_dims, device=device) - hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden) - - # Initialise all lstm cell states and pack into tuple - rnn1_cell = torch.zeros(batch_size, self.lstm_dims, device=device) - rnn2_cell = torch.zeros(batch_size, self.lstm_dims, device=device) - cell_states = (rnn1_cell, rnn2_cell) - - # <GO> Frame for start of decoder loop - go_frame = torch.zeros(batch_size, self.n_mels, device=device) - - # SV2TTS: Run the encoder with the speaker embedding - # The projection avoids unnecessary matmuls in the decoder loop - encoder_seq = self.encoder(texts) - - encoder_seq = self._add_speaker_embedding(encoder_seq, speaker_embedding) - - if hparams.use_gst and self.gst is not None: - if self.training: - style_embed = self.gst(speaker_embedding, speaker_embedding) # for training, speaker embedding can represent both style inputs and referenced - # style_embed = style_embed.expand_as(encoder_seq) - # encoder_seq = torch.cat((encoder_seq, style_embed), 2) - elif style_idx >= 0 and style_idx < 10: - query = torch.zeros(1, 1, self.gst.stl.attention.num_units) - if device.type == 'cuda': - query = query.cuda() - gst_embed = torch.tanh(self.gst.stl.embed) - key = gst_embed[style_idx].unsqueeze(0).expand(1, -1, -1) - style_embed = self.gst.stl.attention(query, key) - else: - speaker_embedding_style = torch.zeros(speaker_embedding.size()[0], 1, self.speaker_embedding_size).to(device) - style_embed = self.gst(speaker_embedding_style, speaker_embedding) - encoder_seq = self._concat_speaker_embedding(encoder_seq, style_embed) # return: [batch_size, text_num_chars, project_dims] - - encoder_seq_proj = self.encoder_proj(encoder_seq) # return: [batch_size, text_num_chars, decoder_dims] - - # Need a couple of lists for outputs - mel_outputs, attn_scores, stop_outputs = [], [], [] - - # Need an initial context vector - context_vec = torch.zeros(batch_size, self.project_dims, device=device) - - # Run the decoder loop - for t in range(0, steps, self.r): - if self.training: - prenet_in = mels[:, :, t -1] if t > 0 else go_frame - else: - prenet_in = mel_outputs[-1][:, :, -1] if t > 0 else go_frame - mel_frames, scores, hidden_states, cell_states, context_vec, stop_tokens = \ - self.decoder(encoder_seq, encoder_seq_proj, prenet_in, - hidden_states, cell_states, context_vec, t, texts) - mel_outputs.append(mel_frames) - 
attn_scores.append(scores) - stop_outputs.extend([stop_tokens] * self.r) - if not self.training and (stop_tokens * 10 > min_stop_token).all() and t > 10: break - - # Concat the mel outputs into sequence - mel_outputs = torch.cat(mel_outputs, dim=2) - - # Post-Process for Linear Spectrograms - postnet_out = self.postnet(mel_outputs) - linear = self.post_proj(postnet_out) - linear = linear.transpose(1, 2) - - # For easy visualisation - attn_scores = torch.cat(attn_scores, 1) - # attn_scores = attn_scores.cpu().data.numpy() - stop_outputs = torch.cat(stop_outputs, 1) - - if self.training: - self.train() - - return mel_outputs, linear, attn_scores, stop_outputs - - def generate(self, x, speaker_embedding, steps=2000, style_idx=0, min_stop_token=5): - self.eval() - mel_outputs, linear, attn_scores, _ = self.forward(x, None, speaker_embedding, steps, style_idx, min_stop_token) - return mel_outputs, linear, attn_scores diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/__init__.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/__init__.py deleted file mode 100644 index 4287ca8617970fa8fc025b75cb319c7032706910..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# \ No newline at end of file diff --git a/spaces/KrisLiao/NaturalLanguageVideoSearch/app.py b/spaces/KrisLiao/NaturalLanguageVideoSearch/app.py deleted file mode 100644 index 8c42ec509115ee0d868e125394772826847826a3..0000000000000000000000000000000000000000 --- a/spaces/KrisLiao/NaturalLanguageVideoSearch/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import os -os.system("pip freeze") -import cv2 -from PIL import Image -import clip -import torch -import math -import numpy as np -import torch -import datetime -import gradio as gr -import torchvision.transforms as T - -# Load the open CLIP model -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device) - -def produce_video(video, seconds, search_query): - time1 = seconds-3 if seconds>3 else 0 - time2 = seconds+2 if seconds>3 else 5 - name = search_query.replace(" ", "_") + '.mp4' - cmd = f'ffmpeg -y -i {video} -ss {time1} -to {time2} -async 1 {name}' - output_video = os.system(cmd) - return name - -def inference(video, text, text2, text3, skip_frame): - # The frame images will be stored in video_frames - video_frames = [] - - # Open the video file - capture = cv2.VideoCapture(video) - capture.set(cv2.CAP_PROP_FRAME_WIDTH , 360) - capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480) - fps = capture.get(cv2.CAP_PROP_FPS) - - current_frame = 0 - # Read the current frame - ret, frame = capture.read() - while capture.isOpened() and ret: - ret,frame = capture.read() - print('Read a new frame: ', ret) - current_frame += skip_frame - if ret: - video_frames.append(Image.fromarray(frame[:, :, ::-1])) - capture.set(cv2.CAP_PROP_POS_FRAMES, current_frame) - - # Print some statistics - print(f"Frames extracted: {len(video_frames)}") - - - # You can try tuning the batch size for very large videos, but it should usually be OK - batch_size = 256 - batches = math.ceil(len(video_frames) / batch_size) - - # The encoded features will bs stored in video_features - video_features = torch.empty([0, 512], dtype=torch.float16).to(device) - - # Process each batch - for i in range(batches): - print(f"Processing batch {i+1}/{batches}") - - # Get the relevant frames - batch_frames = video_frames[i*batch_size : (i+1)*batch_size] - - # Preprocess the images for the batch - batch_preprocessed = 
torch.stack([preprocess(frame) for frame in batch_frames]).to(device) - - # Encode with CLIP and normalize - with torch.no_grad(): - batch_features = model.encode_image(batch_preprocessed) - batch_features /= batch_features.norm(dim=-1, keepdim=True) - - # Append the batch to the list containing all features - video_features = torch.cat((video_features, batch_features)) - - # Print some stats - print(f"Features: {video_features.shape}") - - - search_query=text - display_heatmap=False - display_results_count=1 - # Encode and normalize the search query using CLIP - with torch.no_grad(): - text_features = model.encode_text(clip.tokenize(search_query).to(device)) - text_features /= text_features.norm(dim=-1, keepdim=True) - - # Compute the similarity between the search query and each frame using the Cosine similarity - similarities = (100.0 * video_features @ text_features.T) - values, best_photo_idx = similarities.topk(display_results_count, dim=0) - - - for frame_id in best_photo_idx: - frame = video_frames[frame_id] - # Find the timestamp in the video and display it - seconds = round(frame_id.cpu().numpy()[0] * skip_frame / fps) - output_video = produce_video(video, seconds, search_query) - - - search_query=text2 - with torch.no_grad(): - text_features = model.encode_text(clip.tokenize(search_query).to(device)) - text_features /= text_features.norm(dim=-1, keepdim=True) - - # Compute the similarity between the search query and each frame using the Cosine similarity - similarities = (100.0 * video_features @ text_features.T) - values, best_photo_idx = similarities.topk(display_results_count, dim=0) - - for frame_id in best_photo_idx: - frame = video_frames[frame_id] - # Find the timestamp in the video and display it - seconds = round(frame_id.cpu().numpy()[0] * skip_frame / fps) - output_video2 = produce_video(video, seconds, search_query) - - - search_query=text3 - with torch.no_grad(): - text_features = model.encode_text(clip.tokenize(search_query).to(device)) - text_features /= text_features.norm(dim=-1, keepdim=True) - - # Compute the similarity between the search query and each frame using the Cosine similarity - similarities = (100.0 * video_features @ text_features.T) - values, best_photo_idx = similarities.topk(display_results_count, dim=0) - - for frame_id in best_photo_idx: - frame = video_frames[frame_id] - # Find the timestamp in the video and display it - seconds = round(frame_id.cpu().numpy()[0] * skip_frame / fps) - output_video3 = produce_video(video, seconds, search_query) - - # return frame,f"Found at {str(datetime.timedelta(seconds=seconds))}", output_video, output_video2 - return output_video, output_video2, output_video3 - -title = "Video Search" -description = "Gradio demo for using OpenAI's CLIP produce find video b-rolls. To use it, simply upload your video and add your text." 
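The three query blocks above repeat the same encode-and-rank step. As a hedged sketch (the helper name and signature are illustrative, not part of the original app), that step could be factored into a single function; it assumes `model` is the CLIP ViT-B/32 model loaded above and `video_features` holds the unit-normalized frame embeddings:

```python
import torch
import clip  # assumed available, as in the app above

def best_frame_for_query(model, video_features, query, skip_frame, fps, device="cpu"):
    """Return (frame_id, seconds) of the frame that best matches a text query.

    Illustrative helper, not part of the original app. `video_features` is the
    [num_frames, 512] tensor of normalized CLIP image embeddings built above,
    and `skip_frame` / `fps` are the same values used during frame extraction.
    """
    with torch.no_grad():
        text_features = model.encode_text(clip.tokenize(query).to(device))
        text_features /= text_features.norm(dim=-1, keepdim=True)
    # Both sides are unit vectors, so this matrix product is cosine similarity.
    similarities = 100.0 * video_features @ text_features.T
    _, best_idx = similarities.topk(1, dim=0)
    frame_id = int(best_idx[0])
    seconds = round(frame_id * skip_frame / fps)
    return frame_id, seconds
```

With a helper like this, `inference` would call it once per search query and pass the resulting timestamp to `produce_video`, instead of repeating the encode-and-rank block three times.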
-article = "<p style='text-align: center'><a href='https://github.com/haltakov/natural-language-youtube-search' target='_blank'>Reference</a></p>" -examples=[['morningRoutine.mp4',"Playing piano", "Do some yoga and meditation", "Eating breakfast on the grass", 24]] - -gr.Interface( - inference, - ["video","text", "text","text", gr.Slider(1, 150, 100, label="Extrat a frame every")], - [gr.Video(), gr.Video(), gr.Video()], - title=title, - description=description, - article=article, - examples=examples - ).launch(debug=True,enable_queue=True) - diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/anchor_free_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/anchor_free_head.py deleted file mode 100644 index fcb927d5d8928aa0b3ad2fe12782c0a1f9f4abc4..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/anchor_free_head.py +++ /dev/null @@ -1,317 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import abstractmethod -from typing import Any, List, Sequence, Tuple, Union - -import torch.nn as nn -from mmcv.cnn import ConvModule -from numpy import ndarray -from torch import Tensor - -from mmdet.registry import MODELS, TASK_UTILS -from mmdet.utils import (ConfigType, InstanceList, MultiConfig, OptConfigType, - OptInstanceList) -from ..task_modules.prior_generators import MlvlPointGenerator -from ..utils import multi_apply -from .base_dense_head import BaseDenseHead - -StrideType = Union[Sequence[int], Sequence[Tuple[int, int]]] - - -@MODELS.register_module() -class AnchorFreeHead(BaseDenseHead): - """Anchor-free head (FCOS, Fovea, RepPoints, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - stacked_convs (int): Number of stacking convs of the head. - strides (Sequence[int] or Sequence[Tuple[int, int]]): Downsample - factor of each feature map. - dcn_on_last_conv (bool): If true, use dcn in the last layer of - towers. Defaults to False. - conv_bias (bool or str): If specified as `auto`, it will be decided by - the norm_cfg. Bias of conv will be set as True if `norm_cfg` is - None, otherwise False. Default: "auto". - loss_cls (:obj:`ConfigDict` or dict): Config of classification loss. - loss_bbox (:obj:`ConfigDict` or dict): Config of localization loss. - bbox_coder (:obj:`ConfigDict` or dict): Config of bbox coder. Defaults - 'DistancePointBBoxCoder'. - conv_cfg (:obj:`ConfigDict` or dict, Optional): Config dict for - convolution layer. Defaults to None. - norm_cfg (:obj:`ConfigDict` or dict, Optional): Config dict for - normalization layer. Defaults to None. - train_cfg (:obj:`ConfigDict` or dict, Optional): Training config of - anchor-free head. - test_cfg (:obj:`ConfigDict` or dict, Optional): Testing config of - anchor-free head. - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict]): Initialization config dict. 
- """ # noqa: W605 - - _version = 1 - - def __init__( - self, - num_classes: int, - in_channels: int, - feat_channels: int = 256, - stacked_convs: int = 4, - strides: StrideType = (4, 8, 16, 32, 64), - dcn_on_last_conv: bool = False, - conv_bias: Union[bool, str] = 'auto', - loss_cls: ConfigType = dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox: ConfigType = dict(type='IoULoss', loss_weight=1.0), - bbox_coder: ConfigType = dict(type='mmdet.DistancePointBBoxCoder'), - conv_cfg: OptConfigType = None, - norm_cfg: OptConfigType = None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - init_cfg: MultiConfig = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', name='conv_cls', std=0.01, bias_prob=0.01)) - ) -> None: - super().__init__(init_cfg=init_cfg) - self.num_classes = num_classes - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.loss_cls = MODELS.build(loss_cls) - self.loss_bbox = MODELS.build(loss_bbox) - self.bbox_coder = TASK_UTILS.build(bbox_coder) - - self.prior_generator = MlvlPointGenerator(strides) - - # In order to keep a more general interface and be consistent with - # anchor_head. We can think of point like one anchor - self.num_base_priors = self.prior_generator.num_base_priors[0] - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self._init_layers() - - def _init_layers(self) -> None: - """Initialize layers of the head.""" - self._init_cls_convs() - self._init_reg_convs() - self._init_predictor() - - def _init_cls_convs(self) -> None: - """Initialize classification conv layers of the head.""" - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_reg_convs(self) -> None: - """Initialize bbox regression conv layers of the head.""" - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_predictor(self) -> None: - """Initialize predictor layers of the head.""" - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - def _load_from_state_dict(self, state_dict: dict, prefix: str, - local_metadata: dict, strict: bool, - missing_keys: Union[List[str], str], - unexpected_keys: Union[List[str], str], - error_msgs: Union[List[str], str]) 
-> None: - """Hack some keys of the model state dict so that can load checkpoints - of previous version.""" - version = local_metadata.get('version', None) - if version is None: - # the key is different in early versions - # for example, 'fcos_cls' become 'conv_cls' now - bbox_head_keys = [ - k for k in state_dict.keys() if k.startswith(prefix) - ] - ori_predictor_keys = [] - new_predictor_keys = [] - # e.g. 'fcos_cls' or 'fcos_reg' - for key in bbox_head_keys: - ori_predictor_keys.append(key) - key = key.split('.') - if len(key) < 2: - conv_name = None - elif key[1].endswith('cls'): - conv_name = 'conv_cls' - elif key[1].endswith('reg'): - conv_name = 'conv_reg' - elif key[1].endswith('centerness'): - conv_name = 'conv_centerness' - else: - conv_name = None - if conv_name is not None: - key[1] = conv_name - new_predictor_keys.append('.'.join(key)) - else: - ori_predictor_keys.pop(-1) - for i in range(len(new_predictor_keys)): - state_dict[new_predictor_keys[i]] = state_dict.pop( - ori_predictor_keys[i]) - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) - - def forward(self, x: Tuple[Tensor]) -> Tuple[List[Tensor], List[Tensor]]: - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually contain classification scores and bbox predictions. - - - cls_scores (list[Tensor]): Box scores for each scale level, \ - each is a 4D-tensor, the channel number is \ - num_points * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for each scale \ - level, each is a 4D-tensor, the channel number is num_points * 4. - """ - return multi_apply(self.forward_single, x)[:2] - - def forward_single(self, x: Tensor) -> Tuple[Tensor, ...]: - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: Scores for each class, bbox predictions, features - after classification and regression conv layers, some - models needs these features like FCOS. - """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - return cls_score, bbox_pred, cls_feat, reg_feat - - @abstractmethod - def loss_by_feat( - self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None) -> dict: - """Calculate the loss based on the features extracted by the detection - head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. 
- """ - - raise NotImplementedError - - @abstractmethod - def get_targets(self, points: List[Tensor], - batch_gt_instances: InstanceList) -> Any: - """Compute regression, classification and centerness targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - """ - raise NotImplementedError - - # TODO refactor aug_test - def aug_test(self, - aug_batch_feats: List[Tensor], - aug_batch_img_metas: List[List[Tensor]], - rescale: bool = False) -> List[ndarray]: - """Test function with test time augmentation. - - Args: - aug_batch_feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - aug_batch_img_metas (list[list[dict]]): the outer list indicates - test-time augs (multiscale, flip, etc.) and the inner list - indicates images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes( - aug_batch_feats, aug_batch_img_metas, rescale=rescale) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/solov2_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/solov2_head.py deleted file mode 100644 index 3efda8ce09224adec79f543e887d006368901987..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/solov2_head.py +++ /dev/null @@ -1,802 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import List, Optional, Tuple - -import mmcv -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmengine.model import BaseModule -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.models.utils.misc import floordiv -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, InstanceList, MultiConfig, OptConfigType -from ..layers import mask_matrix_nms -from ..utils import center_of_mass, generate_coordinate, multi_apply -from .solo_head import SOLOHead -from ...structures.mask import mask2bbox - - -class MaskFeatModule(BaseModule): - """SOLOv2 mask feature map branch used in `SOLOv2: Dynamic and Fast - Instance Segmentation. <https://arxiv.org/pdf/2003.10152>`_ - - Args: - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels of the mask feature - map branch. - start_level (int): The starting feature map level from RPN that - will be used to predict the mask feature map. - end_level (int): The ending feature map level from rpn that - will be used to predict the mask feature map. - out_channels (int): Number of output channels of the mask feature - map branch. This is the channel count of the mask - feature map that to be dynamically convolved with the predicted - kernel. - mask_stride (int): Downsample factor of the mask feature map output. - Defaults to 4. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ - - def __init__( - self, - in_channels: int, - feat_channels: int, - start_level: int, - end_level: int, - out_channels: int, - mask_stride: int = 4, - conv_cfg: OptConfigType = None, - norm_cfg: OptConfigType = None, - init_cfg: MultiConfig = [ - dict(type='Normal', layer='Conv2d', std=0.01) - ] - ) -> None: - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.feat_channels = feat_channels - self.start_level = start_level - self.end_level = end_level - self.mask_stride = mask_stride - assert start_level >= 0 and end_level >= start_level - self.out_channels = out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self._init_layers() - self.fp16_enabled = False - - def _init_layers(self) -> None: - """Initialize layers of the head.""" - self.convs_all_levels = nn.ModuleList() - for i in range(self.start_level, self.end_level + 1): - convs_per_level = nn.Sequential() - if i == 0: - convs_per_level.add_module( - f'conv{i}', - ConvModule( - self.in_channels, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - inplace=False)) - self.convs_all_levels.append(convs_per_level) - continue - - for j in range(i): - if j == 0: - if i == self.end_level: - chn = self.in_channels + 2 - else: - chn = self.in_channels - convs_per_level.add_module( - f'conv{j}', - ConvModule( - chn, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - inplace=False)) - convs_per_level.add_module( - f'upsample{j}', - nn.Upsample( - scale_factor=2, - mode='bilinear', - align_corners=False)) - continue - - convs_per_level.add_module( - f'conv{j}', - ConvModule( - self.feat_channels, - self.feat_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - inplace=False)) - convs_per_level.add_module( - f'upsample{j}', - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - - self.convs_all_levels.append(convs_per_level) - - self.conv_pred = ConvModule( - self.feat_channels, - self.out_channels, - 1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - - def forward(self, x: Tuple[Tensor]) -> Tensor: - """Forward features from the upstream network. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - Tensor: The predicted mask feature map. - """ - inputs = x[self.start_level:self.end_level + 1] - assert len(inputs) == (self.end_level - self.start_level + 1) - feature_add_all_level = self.convs_all_levels[0](inputs[0]) - for i in range(1, len(inputs)): - input_p = inputs[i] - if i == len(inputs) - 1: - coord_feat = generate_coordinate(input_p.size(), - input_p.device) - input_p = torch.cat([input_p, coord_feat], 1) - - feature_add_all_level = feature_add_all_level + \ - self.convs_all_levels[i](input_p) - - feature_pred = self.conv_pred(feature_add_all_level) - return feature_pred - - -@MODELS.register_module() -class SOLOV2Head(SOLOHead): - """SOLOv2 mask head used in `SOLOv2: Dynamic and Fast Instance - Segmentation. <https://arxiv.org/pdf/2003.10152>`_ - - Args: - mask_feature_head (dict): Config of SOLOv2MaskFeatHead. - dynamic_conv_size (int): Dynamic Conv kernel size. Defaults to 1. - dcn_cfg (dict): Dcn conv configurations in kernel_convs and cls_conv. - Defaults to None. - dcn_apply_to_all_conv (bool): Whether to use dcn in every layer of - kernel_convs and cls_convs, or only the last layer. It shall be set - `True` for the normal version of SOLOv2 and `False` for the - light-weight version. 
Defaults to True. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - *args, - mask_feature_head: ConfigType, - dynamic_conv_size: int = 1, - dcn_cfg: OptConfigType = None, - dcn_apply_to_all_conv: bool = True, - init_cfg: MultiConfig = [ - dict(type='Normal', layer='Conv2d', std=0.01), - dict( - type='Normal', - std=0.01, - bias_prob=0.01, - override=dict(name='conv_cls')) - ], - **kwargs) -> None: - assert dcn_cfg is None or isinstance(dcn_cfg, dict) - self.dcn_cfg = dcn_cfg - self.with_dcn = dcn_cfg is not None - self.dcn_apply_to_all_conv = dcn_apply_to_all_conv - self.dynamic_conv_size = dynamic_conv_size - mask_out_channels = mask_feature_head.get('out_channels') - self.kernel_out_channels = \ - mask_out_channels * self.dynamic_conv_size * self.dynamic_conv_size - - super().__init__(*args, init_cfg=init_cfg, **kwargs) - - # update the in_channels of mask_feature_head - if mask_feature_head.get('in_channels', None) is not None: - if mask_feature_head.in_channels != self.in_channels: - warnings.warn('The `in_channels` of SOLOv2MaskFeatHead and ' - 'SOLOv2Head should be same, changing ' - 'mask_feature_head.in_channels to ' - f'{self.in_channels}') - mask_feature_head.update(in_channels=self.in_channels) - else: - mask_feature_head.update(in_channels=self.in_channels) - - self.mask_feature_head = MaskFeatModule(**mask_feature_head) - self.mask_stride = self.mask_feature_head.mask_stride - self.fp16_enabled = False - - def _init_layers(self) -> None: - """Initialize layers of the head.""" - self.cls_convs = nn.ModuleList() - self.kernel_convs = nn.ModuleList() - conv_cfg = None - for i in range(self.stacked_convs): - if self.with_dcn: - if self.dcn_apply_to_all_conv: - conv_cfg = self.dcn_cfg - elif i == self.stacked_convs - 1: - # light head - conv_cfg = self.dcn_cfg - - chn = self.in_channels + 2 if i == 0 else self.feat_channels - self.kernel_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - - self.conv_kernel = nn.Conv2d( - self.feat_channels, self.kernel_out_channels, 3, padding=1) - - def forward(self, x): - """Forward features from the upstream network. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores, mask prediction, - and mask features. - - - mlvl_kernel_preds (list[Tensor]): Multi-level dynamic kernel - prediction. The kernel is used to generate instance - segmentation masks by dynamic convolution. Each element in - the list has shape - (batch_size, kernel_out_channels, num_grids, num_grids). - - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each - element in the list has shape - (batch_size, num_classes, num_grids, num_grids). - - mask_feats (Tensor): Unified mask feature map used to - generate instance segmentation masks by dynamic convolution. - Has shape (batch_size, mask_out_channels, h, w). 
- """ - assert len(x) == self.num_levels - mask_feats = self.mask_feature_head(x) - ins_kernel_feats = self.resize_feats(x) - mlvl_kernel_preds = [] - mlvl_cls_preds = [] - for i in range(self.num_levels): - ins_kernel_feat = ins_kernel_feats[i] - # ins branch - # concat coord - coord_feat = generate_coordinate(ins_kernel_feat.size(), - ins_kernel_feat.device) - ins_kernel_feat = torch.cat([ins_kernel_feat, coord_feat], 1) - - # kernel branch - kernel_feat = ins_kernel_feat - kernel_feat = F.interpolate( - kernel_feat, - size=self.num_grids[i], - mode='bilinear', - align_corners=False) - - cate_feat = kernel_feat[:, :-2, :, :] - - kernel_feat = kernel_feat.contiguous() - for i, kernel_conv in enumerate(self.kernel_convs): - kernel_feat = kernel_conv(kernel_feat) - kernel_pred = self.conv_kernel(kernel_feat) - - # cate branch - cate_feat = cate_feat.contiguous() - for i, cls_conv in enumerate(self.cls_convs): - cate_feat = cls_conv(cate_feat) - cate_pred = self.conv_cls(cate_feat) - - mlvl_kernel_preds.append(kernel_pred) - mlvl_cls_preds.append(cate_pred) - - return mlvl_kernel_preds, mlvl_cls_preds, mask_feats - - def _get_targets_single(self, - gt_instances: InstanceData, - featmap_sizes: Optional[list] = None) -> tuple: - """Compute targets for predictions of single image. - - Args: - gt_instances (:obj:`InstanceData`): Ground truth of instance - annotations. It should includes ``bboxes``, ``labels``, - and ``masks`` attributes. - featmap_sizes (list[:obj:`torch.size`]): Size of each - feature map from feature pyramid, each element - means (feat_h, feat_w). Defaults to None. - - Returns: - Tuple: Usually returns a tuple containing targets for predictions. - - - mlvl_pos_mask_targets (list[Tensor]): Each element represent - the binary mask targets for positive points in this - level, has shape (num_pos, out_h, out_w). - - mlvl_labels (list[Tensor]): Each element is - classification labels for all - points in this level, has shape - (num_grid, num_grid). - - mlvl_pos_masks (list[Tensor]): Each element is - a `BoolTensor` to represent whether the - corresponding point in single level - is positive, has shape (num_grid **2). - - mlvl_pos_indexes (list[list]): Each element - in the list contains the positive index in - corresponding level, has shape (num_pos). - """ - gt_labels = gt_instances.labels - device = gt_labels.device - - gt_bboxes = gt_instances.bboxes - gt_areas = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - gt_masks = gt_instances.masks.to_tensor( - dtype=torch.bool, device=device) - - mlvl_pos_mask_targets = [] - mlvl_pos_indexes = [] - mlvl_labels = [] - mlvl_pos_masks = [] - for (lower_bound, upper_bound), num_grid \ - in zip(self.scale_ranges, self.num_grids): - mask_target = [] - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_index = [] - labels = torch.zeros([num_grid, num_grid], - dtype=torch.int64, - device=device) + self.num_classes - pos_mask = torch.zeros([num_grid**2], - dtype=torch.bool, - device=device) - - gt_inds = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(gt_inds) == 0: - mlvl_pos_mask_targets.append( - torch.zeros([0, featmap_sizes[0], featmap_sizes[1]], - dtype=torch.uint8, - device=device)) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - mlvl_pos_indexes.append([]) - continue - hit_gt_bboxes = gt_bboxes[gt_inds] - hit_gt_labels = gt_labels[gt_inds] - hit_gt_masks = gt_masks[gt_inds, ...] 
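For illustration, the scale-range bucketing performed in this loop — routing each ground-truth instance to FPN levels by the square root of its box area — can be sketched on its own. The ranges below are the common SOLO defaults and are only placeholders; a given config may use different values.

```python
import torch

# Placeholder scale ranges (sqrt-area intervals), one per FPN level.
scale_ranges = ((1, 96), (48, 192), (96, 384), (192, 768), (384, 2048))
gt_bboxes = torch.tensor([[10., 10., 50., 90.],     # small instance
                          [0., 0., 500., 600.]])    # large instance
gt_areas = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) *
                      (gt_bboxes[:, 3] - gt_bboxes[:, 1]))

for lvl, (lower, upper) in enumerate(scale_ranges):
    hit = ((gt_areas >= lower) & (gt_areas <= upper)).nonzero().flatten()
    print(f'level {lvl}: assigned gt indices {hit.tolist()}')
# Because adjacent ranges overlap, one instance can be assigned to more
# than one level, as in the surrounding loop.
```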
- - pos_w_ranges = 0.5 * (hit_gt_bboxes[:, 2] - - hit_gt_bboxes[:, 0]) * self.pos_scale - pos_h_ranges = 0.5 * (hit_gt_bboxes[:, 3] - - hit_gt_bboxes[:, 1]) * self.pos_scale - - # Make sure hit_gt_masks has a value - valid_mask_flags = hit_gt_masks.sum(dim=-1).sum(dim=-1) > 0 - - for gt_mask, gt_label, pos_h_range, pos_w_range, \ - valid_mask_flag in \ - zip(hit_gt_masks, hit_gt_labels, pos_h_ranges, - pos_w_ranges, valid_mask_flags): - if not valid_mask_flag: - continue - upsampled_size = (featmap_sizes[0] * self.mask_stride, - featmap_sizes[1] * self.mask_stride) - center_h, center_w = center_of_mass(gt_mask) - - coord_w = int( - floordiv((center_w / upsampled_size[1]), (1. / num_grid), - rounding_mode='trunc')) - coord_h = int( - floordiv((center_h / upsampled_size[0]), (1. / num_grid), - rounding_mode='trunc')) - - # left, top, right, down - top_box = max( - 0, - int( - floordiv( - (center_h - pos_h_range) / upsampled_size[0], - (1. / num_grid), - rounding_mode='trunc'))) - down_box = min( - num_grid - 1, - int( - floordiv( - (center_h + pos_h_range) / upsampled_size[0], - (1. / num_grid), - rounding_mode='trunc'))) - left_box = max( - 0, - int( - floordiv( - (center_w - pos_w_range) / upsampled_size[1], - (1. / num_grid), - rounding_mode='trunc'))) - right_box = min( - num_grid - 1, - int( - floordiv( - (center_w + pos_w_range) / upsampled_size[1], - (1. / num_grid), - rounding_mode='trunc'))) - - top = max(top_box, coord_h - 1) - down = min(down_box, coord_h + 1) - left = max(coord_w - 1, left_box) - right = min(right_box, coord_w + 1) - - labels[top:(down + 1), left:(right + 1)] = gt_label - # ins - gt_mask = np.uint8(gt_mask.cpu().numpy()) - # Follow the original implementation, F.interpolate is - # different from cv2 and opencv - gt_mask = mmcv.imrescale(gt_mask, scale=1. / self.mask_stride) - gt_mask = torch.from_numpy(gt_mask).to(device=device) - - for i in range(top, down + 1): - for j in range(left, right + 1): - index = int(i * num_grid + j) - this_mask_target = torch.zeros( - [featmap_sizes[0], featmap_sizes[1]], - dtype=torch.uint8, - device=device) - this_mask_target[:gt_mask.shape[0], :gt_mask. - shape[1]] = gt_mask - mask_target.append(this_mask_target) - pos_mask[index] = True - pos_index.append(index) - if len(mask_target) == 0: - mask_target = torch.zeros( - [0, featmap_sizes[0], featmap_sizes[1]], - dtype=torch.uint8, - device=device) - else: - mask_target = torch.stack(mask_target, 0) - mlvl_pos_mask_targets.append(mask_target) - mlvl_labels.append(labels) - mlvl_pos_masks.append(pos_mask) - mlvl_pos_indexes.append(pos_index) - return (mlvl_pos_mask_targets, mlvl_labels, mlvl_pos_masks, - mlvl_pos_indexes) - - def loss_by_feat(self, mlvl_kernel_preds: List[Tensor], - mlvl_cls_preds: List[Tensor], mask_feats: Tensor, - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], **kwargs) -> dict: - """Calculate the loss based on the features extracted by the mask head. - - Args: - mlvl_kernel_preds (list[Tensor]): Multi-level dynamic kernel - prediction. The kernel is used to generate instance - segmentation masks by dynamic convolution. Each element in the - list has shape - (batch_size, kernel_out_channels, num_grids, num_grids). - mlvl_cls_preds (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids, num_grids). - mask_feats (Tensor): Unified mask feature map used to generate - instance segmentation masks by dynamic convolution. Has shape - (batch_size, mask_out_channels, h, w). 
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes``, ``masks``, - and ``labels`` attributes. - batch_img_metas (list[dict]): Meta information of multiple images. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = mask_feats.size()[-2:] - - pos_mask_targets, labels, pos_masks, pos_indexes = multi_apply( - self._get_targets_single, - batch_gt_instances, - featmap_sizes=featmap_sizes) - - mlvl_mask_targets = [ - torch.cat(lvl_mask_targets, 0) - for lvl_mask_targets in zip(*pos_mask_targets) - ] - - mlvl_pos_kernel_preds = [] - for lvl_kernel_preds, lvl_pos_indexes in zip(mlvl_kernel_preds, - zip(*pos_indexes)): - lvl_pos_kernel_preds = [] - for img_lvl_kernel_preds, img_lvl_pos_indexes in zip( - lvl_kernel_preds, lvl_pos_indexes): - img_lvl_pos_kernel_preds = img_lvl_kernel_preds.view( - img_lvl_kernel_preds.shape[0], -1)[:, img_lvl_pos_indexes] - lvl_pos_kernel_preds.append(img_lvl_pos_kernel_preds) - mlvl_pos_kernel_preds.append(lvl_pos_kernel_preds) - - # make multilevel mlvl_mask_pred - mlvl_mask_preds = [] - for lvl_pos_kernel_preds in mlvl_pos_kernel_preds: - lvl_mask_preds = [] - for img_id, img_lvl_pos_kernel_pred in enumerate( - lvl_pos_kernel_preds): - if img_lvl_pos_kernel_pred.size()[-1] == 0: - continue - img_mask_feats = mask_feats[[img_id]] - h, w = img_mask_feats.shape[-2:] - num_kernel = img_lvl_pos_kernel_pred.shape[1] - img_lvl_mask_pred = F.conv2d( - img_mask_feats, - img_lvl_pos_kernel_pred.permute(1, 0).view( - num_kernel, -1, self.dynamic_conv_size, - self.dynamic_conv_size), - stride=1).view(-1, h, w) - lvl_mask_preds.append(img_lvl_mask_pred) - if len(lvl_mask_preds) == 0: - lvl_mask_preds = None - else: - lvl_mask_preds = torch.cat(lvl_mask_preds, 0) - mlvl_mask_preds.append(lvl_mask_preds) - # dice loss - num_pos = 0 - for img_pos_masks in pos_masks: - for lvl_img_pos_masks in img_pos_masks: - # Fix `Tensor` object has no attribute `count_nonzero()` - # in PyTorch 1.6, the type of `lvl_img_pos_masks` - # should be `torch.bool`. - num_pos += lvl_img_pos_masks.nonzero().numel() - loss_mask = [] - for lvl_mask_preds, lvl_mask_targets in zip(mlvl_mask_preds, - mlvl_mask_targets): - if lvl_mask_preds is None: - continue - loss_mask.append( - self.loss_mask( - lvl_mask_preds, - lvl_mask_targets, - reduction_override='none')) - if num_pos > 0: - loss_mask = torch.cat(loss_mask).sum() / num_pos - else: - loss_mask = mask_feats.sum() * 0 - - # cate - flatten_labels = [ - torch.cat( - [img_lvl_labels.flatten() for img_lvl_labels in lvl_labels]) - for lvl_labels in zip(*labels) - ] - flatten_labels = torch.cat(flatten_labels) - - flatten_cls_preds = [ - lvl_cls_preds.permute(0, 2, 3, 1).reshape(-1, self.num_classes) - for lvl_cls_preds in mlvl_cls_preds - ] - flatten_cls_preds = torch.cat(flatten_cls_preds) - - loss_cls = self.loss_cls( - flatten_cls_preds, flatten_labels, avg_factor=num_pos + 1) - return dict(loss_mask=loss_mask, loss_cls=loss_cls) - - def predict_by_feat(self, mlvl_kernel_preds: List[Tensor], - mlvl_cls_scores: List[Tensor], mask_feats: Tensor, - batch_img_metas: List[dict], **kwargs) -> InstanceList: - """Transform a batch of output features extracted from the head into - mask results. - - Args: - mlvl_kernel_preds (list[Tensor]): Multi-level dynamic kernel - prediction. The kernel is used to generate instance - segmentation masks by dynamic convolution. Each element in the - list has shape - (batch_size, kernel_out_channels, num_grids, num_grids). 
- mlvl_cls_scores (list[Tensor]): Multi-level scores. Each element - in the list has shape - (batch_size, num_classes, num_grids, num_grids). - mask_feats (Tensor): Unified mask feature map used to generate - instance segmentation masks by dynamic convolution. Has shape - (batch_size, mask_out_channels, h, w). - batch_img_metas (list[dict]): Meta information of all images. - - Returns: - list[:obj:`InstanceData`]: Processed results of multiple - images.Each :obj:`InstanceData` usually contains - following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - num_levels = len(mlvl_cls_scores) - assert len(mlvl_kernel_preds) == len(mlvl_cls_scores) - - for lvl in range(num_levels): - cls_scores = mlvl_cls_scores[lvl] - cls_scores = cls_scores.sigmoid() - local_max = F.max_pool2d(cls_scores, 2, stride=1, padding=1) - keep_mask = local_max[:, :, :-1, :-1] == cls_scores - cls_scores = cls_scores * keep_mask - mlvl_cls_scores[lvl] = cls_scores.permute(0, 2, 3, 1) - - result_list = [] - for img_id in range(len(batch_img_metas)): - img_cls_pred = [ - mlvl_cls_scores[lvl][img_id].view(-1, self.cls_out_channels) - for lvl in range(num_levels) - ] - img_mask_feats = mask_feats[[img_id]] - img_kernel_pred = [ - mlvl_kernel_preds[lvl][img_id].permute(1, 2, 0).view( - -1, self.kernel_out_channels) for lvl in range(num_levels) - ] - img_cls_pred = torch.cat(img_cls_pred, dim=0) - img_kernel_pred = torch.cat(img_kernel_pred, dim=0) - result = self._predict_by_feat_single( - img_kernel_pred, - img_cls_pred, - img_mask_feats, - img_meta=batch_img_metas[img_id]) - result_list.append(result) - return result_list - - def _predict_by_feat_single(self, - kernel_preds: Tensor, - cls_scores: Tensor, - mask_feats: Tensor, - img_meta: dict, - cfg: OptConfigType = None) -> InstanceData: - """Transform a single image's features extracted from the head into - mask results. - - Args: - kernel_preds (Tensor): Dynamic kernel prediction of all points - in single image, has shape - (num_points, kernel_out_channels). - cls_scores (Tensor): Classification score of all points - in single image, has shape (num_points, num_classes). - mask_feats (Tensor): Mask prediction of all points in - single image, has shape (num_points, feat_h, feat_w). - img_meta (dict): Meta information of corresponding image. - cfg (dict, optional): Config used in test phase. - Defaults to None. - - Returns: - :obj:`InstanceData`: Processed results of single image. - it usually contains following keys. - - - scores (Tensor): Classification scores, has shape - (num_instance,). - - labels (Tensor): Has shape (num_instances,). - - masks (Tensor): Processed mask results, has - shape (num_instances, h, w). - """ - - def empty_results(cls_scores, ori_shape): - """Generate a empty results.""" - results = InstanceData() - results.scores = cls_scores.new_ones(0) - results.masks = cls_scores.new_zeros(0, *ori_shape) - results.labels = cls_scores.new_ones(0) - results.bboxes = cls_scores.new_zeros(0, 4) - return results - - cfg = self.test_cfg if cfg is None else cfg - assert len(kernel_preds) == len(cls_scores) - - featmap_size = mask_feats.size()[-2:] - - # overall info - h, w = img_meta['img_shape'][:2] - upsampled_size = (featmap_size[0] * self.mask_stride, - featmap_size[1] * self.mask_stride) - - # process. 
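The steps that follow implement SOLOv2's test-time pipeline: threshold the class scores, gather the matching dynamic kernels, convolve them with the shared mask feature map, filter and re-score the masks, then run Matrix NMS. The dynamic-convolution step in isolation looks roughly like this (all shapes below are made up for the sketch):

```python
import torch
import torch.nn.functional as F

num_instances, mask_channels, conv_size = 5, 256, 1   # illustrative sizes
mask_feats = torch.randn(1, mask_channels, 100, 152)  # shared mask features (1, C, H, W)
kernels = torch.randn(num_instances, mask_channels * conv_size ** 2)

# Reshape each predicted kernel into a conv filter and apply it to the
# shared feature map: one output channel per instance.
weight = kernels.view(num_instances, mask_channels, conv_size, conv_size)
mask_logits = F.conv2d(mask_feats, weight, stride=1).squeeze(0)  # (N, H, W)
masks = mask_logits.sigmoid() > 0.5   # 0.5 stands in for cfg.mask_thr
print(masks.shape)                    # torch.Size([5, 100, 152])
```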
- score_mask = (cls_scores > cfg.score_thr) - cls_scores = cls_scores[score_mask] - if len(cls_scores) == 0: - return empty_results(cls_scores, img_meta['ori_shape'][:2]) - - # cate_labels & kernel_preds - inds = score_mask.nonzero() - cls_labels = inds[:, 1] - kernel_preds = kernel_preds[inds[:, 0]] - - # trans vector. - lvl_interval = cls_labels.new_tensor(self.num_grids).pow(2).cumsum(0) - strides = kernel_preds.new_ones(lvl_interval[-1]) - - strides[:lvl_interval[0]] *= self.strides[0] - for lvl in range(1, self.num_levels): - strides[lvl_interval[lvl - - 1]:lvl_interval[lvl]] *= self.strides[lvl] - strides = strides[inds[:, 0]] - - # mask encoding. - kernel_preds = kernel_preds.view( - kernel_preds.size(0), -1, self.dynamic_conv_size, - self.dynamic_conv_size) - mask_preds = F.conv2d( - mask_feats, kernel_preds, stride=1).squeeze(0).sigmoid() - # mask. - masks = mask_preds > cfg.mask_thr - sum_masks = masks.sum((1, 2)).float() - keep = sum_masks > strides - if keep.sum() == 0: - return empty_results(cls_scores, img_meta['ori_shape'][:2]) - masks = masks[keep] - mask_preds = mask_preds[keep] - sum_masks = sum_masks[keep] - cls_scores = cls_scores[keep] - cls_labels = cls_labels[keep] - - # maskness. - mask_scores = (mask_preds * masks).sum((1, 2)) / sum_masks - cls_scores *= mask_scores - - scores, labels, _, keep_inds = mask_matrix_nms( - masks, - cls_labels, - cls_scores, - mask_area=sum_masks, - nms_pre=cfg.nms_pre, - max_num=cfg.max_per_img, - kernel=cfg.kernel, - sigma=cfg.sigma, - filter_thr=cfg.filter_thr) - if len(keep_inds) == 0: - return empty_results(cls_scores, img_meta['ori_shape'][:2]) - mask_preds = mask_preds[keep_inds] - mask_preds = F.interpolate( - mask_preds.unsqueeze(0), - size=upsampled_size, - mode='bilinear', - align_corners=False)[:, :, :h, :w] - mask_preds = F.interpolate( - mask_preds, - size=img_meta['ori_shape'][:2], - mode='bilinear', - align_corners=False).squeeze(0) - masks = mask_preds > cfg.mask_thr - - results = InstanceData() - results.masks = masks - results.labels = labels - results.scores = scores - # create an empty bbox in InstanceData to avoid bugs when - # calculating metrics. - bboxes = mask2bbox(masks) - # results.bboxes = results.scores.new_zeros(len(scores), 4) - results.bboxes = bboxes - - return results diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/scnet.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/scnet.py deleted file mode 100644 index 606a0203869f1731a21d811f06c4781f5cd90d8d..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/scnet.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.registry import MODELS -from .cascade_rcnn import CascadeRCNN - - -@MODELS.register_module() -class SCNet(CascadeRCNN): - """Implementation of `SCNet <https://arxiv.org/abs/2012.10150>`_""" - - def __init__(self, **kwargs) -> None: - super().__init__(**kwargs) diff --git a/spaces/LDJA/hotdog_ld/app.py b/spaces/LDJA/hotdog_ld/app.py deleted file mode 100644 index 0d087c313e09581cb27952dd4a59611cbd448863..0000000000000000000000000000000000000000 --- a/spaces/LDJA/hotdog_ld/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import gradio as gr -from transformers import pipeline - -pipeline = pipeline(task="image-classification", model="julien-c/hotdog-not-hotdog") - -def predict(image): - predictions = pipeline(image) - return {p["label"]: p["score"] for p in predictions} - -gr.Interface( - predict, - inputs=gr.inputs.Image(label="Upload hot dog candidate", type="filepath"), - outputs=gr.outputs.Label(num_top_classes=2), - title="Shakespeare : to be a Hot Dog Or Not to be", - allow_flagging="manual" -).launch() diff --git a/spaces/Legal-ease/legal-ease/base/create_collection.py b/spaces/Legal-ease/legal-ease/base/create_collection.py deleted file mode 100644 index b1b4c33d7da5fedc4915e1f6b002bfde3466b01a..0000000000000000000000000000000000000000 --- a/spaces/Legal-ease/legal-ease/base/create_collection.py +++ /dev/null @@ -1,163 +0,0 @@ -# Run this script independently (`python create_collection.py`) to create a Qdrant collection. A Qdrant collection is a set of vectors among which you can search. -# All the legal documents over which search needs to be enabled need to be converted to their embedding representation and inserted into a Qdrant collection for search feature to work. - -import os -import cohere -from datasets import load_dataset -from qdrant_client import QdrantClient -from qdrant_client import models -from qdrant_client.http import models as rest - -from constants import ( - ENGLISH_EMBEDDING_MODEL, - MULTILINGUAL_EMBEDDING_MODEL, - USE_MULTILINGUAL_EMBEDDING, - CREATE_QDRANT_COLLECTION_NAME, -) - -# load environment variables -QDRANT_HOST = os.environ.get("QDRANT_HOST") -QDRANT_API_KEY = os.environ.get("QDRANT_API_KEY") -COHERE_API_KEY = os.environ.get("COHERE_API_KEY") - - -def get_embedding_size(): - """ - Get the dimensions of the embeddings returned by the model being used to create embeddings for documents. - Returns: - embedding_size (`int`): - The dimensions of the embeddings returned by the embeddings model. - """ - if USE_MULTILINGUAL_EMBEDDING: - embedding_size = 768 - else: - embedding_size = 4096 - return embedding_size - - -def create_qdrant_collection(vector_size): - """ - (Re)-create a Qdrant Collection with the desired `collection name` , `vector_size` and `distance_measure`. - This collection will be used to keep all the vectors representing all the legal documents. - Args: - vector_size (`int`): - The dimensions of the embeddings that will be added to the collection. 
- """ - if USE_MULTILINGUAL_EMBEDDING: - # multilingual embedding model trained using dot product calculation - distance_measure = rest.Distance.DOT - else: - distance_measure = rest.Distance.COSINE - print("CREATE_QDRANT_COLLECTION_NAME:", CREATE_QDRANT_COLLECTION_NAME) - qdrant_client.recreate_collection( - collection_name=CREATE_QDRANT_COLLECTION_NAME, - vectors_config=models.VectorParams(size=vector_size, distance=distance_measure), - ) - - -def embed_legal_docs(legal_docs): - """ - Create embeddings and ids which will used to represent the legal documents upon which search needs to be enabled. - Args: - legal_docs (`List`): - A list of documents for which embeddings need to be created. - Returns: - doc_embeddings (`List`): - A list of embeddings corresponding to each document. - doc_ids (`List`): - A list of unique ids which will be used as identifiers for the points (documents) in a qdrant collection. - """ - if USE_MULTILINGUAL_EMBEDDING: - model_name = MULTILINGUAL_EMBEDDING_MODEL - else: - model_name = ENGLISH_EMBEDDING_MODEL - - legal_docs_embeds = cohere_client.embed( - texts=legal_docs, - model=model_name, - ) - doc_embeddings = [ - list(map(float, vector)) for vector in legal_docs_embeds.embeddings - ] - doc_ids = [id for id, _ in enumerate(legal_docs_embeds)] - - return doc_embeddings, doc_ids - - -def upsert_data_in_collection(vectors, ids, payload): - """ - Create embeddings and ids which will used to represent the legal documents upon which search needs to be enabled. - Args: - vectors (`List`): - A list of embeddings corresponding to each document which needs to be added to the collection. - ids (`List`): - A list of unique ids which will be used as identifiers for the points (documents) in a qdrant collection. - payload (`List`): - A list of additional information or metadata corresponding to each document being added to the collection. - """ - try: - update_result = qdrant_client.upsert( - collection_name=CREATE_QDRANT_COLLECTION_NAME, - points=rest.Batch( - ids=ids, - vectors=vectors, - payloads=payload, - ), - ) - return update_result - except: - return None - - -def fetch_legal_documents_and_payload(): - """ - Get the legal documents and additional information (payload) related to them which will be used as part of the search module. - Returns: - legal_docs (`List['str]`): - The documents that will be used as part of the search module. - payload (`List[Dict]`): - Additional information related to the documents that are being used as part of the search module. 
- """ - legal_dataset = load_dataset("joelito/covid19_emergency_event", split="train") - legal_docs = legal_dataset["text"] - - # prepare payload (additional information or metadata for documents being inserted) - payload = list(legal_dataset) - - return payload, legal_docs - - -if __name__ == "__main__": - # create qdrant and cohere client - cohere_client = cohere.Client(COHERE_API_KEY) - - qdrant_client = QdrantClient( - host=QDRANT_HOST, - prefer_grpc=True, - api_key=QDRANT_API_KEY, - ) - - # fetch the size of the embeddings depending on which model is being used to create embeddings for documents - vector_size = get_embedding_size() - - # create a collection in Qdrant - create_qdrant_collection(vector_size) - - # load the set of documents which will be inserted into the Qdrant collection - payload, legal_docs = fetch_legal_documents_and_payload() - - # create embedddings for documents and IDs for documents before insertion into Qdrant collection - doc_embeddings, doc_ids = embed_legal_docs(legal_docs) - - # insert/update documents in the previously created qdrant collection - update_result = upsert_data_in_collection(doc_embeddings, doc_ids, payload) - - collection_info = qdrant_client.get_collection( - collection_name=CREATE_QDRANT_COLLECTION_NAME - ) - - if update_result is not None: - if collection_info.vectors_count == len(legal_docs): - print("All documents have been successfully added to Qdrant Collection!") - else: - print("Failed to add documents to Qdrant collection") diff --git a/spaces/Mahiruoshi/vits-chatbot/main.py b/spaces/Mahiruoshi/vits-chatbot/main.py deleted file mode 100644 index 51e9c7b76fb92ec110c37a77c96ef7e6f10b226f..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/vits-chatbot/main.py +++ /dev/null @@ -1,277 +0,0 @@ -import re -import gradio as gr -import torch -import unicodedata -import commons -import utils -import pathlib -from models import SynthesizerTrn -from text import text_to_sequence -import time -import os -import io -from scipy.io.wavfile import write -from flask import Flask, request -from threading import Thread -import openai -import requests -import json -import soundfile as sf -from scipy import signal -class VitsGradio: - def __init__(self): - self.lan = ["中文","日文","自动"] - self.chatapi = ["gpt-3.5-turbo","gpt3"] - self.modelPaths = [] - for root,dirs,files in os.walk("checkpoints"): - for dir in dirs: - self.modelPaths.append(dir) - with gr.Blocks() as self.Vits: - with gr.Tab("调试用"): - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - self.text = gr.TextArea(label="Text", value="你好") - with gr.Accordion(label="测试api", open=False): - self.local_chat1 = gr.Checkbox(value=False, label="使用网址+文本进行模拟") - self.url_input = gr.TextArea(label="键入测试", value="http://127.0.0.1:8080/chat?Text=") - butto = gr.Button("模拟前端抓取语音文件") - btnVC = gr.Button("测试tts+对话程序") - with gr.Column(): - output2 = gr.TextArea(label="回复") - output1 = gr.Audio(label="采样率22050") - output3 = gr.outputs.File(label="44100hz: output.wav") - butto.click(self.Simul, inputs=[self.text, self.url_input], outputs=[output2,output3]) - btnVC.click(self.tts_fn, inputs=[self.text], outputs=[output1,output2]) - with gr.Tab("控制面板"): - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - self.api_input1 = gr.TextArea(label="输入gpt/茉莉云的api-key或本地存储说话模型的路径.如果要用茉莉云则用'|'隔开key和密码", value="sample:49eig5nu3rllvg6e|itcn9760") - with gr.Accordion(label="chatbot选择(默认gpt3.5),如需启用其他功能,请删除234行的注释", open=False): - self.api_input2 = 
gr.Checkbox(value=False, label="茉莉云") - self.local_chat1 = gr.Checkbox(value=False, label="启动本地chatbot") - self.local_chat2 = gr.Checkbox(value=False, label="是否量化") - res = gr.TextArea() - Botselection = gr.Button("完成chatbot设定") - Botselection.click(self.check_bot, inputs=[self.api_input1,self.api_input2,self.local_chat1,self.local_chat2], outputs = [res]) - self.input1 = gr.Dropdown(label = "模型", choices = self.modelPaths, value = self.modelPaths[0], type = "value") - self.input2 = gr.Dropdown(label="Language", choices=self.lan, value="自动", interactive=True) - with gr.Column(): - btnVC = gr.Button("完成vits TTS端设定") - self.input3 = gr.Dropdown(label="Speaker", choices=list(range(1001)), value=0, interactive=True) - self.input4 = gr.Slider(minimum=0, maximum=1.0, label="更改噪声比例(noise scale),以控制情感", value=0.6) - self.input5 = gr.Slider(minimum=0, maximum=1.0, label="更改噪声偏差(noise scale w),以控制音素长短", value=0.667) - self.input6 = gr.Slider(minimum=0.1, maximum=10, label="duration", value=1) - statusa = gr.TextArea() - btnVC.click(self.create_tts_fn, inputs=[self.input1, self.input2, self.input3, self.input4, self.input5, self.input6], outputs = [statusa]) - - def Simul(self,text,url_input): - web = url_input + text - res = requests.get(web) - music = res.content - with open('output.wav', 'wb') as code: - code.write(music) - file_path = "output.wav" - return web,file_path - - - def mori(self,text): - import http.client - conn = http.client.HTTPSConnection("api.mlyai.com") - payload = json.dumps({ - "content": text, - "type": 1, - "from": "123456", - "fromName": "侑" - }) - headers = { - 'Api-Key': self.api_key, - 'Api-Secret': self.api_secret, - 'Content-Type': 'application/json' - } - conn.request("POST", "/reply", payload, headers) - res = conn.getresponse() - data = res.read() - decoded_data = json.loads(data.decode("utf-8")) - - if decoded_data["code"] == "00000": - answer = decoded_data["data"][0]["content"] - if text == 'exit': - conn.close() - return answer - else: - conn.close() - return '对不起,做不到' - - def chatgpt(self,text): - self.messages.append({"role": "user", "content": text},) - chat = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages= self.messages) - reply = chat.choices[0].message.content - return reply - - def ChATGLM(self,text): - if text == 'clear': - self.history = [] - response, new_history = self.model.chat(self.tokenizer, text, self.history) - response = response.replace(" ",'').replace("\n",'.') - self.history = new_history - return response - - def gpt3_chat(self,text): - call_name = "Waifu" - openai.api_key = args.key - identity = "" - start_sequence = '\n'+str(call_name)+':' - restart_sequence = "\nYou: " - if 1 == 1: - prompt0 = text #当期prompt - if text == 'quit': - return prompt0 - prompt = identity + prompt0 + start_sequence - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.5, - max_tokens=1000, - top_p=1.0, - frequency_penalty=0.5, - presence_penalty=0.0, - stop=["\nYou:"] - ) - return response['choices'][0]['text'].strip() - - def check_bot(self,api_input1,api_input2,local_chat1,local_chat2): - try: - self.api_key, self.api_secret = api_input1.split("|") - except: - pass - if local_chat1: - from transformers import AutoTokenizer, AutoModel - self.tokenizer = AutoTokenizer.from_pretrained(api_input1, trust_remote_code=True) - if local_chat2: - self.model = AutoModel.from_pretrained(api_input1, trust_remote_code=True).half().quantize(4).cuda() - else: - self.model = AutoModel.from_pretrained(api_input1, 
trust_remote_code=True) - self.history = [] - else: - try: - self.messages = [] - openai.api_key = api_input1 - except: - pass - return "Finished" - - def is_japanese(self,string): - for ch in string: - if ord(ch) > 0x3040 and ord(ch) < 0x30FF: - return True - return False - - def is_english(self,string): - import re - pattern = re.compile('^[A-Za-z0-9.,:;!?()_*"\' ]+$') - if pattern.fullmatch(string): - return True - else: - return False - - - - def get_text(self,text, hps, cleaned=False): - if cleaned: - text_norm = text_to_sequence(text, self.hps_ms.symbols, []) - else: - text_norm = text_to_sequence(text, self.hps_ms.symbols, self.hps_ms.data.text_cleaners) - if self.hps_ms.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - - def get_label(self,text, label): - if f'[{label}]' in text: - return True, text.replace(f'[{label}]', '') - else: - return False, text - - def sle(self,language,text): - text = text.replace('\n','。').replace(' ',',') - if language == "中文": - tts_input1 = "[ZH]" + text + "[ZH]" - return tts_input1 - elif language == "自动": - tts_input1 = f"[JA]{text}[JA]" if self.is_japanese(text) else f"[ZH]{text}[ZH]" - return tts_input1 - elif language == "日文": - tts_input1 = "[JA]" + text + "[JA]" - return tts_input1 - - def create_tts_fn(self,path, input2, input3, n_scale= 0.667,n_scale_w = 0.8, l_scale = 1 ): - self.language = input2 - self.speaker_id = int(input3) - self.n_scale = n_scale - self.n_scale_w = n_scale_w - self.l_scale = l_scale - self.dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - self.hps_ms = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.n_speakers = self.hps_ms.data.n_speakers if 'n_speakers' in self.hps_ms.data.keys() else 0 - self.n_symbols = len(self.hps_ms.symbols) if 'symbols' in self.hps_ms.keys() else 0 - self.net_g_ms = SynthesizerTrn( - self.n_symbols, - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - n_speakers=self.n_speakers, - **self.hps_ms.model).to(self.dev) - _ = self.net_g_ms.eval() - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.net_g_ms) - return 'success' - - - def tts_fn(self,text): - ''' - if self.local_chat1: - text = self.mori(text) - elif self.api_input2: - text = self.ChATGLM(text) - ''' - text = text = self.chatgpt(text) - print(text) - text =self.sle(self.language,text) - with torch.no_grad(): - stn_tst = self.get_text(text, self.hps_ms, cleaned=False) - x_tst = stn_tst.unsqueeze(0).to(self.dev) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).to(self.dev) - sid = torch.LongTensor([self.speaker_id]).to(self.dev) - audio = self.net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=self.n_scale, noise_scale_w=self.n_scale_w, length_scale=self.l_scale)[0][ - 0, 0].data.cpu().float().numpy() - resampled_audio_data = signal.resample(audio, len(audio) * 2) - sf.write('temp.wav', resampled_audio_data, 44100, 'PCM_24') - return (self.hps_ms.data.sampling_rate, audio),text.replace('[JA]','').replace('[ZH]','') - -app = Flask(__name__) -print("开始部署") -grVits = VitsGradio() - -@app.route('/chat') -def text_api(): - message = request.args.get('Text','') - audio,text = grVits.tts_fn(message) - text = text.replace('[JA]','').replace('[ZH]','') - with open('temp.wav','rb') as bit: - wav_bytes = bit.read() - headers = { - 'Content-Type': 'audio/wav', - 'Text': text.encode('utf-8')} - return wav_bytes, 200, headers - -def 
gradio_interface(): - return grVits.Vits.launch() - -if __name__ == '__main__': - api_thread = Thread(target=app.run, args=("0.0.0.0", 8080)) - gradio_thread = Thread(target=gradio_interface) - api_thread.start() - gradio_thread.start() diff --git a/spaces/ManjunathNili/manjuai/README.md b/spaces/ManjunathNili/manjuai/README.md deleted file mode 100644 index 1eaed98ae2ac5d4bff8582a926e8f25bbdf257dd..0000000000000000000000000000000000000000 --- a/spaces/ManjunathNili/manjuai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Manjuai -emoji: 🐠 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MatrixYao/how_many_data_points_zh/naacl_demo/main.py b/spaces/MatrixYao/how_many_data_points_zh/naacl_demo/main.py deleted file mode 100644 index 7a110a5d9ecc852285d7b049336b463dcab3ea67..0000000000000000000000000000000000000000 --- a/spaces/MatrixYao/how_many_data_points_zh/naacl_demo/main.py +++ /dev/null @@ -1,293 +0,0 @@ -from bokeh.events import Tap -from bokeh.io import curdoc -from bokeh.layouts import column -from bokeh.models import Div, TextInput, RadioButtonGroup, TextAreaInput, Span, Button, Panel, Tabs -from bokeh.models.tools import CrosshairTool - -from demo_utils import ( - get_data, - prompt_boolq, - pvp_colors, - ctl_colors, - clf_colors, - reduct, - task_best_pattern, - plot_polygons_bokeh, - advantage_text, - data_difference, - calculate_overlap, - circ_easing, - average_advantage_text, - plot_three_polygons_bokeh, - tasks, - metric_tap, - neutral_tasks, pattern_graph, -) -from text import text1, text2, text3, text4, initial_passage, initial_question, text5 - -######################################################################################################################## -# Basic dimensions -######################################################################################################################## - -plot_width = 1200 -plot_height = 400 -sidebar_width = 400 -in_text_plot_height = 300 -text_width = 800 -widget_size = 400 - -######################################################################################################################## -# Patternification widget -######################################################################################################################## - -passage = TextAreaInput(title="篇章", rows=3, value=initial_passage, max_width=text_width) -passage.align = "center" -question = TextInput(title="问题", value=initial_question, max_width=text_width) -question.align = "center" -radio_button_group = RadioButtonGroup(labels=["模板 1", "模板 2", "模板 3"], active=0, max_width=text_width) -radio_button_group.align = "center" - -box_style = { - "display": "block", - "margin": "0 auto", - "width": f"{text_width}px", - "text-align": "center", - "white-space": "pre-wrap", - "background": "#f4f4f4", - "border": "1px solid #ddd", - # "border-left": "3px solid #4d4945", - "color": "#666", - "page-break-inside": "avoid", - # "font-family": "monospace", - "font-size": "15px", - "line-height": "1.6", - "max-width": "100%", - "overflow": "hidden", - "min-height": "30px", - "word-wrap": "break-word", -} - -prompt_box = Div( - text=prompt_boolq(passage.value, question.value, radio_button_group.active), - width=text_width, - style=box_style, - sizing_mode="scale_width", -) -prompt_box.align = "center" - - -def update_prompt(attrname, old, new): - prompt_box.text = 
prompt_boolq(passage.value, question.value, radio_button_group.active) - - -passage.on_change("value", update_prompt) -question.on_change("value", update_prompt) -radio_button_group.on_change("active", update_prompt) - -patternification = column(passage, question, radio_button_group, prompt_box, sizing_mode="scale_width") -patternification.align = "center" - -######################################################################################################################## -# Advantage diagram -######################################################################################################################## - -advantage_plots_per_task = [] -overlapping_range_per_task = [] -training_points_per_task = [] -clf_results_per_task = [] -pvp_results_per_task = [] -advantage_tabs = [] -advantage_all_figures = Tabs(tabs=advantage_tabs) - -advantage_box = Div( - text="在比较区域内点击某点以计算该点对应的性能点上的数据优势", - width=text_width, - style=box_style, - sizing_mode="scale_width", -) -advantage_box.align = "center" - -for task in tasks: - training_points, classifier_performances, pattern_performances = get_data(task) - training_points_per_task.append(list(training_points)) - clf_results_per_task.append(reduct(classifier_performances, "accmax")) - pvp_results_per_task.append(reduct(pattern_performances, "accmax", task_best_pattern[task], "normal")) - advantage_plots_per_task.append(plot_polygons_bokeh( - task, training_points_per_task[-1], clf_results_per_task[-1], pvp_results_per_task[-1], clf_colors, - pvp_colors - )) - advantage_plots_per_task[-1].align = "center" - advantage_plots_per_task[-1].add_tools(CrosshairTool(dimensions="width", line_alpha=0.2)) - overlapping_range_per_task.append(calculate_overlap(clf_results_per_task[-1], pvp_results_per_task[-1])) - advantage_tabs.append(Panel(child=advantage_plots_per_task[-1], title=task)) - - advantage_plots_per_task[-1].on_event( - Tap, - lambda event: metric_tap( - event, - overlapping_range_per_task[advantage_all_figures.active], - training_points_per_task[advantage_all_figures.active], - clf_results_per_task[advantage_all_figures.active], - pvp_results_per_task[advantage_all_figures.active], - advantage_box, - advantage_plots_per_task[advantage_all_figures.active], - ), - ) - - if task == "MNLI": - training_points_per_task.append(list(training_points)) - clf_results_per_task.append(reduct(classifier_performances, "accmax")) - pvp_results_per_task.append(reduct(pattern_performances, "accmax", task_best_pattern[task], "normal")) - advantage_plots_per_task.append(plot_polygons_bokeh( - task, training_points_per_task[-1], clf_results_per_task[-1], pvp_results_per_task[-1], clf_colors, - pvp_colors, x_log_scale=True - )) - advantage_plots_per_task[-1].align = "center" - advantage_plots_per_task[-1].add_tools(CrosshairTool(dimensions="width", line_alpha=0.2)) - overlapping_range_per_task.append(calculate_overlap(clf_results_per_task[-1], pvp_results_per_task[-1])) - advantage_tabs.append(Panel(child=advantage_plots_per_task[-1], title="MNLI (log scale)")) - - advantage_plots_per_task[-1].on_event( - Tap, - lambda event: metric_tap( - event, - overlapping_range_per_task[advantage_all_figures.active], - training_points_per_task[advantage_all_figures.active], - clf_results_per_task[advantage_all_figures.active], - pvp_results_per_task[advantage_all_figures.active], - advantage_box, - advantage_plots_per_task[advantage_all_figures.active], - ), - ) - -advantage_all_figures = Tabs(tabs=advantage_tabs) -advantage_all_figures.align = "center" - - -def 
on_integrate_click(): - frames = 200 - initial_placement = overlapping_range_per_task[advantage_all_figures.active][0] - - if not isinstance(advantage_plots_per_task[advantage_all_figures.active].renderers[-1], Span): - metric_line = Span( - location=initial_placement, - line_alpha=0.7, - dimension="width", - line_color=clf_colors[0] if initial_placement < 0 else pvp_colors[0], - line_dash="dashed", - line_width=1, - ) - advantage_plots_per_task[advantage_all_figures.active].renderers.extend([metric_line]) - else: - advantage_plots_per_task[advantage_all_figures.active].renderers[-1].location = initial_placement - advantage_plots_per_task[advantage_all_figures.active].renderers[-1].line_color = clf_colors[ - 0] if initial_placement < 0 else pvp_colors[0] - - average_advantage = 0 - for i in range(1, frames): - metric_value = overlapping_range_per_task[advantage_all_figures.active][0] + ( - overlapping_range_per_task[advantage_all_figures.active][1] - - overlapping_range_per_task[advantage_all_figures.active][0]) * (i / frames) - advantage_value = data_difference(metric_value, overlapping_range_per_task[advantage_all_figures.active], - training_points_per_task[advantage_all_figures.active], - clf_results_per_task[advantage_all_figures.active], - pvp_results_per_task[advantage_all_figures.active]) - average_advantage = ((i - 1) * average_advantage + advantage_value) / i - - advantage_plots_per_task[advantage_all_figures.active].renderers[-1].location = metric_value - advantage_plots_per_task[advantage_all_figures.active].renderers[-1].line_color = clf_colors[ - 0] if advantage_value < 0 else pvp_colors[0] - advantage_box.text = average_advantage_text(average_advantage) - - -integrate = Button(width=175, max_width=175, label="对整个区域进行积分!") -integrate.align = "center" -integrate.on_click(on_integrate_click) - - -def on_tab_change(attr, old, new): - advantage_box.text = "在比较区域内点击某点以计算该点对应的性能点上的数据优势" - - -advantage_all_figures.on_change('active', on_tab_change) - -advantage_column = column(advantage_all_figures, advantage_box, integrate, sizing_mode="scale_width") - -######################################################################################################################## -# Null verbalizer diagram -######################################################################################################################## - -null_tabs = [] -null_all_figures = Tabs(tabs=null_tabs) - -for task in neutral_tasks: - training_points, classifier_performances, pattern_performances = get_data(task) - training_points = list(training_points) - clf_results = reduct(classifier_performances, "accmax") - pvp_results = reduct(pattern_performances, "accmax", task_best_pattern[task], "normal") - ctl_results = reduct(pattern_performances, "accmax", task_best_pattern[task], "neutral") - null_plot = plot_three_polygons_bokeh(task, training_points, clf_results, pvp_results, ctl_results, clf_colors, - pvp_colors, ctl_colors) - null_plot.align = "center" - null_plot.add_tools(CrosshairTool(dimensions="width", line_alpha=0.2)) - null_tabs.append(Panel(child=null_plot, title=task)) - - if task == "MNLI": - null_plot = plot_three_polygons_bokeh(task, training_points, clf_results, pvp_results, ctl_results, clf_colors, - pvp_colors, ctl_colors, x_log_scale=True) - null_plot.align = "center" - null_plot.add_tools(CrosshairTool(dimensions="width", line_alpha=0.2)) - null_tabs.append(Panel(child=null_plot, title="MNLI (log scale)")) - -null_all_figures = Tabs(tabs=null_tabs) -null_all_figures.align = "center" - 
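The callback wiring above follows a standard Bokeh-server pattern: a `Button.on_click` handler mutates model properties (here, the location and color of a `Span`) and the browser view updates automatically. A self-contained sketch of the same idea, assuming Bokeh 2.x and a `bokeh serve` session:

```python
from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import Button, Span
from bokeh.plotting import figure

fig = figure(plot_width=600, plot_height=300)
fig.line([0, 10], [0.0, 1.0])

# Dashed horizontal line, analogous to the metric line swept by on_integrate_click.
metric_line = Span(location=0.5, dimension='width', line_dash='dashed')
fig.add_layout(metric_line)

def step_metric_line():
    # Each click nudges the line upward; the plot re-renders on its own.
    metric_line.location = min(1.0, metric_line.location + 0.1)

button = Button(label="Step the metric line")
button.on_click(step_metric_line)

curdoc().add_root(column(fig, button))
```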
-######################################################################################################################## -# Patterns diagram -######################################################################################################################## - -pattern_tabs = [] -pattern_all_figures = Tabs(tabs=pattern_tabs) - -for task in tasks: - pattern_plot = pattern_graph(task) - pattern_plot.align = "center" - pattern_plot.add_tools(CrosshairTool(dimensions="width", line_alpha=0.2)) - pattern_tabs.append(Panel(child=pattern_plot, title=task)) - -pattern_all_figures = Tabs(tabs=pattern_tabs) -pattern_all_figures.align = "center" - -######################################################################################################################## -# Add write-up text -######################################################################################################################## - -main_text_style = { - "min-height": "100px", - "overflow": "hidden", - "display": "block", - "margin": "auto", - "width": f"{text_width}px", - "font-size": "18px", -} - -textbox1 = Div(text=text1, style=main_text_style) -textbox2 = Div(text=text2, style=main_text_style) -textbox3 = Div(text=text3, style=main_text_style) -textbox4 = Div(text=text4, style=main_text_style) -textbox5 = Div(text=text5, style=main_text_style) -textbox1.align = "center" -textbox2.align = "center" -textbox3.align = "center" -textbox4.align = "center" -textbox5.align = "center" - -######################################################################################################################## -# Set up layouts and add to document -######################################################################################################################## - -main_body = column(textbox1, patternification, textbox2, advantage_column, textbox3, null_all_figures, textbox4, pattern_all_figures, textbox5, sizing_mode="scale_width") -main_body.align = "center" - -curdoc().add_root(main_body) -curdoc().title = "一条提示抵得上多少样本数据?" 
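The per-task figures throughout this script share one structure: each task gets its own `figure` wrapped in a `Panel`, and the panels are collected into a `Tabs` widget whose `active` index selects the current task. A minimal sketch of that pattern (Bokeh 2.x, where `Panel` and `Tabs` live in `bokeh.models`; task names and curve values are placeholders), with the whole app launched via `bokeh serve main.py`:

```python
from bokeh.models import Panel, Tabs
from bokeh.plotting import figure

panels = []
for task in ["MNLI", "BoolQ"]:                      # placeholder task names
    fig = figure(title=task, plot_width=600, plot_height=300)
    fig.line([10, 100, 1000], [0.55, 0.70, 0.82])   # dummy accuracy-vs-data curve
    panels.append(Panel(child=fig, title=task))

all_figures = Tabs(tabs=panels)   # all_figures.active is the selected task index
```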
diff --git a/spaces/Menna2211/TxTimg/pages/Stable-Diffusion-Huggingface.py b/spaces/Menna2211/TxTimg/pages/Stable-Diffusion-Huggingface.py deleted file mode 100644 index 1bd1f6086829d94519bd094f85a2207e051c3be5..0000000000000000000000000000000000000000 --- a/spaces/Menna2211/TxTimg/pages/Stable-Diffusion-Huggingface.py +++ /dev/null @@ -1,32 +0,0 @@ -import streamlit as st -import torch -import time -from diffusers import StableDiffusionPipeline - -device = "cuda" if torch.cuda.is_available() else "cpu" -torch_dtype = torch.float16 if device == "cuda" else torch.float32 - -model_id = "runwayml/stable-diffusion-v1-5" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch_dtype) -pipe = pipe.to(device) - -st.title("Stable Diffusion App") -# define the layout of your app - -# Define the Streamlit app layout -prompt = st.text_input("Write your sentence:") -submit_button = st.button("Compute") -if not submit_button: - st.stop() - -# Display the generated text -if submit_button: - with st.spinner('Wait for it...'): - generated_img=pipe(prompt).images[0] - time.sleep(0.1) - - st.write("Generated Image:") - st.image(generated_img) - time.sleep(5) - st.success('Congratulations task is done ', icon="✅") - st.balloons() diff --git a/spaces/MesutUnutur/chatgptFinetune/app.py b/spaces/MesutUnutur/chatgptFinetune/app.py deleted file mode 100644 index 508e26c8bf9d7fae83ecd9b9e3fb1a86c2d353e7..0000000000000000000000000000000000000000 --- a/spaces/MesutUnutur/chatgptFinetune/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import llama_index -from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex ,StorageContext,load_index_from_storage -import os -import gradio as gr - -os.environ["OPENAI_API_KEY"] = "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" -storage_context = StorageContext.from_defaults(persist_dir="mustafaKemalAtatürk.txt") -index = load_index_from_storage(storage_context) - -#prompt = "What is this text about" -#result = index.as_query_engine().query(prompt) - -def query(prompt): - result = index.as_query_engine().query(prompt) - return result - -with gr.Blocks() as demo: - prompt = gr.Textbox(label="Questions or query", placeholder="what is your query") - output = gr.Textbox() - query_btn = gr.Button("query") - query_btn.click(query, prompt, output) - -demo.launch() diff --git a/spaces/MohamadRezo/flixPicks/README.md b/spaces/MohamadRezo/flixPicks/README.md deleted file mode 100644 index 3c2717cb02a102b6ab570c9594b430da894a8fe3..0000000000000000000000000000000000000000 --- a/spaces/MohamadRezo/flixPicks/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FlixPicks -emoji: 🌍 -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/transforms/formatting.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/transforms/formatting.py deleted file mode 100644 index b9b71437a6cc1de2396b17fe5c04909855f2ed86..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/transforms/formatting.py +++ /dev/null @@ -1,330 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import torch -from mmcv.transforms import to_tensor -from mmcv.transforms.base import BaseTransform -from mmengine.structures import InstanceData, LabelData - -from mmocr.registry import TRANSFORMS -from mmocr.structures import (KIEDataSample, TextDetDataSample, - TextRecogDataSample) - - -@TRANSFORMS.register_module() -class PackTextDetInputs(BaseTransform): - """Pack the inputs data for text detection. - - The type of outputs is `dict`: - - - inputs: image converted to tensor, whose shape is (C, H, W). - - data_samples: Two components of ``TextDetDataSample`` will be updated: - - - gt_instances (InstanceData): Depending on annotations, a subset of the - following keys will be updated: - - - bboxes (torch.Tensor((N, 4), dtype=torch.float32)): The groundtruth - of bounding boxes in the form of [x1, y1, x2, y2]. Renamed from - 'gt_bboxes'. - - labels (torch.LongTensor(N)): The labels of instances. - Renamed from 'gt_bboxes_labels'. - - polygons(list[np.array((2k,), dtype=np.float32)]): The - groundtruth of polygons in the form of [x1, y1,..., xk, yk]. Each - element in polygons may have different number of points. Renamed from - 'gt_polygons'. Using numpy instead of tensor is that polygon usually - is not the output of model and operated on cpu. - - ignored (torch.BoolTensor((N,))): The flag indicating whether the - corresponding instance should be ignored. Renamed from - 'gt_ignored'. - - texts (list[str]): The groundtruth texts. Renamed from 'gt_texts'. - - - metainfo (dict): 'metainfo' is always populated. The contents of the - 'metainfo' depends on ``meta_keys``. By default it includes: - - - "img_path": Path to the image file. - - "img_shape": Shape of the image input to the network as a tuple - (h, w). Note that the image may be zero-padded afterward on the - bottom/right if the batch tensor is larger than this shape. - - "scale_factor": A tuple indicating the ratio of width and height - of the preprocessed image to the original one. - - "ori_shape": Shape of the preprocessed image as a tuple - (h, w). - - "pad_shape": Image shape after padding (if any Pad-related - transform involved) as a tuple (h, w). - - "flip": A boolean indicating if the image has been flipped. - - ``flip_direction``: the flipping direction. - - Args: - meta_keys (Sequence[str], optional): Meta keys to be converted to - the metainfo of ``TextDetSample``. Defaults to ``('img_path', - 'ori_shape', 'img_shape', 'scale_factor', 'flip', - 'flip_direction')``. - """ - mapping_table = { - 'gt_bboxes': 'bboxes', - 'gt_bboxes_labels': 'labels', - 'gt_polygons': 'polygons', - 'gt_texts': 'texts', - 'gt_ignored': 'ignored' - } - - def __init__(self, - meta_keys=('img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'flip', 'flip_direction')): - self.meta_keys = meta_keys - - def transform(self, results: dict) -> dict: - """Method to pack the input data. - - Args: - results (dict): Result dict from the data pipeline. - - Returns: - dict: - - - 'inputs' (obj:`torch.Tensor`): Data for model forwarding. - - 'data_samples' (obj:`DetDataSample`): The annotation info of the - sample. 
- """ - packed_results = dict() - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - # A simple trick to speedup formatting by 3-5 times when - # OMP_NUM_THREADS != 1 - # Refer to https://github.com/open-mmlab/mmdetection/pull/9533 - # for more details - if img.flags.c_contiguous: - img = to_tensor(img) - img = img.permute(2, 0, 1).contiguous() - else: - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - img = to_tensor(img) - packed_results['inputs'] = img - - data_sample = TextDetDataSample() - instance_data = InstanceData() - for key in self.mapping_table.keys(): - if key not in results: - continue - if key in ['gt_bboxes', 'gt_bboxes_labels', 'gt_ignored']: - instance_data[self.mapping_table[key]] = to_tensor( - results[key]) - else: - instance_data[self.mapping_table[key]] = results[key] - data_sample.gt_instances = instance_data - - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data_sample.set_metainfo(img_meta) - packed_results['data_samples'] = data_sample - - return packed_results - - def __repr__(self) -> str: - repr_str = self.__class__.__name__ - repr_str += f'(meta_keys={self.meta_keys})' - return repr_str - - -@TRANSFORMS.register_module() -class PackTextRecogInputs(BaseTransform): - """Pack the inputs data for text recognition. - - The type of outputs is `dict`: - - - inputs: Image as a tensor, whose shape is (C, H, W). - - data_samples: Two components of ``TextRecogDataSample`` will be updated: - - - gt_text (LabelData): - - - item(str): The groundtruth of text. Rename from 'gt_texts'. - - - metainfo (dict): 'metainfo' is always populated. The contents of the - 'metainfo' depends on ``meta_keys``. By default it includes: - - - "img_path": Path to the image file. - - "ori_shape": Shape of the preprocessed image as a tuple - (h, w). - - "img_shape": Shape of the image input to the network as a tuple - (h, w). Note that the image may be zero-padded afterward on the - bottom/right if the batch tensor is larger than this shape. - - "valid_ratio": The proportion of valid (unpadded) content of image - on the x-axis. It defaults to 1 if not set in pipeline. - - Args: - meta_keys (Sequence[str], optional): Meta keys to be converted to - the metainfo of ``TextRecogDataSampel``. Defaults to - ``('img_path', 'ori_shape', 'img_shape', 'pad_shape', - 'valid_ratio')``. - """ - - def __init__(self, - meta_keys=('img_path', 'ori_shape', 'img_shape', 'pad_shape', - 'valid_ratio')): - self.meta_keys = meta_keys - - def transform(self, results: dict) -> dict: - """Method to pack the input data. - - Args: - results (dict): Result dict from the data pipeline. - - Returns: - dict: - - - 'inputs' (obj:`torch.Tensor`): Data for model forwarding. - - 'data_samples' (obj:`TextRecogDataSample`): The annotation info - of the sample. 
- """ - packed_results = dict() - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - # A simple trick to speedup formatting by 3-5 times when - # OMP_NUM_THREADS != 1 - # Refer to https://github.com/open-mmlab/mmdetection/pull/9533 - # for more details - if img.flags.c_contiguous: - img = to_tensor(img) - img = img.permute(2, 0, 1).contiguous() - else: - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - img = to_tensor(img) - packed_results['inputs'] = img - - data_sample = TextRecogDataSample() - gt_text = LabelData() - - if results.get('gt_texts', None): - assert len( - results['gt_texts'] - ) == 1, 'Each image sample should have one text annotation only' - gt_text.item = results['gt_texts'][0] - data_sample.gt_text = gt_text - - img_meta = {} - for key in self.meta_keys: - if key == 'valid_ratio': - img_meta[key] = results.get('valid_ratio', 1) - else: - img_meta[key] = results[key] - data_sample.set_metainfo(img_meta) - - packed_results['data_samples'] = data_sample - - return packed_results - - def __repr__(self) -> str: - repr_str = self.__class__.__name__ - repr_str += f'(meta_keys={self.meta_keys})' - return repr_str - - -@TRANSFORMS.register_module() -class PackKIEInputs(BaseTransform): - """Pack the inputs data for key information extraction. - - The type of outputs is `dict`: - - - inputs: image converted to tensor, whose shape is (C, H, W). - - data_samples: Two components of ``TextDetDataSample`` will be updated: - - - gt_instances (InstanceData): Depending on annotations, a subset of the - following keys will be updated: - - - bboxes (torch.Tensor((N, 4), dtype=torch.float32)): The groundtruth - of bounding boxes in the form of [x1, y1, x2, y2]. Renamed from - 'gt_bboxes'. - - labels (torch.LongTensor(N)): The labels of instances. - Renamed from 'gt_bboxes_labels'. - - edge_labels (torch.LongTensor(N, N)): The edge labels. - Renamed from 'gt_edges_labels'. - - texts (list[str]): The groundtruth texts. Renamed from 'gt_texts'. - - - metainfo (dict): 'metainfo' is always populated. The contents of the - 'metainfo' depends on ``meta_keys``. By default it includes: - - - "img_path": Path to the image file. - - "img_shape": Shape of the image input to the network as a tuple - (h, w). Note that the image may be zero-padded afterward on the - bottom/right if the batch tensor is larger than this shape. - - "scale_factor": A tuple indicating the ratio of width and height - of the preprocessed image to the original one. - - "ori_shape": Shape of the preprocessed image as a tuple - (h, w). - - Args: - meta_keys (Sequence[str], optional): Meta keys to be converted to - the metainfo of ``TextDetSample``. Defaults to ``('img_path', - 'ori_shape', 'img_shape', 'scale_factor', 'flip', - 'flip_direction')``. - """ - mapping_table = { - 'gt_bboxes': 'bboxes', - 'gt_bboxes_labels': 'labels', - 'gt_edges_labels': 'edge_labels', - 'gt_texts': 'texts', - } - - def __init__(self, meta_keys=()): - self.meta_keys = meta_keys - - def transform(self, results: dict) -> dict: - """Method to pack the input data. - - Args: - results (dict): Result dict from the data pipeline. - - Returns: - dict: - - - 'inputs' (obj:`torch.Tensor`): Data for model forwarding. - - 'data_samples' (obj:`DetDataSample`): The annotation info of the - sample. 
- """ - packed_results = dict() - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - # A simple trick to speedup formatting by 3-5 times when - # OMP_NUM_THREADS != 1 - # Refer to https://github.com/open-mmlab/mmdetection/pull/9533 - # for more details - if img.flags.c_contiguous: - img = to_tensor(img) - img = img.permute(2, 0, 1).contiguous() - else: - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - img = to_tensor(img) - packed_results['inputs'] = img - else: - packed_results['inputs'] = torch.FloatTensor().reshape(0, 0, 0) - - data_sample = KIEDataSample() - instance_data = InstanceData() - for key in self.mapping_table.keys(): - if key not in results: - continue - if key in ['gt_bboxes', 'gt_bboxes_labels', 'gt_edges_labels']: - instance_data[self.mapping_table[key]] = to_tensor( - results[key]) - else: - instance_data[self.mapping_table[key]] = results[key] - data_sample.gt_instances = instance_data - - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data_sample.set_metainfo(img_meta) - packed_results['data_samples'] = data_sample - - return packed_results - - def __repr__(self) -> str: - repr_str = self.__class__.__name__ - repr_str += f'(meta_keys={self.meta_keys})' - return repr_str diff --git a/spaces/NCSOFT/harim_plus/LICENSE.md b/spaces/NCSOFT/harim_plus/LICENSE.md deleted file mode 100644 index 6786ca4340914b82918da266121aedbe71081d76..0000000000000000000000000000000000000000 --- a/spaces/NCSOFT/harim_plus/LICENSE.md +++ /dev/null @@ -1,280 +0,0 @@ -© NCSOFT Corporation. All Rights Reserved. - -Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - -1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - -3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -Any questions about our licensed work can be sent to opensource@ncsoft.com. - --------------------------------------- - -This software uses Open Source Software (OSS). You can find the link for the source code of these open source projects, along with applicable license information, below. - -charset-normalizer <br> -https://github.com/Ousret/charset_normalizer <br> -Copyright (c) 2019 TAHRI Ahmed R. 
<br> -MIT License - -datasets <br> -https://github.com/huggingface/datasets <br> -Copyright 2020 The HuggingFace Authors.<br> -Apache License 2.0 - -decorator<br> -https://github.com/micheles/decorator<br> -Copyright (c) 2005-2018, Michele Simionato. All rights reserved.<br> -BSD 2-Clause "Simplified" License - -dill<br> -https://github.com/uqfoundation/dill<br> -Copyright (c) 2004-2016 California Institute of Technology.<br> -Copyright (c) 2016-2022 The Uncertainty Quantification Foundation. All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -evaluate <br> -https://github.com/huggingface/evaluate<br> -Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.<br> -Apache License 2.0 - -executing<br> -https://github.com/alexmojaki/executing<br> -Copyright (c) 2019 Alex Hall<br> -MIT License - -fire<br> -https://github.com/google/python-fire<br> -Copyright 2017 Google Inc. All rights reserved.<br> -Apache License 2.0 - -frozenlist<br> -https://github.com/aio-libs/frozenlist<br> -Copyright 2013-2019 Nikolay Kim and Andrew Svetlov<br> -Apache License 2.0 - -fsspec<br> -https://github.com/fsspec/filesystem_spec<br> -Copyright (c) 2018, Martin Durant. All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -huggingface-hub<br> -https://github.com/huggingface/huggingface_hub<br> -Copyright 2022-present, the HuggingFace Inc. team.<br> -Apache License 2.0 - -idna<br> -https://github.com/kjd/idna<br> -Copyright (c) 2013-2021, Kim Davies All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -ipython<br> -https://github.com/ipython/ipython<br> -Copyright (c) 2008-Present, IPython Development Team<br> -Copyright (c) 2001-2007, Fernando Perez <fernando.perez@colorado.edu><br> -Copyright (c) 2001, Janko Hauser <jhauser@zscout.de><br> -Copyright (c) 2001, Nathaniel Gray <n8gray@caltech.edu> All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -jedi<br> -https://github.com/davidhalter/jedi<br> -Copyright (c) <2013> <David Halter and others, see AUTHORS.txt><br> -MIT License - -matplotlib-inline<br> -https://github.com/ipython/matplotlib-inline<br> -Copyright (c) 2019-2022, IPython Development Team. All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -multidict<br> -https://github.com/aio-libs/multidict<br> -Copyright 2016-2021 Andrew Svetlov and aio-libs team<br> -Apache License 2.0 - -multiprocess<br> -https://github.com/uqfoundation/multiprocess<br> -Copyright (c) 2008-2016 California Institute of Technology.<br> -Copyright (c) 2016-2022 The Uncertainty Quantification Foundation. All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -numpy<br> -https://github.com/numpy/numpy<br> -Copyright (c) 2005-2022, NumPy Developers. All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -packaging<br> -https://github.com/pypa/packaging<br> -Copyright (c) Donald Stufft and individual contributors. All rights reserved.<br> -Apache License 2.0 - -pandas<br> -https://github.com/pandas-dev/pandas<br> -Copyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. 
and PyData Development Team All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -parso<br> -https://github.com/davidhalter/parso<br> -Copyright (c) <2013-2017> <David Halter and others, see AUTHORS.txt><br> -MIT License - -pexpect<br> -https://github.com/pexpect/pexpect<br> -Copyright (c) 2013-2014, Pexpect development team<br> -Copyright (c) 2012, Noah Spurrier <noah@noah.org><br> -ISC License - -pickleshare<br> -https://github.com/pickleshare/pickleshare<br> -Copyright (c) 2016 Ville Vainio<br> -MIT License - -prompt-toolkit<br> -https://github.com/prompt-toolkit/python-prompt-toolkit<br> -Copyright (c) 2014, Jonathan Slenders. All rights reserved.<br> -BSD 3-Clause "New" or "Revised" License - -ptyprocess<br> -https://github.com/pexpect/ptyprocess<br> -Copyright (c) 2013-2014, Pexpect development team<br> -Copyright (c) 2012, Noah Spurrier <noah@noah.org><br> -ISC License - -pure-eval<br> -https://github.com/alexmojaki/pure_eval<br> -Copyright (c) 2019 Alex Hall<br> -MIT License - -pyarrow<br> -https://github.com/apache/arrow<br> -Copyright 2016-2019 The Apache Software Foundation<br> -Apache License 2.0 - -Pygments<br> -https://github.com/pygments/pygments<br> -Copyright (c) 2006-2022 by the respective authors (see AUTHORS file). All rights reserved.<br> -BSD 2-Clause "Simplified" License - -pyparsing<br> -https://github.com/pyparsing/pyparsing/<br> -Copyright (c) 2003-2022 Paul T. McGuire<br> -MIT License - -python-dateutil<br> -https://github.com/dateutil/dateutil<br> -Copyright 2017- Paul Ganssle <paul@ganssle.io><br> -Copyright 2017- dateutil contributors (see AUTHORS file)<br> -Apache License 2.0 - -pytz<br> -https://github.com/stub42/pytz<br> -Copyright (c) 2003-2019 Stuart Bishop <stuart@stuartbishop.net><br> -MIT License - -PyYAML<br> -https://github.com/yaml/pyyaml<br> -Copyright (c) 2017-2021 Ingy döt Net<br> -Copyright (c) 2006-2016 Kirill Simonov<br> -MIT License - -regex<br> -https://github.com/mrabarnett/mrab-regex<br> -copyright (c) 1998-2001 by Secret Labs AB<br> -Apache License 2.0 - -requests<br> -https://github.com/psf/requests<br> -©MMXVIX. 
A Kenneth Reitz Project.<br> -Apache License 2.0 - -responses<br> -https://github.com/getsentry/responses<br> -Copyright 2015 David Cramer<br> -Apache License 2.0 - -six<br> -https://github.com/benjaminp/six<br> -Copyright (c) 2010-2020 Benjamin Peterson<br> -MIT License - -stack-data<br> -https://github.com/alexmojaki/stack_data<br> -Copyright (c) 2019 Alex Hall<br> -MIT License - -termcolor<br> -https://github.com/termcolor/termcolor<br> -Copyright (c) 2008-2011 Volvox Development Team<br> -MIT License - -tokenizers<br> -https://github.com/huggingface/tokenizers<br> -Apache License 2.0 - -toml<br> -https://github.com/uiri/toml<br> -Copyright 2013-2019 William Pearson<br> -Copyright 2015-2016 Julien Enselme<br> -Copyright 2016 Google Inc.<br> -Copyright 2017 Samuel Vasko<br> -Copyright 2017 Nate Prewitt<br> -Copyright 2017 Jack Evans<br> -Copyright 2019 Filippo Broggini<br> -MIT License<br> - -torch<br> -https://github.com/pytorch/pytorch<br> -Copyright (c) 2016- Facebook, Inc (Adam Paszke)<br> -Copyright (c) 2014- Facebook, Inc (Soumith Chintala)<br> -Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)<br> -Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)<br> -Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)<br> -Copyright (c) 2011-2013 NYU (Clement Farabet)<br> -Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)<br> -Copyright (c) 2006 Idiap Research Institute (Samy Bengio)<br> -Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)<br> -BSD 3-Clause "New" or "Revised" License - -tqdm<br> -https://github.com/tqdm/tqdm<br> -2015-2021 (c) Casper da Costa-Luis<br> -2016 (c) [PR #96] on behalf of Google Inc.<br> -2013 (c) Noam Yorav-Raphael, original author.<br> -Mozilla Public License 2.0 - -traitlets<br> -https://github.com/ipython/traitlets<br> -Copyright (c) 2001-, IPython Development Team<br> -BSD 3-Clause "New" or "Revised" License - -transformers<br> -https://github.com/huggingface/transformers<br> -Copyright 2018- The Hugging Face team. All rights reserved.<br> -Apache License 2.0 - -typing_extensions<br> -https://github.com/python/typing_extensions<br> -Python Software Foundation License 2.0 - -urllib3<br> -https://github.com/urllib3/urllib3<br> -Copyright (c) 2008-2020 Andrey Petrov and contributors (see CONTRIBUTORS.txt)<br> -MIT License - -wcwidth<br> -https://github.com/jquast/wcwidth<br> -Copyright (c) 2014 Jeff Quast <contact@jeffquast.com><br> -MIT License - -xxhash<br> -https://github.com/ifduyue/python-xxhash<br> -Copyright (c) 2014-2020, Yue Du. 
All rights reserved.<br> -BSD 2-Clause "Simplified" License - -yarl<br> -https://github.com/aio-libs/yarl<br> -Copyright 2016-2021, Andrew Svetlov and aio-libs team<br> -Apache License 2.0 diff --git a/spaces/NeonLion92/nsfw-c0ffees-erotic-story-generator2/README.md b/spaces/NeonLion92/nsfw-c0ffees-erotic-story-generator2/README.md deleted file mode 100644 index 687bf6b6c5f675b37c4297b2efa438306c1d621c..0000000000000000000000000000000000000000 --- a/spaces/NeonLion92/nsfw-c0ffees-erotic-story-generator2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: '[NSFW] C0ffee''s Erotic Story Generator 2' -emoji: 🍑 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -duplicated_from: coffeeee/nsfw-c0ffees-erotic-story-generator2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/flores101/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/flores101/README.md deleted file mode 100644 index 635c13f40bd0ccab704735bc5c26ea0192ea98cd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/flores101/README.md +++ /dev/null @@ -1,223 +0,0 @@ -<p align="center"> -<img src="flores_logo.png" width="500"> -</p> - -# Flores101: Large-Scale Multilingual Machine Translation - -## Introduction - -Baseline pretrained models for small and large tracks of WMT 21 Large-Scale Multilingual Machine Translation competition. - -Flores Task at WMT 21: http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html - -Flores announement blog post: https://ai.facebook.com/blog/flores-researchers-kick-off-multilingual-translation-challenge-at-wmt-and-call-for-compute-grants/ - - - -## Pretrained models - -Model | Num layers | Embed dimension | FFN dimension| Vocab Size | #params | Download ----|---|---|---|---|---|--- -`flores101_mm100_615M` | 12 | 1024 | 4096 | 256,000 | 615M | https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz -`flores101_mm100_175M` | 6 | 512 | 2048 | 256,000 | 175M | https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_175M.tar.gz - - -These models are trained similar to [M2M-100](https://arxiv.org/abs/2010.11125) with additional support for the languages that are part of the WMT Large-Scale Multilingual Machine Translation track. Full list of languages can be found at the bottom. - - -## Example Generation code - -### Download model, sentencepiece vocab - -```bash -fairseq=/path/to/fairseq -cd $fairseq - -# Download 615M param model. 
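-# (Alternatively, the smaller flores101_mm100_175M checkpoint listed in the
-# table above can be downloaded from the same location if memory or disk is
-# limited; adjust the model paths in the commands below accordingly.)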
-wget https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz - -# Extract -tar -xvzf flores101_mm100_615M.tar.gz -``` - -### Encode using our SentencePiece Model -Note: Install SentencePiece from [here](https://github.com/google/sentencepiece) - - -```bash -fairseq=/path/to/fairseq -cd $fairseq - -# Download example dataset From German to French -sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de -sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr - -for lang in de fr ; do - python scripts/spm_encode.py \ - --model flores101_mm100_615M/sentencepiece.bpe.model \ - --output_format=piece \ - --inputs=raw_input.de-fr.${lang} \ - --outputs=spm.de-fr.${lang} -done -``` - -### Binarization - -```bash -fairseq-preprocess \ - --source-lang de --target-lang fr \ - --testpref spm.de-fr \ - --thresholdsrc 0 --thresholdtgt 0 \ - --destdir data_bin \ - --srcdict flores101_mm100_615M/dict.txt --tgtdict flores101_mm100_615M/dict.txt -``` - -### Generation - - -```bash -fairseq-generate \ - data_bin \ - --batch-size 1 \ - --path flores101_mm100_615M/model.pt \ - --fixed-dictionary flores101_mm100_615M/dict.txt \ - -s de -t fr \ - --remove-bpe 'sentencepiece' \ - --beam 5 \ - --task translation_multi_simple_epoch \ - --lang-pairs flores101_mm100_615M/language_pairs.txt \ - --decoder-langtok --encoder-langtok src \ - --gen-subset test \ - --fp16 \ - --dataset-impl mmap \ - --distributed-world-size 1 --distributed-no-spawn -``` - -### Supported Languages and lang code - -Language | lang code ----|--- -Akrikaans | af -Amharic | am -Arabic | ar -Assamese | as -Asturian | ast -Aymara | ay -Azerbaijani | az -Bashkir | ba -Belarusian | be -Bulgarian | bg -Bengali | bn -Breton | br -Bosnian | bs -Catalan | ca -Cebuano | ceb -Chokwe | cjk -Czech | cs -Welsh | cy -Danish | da -German | de -Dyula| dyu -Greek | el -English | en -Spanish | es -Estonian | et -Persian | fa -Fulah | ff -Finnish | fi -French | fr -Western Frisian | fy -Irish | ga -Scottish Gaelic | gd -Galician | gl -Gujarati | gu -Hausa | ha -Hebrew | he -Hindi | hi -Croatian | hr -Haitian Creole | ht -Hungarian | hu -Armenian | hy -Indonesian | id -Igbo | ig -Iloko | ilo -Icelandic | is -Italian | it -Japanese | ja -Javanese | jv -Georgian | ka -Kachin | kac -Kamba | kam -Kabuverdianu | kea -Kongo | kg -Kazakh | kk -Central Khmer | km -Kimbundu | kmb -Northern Kurdish | kmr -Kannada | kn -Korean | ko -Kurdish | ku -Kyrgyz | ky -Luxembourgish | lb -Ganda | lg -Lingala | ln -Lao | lo -Lithuanian | lt -Luo | luo -Latvian | lv -Malagasy | mg -Maori | mi -Macedonian | mk -Malayalam | ml -Mongolian | mn -Marathi | mr -Malay | ms -Maltese | mt -Burmese | my -Nepali | ne -Dutch | nl -Norwegian | no -Northern Sotho | ns -Nyanja | ny -Occitan | oc -Oromo | om -Oriya | or -Punjabi | pa -Polish | pl -Pashto | ps -Portuguese | pt -Quechua | qu -Romanian | ro -Russian | ru -Sindhi | sd -Shan | shn -Sinhala | si -Slovak | sk -Slovenian | sl -Shona | sn -Somali | so -Albanian | sq -Serbian | sr -Swati | ss -Sundanese | su -Swedish | sv -Swahili | sw -Tamil | ta -Telugu | te -Tajik | tg -Thai | th -Tigrinya | ti -Tagalog | tl -Tswana | tn -Turkish | tr -Ukrainian | uk -Umbundu | umb -Urdu | ur -Uzbek | uz -Vietnamese | vi -Wolof | wo -Xhosa | xh -Yiddish | yi -Yoruba | yo -Chinese| zh -Zulu | zu diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/docs/ende-mma.md 
b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/docs/ende-mma.md deleted file mode 100644 index 241d604a3b31a37755da68aad6ff47d46891d3fc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/docs/ende-mma.md +++ /dev/null @@ -1,74 +0,0 @@ -# Simultaneous Machine Translation - -This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS) - -## Prepare Data - -[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh) - -Another example of training an English to Japanese model can be found [here](docs/enja.md) - -## Training - -- MMA-IL - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type infinite_lookback \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-avg 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- MMA-H - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type hard_aligned \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-var 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- wait-k - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type wait-k \ - --waitk-lagging 3 \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/sequence_scorer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/sequence_scorer.py deleted file mode 100644 index 411d4df4445ef8dd3f1907ad56f9de6943d1fed8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/sequence_scorer.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
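-# SequenceScorer (below) performs forced scoring: rather than searching for a
-# hypothesis, it computes the log-probability an ensemble of models assigns to
-# the provided reference target, returning per-token positional scores and,
-# optionally, hard alignments extracted from attention.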
- -import sys - -import torch -from fairseq import utils - - -class SequenceScorer(object): - """Scores the target for a given source sentence.""" - - def __init__( - self, - tgt_dict, - softmax_batch=None, - compute_alignment=False, - eos=None, - symbols_to_strip_from_output=None, - ): - self.pad = tgt_dict.pad() - self.eos = tgt_dict.eos() if eos is None else eos - self.softmax_batch = softmax_batch or sys.maxsize - assert self.softmax_batch > 0 - self.compute_alignment = compute_alignment - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.eos} - ) - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - def batch_for_softmax(dec_out, target): - # assumes decoder_out[0] is the only thing needed (may not be correct for future models!) - first, rest = dec_out[0], dec_out[1:] - bsz, tsz, dim = first.shape - if bsz * tsz < self.softmax_batch: - yield dec_out, target, True - else: - flat = first.contiguous().view(1, -1, dim) - flat_tgt = target.contiguous().view(flat.shape[:-1]) - s = 0 - while s < flat.size(1): - e = s + self.softmax_batch - yield (flat[:, s:e],) + rest, flat_tgt[:, s:e], False - s = e - - def gather_target_probs(probs, target): - probs = probs.gather( - dim=2, - index=target.unsqueeze(-1), - ) - return probs - - orig_target = sample["target"] - - # compute scores for each model in the ensemble - avg_probs = None - avg_attn = None - for model in models: - model.eval() - decoder_out = model(**net_input) - attn = decoder_out[1] if len(decoder_out) > 1 else None - if type(attn) is dict: - attn = attn.get("attn", None) - - batched = batch_for_softmax(decoder_out, orig_target) - probs, idx = None, 0 - for bd, tgt, is_single in batched: - sample["target"] = tgt - curr_prob = model.get_normalized_probs( - bd, log_probs=len(models) == 1, sample=sample - ).data - if is_single: - probs = gather_target_probs(curr_prob, orig_target) - else: - if probs is None: - probs = curr_prob.new(orig_target.numel()) - step = curr_prob.size(0) * curr_prob.size(1) - end = step + idx - tgt_probs = gather_target_probs( - curr_prob.view(tgt.shape + (curr_prob.size(-1),)), tgt - ) - probs[idx:end] = tgt_probs.view(-1) - idx = end - sample["target"] = orig_target - - probs = probs.view(sample["target"].shape) - - if avg_probs is None: - avg_probs = probs - else: - avg_probs.add_(probs) - if attn is not None: - if torch.is_tensor(attn): - attn = attn.data - else: - attn = attn[0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(models) > 1: - avg_probs.div_(len(models)) - avg_probs.log_() - if avg_attn is not None: - avg_attn.div_(len(models)) - - bsz = avg_probs.size(0) - hypos = [] - start_idxs = sample["start_indices"] if "start_indices" in sample else [0] * bsz - for i in range(bsz): - # remove padding from ref - ref = ( - utils.strip_pad(sample["target"][i, start_idxs[i] :], self.pad) - if sample["target"] is not None - else None - ) - tgt_len = ref.numel() - avg_probs_i = avg_probs[i][start_idxs[i] : start_idxs[i] + tgt_len] - score_i = avg_probs_i.sum() / tgt_len - if avg_attn is not None: - avg_attn_i = avg_attn[i] - if self.compute_alignment: - alignment = utils.extract_hard_alignment( - avg_attn_i, - sample["net_input"]["src_tokens"][i], - sample["target"][i], - self.pad, - self.eos, - ) - else: - alignment = None - else: - avg_attn_i = alignment = None - hypos.append( - [ - { - 
"tokens": ref, - "score": score_i, - "attention": avg_attn_i, - "alignment": alignment, - "positional_scores": avg_probs_i, - } - ] - ) - return hypos diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/models/discriminative_reranking_model.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/models/discriminative_reranking_model.py deleted file mode 100644 index e4b5887f825df36f4e1e0384f38fefe790e485e6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/models/discriminative_reranking_model.py +++ /dev/null @@ -1,365 +0,0 @@ -from dataclasses import dataclass, field -import os - -import torch -import torch.nn as nn - -from fairseq import utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import ( - BaseFairseqModel, - register_model, -) - -from fairseq.models.roberta.model import RobertaClassificationHead - -from fairseq.modules import ( - LayerNorm, - TransformerSentenceEncoder, - TransformerSentenceEncoderLayer, -) - - -ACTIVATION_FN_CHOICES = ChoiceEnum(utils.get_available_activation_fns()) -JOINT_CLASSIFICATION_CHOICES = ChoiceEnum(["none", "sent"]) -SENTENCE_REP_CHOICES = ChoiceEnum(["head", "meanpool", "maxpool"]) - - -def update_init_roberta_model_state(state): - """ - update the state_dict of a Roberta model for initializing - weights of the BertRanker - """ - for k in list(state.keys()): - if ".lm_head." in k or "version" in k: - del state[k] - continue - # remove 'encoder/decoder.sentence_encoder.' from the key - assert k.startswith("encoder.sentence_encoder.") or k.startswith( - "decoder.sentence_encoder." - ), f"Cannot recognize parameter name {k}" - if "layernorm_embedding" in k: - new_k = k.replace(".layernorm_embedding.", ".emb_layer_norm.") - state[new_k[25:]] = state[k] - else: - state[k[25:]] = state[k] - del state[k] - - -class BaseRanker(nn.Module): - def __init__(self, args, task): - super().__init__() - - self.separator_token = task.dictionary.eos() - self.padding_idx = task.dictionary.pad() - - def forward(self, src_tokens): - raise NotImplementedError - - def get_segment_labels(self, src_tokens): - segment_boundary = (src_tokens == self.separator_token).long() - segment_labels = ( - segment_boundary.cumsum(dim=1) - - segment_boundary - - (src_tokens == self.padding_idx).long() - ) - - return segment_labels - - def get_positions(self, src_tokens, segment_labels): - segment_positions = ( - torch.arange(src_tokens.shape[1]) - .to(src_tokens.device) - .repeat(src_tokens.shape[0], 1) - ) - segment_boundary = (src_tokens == self.separator_token).long() - _, col_idx = (segment_positions * segment_boundary).nonzero(as_tuple=True) - col_idx = torch.cat([torch.zeros(1).type_as(col_idx), col_idx]) - offset = torch.cat( - [ - torch.zeros(1).type_as(segment_boundary), - segment_boundary.sum(dim=1).cumsum(dim=0)[:-1], - ] - ) - segment_positions -= col_idx[segment_labels + offset.unsqueeze(1)] * ( - segment_labels != 0 - ) - - padding_mask = src_tokens.ne(self.padding_idx) - segment_positions = (segment_positions + 1) * padding_mask.type_as( - segment_positions - ) + self.padding_idx - - return segment_positions - - -class BertRanker(BaseRanker): - def __init__(self, args, task): - super(BertRanker, self).__init__(args, task) - - init_model = getattr(args, "pretrained_model", "") - self.joint_layers = nn.ModuleList() - if os.path.isfile(init_model): - print(f"initialize weight from {init_model}") - - 
from fairseq import hub_utils - - x = hub_utils.from_pretrained( - os.path.dirname(init_model), - checkpoint_file=os.path.basename(init_model), - ) - - in_state_dict = x["models"][0].state_dict() - init_args = x["args"].model - - num_positional_emb = init_args.max_positions + task.dictionary.pad() + 1 - - # follow the setup in roberta - self.model = TransformerSentenceEncoder( - padding_idx=task.dictionary.pad(), - vocab_size=len(task.dictionary), - num_encoder_layers=getattr( - args, "encoder_layers", init_args.encoder_layers - ), - embedding_dim=init_args.encoder_embed_dim, - ffn_embedding_dim=init_args.encoder_ffn_embed_dim, - num_attention_heads=init_args.encoder_attention_heads, - dropout=init_args.dropout, - attention_dropout=init_args.attention_dropout, - activation_dropout=init_args.activation_dropout, - num_segments=2, # add language embeddings - max_seq_len=num_positional_emb, - offset_positions_by_padding=False, - encoder_normalize_before=True, - apply_bert_init=True, - activation_fn=init_args.activation_fn, - freeze_embeddings=args.freeze_embeddings, - n_trans_layers_to_freeze=args.n_trans_layers_to_freeze, - ) - - # still need to learn segment embeddings as we added a second language embedding - if args.freeze_embeddings: - for p in self.model.segment_embeddings.parameters(): - p.requires_grad = False - - update_init_roberta_model_state(in_state_dict) - print("loading weights from the pretrained model") - self.model.load_state_dict( - in_state_dict, strict=False - ) # ignore mismatch in language embeddings - - ffn_embedding_dim = init_args.encoder_ffn_embed_dim - num_attention_heads = init_args.encoder_attention_heads - dropout = init_args.dropout - attention_dropout = init_args.attention_dropout - activation_dropout = init_args.activation_dropout - activation_fn = init_args.activation_fn - - classifier_embed_dim = getattr( - args, "embed_dim", init_args.encoder_embed_dim - ) - if classifier_embed_dim != init_args.encoder_embed_dim: - self.transform_layer = nn.Linear( - init_args.encoder_embed_dim, classifier_embed_dim - ) - else: - self.model = TransformerSentenceEncoder( - padding_idx=task.dictionary.pad(), - vocab_size=len(task.dictionary), - num_encoder_layers=args.encoder_layers, - embedding_dim=args.embed_dim, - ffn_embedding_dim=args.ffn_embed_dim, - num_attention_heads=args.attention_heads, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.activation_dropout, - max_seq_len=task.max_positions() - if task.max_positions() - else args.tokens_per_sample, - num_segments=2, - offset_positions_by_padding=False, - encoder_normalize_before=args.encoder_normalize_before, - apply_bert_init=args.apply_bert_init, - activation_fn=args.activation_fn, - ) - - classifier_embed_dim = args.embed_dim - ffn_embedding_dim = args.ffn_embed_dim - num_attention_heads = args.attention_heads - dropout = args.dropout - attention_dropout = args.attention_dropout - activation_dropout = args.activation_dropout - activation_fn = args.activation_fn - - self.joint_classification = args.joint_classification - if args.joint_classification == "sent": - if args.joint_normalize_before: - self.joint_layer_norm = LayerNorm(classifier_embed_dim) - else: - self.joint_layer_norm = None - - self.joint_layers = nn.ModuleList( - [ - TransformerSentenceEncoderLayer( - embedding_dim=classifier_embed_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=dropout, - attention_dropout=attention_dropout, - 
activation_dropout=activation_dropout, - activation_fn=activation_fn, - ) - for _ in range(args.num_joint_layers) - ] - ) - - self.classifier = RobertaClassificationHead( - classifier_embed_dim, - classifier_embed_dim, - 1, # num_classes - "tanh", - args.classifier_dropout, - ) - - def forward(self, src_tokens, src_lengths): - segment_labels = self.get_segment_labels(src_tokens) - positions = self.get_positions(src_tokens, segment_labels) - - inner_states, _ = self.model( - tokens=src_tokens, - segment_labels=segment_labels, - last_state_only=True, - positions=positions, - ) - - return inner_states[-1].transpose(0, 1) # T x B x C -> B x T x C - - def sentence_forward(self, encoder_out, src_tokens=None, sentence_rep="head"): - # encoder_out: B x T x C - if sentence_rep == "head": - x = encoder_out[:, :1, :] - else: # 'meanpool', 'maxpool' - assert src_tokens is not None, "meanpool requires src_tokens input" - segment_labels = self.get_segment_labels(src_tokens) - padding_mask = src_tokens.ne(self.padding_idx) - encoder_mask = segment_labels * padding_mask.type_as(segment_labels) - - if sentence_rep == "meanpool": - ntokens = torch.sum(encoder_mask, dim=1, keepdim=True) - x = torch.sum( - encoder_out * encoder_mask.unsqueeze(2), dim=1, keepdim=True - ) / ntokens.unsqueeze(2).type_as(encoder_out) - else: # 'maxpool' - encoder_out[ - (encoder_mask == 0).unsqueeze(2).repeat(1, 1, encoder_out.shape[-1]) - ] = -float("inf") - x, _ = torch.max(encoder_out, dim=1, keepdim=True) - - if hasattr(self, "transform_layer"): - x = self.transform_layer(x) - - return x # B x 1 x C - - def joint_forward(self, x): - # x: T x B x C - if self.joint_layer_norm: - x = self.joint_layer_norm(x.transpose(0, 1)) - x = x.transpose(0, 1) - - for layer in self.joint_layers: - x, _ = layer(x, self_attn_padding_mask=None) - return x - - def classification_forward(self, x): - # x: B x T x C - return self.classifier(x) - - -@dataclass -class DiscriminativeNMTRerankerConfig(FairseqDataclass): - pretrained_model: str = field( - default="", metadata={"help": "pretrained model to load"} - ) - sentence_rep: SENTENCE_REP_CHOICES = field( - default="head", - metadata={ - "help": "method to transform the output of the transformer stack to a sentence-level representation" - }, - ) - - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN"} - ) - classifier_dropout: float = field( - default=0.0, metadata={"help": "classifier dropout probability"} - ) - embed_dim: int = field(default=768, metadata={"help": "embedding dimension"}) - ffn_embed_dim: int = field( - default=2048, metadata={"help": "embedding dimension for FFN"} - ) - encoder_layers: int = field(default=12, metadata={"help": "num encoder layers"}) - attention_heads: int = field(default=8, metadata={"help": "num attention heads"}) - encoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each encoder block"} - ) - apply_bert_init: bool = field( - default=False, metadata={"help": "use custom param initialization for BERT"} - ) - activation_fn: ACTIVATION_FN_CHOICES = field( - default="relu", metadata={"help": "activation function to use"} - ) - freeze_embeddings: bool = field( - default=False, metadata={"help": "freeze embeddings in the pretrained model"} - ) - 
n_trans_layers_to_freeze: int = field( - default=0, - metadata={ - "help": "number of layers to freeze in the pretrained transformer model" - }, - ) - - # joint classfication - joint_classification: JOINT_CLASSIFICATION_CHOICES = field( - default="none", - metadata={"help": "method to compute joint features for classification"}, - ) - num_joint_layers: int = field( - default=1, metadata={"help": "number of joint layers"} - ) - joint_normalize_before: bool = field( - default=False, - metadata={"help": "apply layer norm on the input to the joint layer"}, - ) - - -@register_model( - "discriminative_nmt_reranker", dataclass=DiscriminativeNMTRerankerConfig -) -class DiscriminativeNMTReranker(BaseFairseqModel): - @classmethod - def build_model(cls, args, task): - model = BertRanker(args, task) - return DiscriminativeNMTReranker(args, model) - - def __init__(self, args, model): - super().__init__() - - self.model = model - self.sentence_rep = args.sentence_rep - self.joint_classification = args.joint_classification - - def forward(self, src_tokens, src_lengths, **kwargs): - return self.model(src_tokens, src_lengths) - - def sentence_forward(self, encoder_out, src_tokens): - return self.model.sentence_forward(encoder_out, src_tokens, self.sentence_rep) - - def joint_forward(self, x): - return self.model.joint_forward(x) - - def classification_forward(self, x): - return self.model.classification_forward(x) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/update_ckpt.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/update_ckpt.py deleted file mode 100644 index 53c9e74ea613e30aa5c22614e658f2b7272bac0c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/update_ckpt.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -src_ckpt = "/checkpoint/wnhsu/w2v/archived/hubert_base_ls960_it2.pt" -ref_ckpt = "/checkpoint/wnhsu/w2v/hubert_icassp_oss_v3/iter2_km100-400k-grp-L6/oss.km500_p0_1_s334.pmw1_0.puw0_0.grpnorm.ml10.mp0_8.untie.mxsz250000.ufreq1.maxtok1400000.MU100k.s1337.ngpu32/checkpoint_last.pt" -new_ckpt = "/checkpoint/wnhsu/w2v/archived/hubert_base_ls960_it2_updated.pt" - - -def update_state(state): - state["model"]["label_embs_concat"] = state["model"].pop("label_embs") - state["args"].task = "hubert_pretraining" - state["args"].labels = f"['{state['args'].labels}']" - return state - - -src_state = torch.load(src_ckpt) -src_state = update_state(src_state) -torch.save(src_state, new_ckpt) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/__init__.py deleted file mode 100644 index 239d2e69f9a235095dee1ea7b3a94164a77273f5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import tasks, criterions, models # noqa diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh deleted file mode 100644 index e74953194d41f0d93855d41b2acef08556d92477..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh +++ /dev/null @@ -1,15 +0,0 @@ -# you can change cmd.sh depending on what type of queue you are using. -# If you have no queueing system and want to run on a local machine, you -# can change all instances 'queue.pl' to run.pl (but be careful and run -# commands one by one: most recipes will exhaust the memory on your -# machine). queue.pl works with GridEngine (qsub). slurm.pl works -# with slurm. Different queues are configured differently, with different -# queue names and different ways of specifying things like memory; -# to account for these differences you can create and edit the file -# conf/queue.conf to match your queue's configuration. Search for -# conf/queue.conf in http://kaldi-asr.org/doc/queue.html for more information, -# or search for the string 'default_config' in utils/queue.pl or utils/slurm.pl. - -export train_cmd="run.pl --mem 2G" -export decode_cmd="run.pl --mem 4G" -export mkgraph_cmd="run.pl --mem 8G" diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/distributed/distributed_timeout_wrapper.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/distributed/distributed_timeout_wrapper.py deleted file mode 100644 index 18107ef27ea837b8c72dcaa49db18fd8e64267b1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/distributed/distributed_timeout_wrapper.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -from torch import nn - - -logger = logging.getLogger(__name__) - - -class DistributedTimeoutWrapper(nn.Module): - """ - A wrapper that kills the process if no progress is made within a given - *timeout*. The timer is reset every time :func:`forward` is called. 
- - Usage:: - - module = DistributedTimeoutWrapper(module, timeout=30) - x = module(input) - time.sleep(20) # safe - x = module(input) - time.sleep(45) # job will be killed before this returns - - Args: - module (nn.Module): module to wrap - timeout (int): number of seconds before killing the process - (set to a value <= 0 to disable the timeout) - signal (Optional): signal to send once timeout is triggered - """ - def __init__(self, module: nn.Module, timeout: int, signal=signal.SIGINT): - super().__init__() - self.module = module - self.timeout = timeout - self.signal = signal - - if timeout > 0: - self._heartbeat = threading.Event() - self._heartbeat_thread = threading.Thread( - target=self._check_heartbeat, - args=(os.getpid(),), - daemon=True, - ) - self._heartbeat_thread.start() - self._terminated = False - else: - self._heartbeat = None - self._heartbeat_thread = None - - def __del__(self): - self.stop_timeout() - - def __getattr__(self, name): - """Forward missing attributes to wrapped module.""" - try: - return super().__getattr__(name) # defer to nn.Module's logic - except AttributeError: - return getattr(self.module, name) - - def stop_timeout(self): - if self._heartbeat_thread is not None: - self._terminated = True - self._heartbeat_thread.join() - - def state_dict(self, *args, **kwargs): - return self.module.state_dict(*args, **kwargs) - - def load_state_dict(self, *args, **kwargs): - return self.module.load_state_dict(*args, **kwargs) - - def forward(self, *args, **kwargs): - if self._heartbeat is not None: - self._heartbeat.set() - return self.module(*args, **kwargs) - - def _check_heartbeat(self, parent_pid): - self._heartbeat.wait() # wait for the first forward pass - while True: - self._heartbeat.clear() - success = self._heartbeat.wait(timeout=self.timeout) - if self._terminated: - break - elif not success: - logger.error(( - "Killing job for not making progress in {} seconds. " - "Set --heartbeat-timeout=-1 to disable this timeout." - ).format(int(self.timeout))) - os.kill(parent_pid, self.signal) - return diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq_cli/train.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq_cli/train.py deleted file mode 100644 index 83475873138c5d1bac288c234afb6b4a1a7882d7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq_cli/train.py +++ /dev/null @@ -1,514 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Train a new model on one or across multiple GPUs. -""" - -import argparse -import logging -import math -import os -import sys -from typing import Dict, Optional, Any, List, Tuple, Callable - -# We need to setup root logger before importing any fairseq libraries. 
-logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.train") - -import numpy as np -import torch -from fairseq import ( - checkpoint_utils, - options, - quantization_utils, - tasks, - utils, -) -from fairseq.data import iterators, data_utils -from fairseq.data.plasma_utils import PlasmaStore -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import fsdp_enable_wrap, fsdp_wrap, utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics, progress_bar -from fairseq.model_parallel.megatron_trainer import MegatronTrainer -from fairseq.trainer import Trainer -from omegaconf import DictConfig, OmegaConf - - - - -def main(cfg: FairseqConfig) -> None: - if isinstance(cfg, argparse.Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - utils.import_user_module(cfg.common) - - if distributed_utils.is_master(cfg.distributed_training) and "job_logging_cfg" in cfg: - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - logging.config.dictConfig(OmegaConf.to_container(cfg.job_logging_cfg)) - - assert ( - cfg.dataset.max_tokens is not None or cfg.dataset.batch_size is not None - ), "Must specify batch size either with --max-tokens or --batch-size" - metrics.reset() - - if cfg.common.log_file is not None: - handler = logging.FileHandler(filename=cfg.common.log_file) - logger.addHandler(handler) - - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - if distributed_utils.is_master(cfg.distributed_training): - checkpoint_utils.verify_checkpoint_directory(cfg.checkpoint.save_dir) - - # Print args - logger.info(cfg) - - if cfg.checkpoint.write_checkpoints_asynchronously: - try: - import iopath # noqa: F401 - except ImportError: - logging.exception( - "Asynchronous checkpoint writing is specified but iopath is " - "not installed: `pip install iopath`" - ) - return - - # Setup task, e.g., translation, language modeling, etc. - task = tasks.setup_task(cfg.task) - - assert cfg.criterion, "Please specify criterion to train a model" - - # Build model and criterion - if cfg.distributed_training.ddp_backend == "fully_sharded": - with fsdp_enable_wrap(cfg.distributed_training): - model = fsdp_wrap(task.build_model(cfg.model)) - else: - model = task.build_model(cfg.model) - criterion = task.build_criterion(cfg.criterion) - logger.info(model) - logger.info("task: {}".format(task.__class__.__name__)) - logger.info("model: {}".format(model.__class__.__name__)) - logger.info("criterion: {}".format(criterion.__class__.__name__)) - logger.info( - "num. shared model params: {:,} (num. trained: {:,})".format( - sum(p.numel() for p in model.parameters() if not getattr(p, "expert", False)), - sum(p.numel() for p in model.parameters() if not getattr(p, "expert", False) and p.requires_grad) - ) - ) - - logger.info( - "num. expert model params: {} (num. 
trained: {})".format( - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False)), - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False) and p.requires_grad), - ) - ) - - # Load valid dataset (we load training data below, based on the latest checkpoint) - # We load the valid dataset AFTER building the model - data_utils.raise_if_valid_subsets_unintentionally_ignored(cfg) - if cfg.dataset.combine_valid_subsets: - task.load_dataset("valid", combine=True, epoch=1) - else: - for valid_sub_split in cfg.dataset.valid_subset.split(","): - task.load_dataset(valid_sub_split, combine=False, epoch=1) - - # (optionally) Configure quantization - if cfg.common.quantization_config_path is not None: - quantizer = quantization_utils.Quantizer( - config_path=cfg.common.quantization_config_path, - max_epoch=cfg.optimization.max_epoch, - max_update=cfg.optimization.max_update, - ) - else: - quantizer = None - - # Build trainer - if cfg.common.model_parallel_size == 1: - trainer = Trainer(cfg, task, model, criterion, quantizer) - else: - trainer = MegatronTrainer(cfg, task, model, criterion) - logger.info( - "training on {} devices (GPUs/TPUs)".format( - cfg.distributed_training.distributed_world_size - ) - ) - logger.info( - "max tokens per device = {} and max sentences per device = {}".format( - cfg.dataset.max_tokens, - cfg.dataset.batch_size, - ) - ) - - # Load the latest checkpoint if one is available and restore the - # corresponding train iterator - extra_state, epoch_itr = checkpoint_utils.load_checkpoint( - cfg.checkpoint, - trainer, - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - if cfg.common.tpu: - import torch_xla.core.xla_model as xm - xm.rendezvous("load_checkpoint") # wait for all workers - - max_epoch = cfg.optimization.max_epoch or math.inf - lr = trainer.get_lr() - - train_meter = meters.StopwatchMeter() - train_meter.start() - while epoch_itr.next_epoch_idx <= max_epoch: - if lr <= cfg.optimization.stop_min_lr: - logger.info( - f"stopping training because current learning rate ({lr}) is smaller " - "than or equal to minimum learning rate " - f"(--stop-min-lr={cfg.optimization.stop_min_lr})" - ) - break - - # train for one epoch - valid_losses, should_stop = train(cfg, trainer, task, epoch_itr) - if should_stop: - break - - # only use first validation loss to update the learning rate - lr = trainer.lr_step(epoch_itr.epoch, valid_losses[0]) - - epoch_itr = trainer.get_train_iterator( - epoch_itr.next_epoch_idx, - # sharded data: get train iterator for next epoch - load_dataset=task.has_sharded_data("train"), - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - train_meter.stop() - logger.info("done training in {:.1f} seconds".format(train_meter.sum)) - - # ioPath implementation to wait for all asynchronous file writes to complete. - if cfg.checkpoint.write_checkpoints_asynchronously: - logger.info( - "ioPath PathManager waiting for all asynchronous checkpoint " - "writes to finish." 
- ) - PathManager.async_close() - logger.info("ioPath PathManager finished waiting.") - - -def should_stop_early(cfg: DictConfig, valid_loss: float) -> bool: - # skip check if no validation was done in the current epoch - if valid_loss is None: - return False - if cfg.checkpoint.patience <= 0: - return False - - def is_better(a, b): - return a > b if cfg.checkpoint.maximize_best_checkpoint_metric else a < b - - prev_best = getattr(should_stop_early, "best", None) - if prev_best is None or is_better(valid_loss, prev_best): - should_stop_early.best = valid_loss - should_stop_early.num_runs = 0 - return False - else: - should_stop_early.num_runs += 1 - if should_stop_early.num_runs >= cfg.checkpoint.patience: - logger.info( - "early stop since valid performance hasn't improved for last {} runs".format( - cfg.checkpoint.patience - ) - ) - return True - else: - return False - - -@metrics.aggregate("train") -def train( - cfg: DictConfig, trainer: Trainer, task: tasks.FairseqTask, epoch_itr -) -> Tuple[List[Optional[float]], bool]: - """Train the model for one epoch and return validation losses.""" - # Initialize data iterator - itr = epoch_itr.next_epoch_itr( - fix_batches_to_gpus=cfg.distributed_training.fix_batches_to_gpus, - shuffle=(epoch_itr.next_epoch_idx > cfg.dataset.curriculum), - ) - update_freq = ( - cfg.optimization.update_freq[epoch_itr.epoch - 1] - if epoch_itr.epoch <= len(cfg.optimization.update_freq) - else cfg.optimization.update_freq[-1] - ) - itr = iterators.GroupedIterator(itr, update_freq) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_file=cfg.common.log_file, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - azureml_logging=( - cfg.common.azureml_logging - if distributed_utils.is_master(cfg.distributed_training) - else False - ), - ) - progress.update_config(_flatten_config(cfg)) - - trainer.begin_epoch(epoch_itr.epoch) - - valid_subsets = cfg.dataset.valid_subset.split(",") - should_stop = False - num_updates = trainer.get_num_updates() - logger.info("Start iterating over samples") - for i, samples in enumerate(progress): - with metrics.aggregate("train_inner"), torch.autograd.profiler.record_function( - "train_step-%d" % i - ): - log_output = trainer.train_step(samples) - - if log_output is not None: # not OOM, overflow, ... 
- # log mid-epoch stats - num_updates = trainer.get_num_updates() - if num_updates % cfg.common.log_interval == 0: - stats = get_training_stats(metrics.get_smoothed_values("train_inner")) - progress.log(stats, tag="train_inner", step=num_updates) - - # reset mid-epoch stats after each log interval - # the end-of-epoch stats will still be preserved - metrics.reset_meters("train_inner") - - end_of_epoch = not itr.has_next() - valid_losses, should_stop = validate_and_save( - cfg, trainer, task, epoch_itr, valid_subsets, end_of_epoch - ) - - if should_stop: - break - - # log end-of-epoch stats - logger.info("end of epoch {} (average epoch stats below)".format(epoch_itr.epoch)) - stats = get_training_stats(metrics.get_smoothed_values("train")) - progress.print(stats, tag="train", step=num_updates) - - # reset epoch-level meters - metrics.reset_meters("train") - return valid_losses, should_stop - - -def _flatten_config(cfg: DictConfig): - config = OmegaConf.to_container(cfg) - # remove any legacy Namespaces and replace with a single "args" - namespace = None - for k, v in list(config.items()): - if isinstance(v, argparse.Namespace): - namespace = v - del config[k] - if namespace is not None: - config["args"] = vars(namespace) - return config - - -def validate_and_save( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - valid_subsets: List[str], - end_of_epoch: bool, -) -> Tuple[List[Optional[float]], bool]: - num_updates = trainer.get_num_updates() - max_update = cfg.optimization.max_update or math.inf - - # Stopping conditions (and an additional one based on validation loss later - # on) - should_stop = False - if num_updates >= max_update: - should_stop = True - logger.info( - f"Stopping training due to " - f"num_updates: {num_updates} >= max_update: {max_update}" - ) - - training_time_hours = trainer.cumulative_training_time() / (60 * 60) - if ( - cfg.optimization.stop_time_hours > 0 - and training_time_hours > cfg.optimization.stop_time_hours - ): - should_stop = True - logger.info( - f"Stopping training due to " - f"cumulative_training_time: {training_time_hours} > " - f"stop_time_hours: {cfg.optimization.stop_time_hours} hour(s)" - ) - - do_save = ( - (end_of_epoch and epoch_itr.epoch % cfg.checkpoint.save_interval == 0) - or should_stop - or ( - cfg.checkpoint.save_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.checkpoint.save_interval_updates == 0 - and num_updates >= cfg.dataset.validate_after_updates - ) - ) - do_validate = ( - (not end_of_epoch and do_save) # validate during mid-epoch saves - or (end_of_epoch and epoch_itr.epoch % cfg.dataset.validate_interval == 0) - or should_stop - or ( - cfg.dataset.validate_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.dataset.validate_interval_updates == 0 - ) - ) and not cfg.dataset.disable_validation and num_updates >= cfg.dataset.validate_after_updates - - # Validate - valid_losses = [None] - if do_validate: - valid_losses = validate(cfg, trainer, task, epoch_itr, valid_subsets) - - should_stop |= should_stop_early(cfg, valid_losses[0]) - - # Save checkpoint - if do_save or should_stop: - checkpoint_utils.save_checkpoint( - cfg.checkpoint, trainer, epoch_itr, valid_losses[0] - ) - - return valid_losses, should_stop - - -def get_training_stats(stats: Dict[str, Any]) -> Dict[str, Any]: - stats["wall"] = round(metrics.get_meter("default", "wall").elapsed_time, 0) - return stats - - -def validate( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - 
subsets: List[str], -) -> List[Optional[float]]: - """Evaluate the model on the validation set(s) and return the losses.""" - - if cfg.dataset.fixed_validation_seed is not None: - # set fixed seed for every validation - utils.set_torch_seed(cfg.dataset.fixed_validation_seed) - - trainer.begin_valid_epoch(epoch_itr.epoch) - valid_losses = [] - for subset in subsets: - logger.info('begin validation on "{}" subset'.format(subset)) - - # Initialize data iterator - itr = trainer.get_valid_iterator(subset).next_epoch_itr( - shuffle=False, set_dataset_epoch=False # use a fixed valid set - ) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - prefix=f"valid on '{subset}' subset", - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - ) - - # create a new root metrics aggregator so validation metrics - # don't pollute other aggregators (e.g., train meters) - with metrics.aggregate(new_root=True) as agg: - for i, sample in enumerate(progress): - if cfg.dataset.max_valid_steps is not None and i > cfg.dataset.max_valid_steps: - break - trainer.valid_step(sample) - - # log validation stats - stats = get_valid_stats(cfg, trainer, agg.get_smoothed_values()) - - if hasattr(task, "post_validate"): - task.post_validate(trainer.get_model(), stats, agg) - - progress.print(stats, tag=subset, step=trainer.get_num_updates()) - - valid_losses.append(stats[cfg.checkpoint.best_checkpoint_metric]) - return valid_losses - - -def get_valid_stats( - cfg: DictConfig, trainer: Trainer, stats: Dict[str, Any] -) -> Dict[str, Any]: - stats["num_updates"] = trainer.get_num_updates() - if hasattr(checkpoint_utils.save_checkpoint, "best"): - key = "best_{0}".format(cfg.checkpoint.best_checkpoint_metric) - best_function = max if cfg.checkpoint.maximize_best_checkpoint_metric else min - stats[key] = best_function( - checkpoint_utils.save_checkpoint.best, - stats[cfg.checkpoint.best_checkpoint_metric], - ) - return stats - - -def cli_main( - modify_parser: Optional[Callable[[argparse.ArgumentParser], None]] = None -) -> None: - parser = options.get_training_parser() - args = options.parse_args_and_arch(parser, modify_parser=modify_parser) - - cfg = convert_namespace_to_omegaconf(args) - - if cfg.common.use_plasma_view: - server = PlasmaStore(path=cfg.common.plasma_path) - logger.info(f"Started plasma server pid {server.server.pid} {cfg.common.plasma_path}") - - if args.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - # if cfg.common.use_plasma_view: - # server.server.kill() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/backtranslation/README.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/backtranslation/README.md deleted file mode 100644 index 73675f1125d80f58aa824db67d8970504d4d6b2a..0000000000000000000000000000000000000000 --- 
a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/backtranslation/README.md +++ /dev/null @@ -1,297 +0,0 @@ -# Understanding Back-Translation at Scale (Edunov et al., 2018) - -This page includes pre-trained models from the paper [Understanding Back-Translation at Scale (Edunov et al., 2018)](https://arxiv.org/abs/1808.09381). - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`transformer.wmt18.en-de` | Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) <br> WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) <br> See NOTE in the archive - -## Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install subword_nmt sacremoses -``` - -Then to generate translations from the full model ensemble: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt18.en-de', ... ] - -# Load the WMT'18 En-De ensemble -en2de_ensemble = torch.hub.load( - 'pytorch/fairseq', 'transformer.wmt18.en-de', - checkpoint_file='wmt18.model1.pt:wmt18.model2.pt:wmt18.model3.pt:wmt18.model4.pt:wmt18.model5.pt', - tokenizer='moses', bpe='subword_nmt') - -# The ensemble contains 5 models -len(en2de_ensemble.models) -# 5 - -# Translate -en2de_ensemble.translate('Hello world!') -# 'Hallo Welt!' -``` - -## Training your own model (WMT'18 English-German) - -The following instructions can be adapted to reproduce the models from the paper. - - -#### Step 1. Prepare parallel data and optionally train a baseline (English-German) model - -First download and preprocess the data: -```bash -# Download and prepare the data -cd examples/backtranslation/ -bash prepare-wmt18en2de.sh -cd ../.. - -# Binarize the data -TEXT=examples/backtranslation/wmt18_en_de -fairseq-preprocess \ - --joined-dictionary \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt18_en_de --thresholdtgt 0 --thresholdsrc 0 \ - --workers 20 - -# Copy the BPE code into the data-bin directory for future use -cp examples/backtranslation/wmt18_en_de/code data-bin/wmt18_en_de/code -``` - -(Optionally) Train a baseline model (English-German) using just the parallel data: -```bash -CHECKPOINT_DIR=checkpoints_en_de_parallel -fairseq-train --fp16 \ - data-bin/wmt18_en_de \ - --source-lang en --target-lang de \ - --arch transformer_wmt_en_de_big --share-all-embeddings \ - --dropout 0.3 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --max-tokens 3584 --update-freq 16 \ - --max-update 30000 \ - --save-dir $CHECKPOINT_DIR -# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a -# different number of GPUs. 
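# As a rough back-of-the-envelope (assuming every GPU fills its --max-tokens budget):
# 3584 tokens * 16 accumulation steps * 8 GPUs is roughly 459k target tokens per
# optimizer step, so on 4 GPUs --update-freq 32 keeps the effective batch comparable.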
-``` - -Average the last 10 checkpoints: -```bash -python scripts/average_checkpoints.py \ - --inputs $CHECKPOINT_DIR \ - --num-epoch-checkpoints 10 \ - --output $CHECKPOINT_DIR/checkpoint.avg10.pt -``` - -Evaluate BLEU: -```bash -# tokenized BLEU on newstest2017: -bash examples/backtranslation/tokenized_bleu.sh \ - wmt17 \ - en-de \ - data-bin/wmt18_en_de \ - data-bin/wmt18_en_de/code \ - $CHECKPOINT_DIR/checkpoint.avg10.pt -# BLEU4 = 29.57, 60.9/35.4/22.9/15.5 (BP=1.000, ratio=1.014, syslen=63049, reflen=62152) -# compare to 29.46 in Table 1, which is also for tokenized BLEU - -# generally it's better to report (detokenized) sacrebleu though: -bash examples/backtranslation/sacrebleu.sh \ - wmt17 \ - en-de \ - data-bin/wmt18_en_de \ - data-bin/wmt18_en_de/code \ - $CHECKPOINT_DIR/checkpoint.avg10.pt -# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 29.0 60.6/34.7/22.4/14.9 (BP = 1.000 ratio = 1.013 hyp_len = 62099 ref_len = 61287) -``` - - -#### Step 2. Back-translate monolingual German data - -Train a reverse model (German-English) to do the back-translation: -```bash -CHECKPOINT_DIR=checkpoints_de_en_parallel -fairseq-train --fp16 \ - data-bin/wmt18_en_de \ - --source-lang de --target-lang en \ - --arch transformer_wmt_en_de_big --share-all-embeddings \ - --dropout 0.3 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --max-tokens 3584 --update-freq 16 \ - --max-update 30000 \ - --save-dir $CHECKPOINT_DIR -# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a -# different number of GPUs. -``` - -Let's evaluate the back-translation (BT) model to make sure it is well trained: -```bash -bash examples/backtranslation/sacrebleu.sh \ - wmt17 \ - de-en \ - data-bin/wmt18_en_de \ - data-bin/wmt18_en_de/code \ - $CHECKPOINT_DIR/checkpoint_best.py -# BLEU+case.mixed+lang.de-en+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 34.9 66.9/41.8/28.5/19.9 (BP = 0.983 ratio = 0.984 hyp_len = 63342 ref_len = 64399) -# compare to the best system from WMT'17 which scored 35.1: http://matrix.statmt.org/matrix/systems_list/1868 -``` - -Next prepare the monolingual data: -```bash -# Download and prepare the monolingual data -# By default the script samples 25M monolingual sentences, which after -# deduplication should be just over 24M sentences. These are split into 25 -# shards, each with 1M sentences (except for the last shard). -cd examples/backtranslation/ -bash prepare-de-monolingual.sh -cd ../.. - -# Binarize each shard of the monolingual data -TEXT=examples/backtranslation/wmt18_de_mono -for SHARD in $(seq -f "%02g" 0 24); do \ - fairseq-preprocess \ - --only-source \ - --source-lang de --target-lang en \ - --joined-dictionary \ - --srcdict data-bin/wmt18_en_de/dict.de.txt \ - --testpref $TEXT/bpe.monolingual.dedup.${SHARD} \ - --destdir data-bin/wmt18_de_mono/shard${SHARD} \ - --workers 20; \ - cp data-bin/wmt18_en_de/dict.en.txt data-bin/wmt18_de_mono/shard${SHARD}/; \ -done -``` - -Now we're ready to perform back-translation over the monolingual data. 
The -following command generates via sampling, but it's possible to use greedy -decoding (`--beam 1`), beam search (`--beam 5`), -top-k sampling (`--sampling --beam 1 --sampling-topk 10`), etc.: -```bash -mkdir backtranslation_output -for SHARD in $(seq -f "%02g" 0 24); do \ - fairseq-generate --fp16 \ - data-bin/wmt18_de_mono/shard${SHARD} \ - --path $CHECKPOINT_DIR/checkpoint_best.pt \ - --skip-invalid-size-inputs-valid-test \ - --max-tokens 4096 \ - --sampling --beam 1 \ - > backtranslation_output/sampling.shard${SHARD}.out; \ -done -``` - -After BT, use the `extract_bt_data.py` script to re-combine the shards, extract -the back-translations and apply length ratio filters: -```bash -python examples/backtranslation/extract_bt_data.py \ - --minlen 1 --maxlen 250 --ratio 1.5 \ - --output backtranslation_output/bt_data --srclang en --tgtlang de \ - backtranslation_output/sampling.shard*.out - -# Ensure lengths are the same: -# wc -l backtranslation_output/bt_data.{en,de} -# 21795614 backtranslation_output/bt_data.en -# 21795614 backtranslation_output/bt_data.de -# 43591228 total -``` - -Binarize the filtered BT data and combine it with the parallel data: -```bash -TEXT=backtranslation_output -fairseq-preprocess \ - --source-lang en --target-lang de \ - --joined-dictionary \ - --srcdict data-bin/wmt18_en_de/dict.en.txt \ - --trainpref $TEXT/bt_data \ - --destdir data-bin/wmt18_en_de_bt \ - --workers 20 - -# We want to train on the combined data, so we'll symlink the parallel + BT data -# in the wmt18_en_de_para_plus_bt directory. We link the parallel data as "train" -# and the BT data as "train1", so that fairseq will combine them automatically -# and so that we can use the `--upsample-primary` option to upsample the -# parallel data (if desired). -PARA_DATA=$(readlink -f data-bin/wmt18_en_de) -BT_DATA=$(readlink -f data-bin/wmt18_en_de_bt) -COMB_DATA=data-bin/wmt18_en_de_para_plus_bt -mkdir -p $COMB_DATA -for LANG in en de; do \ - ln -s ${PARA_DATA}/dict.$LANG.txt ${COMB_DATA}/dict.$LANG.txt; \ - for EXT in bin idx; do \ - ln -s ${PARA_DATA}/train.en-de.$LANG.$EXT ${COMB_DATA}/train.en-de.$LANG.$EXT; \ - ln -s ${BT_DATA}/train.en-de.$LANG.$EXT ${COMB_DATA}/train1.en-de.$LANG.$EXT; \ - ln -s ${PARA_DATA}/valid.en-de.$LANG.$EXT ${COMB_DATA}/valid.en-de.$LANG.$EXT; \ - ln -s ${PARA_DATA}/test.en-de.$LANG.$EXT ${COMB_DATA}/test.en-de.$LANG.$EXT; \ - done; \ -done -``` - - -#### 3. Train an English-German model over the combined parallel + BT data - -Finally we can train a model over the parallel + BT data: -```bash -CHECKPOINT_DIR=checkpoints_en_de_parallel_plus_bt -fairseq-train --fp16 \ - data-bin/wmt18_en_de_para_plus_bt \ - --upsample-primary 16 \ - --source-lang en --target-lang de \ - --arch transformer_wmt_en_de_big --share-all-embeddings \ - --dropout 0.3 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 0.0007 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --max-tokens 3584 --update-freq 16 \ - --max-update 100000 \ - --save-dir $CHECKPOINT_DIR -# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a -# different number of GPUs. 
-``` - -Average the last 10 checkpoints: -```bash -python scripts/average_checkpoints.py \ - --inputs $CHECKPOINT_DIR \ - --num-epoch-checkpoints 10 \ - --output $CHECKPOINT_DIR/checkpoint.avg10.pt -``` - -Evaluate BLEU: -```bash -# tokenized BLEU on newstest2017: -bash examples/backtranslation/tokenized_bleu.sh \ - wmt17 \ - en-de \ - data-bin/wmt18_en_de \ - data-bin/wmt18_en_de/code \ - $CHECKPOINT_DIR/checkpoint.avg10.pt -# BLEU4 = 32.35, 64.4/38.9/26.2/18.3 (BP=0.977, ratio=0.977, syslen=60729, reflen=62152) -# compare to 32.35 in Table 1, which is also for tokenized BLEU - -# generally it's better to report (detokenized) sacrebleu: -bash examples/backtranslation/sacrebleu.sh \ - wmt17 \ - en-de \ - data-bin/wmt18_en_de \ - data-bin/wmt18_en_de/code \ - $CHECKPOINT_DIR/checkpoint.avg10.pt -# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 31.5 64.3/38.2/25.6/17.6 (BP = 0.971 ratio = 0.971 hyp_len = 59515 ref_len = 61287) -``` - - -## Citation -```bibtex -@inproceedings{edunov2018backtranslation, - title = {Understanding Back-Translation at Scale}, - author = {Edunov, Sergey and Ott, Myle and Auli, Michael and Grangier, David}, - booktitle = {Conference of the Association for Computational Linguistics (ACL)}, - year = 2018, -} -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/speech_to_text_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/speech_to_text_dataset.py deleted file mode 100644 index 164bf413e4fd41b895348c9ef0bb57421843eb17..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/speech_to_text_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
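# Dataset utilities for speech-to-text: SpeechToTextDataset wraps audio features or raw
# waveforms with optional target text, language tags and speaker ids, while
# SpeechToTextDatasetCreator builds such datasets from TSV manifests and supports
# temperature-based sampling across splits.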
- -import csv -import io -import logging -import re -from collections import defaultdict -from pathlib import Path -from typing import Dict, List, Optional -from dataclasses import dataclass - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.audio_utils import ( - get_fbank, - get_waveform, - read_from_stored_zip, - is_npy_data, - is_sf_audio_data, - parse_path, - FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS, -) -from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform -from fairseq.data.audio.data_cfg import S2TDataConfig - - -logger = logging.getLogger(__name__) - - -def get_features_from_npy_or_audio(path): - ext = Path(path).suffix - if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f'Unsupported file format for "{path}"') - return np.load(path) if ext == ".npy" else get_fbank(path) - - -def get_features_or_waveform_from_stored_zip( - path, byte_offset, byte_size, need_waveform=False, use_sample_rate=None, -): - assert path.endswith(".zip") - data = read_from_stored_zip(path, byte_offset, byte_size) - f = io.BytesIO(data) - if is_npy_data(data): - features_or_waveform = np.load(f) - elif is_sf_audio_data(data): - features_or_waveform = \ - get_waveform( - f, always_2d=False, output_sample_rate=use_sample_rate - )[0] if need_waveform else get_fbank(f) - else: - raise ValueError(f'Unknown file format for "{path}"') - return features_or_waveform - - -def get_features_or_waveform( - path: str, need_waveform=False, use_sample_rate=None -): - """Get speech features from .npy file or waveform from .wav/.flac file. - The file may be inside an uncompressed ZIP file and is accessed via byte - offset and length. - - Args: - path (str): File path in the format of "<.npy/.wav/.flac path>" or - "<zip path>:<byte offset>:<byte length>". - need_waveform (bool): return waveform instead of features. - use_sample_rate (int): change sample rate for the input wave file - - Returns: - features_or_waveform (numpy.ndarray): speech features or waveform. - """ - _path, slice_ptr = parse_path(path) - if len(slice_ptr) == 0: - if need_waveform: - return get_waveform( - _path, always_2d=False, output_sample_rate=use_sample_rate - )[0] - return get_features_from_npy_or_audio(_path) - elif len(slice_ptr) == 2: - features_or_waveform = get_features_or_waveform_from_stored_zip( - _path, slice_ptr[0], slice_ptr[1], need_waveform=need_waveform, - use_sample_rate=use_sample_rate - ) - else: - raise ValueError(f"Invalid path: {path}") - - return features_or_waveform - - -def _collate_frames( - frames: List[torch.Tensor], is_audio_input: bool = False -) -> torch.Tensor: - """ - Convert a list of 2D frames into a padded 3D tensor - Args: - frames (list): list of 2D frames of size L[i]*f_dim. 
Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - max_len = max(frame.size(0) for frame in frames) - if is_audio_input: - out = frames[0].new_zeros((len(frames), max_len)) - else: - out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1))) - for i, v in enumerate(frames): - out[i, : v.size(0)] = v - return out - - -@dataclass -class SpeechToTextDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - - -class SpeechToTextDataset(FairseqDataset): - LANG_TAG_TEMPLATE = "<lang:{}>" - - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None - ): - self.split, self.is_train_split = split, is_train_split - self.cfg = cfg - self.audio_paths, self.n_frames = audio_paths, n_frames - self.n_samples = len(audio_paths) - assert len(n_frames) == self.n_samples > 0 - assert src_texts is None or len(src_texts) == self.n_samples - assert tgt_texts is None or len(tgt_texts) == self.n_samples - assert speakers is None or len(speakers) == self.n_samples - assert src_langs is None or len(src_langs) == self.n_samples - assert tgt_langs is None or len(tgt_langs) == self.n_samples - assert ids is None or len(ids) == self.n_samples - assert (tgt_dict is None and tgt_texts is None) or ( - tgt_dict is not None and tgt_texts is not None - ) - self.src_texts, self.tgt_texts = src_texts, tgt_texts - self.src_langs, self.tgt_langs = src_langs, tgt_langs - self.speakers = speakers - self.tgt_dict = tgt_dict - self.check_tgt_lang_tag() - self.ids = ids - self.shuffle = cfg.shuffle if is_train_split else False - - self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict( - self.cfg.get_feature_transforms(split, is_train_split) - ) - - self.pre_tokenizer = pre_tokenizer - self.bpe_tokenizer = bpe_tokenizer - self.n_frames_per_step = n_frames_per_step - self.speaker_to_id = speaker_to_id - - self.tgt_lens = self.get_tgt_lens_and_check_oov() - - logger.info(self.__repr__()) - - def get_tgt_lens_and_check_oov(self): - if self.tgt_texts is None: - return [0 for _ in range(self.n_samples)] - tgt_lens = [] - n_tokens, n_oov_tokens = 0, 0 - for i in range(self.n_samples): - tokenized = self.get_tokenized_tgt_text(i).split(" ") - oov_tokens = [ - t - for t in tokenized - if self.tgt_dict.index(t) == self.tgt_dict.unk_index - ] - n_tokens += len(tokenized) - n_oov_tokens += len(oov_tokens) - tgt_lens.append(len(tokenized)) - logger.info(f"'{self.split}' has {n_oov_tokens / n_tokens * 100:.2f}% OOV") - return tgt_lens - - def __repr__(self): - return ( - self.__class__.__name__ - + f'(split="{self.split}", n_samples={self.n_samples:_}, ' - f"prepend_tgt_lang_tag={self.cfg.prepend_tgt_lang_tag}, " - f"shuffle={self.shuffle}, transforms={self.feature_transforms}, " - f"n_frames_per_step={self.n_frames_per_step}" - ) - - @classmethod - def is_lang_tag(cls, token): - pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)") - return re.match(pattern, token) - - def 
check_tgt_lang_tag(self): - if self.cfg.prepend_tgt_lang_tag: - assert self.tgt_langs is not None and self.tgt_dict is not None - tgt_lang_tags = [ - self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs) - ] - assert all(t in self.tgt_dict for t in tgt_lang_tags) - - @classmethod - def tokenize(cls, tokenizer, text: str): - return text if tokenizer is None else tokenizer.encode(text) - - def get_tokenized_tgt_text(self, index: int): - text = self.tokenize(self.pre_tokenizer, self.tgt_texts[index]) - text = self.tokenize(self.bpe_tokenizer, text) - return text - - def pack_frames(self, feature: torch.Tensor): - if self.n_frames_per_step == 1: - return feature - n_packed_frames = feature.shape[0] // self.n_frames_per_step - feature = feature[:self.n_frames_per_step * n_packed_frames] - return feature.reshape(n_packed_frames, -1) - - @classmethod - def get_lang_tag_idx(cls, lang: str, dictionary: Dictionary): - lang_tag_idx = dictionary.index(cls.LANG_TAG_TEMPLATE.format(lang)) - assert lang_tag_idx != dictionary.unk() - return lang_tag_idx - - def __getitem__(self, index: int) -> SpeechToTextDatasetItem: - source = get_features_or_waveform( - self.audio_paths[index], - need_waveform=self.cfg.use_audio_input, - use_sample_rate=self.cfg.use_sample_rate, - ) - if self.feature_transforms is not None: - assert not self.cfg.use_audio_input - source = self.feature_transforms(source) - source = torch.from_numpy(source).float() - source = self.pack_frames(source) - - target = None - if self.tgt_texts is not None: - tokenized = self.get_tokenized_tgt_text(index) - target = self.tgt_dict.encode_line( - tokenized, add_if_not_exist=False, append_eos=True - ).long() - if self.cfg.prepend_tgt_lang_tag: - lang_tag_idx = self.get_lang_tag_idx( - self.tgt_langs[index], self.tgt_dict - ) - target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0) - - speaker_id = None - if self.speaker_to_id is not None: - speaker_id = self.speaker_to_id[self.speakers[index]] - return SpeechToTextDatasetItem( - index=index, source=source, target=target, speaker_id=speaker_id - ) - - def __len__(self): - return self.n_samples - - def collater( - self, samples: List[SpeechToTextDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, target_lengths = None, None - prev_output_tokens = None - ntokens = None - if self.tgt_texts is not None: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, order) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ).index_select(0, order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - speaker = None - if self.speaker_to_id is not None: - speaker = torch.tensor( - [s.speaker_id 
for s in samples], dtype=torch.long - ).index_select(0, order).view(-1, 1) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - } - out = { - "id": indices, - "net_input": net_input, - "speaker": speaker, - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - def num_tokens(self, index): - return self.n_frames[index] - - def size(self, index): - return self.n_frames[index], self.tgt_lens[index] - - @property - def sizes(self): - return np.array(self.n_frames) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - # first by descending order of # of frames then by original/random order - order.append([-n for n in self.n_frames]) - return np.lexsort(order) - - def prefetch(self, indices): - raise False - - -class SpeechToTextDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames" - KEY_TGT_TEXT = "tgt_text" - # optional columns - KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text" - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - return SpeechToTextDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id - ) - - @classmethod - def get_size_ratios( - cls, datasets: List[SpeechToTextDataset], alpha: float = 1.0 - ) -> List[float]: - """Size ratios for temperature-based sampling - (https://arxiv.org/abs/1907.05019)""" - - id_to_lp, lp_to_sz = {}, defaultdict(int) - for ds in datasets: - lang_pairs = {f"{s}->{t}" for s, t in zip(ds.src_langs, ds.tgt_langs)} - assert len(lang_pairs) == 1 - lang_pair = list(lang_pairs)[0] - id_to_lp[ds.split] = lang_pair - lp_to_sz[lang_pair] += sum(ds.n_frames) - - sz_sum = sum(v for v in lp_to_sz.values()) - lp_to_prob = {k: v / sz_sum for k, v in lp_to_sz.items()} - lp_to_tgt_prob = {k: v ** alpha for k, v in lp_to_prob.items()} - prob_sum = sum(v for v in lp_to_tgt_prob.values()) - lp_to_tgt_prob = {k: v / prob_sum for k, v in lp_to_tgt_prob.items()} - lp_to_sz_ratio = { - k: (lp_to_tgt_prob[k] * sz_sum) / v for k, v in lp_to_sz.items() - } - size_ratio = [lp_to_sz_ratio[id_to_lp[ds.split]] for ds in datasets] - - 
p_formatted = { - k: f"{lp_to_prob[k]:.3f}->{lp_to_tgt_prob[k]:.3f}" for k in lp_to_sz - } - logger.info(f"sampling probability balancing: {p_formatted}") - sr_formatted = {ds.split: f"{r:.3f}" for ds, r in zip(datasets, size_ratio)} - logger.info(f"balanced sampling size ratio: {sr_formatted}") - return size_ratio - - @classmethod - def _load_samples_from_tsv(cls, root: str, split: str): - tsv_path = Path(root) / f"{split}.tsv" - if not tsv_path.is_file(): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - samples = [dict(e) for e in reader] - if len(samples) == 0: - raise ValueError(f"Empty manifest: {tsv_path}") - return samples - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - split: str, - tgt_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - samples = cls._load_samples_from_tsv(root, split) - return cls._from_list( - split, is_train_split, samples, cfg, tgt_dict, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - n_frames_per_step: int = 1, - speaker_to_id=None - ) -> SpeechToTextDataset: - datasets = [ - cls._from_tsv( - root, cfg, split, tgt_dict, is_train_split, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lstm_cell_with_zoneout.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lstm_cell_with_zoneout.py deleted file mode 100644 index f04e5db255c62bbe0faebbc641f579f92be5580c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lstm_cell_with_zoneout.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch.nn as nn - - -class LSTMCellWithZoneOut(nn.Module): - """ - Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations - https://arxiv.org/abs/1606.01305 - """ - - def __init__(self, prob: float, input_size: int, hidden_size: int, - bias: bool = True): - super(LSTMCellWithZoneOut, self).__init__() - self.lstm_cell = nn.LSTMCell(input_size, hidden_size, bias=bias) - self.prob = prob - if prob > 1.0 or prob < 0.0: - raise ValueError("zoneout probability must be in the range from " - "0.0 to 1.0.") - - def zoneout(self, h, next_h, prob): - if isinstance(h, tuple): - return tuple( - [self.zoneout(h[i], next_h[i], prob) for i in range(len(h))] - ) - - if self.training: - mask = h.new_zeros(*h.size()).bernoulli_(prob) - return mask * h + (1 - mask) * next_h - - return prob * h + (1 - prob) * next_h - - def forward(self, x, h): - return self.zoneout(h, self.lstm_cell(x, h), self.prob) diff --git a/spaces/OIUGLK/bingo/src/components/tailwind-indicator.tsx b/spaces/OIUGLK/bingo/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( - <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white"> - <div className="block sm:hidden">xs</div> - <div className="hidden sm:block md:hidden">sm</div> - <div className="hidden md:block lg:hidden">md</div> - <div className="hidden lg:block xl:hidden">lg</div> - <div className="hidden xl:block 2xl:hidden">xl</div> - <div className="hidden 2xl:block">2xl</div> - </div> - ) -} diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/utils.py b/spaces/ORI-Muchim/BlueArchiveTTS/utils.py deleted file mode 100644 index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BlueArchiveTTS/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - 
mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/bark/api.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/bark/api.py deleted file mode 100644 index 7a4319ceaa13798912637290f8e9e88c50d5420a..0000000000000000000000000000000000000000 --- a/spaces/OkamiFeng/Bark-with-Voice-Cloning/bark/api.py +++ /dev/null @@ -1,158 +0,0 @@ -from typing import Dict, Optional, Union - -import numpy as np - -from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic - - -def generate_with_settings(text_prompt, semantic_temp=0.6, eos_p=0.2, coarse_temp=0.7, fine_temp=0.5, voice_name=None, output_full=False): - - # generation with more control - x_semantic = generate_text_semantic( - text_prompt, - history_prompt=voice_name, - temp=semantic_temp, - min_eos_p = eos_p, - use_kv_caching=True - ) - - x_coarse_gen = generate_coarse( - x_semantic, - history_prompt=voice_name, - temp=coarse_temp, - use_kv_caching=True - ) - x_fine_gen = generate_fine( - x_coarse_gen, - history_prompt=voice_name, - temp=fine_temp, - ) - - if output_full: - full_generation = { - 'semantic_prompt': x_semantic, - 'coarse_prompt': x_coarse_gen, - 'fine_prompt': x_fine_gen - } - return full_generation, codec_decode(x_fine_gen) - return codec_decode(x_fine_gen) - - -def text_to_semantic( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, -): - """Generate semantic array from text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - - Returns: - numpy semantic array to be fed into `semantic_to_waveform` - """ - x_semantic = generate_text_semantic( - text, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - return x_semantic - - -def semantic_to_waveform( - semantic_tokens: np.ndarray, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from semantic input. 
- - Args: - semantic_tokens: semantic token output from `text_to_semantic` - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - coarse_tokens = generate_coarse( - semantic_tokens, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - fine_tokens = generate_fine( - coarse_tokens, - history_prompt=history_prompt, - temp=0.5, - ) - audio_arr = codec_decode(fine_tokens) - if output_full: - full_generation = { - "semantic_prompt": semantic_tokens, - "coarse_prompt": coarse_tokens, - "fine_prompt": fine_tokens, - } - return full_generation, audio_arr - return audio_arr - - -def save_as_prompt(filepath, full_generation): - assert(filepath.endswith(".npz")) - assert(isinstance(full_generation, dict)) - assert("semantic_prompt" in full_generation) - assert("coarse_prompt" in full_generation) - assert("fine_prompt" in full_generation) - np.savez(filepath, **full_generation) - - -def generate_audio( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - text_temp: float = 0.7, - waveform_temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from input text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - text_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - waveform_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - semantic_tokens = text_to_semantic( - text, - history_prompt=history_prompt, - temp=text_temp, - silent=silent, - ) - out = semantic_to_waveform( - semantic_tokens, - history_prompt=history_prompt, - temp=waveform_temp, - silent=silent, - output_full=output_full, - ) - if output_full: - full_generation, audio_arr = out - return full_generation, audio_arr - else: - audio_arr = out - return audio_arr diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/criterion.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/criterion.py deleted file mode 100644 index dbc5c5e78b11be8996a22f99518add5a01341adf..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/criterion.py +++ /dev/null @@ -1,330 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/criterion.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -""" -OneFormer criterion. 
-""" -import logging - -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.utils.comm import get_world_size -from detectron2.projects.point_rend.point_features import ( - get_uncertain_point_coords_with_randomness, - point_sample, -) - -from ..utils.misc import is_dist_avail_and_initialized, nested_tensor_from_tensor_list -from ..utils import box_ops -import torch.distributed as dist -import diffdist.functional as diff_dist -import numpy as np - -def dist_collect(x): - """ collect all tensor from all GPUs - args: - x: shape (mini_batch, ...) - returns: - shape (mini_batch * num_gpu, ...) - """ - x = x.contiguous() - out_list = [torch.zeros_like(x, device=x.device, dtype=x.dtype).contiguous() for _ in range(dist.get_world_size())] - out_list = diff_dist.all_gather(out_list, x) - return torch.cat(out_list, dim=0).contiguous() - -def dice_loss( - inputs: torch.Tensor, - targets: torch.Tensor, - num_masks: float, - ): - """ - Compute the DICE loss, similar to generalized IOU for masks - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - """ - inputs = inputs.sigmoid() - inputs = inputs.flatten(1) - numerator = 2 * (inputs * targets).sum(-1) - denominator = inputs.sum(-1) + targets.sum(-1) - loss = 1 - (numerator + 1) / (denominator + 1) - return loss.sum() / num_masks - - -dice_loss_jit = torch.jit.script( - dice_loss -) # type: torch.jit.ScriptModule - - -def sigmoid_ce_loss( - inputs: torch.Tensor, - targets: torch.Tensor, - num_masks: float, - ): - """ - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - Returns: - Loss tensor - """ - loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") - loss = loss.mean(1) - return loss.sum() / num_masks - - -sigmoid_ce_loss_jit = torch.jit.script( - sigmoid_ce_loss -) # type: torch.jit.ScriptModule - - -def calculate_uncertainty(logits): - """ - We estimate uncerainty as L1 distance between 0.0 and the logit prediction in 'logits' for the - foreground class in `classes`. - Args: - logits (Tensor): A tensor of shape (R, 1, ...) for class-specific or - class-agnostic, where R is the total number of predicted masks in all images and C is - the number of foreground classes. The values are logits. - Returns: - scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with - the most uncertain locations having the highest uncertainty score. - """ - assert logits.shape[1] == 1 - gt_class_logits = logits.clone() - return -(torch.abs(gt_class_logits)) - - -class SetCriterion(nn.Module): - """This class computes the loss for DETR. - The process happens in two steps: - 1) we compute hungarian assignment between ground truth boxes and the outputs of the model - 2) we supervise each pair of matched ground-truth / prediction (supervise class and box) - """ - - def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses, - num_points, oversample_ratio, importance_sample_ratio, contrast_temperature=None): - """Create the criterion. 
- Parameters: - num_classes: number of object categories, omitting the special no-object category - matcher: module able to compute a matching between targets and proposals - weight_dict: dict containing as key the names of the losses and as values their relative weight. - eos_coef: relative classification weight applied to the no-object category - losses: list of all the losses to be applied. See get_loss for list of available losses. - """ - super().__init__() - self.num_classes = num_classes - self.matcher = matcher - self.weight_dict = weight_dict - self.eos_coef = eos_coef - self.losses = losses - empty_weight = torch.ones(self.num_classes + 1) - empty_weight[-1] = self.eos_coef - self.register_buffer("empty_weight", empty_weight) - self.cross_entropy = nn.CrossEntropyLoss() - - # pointwise mask loss parameters - self.num_points = num_points - self.oversample_ratio = oversample_ratio - self.importance_sample_ratio = importance_sample_ratio - self.contrast_temperature = contrast_temperature - if self.contrast_temperature is not None: - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / contrast_temperature)) - - - def loss_contrastive(self, outputs, targets, indices, num_masks): - assert "contrastive_logits" in outputs - assert "texts" in outputs - image_x = outputs["contrastive_logits"].float() - - batch_size = image_x.shape[0] - # get label globally - if is_dist_avail_and_initialized(): - labels = torch.arange(batch_size, dtype=torch.long, device=image_x.device) + batch_size * dist.get_rank() - else: - labels = torch.arange(batch_size, dtype=torch.long, device=image_x.device) - - text_x = outputs["texts"] - - # [B, C] - image_x = F.normalize(image_x.flatten(1), dim=-1) - text_x = F.normalize(text_x.flatten(1), dim=-1) - - if is_dist_avail_and_initialized(): - logits_per_img = image_x @ dist_collect(text_x).t() - logits_per_text = text_x @ dist_collect(image_x).t() - else: - logits_per_img = image_x @ text_x.t() - logits_per_text = text_x @ image_x.t() - - logit_scale = torch.clamp(self.logit_scale.exp(), max=100) - loss_img = self.cross_entropy(logits_per_img * logit_scale, labels) - loss_text = self.cross_entropy(logits_per_text * logit_scale, labels) - - loss_contrastive = loss_img + loss_text - - losses = {"loss_contrastive": loss_contrastive} - return losses - - def loss_labels(self, outputs, targets, indices, num_masks): - """Classification loss (NLL) - targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes] - """ - assert "pred_logits" in outputs - src_logits = outputs["pred_logits"].float() - - idx = self._get_src_permutation_idx(indices) - target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)]) - target_classes = torch.full( - src_logits.shape[:2], self.num_classes, dtype=torch.int64, device=src_logits.device - ) - target_classes[idx] = target_classes_o - - ce_weight = torch.full( - src_logits.shape[:2], self.eos_coef, dtype=torch.float32, device=src_logits.device - ) - ce_weight[idx] = torch.tensor(1.).to(target_classes.device) - - loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight, reduce=False, reduction="none") - loss_ce = loss_ce.sum(1) / ce_weight.sum() - loss_ce = loss_ce.sum() - losses = {"loss_ce": loss_ce} - return losses - - def loss_masks(self, outputs, targets, indices, num_masks): - """Compute the losses related to the masks: the focal loss and the dice loss. 
- targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w] - """ - assert "pred_masks" in outputs - - src_idx = self._get_src_permutation_idx(indices) - tgt_idx = self._get_tgt_permutation_idx(indices) - src_masks = outputs["pred_masks"] - src_masks = src_masks[src_idx] - masks = [t["masks"] for t in targets] - # TODO use valid to mask invalid areas due to padding in loss - target_masks, valid = nested_tensor_from_tensor_list(masks).decompose() - target_masks = target_masks.to(src_masks) - target_masks = target_masks[tgt_idx] - - # No need to upsample predictions as we are using normalized coordinates :) - # N x 1 x H x W - src_masks = src_masks[:, None] - target_masks = target_masks[:, None] - - with torch.no_grad(): - # sample point_coords - point_coords = get_uncertain_point_coords_with_randomness( - src_masks, - lambda logits: calculate_uncertainty(logits), - self.num_points, - self.oversample_ratio, - self.importance_sample_ratio, - ) - # get gt labels - point_labels = point_sample( - target_masks, - point_coords, - align_corners=False, - ).squeeze(1) - - point_logits = point_sample( - src_masks, - point_coords, - align_corners=False, - ).squeeze(1) - - losses = { - "loss_mask": sigmoid_ce_loss_jit(point_logits, point_labels, num_masks), - "loss_dice": dice_loss_jit(point_logits, point_labels, num_masks), - } - - del src_masks - del target_masks - return losses - - def _get_src_permutation_idx(self, indices): - # permute predictions following indices - batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)]) - src_idx = torch.cat([src for (src, _) in indices]) - return batch_idx, src_idx - - def _get_tgt_permutation_idx(self, indices): - # permute targets following indices - batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)]) - tgt_idx = torch.cat([tgt for (_, tgt) in indices]) - return batch_idx, tgt_idx - - def get_loss(self, loss, outputs, targets, indices, num_masks): - loss_map = { - 'labels': self.loss_labels, - 'masks': self.loss_masks, - 'contrastive': self.loss_contrastive, - } - assert loss in loss_map, f"do you really want to compute {loss} loss?" - return loss_map[loss](outputs, targets, indices, num_masks) - - def forward(self, outputs, targets): - """This performs the loss computation. - Parameters: - outputs: dict of tensors, see the output specification of the model for the format - targets: list of dicts, such that len(targets) == batch_size. - The expected keys in each dict depends on the losses applied, see each loss' doc - """ - outputs_without_aux = {k: v for k, v in outputs.items() if k != "aux_outputs"} - - # Retrieve the matching between the outputs of the last layer and the targets - indices = self.matcher(outputs_without_aux, targets) - - # Compute the average number of target boxes accross all nodes, for normalization purposes - num_masks = sum(len(t["labels"]) for t in targets) - num_masks = torch.as_tensor( - [num_masks], dtype=torch.float, device=next(iter(outputs.values())).device - ) - if is_dist_avail_and_initialized(): - torch.distributed.all_reduce(num_masks) - num_masks = torch.clamp(num_masks / get_world_size(), min=1).item() - - # Compute all the requested losses - losses = {} - for loss in self.losses: - losses.update(self.get_loss(loss, outputs, targets, indices, num_masks)) - - # In case of auxiliary losses, we repeat this process with the output of each intermediate layer. 
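        # note: the contrastive loss is computed only on the final outputs; the
        # auxiliary (intermediate-layer) predictions skip it in the loop that follows.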
- if "aux_outputs" in outputs: - for i, aux_outputs in enumerate(outputs["aux_outputs"]): - indices = self.matcher(aux_outputs, targets) - for loss in self.losses: - if loss == "contrastive": - continue - l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_masks) - l_dict = {k + f"_{i}": v for k, v in l_dict.items()} - losses.update(l_dict) - - return losses - - def __repr__(self): - head = "Criterion " + self.__class__.__name__ - body = [ - "matcher: {}".format(self.matcher.__repr__(_repr_indent=8)), - "losses: {}".format(self.losses), - "weight_dict: {}".format(self.weight_dict), - "num_classes: {}".format(self.num_classes), - "eos_coef: {}".format(self.eos_coef), - "num_points: {}".format(self.num_points), - "oversample_ratio: {}".format(self.oversample_ratio), - "importance_sample_ratio: {}".format(self.importance_sample_ratio), - ] - _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/roi_pool.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/roi_pool.py deleted file mode 100644 index d339d8f2941eabc1cbe181a9c6c5ab5ff4ff4e5f..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/roi_pool.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['roi_pool_forward', 'roi_pool_backward']) - - -class RoIPoolFunction(Function): - - @staticmethod - def symbolic(g, input, rois, output_size, spatial_scale): - return g.op( - 'MaxRoiPool', - input, - rois, - pooled_shape_i=output_size, - spatial_scale_f=spatial_scale) - - @staticmethod - def forward(ctx, input, rois, output_size, spatial_scale=1.0): - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.input_shape = input.size() - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' 
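        # rois layout: column 0 is the batch index into `input`; columns 1-4 are the
        # (x1, y1, x2, y2) box corners in input-image coordinates, which the op rescales
        # by `spatial_scale` to the feature-map resolution.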
- - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - argmax = input.new_zeros(output_shape, dtype=torch.int) - - ext_module.roi_pool_forward( - input, - rois, - output, - argmax, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale) - - ctx.save_for_backward(rois, argmax) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - rois, argmax = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - - ext_module.roi_pool_backward( - grad_output, - rois, - argmax, - grad_input, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale) - - return grad_input, None, None, None - - -roi_pool = RoIPoolFunction.apply - - -class RoIPool(nn.Module): - - def __init__(self, output_size, spatial_scale=1.0): - super(RoIPool, self).__init__() - - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - - def forward(self, input, rois): - return roi_pool(input, rois, self.output_size, self.spatial_scale) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(output_size={self.output_size}, ' - s += f'spatial_scale={self.spatial_scale})' - return s diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/utils/up_conv_block.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/utils/up_conv_block.py deleted file mode 100644 index 378469da76cb7bff6a639e7877b3c275d50490fb..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/utils/up_conv_block.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, build_upsample_layer - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. - stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). If the size of - high-level feature map is the same as that of skip feature map - (low-level feature map from encoder), it does not need upsample the - high-level feature map and the upsample_cfg is None. 
- dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - conv_block, - in_channels, - skip_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - dcn=None, - plugins=None): - super(UpConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.conv_block = conv_block( - in_channels=2 * skip_channels, - out_channels=out_channels, - num_convs=num_convs, - stride=stride, - dilation=dilation, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None) - if upsample_cfg is not None: - self.upsample = build_upsample_layer( - cfg=upsample_cfg, - in_channels=in_channels, - out_channels=skip_channels, - with_cp=with_cp, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.upsample = ConvModule( - in_channels, - skip_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, skip, x): - """Forward function.""" - - x = self.upsample(x) - out = torch.cat([skip, x], dim=1) - out = self.conv_block(out) - - return out diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/transformer.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/transformer.py deleted file mode 100644 index 72e108a1a2bf628ced2161c0a6d5a1b28e654bcd..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/rpn/transformer.py +++ /dev/null @@ -1,52 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn, Tensor - -import copy -from typing import Optional, List - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - - -class TransformerEncoderLayer(nn.Module): - def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1, - activation="relu", normalize_before=False): - super(TransformerEncoderLayer, self).__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def forward(self, src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None): - src2 = self.self_attn(src, src, src, attn_mask=src_mask, - key_padding_mask=src_key_padding_mask)[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src diff --git a/spaces/Potanin/12345/app.py 
b/spaces/Potanin/12345/app.py deleted file mode 100644 index 3332f13a969c43ea5f154881e94532cd684f0a9d..0000000000000000000000000000000000000000 --- a/spaces/Potanin/12345/app.py +++ /dev/null @@ -1,2089 +0,0 @@ -import subprocess, torch, os, traceback, sys, warnings, shutil, numpy as np -from mega import Mega -os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1" -import threading -from pathlib import Path -from time import sleep -from subprocess import Popen -import faiss -from random import shuffle -import json, datetime, requests -from gtts import gTTS -now_dir = os.getcwd() -sys.path.append(now_dir) -tmp = os.path.join(now_dir, "TEMP") -shutil.rmtree(tmp, ignore_errors=True) -shutil.rmtree("%s/runtime/Lib/site-packages/infer_pack" % (now_dir), ignore_errors=True) -os.makedirs(tmp, exist_ok=True) -os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True) -os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True) -os.environ["TEMP"] = tmp -warnings.filterwarnings("ignore") -torch.manual_seed(114514) -from i18n import I18nAuto - -import signal - -import math - -from utils import load_audio, CSVutil - -global DoFormant, Quefrency, Timbre - -if not os.path.isdir('csvdb/'): - os.makedirs('csvdb') - frmnt, stp = open("csvdb/formanting.csv", 'w'), open("csvdb/stop.csv", 'w') - frmnt.close() - stp.close() - -try: - DoFormant, Quefrency, Timbre = CSVutil('csvdb/formanting.csv', 'r', 'formanting') - DoFormant = ( - lambda DoFormant: True if DoFormant.lower() == 'true' else (False if DoFormant.lower() == 'false' else DoFormant) - )(DoFormant) -except (ValueError, TypeError, IndexError): - DoFormant, Quefrency, Timbre = False, 1.0, 1.0 - CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, Quefrency, Timbre) - -def download_models(): - # Download hubert base model if not present - if not os.path.isfile('./hubert_base.pt'): - response = requests.get('https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt') - - if response.status_code == 200: - with open('./hubert_base.pt', 'wb') as f: - f.write(response.content) - print("Downloaded hubert base model file successfully. File saved to ./hubert_base.pt.") - else: - raise Exception("Failed to download hubert base model file. Status code: " + str(response.status_code) + ".") - - # Download rmvpe model if not present - if not os.path.isfile('./rmvpe.pt'): - response = requests.get('https://drive.usercontent.google.com/download?id=1Hkn4kNuVFRCNQwyxQFRtmzmMBGpQxptI&export=download&authuser=0&confirm=t&uuid=0b3a40de-465b-4c65-8c41-135b0b45c3f7&at=APZUnTV3lA3LnyTbeuduura6Dmi2:1693724254058') - - if response.status_code == 200: - with open('./rmvpe.pt', 'wb') as f: - f.write(response.content) - print("Downloaded rmvpe model file successfully. File saved to ./rmvpe.pt.") - else: - raise Exception("Failed to download rmvpe model file. 
Status code: " + str(response.status_code) + ".") - -download_models() - -print("\n-------------------------------\nRVC v2 Easy GUI (Local Edition)\n-------------------------------\n") - -def formant_apply(qfrency, tmbre): - Quefrency = qfrency - Timbre = tmbre - DoFormant = True - CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre) - - return ({"value": Quefrency, "__type__": "update"}, {"value": Timbre, "__type__": "update"}) - -def get_fshift_presets(): - fshift_presets_list = [] - for dirpath, _, filenames in os.walk("./formantshiftcfg/"): - for filename in filenames: - if filename.endswith(".txt"): - fshift_presets_list.append(os.path.join(dirpath,filename).replace('\\','/')) - - if len(fshift_presets_list) > 0: - return fshift_presets_list - else: - return '' - - - -def formant_enabled(cbox, qfrency, tmbre, frmntapply, formantpreset, formant_refresh_button): - - if (cbox): - - DoFormant = True - CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre) - #print(f"is checked? - {cbox}\ngot {DoFormant}") - - return ( - {"value": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - ) - - - else: - - DoFormant = False - CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre) - - #print(f"is checked? - {cbox}\ngot {DoFormant}") - return ( - {"value": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - ) - - - -def preset_apply(preset, qfer, tmbr): - if str(preset) != '': - with open(str(preset), 'r') as p: - content = p.readlines() - qfer, tmbr = content[0].split('\n')[0], content[1] - - formant_apply(qfer, tmbr) - else: - pass - return ({"value": qfer, "__type__": "update"}, {"value": tmbr, "__type__": "update"}) - -def update_fshift_presets(preset, qfrency, tmbre): - - qfrency, tmbre = preset_apply(preset, qfrency, tmbre) - - if (str(preset) != ''): - with open(str(preset), 'r') as p: - content = p.readlines() - qfrency, tmbre = content[0].split('\n')[0], content[1] - - formant_apply(qfrency, tmbre) - else: - pass - return ( - {"choices": get_fshift_presets(), "__type__": "update"}, - {"value": qfrency, "__type__": "update"}, - {"value": tmbre, "__type__": "update"}, - ) - -i18n = I18nAuto() -#i18n.print() -# 判断是否有能用来训练和加速推理的N卡 -ngpu = torch.cuda.device_count() -gpu_infos = [] -mem = [] -if (not torch.cuda.is_available()) or ngpu == 0: - if_gpu_ok = False -else: - if_gpu_ok = False - for i in range(ngpu): - gpu_name = torch.cuda.get_device_name(i) - if ( - "10" in gpu_name - or "16" in gpu_name - or "20" in gpu_name - or "30" in gpu_name - or "40" in gpu_name - or "A2" in gpu_name.upper() - or "A3" in gpu_name.upper() - or "A4" in gpu_name.upper() - or "P4" in gpu_name.upper() - or "A50" in gpu_name.upper() - or "A60" in gpu_name.upper() - or "70" in gpu_name - or "80" in gpu_name - or "90" in gpu_name - or "M4" in gpu_name.upper() - or "T4" in gpu_name.upper() - or "TITAN" in gpu_name.upper() - ): # A10#A100#V100#A40#P40#M40#K80#A4500 - if_gpu_ok = True # 至少有一张能用的N卡 - gpu_infos.append("%s\t%s" % (i, gpu_name)) - mem.append( - int( - torch.cuda.get_device_properties(i).total_memory - / 
1024 - / 1024 - / 1024 - + 0.4 - ) - ) -if if_gpu_ok == True and len(gpu_infos) > 0: - gpu_info = "\n".join(gpu_infos) - default_batch_size = min(mem) // 2 -else: - gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练") - default_batch_size = 1 -gpus = "-".join([i[0] for i in gpu_infos]) -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -import soundfile as sf -from fairseq import checkpoint_utils -import gradio as gr -import logging -from vc_infer_pipeline import VC -from config import Config - -config = Config() -# from trainset_preprocess_pipeline import PreProcess -logging.getLogger("numba").setLevel(logging.WARNING) - -hubert_model = None - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - - -weight_root = "weights" -index_root = "logs" -names = [] -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - - - -def vc_single( - sid, - input_audio_path, - f0_up_key, - f0_file, - f0_method, - file_index, - #file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, -): # spk_item, input_audio0, vc_transform0,f0_file,f0method0 - global tgt_sr, net_g, vc, hubert_model, version - if input_audio_path is None: - return "You need to upload an audio", None - f0_up_key = int(f0_up_key) - try: - audio = load_audio(input_audio_path, 16000, DoFormant, Quefrency, Timbre) - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - if hubert_model == None: - load_hubert() - if_f0 = cpt.get("f0", 1) - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - ) # 防止小白写错,自动帮他替换掉 - # file_big_npy = ( - # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - # ) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_file=f0_file, - ) - if resample_sr >= 16000 and tgt_sr != resample_sr: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." 
- ) - return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - ), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - - -def vc_multi( - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, - crepe_hop_length, -): - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.wav" % (opt_root, os.path.basename(path)) - sf.write( - path, - audio_opt, - tgt_sr, - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format1) - ) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() - -# 一个选项卡全局只能有一个音色 -def get_vc(sid): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model != None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return {"visible": False, "__type__": "update"} - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g 
= net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return {"visible": False, "maximum": n_spk, "__type__": "update"} - - -def change_choices(): - names = [] - for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) - index_paths = [] - for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - return {"choices": sorted(names), "__type__": "update"}, { - "choices": sorted(index_paths), - "__type__": "update", - } - - -def clean(): - return {"value": "", "__type__": "update"} - - -sr_dict = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -def if_done(done, p): - while 1: - if p.poll() == None: - sleep(0.5) - else: - break - done[0] = True - - -def if_done_multi(done, ps): - while 1: - # poll==None代表进程未结束 - # 只要有一个进程未结束都不停 - flag = 1 - for p in ps: - if p.poll() == None: - flag = 0 - sleep(0.5) - break - if flag == 1: - break - done[0] = True - - -def preprocess_dataset(trainset_dir, exp_dir, sr, n_p): - sr = sr_dict[sr] - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w") - f.close() - cmd = ( - config.python_cmd - + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s " - % (trainset_dir, sr, n_p, now_dir, exp_dir) - + str(config.noparallel) - ) - print(cmd) - p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0] == True: - break - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - -# but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2]) -def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19, echl): - gpus = gpus.split("-") - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w") - f.close() - if if_f0: - cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s %s" % ( - now_dir, - exp_dir, - n_p, - f0method, - echl, - ) - print(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) # , stdin=PIPE, stdout=PIPE,stderr=PIPE - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with open( - "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r" - ) as f: - yield (f.read()) - sleep(1) - if done[0] == True: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - ####对不同part分别开多进程 - """ - n_part=int(sys.argv[1]) - i_part=int(sys.argv[2]) - i_gpu=sys.argv[3] - exp_dir=sys.argv[4] - os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu) - """ - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = ( - config.python_cmd - + " extract_feature_print.py %s %s %s %s %s/logs/%s %s" - % ( - config.device, - leng, - idx, - n_g, - now_dir, - exp_dir, - version19, - ) - ) - print(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - ###煞笔gr, 
popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done_multi, - args=( - done, - ps, - ), - ).start() - while 1: - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0] == True: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - - -def change_sr2(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - f0_str = "f0" if if_f0_3 else "" - if_pretrained_generator_exist = os.access("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK) - if_pretrained_discriminator_exist = os.access("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK) - if (if_pretrained_generator_exist == False): - print("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model") - if (if_pretrained_discriminator_exist == False): - print("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model") - return ( - ("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_generator_exist else "", - ("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_discriminator_exist else "", - {"visible": True, "__type__": "update"} - ) - -def change_version19(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - f0_str = "f0" if if_f0_3 else "" - if_pretrained_generator_exist = os.access("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK) - if_pretrained_discriminator_exist = os.access("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK) - if (if_pretrained_generator_exist == False): - print("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model") - if (if_pretrained_discriminator_exist == False): - print("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model") - return ( - ("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_generator_exist else "", - ("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_discriminator_exist else "", - ) - - -def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15 - path_str = "" if version19 == "v1" else "_v2" - if_pretrained_generator_exist = os.access("pretrained%s/f0G%s.pth" % (path_str, sr2), os.F_OK) - if_pretrained_discriminator_exist = os.access("pretrained%s/f0D%s.pth" % (path_str, sr2), os.F_OK) - if (if_pretrained_generator_exist == False): - print("pretrained%s/f0G%s.pth" % (path_str, sr2), "not exist, will not use pretrained model") - if (if_pretrained_discriminator_exist == False): - print("pretrained%s/f0D%s.pth" % (path_str, sr2), "not exist, will not use pretrained model") - if if_f0_3: - return ( - {"visible": True, "__type__": "update"}, - "pretrained%s/f0G%s.pth" % (path_str, sr2) if if_pretrained_generator_exist else "", - "pretrained%s/f0D%s.pth" % (path_str, sr2) if if_pretrained_discriminator_exist else "", - ) - return ( - {"visible": False, "__type__": "update"}, - ("pretrained%s/G%s.pth" % (path_str, sr2)) if if_pretrained_generator_exist else "", - ("pretrained%s/D%s.pth" % (path_str, sr2)) if if_pretrained_discriminator_exist else "", - ) - - -global log_interval - - -def set_log_interval(exp_dir, batch_size12): - log_interval = 1 - - folder_path = os.path.join(exp_dir, "1_16k_wavs") - - if os.path.exists(folder_path) and os.path.isdir(folder_path): - wav_files = [f for f in 
os.listdir(folder_path) if f.endswith(".wav")] - if wav_files: - sample_size = len(wav_files) - log_interval = math.ceil(sample_size / batch_size12) - if log_interval > 1: - log_interval += 1 - return log_interval - -# but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16]) -def click_train( - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, -): - CSVutil('csvdb/stop.csv', 'w+', 'formanting', False) - # 生成filelist - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - - log_interval = set_log_interval(exp_dir, batch_size12) - - if if_f0_3: - f0_dir = "%s/2a_f0" % (exp_dir) - f0nsf_dir = "%s/2b-f0nsf" % (exp_dir) - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % exp_dir, "w") as f: - f.write("\n".join(opt)) - print("write filelist done") - # 生成config#无需生成config - # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0" - print("use gpus:", gpus16) - if pretrained_G14 == "": - print("no pretrained Generator") - if pretrained_D15 == "": - print("no pretrained Discriminator") - if gpus16: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s -li %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "", - ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "", - 1 if if_save_latest13 == True else 0, - 1 if if_cache_gpu17 == True else 0, - 1 if if_save_every_weights18 == True else 0, - version19, - log_interval, - ) - ) - else: - cmd = ( - config.python_cmd - + " 
train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s -li %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - total_epoch11, - save_epoch10, - ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "\b", - ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "\b", - 1 if if_save_latest13 == True else 0, - 1 if if_cache_gpu17 == True else 0, - 1 if if_save_every_weights18 == True else 0, - version19, - log_interval, - ) - ) - print(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - global PID - PID = p.pid - p.wait() - return ("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log", {"visible": False, "__type__": "update"}, {"visible": True, "__type__": "update"}) - - -# but4.click(train_index, [exp_dir1], info3) -def train_index(exp_dir1, version19): - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - if os.path.exists(feature_dir) == False: - return "请先进行特征提取!" - listdir_res = list(os.listdir(feature_dir)) - if len(listdir_res) == 0: - return "请先进行特征提取!" - npys = [] - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - np.save("%s/total_fea.npy" % exp_dir, big_npy) - # n_ivf = big_npy.shape[0] // 39 - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - infos = [] - infos.append("%s,%s" % (big_npy.shape, n_ivf)) - yield "\n".join(infos) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf) - infos.append("training") - yield "\n".join(infos) - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - infos.append("adding") - yield "\n".join(infos) - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - infos.append( - "成功构建索引,added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19)) - yield "\n".join(infos) - - -# but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3) -def train1key( - exp_dir1, - sr2, - if_f0_3, - trainset_dir4, - spk_id5, - np7, - f0method8, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - echl -): - infos = [] - - def get_info_str(strr): - infos.append(strr) - return "\n".join(infos) - - model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1) - 
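# --- Illustrative sketch; not part of the deleted file above -----------------
# train_index() and the indexing step of train1key() above follow one fixed
# FAISS recipe: pick the number of IVF cells from the dataset size, train an
# "IVF<n>,Flat" index with nprobe=1, then add the feature vectors in chunks of
# 8192.  The random array below is a stand-in for the HuBERT features that the
# real code loads from the 3_feature256 / 3_feature768 folders.
import faiss
import numpy as np

big_npy = np.random.rand(10000, 256).astype("float32")        # stand-in features
n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)

index = faiss.index_factory(big_npy.shape[1], "IVF%s,Flat" % n_ivf)
faiss.extract_index_ivf(index).nprobe = 1                     # same setting as above
index.train(big_npy)
for i in range(0, big_npy.shape[0], 8192):                    # add in batches of 8192
    index.add(big_npy[i : i + 8192])
# faiss.write_index(index, "added_IVF%s_Flat_nprobe_1.index" % n_ivf)
# -----------------------------------------------------------------------------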
preprocess_log_path = "%s/preprocess.log" % model_log_dir - extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir - gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir - feature_dir = ( - "%s/3_feature256" % model_log_dir - if version19 == "v1" - else "%s/3_feature768" % model_log_dir - ) - - os.makedirs(model_log_dir, exist_ok=True) - #########step1:处理数据 - open(preprocess_log_path, "w").close() - cmd = ( - config.python_cmd - + " trainset_preprocess_pipeline_print.py %s %s %s %s " - % (trainset_dir4, sr_dict[sr2], np7, model_log_dir) - + str(config.noparallel) - ) - yield get_info_str(i18n("step1:正在处理数据")) - yield get_info_str(cmd) - p = Popen(cmd, shell=True) - p.wait() - with open(preprocess_log_path, "r") as f: - print(f.read()) - #########step2a:提取音高 - open(extract_f0_feature_log_path, "w") - if if_f0_3: - yield get_info_str("step2a:正在提取音高") - cmd = config.python_cmd + " extract_f0_print.py %s %s %s %s" % ( - model_log_dir, - np7, - f0method8, - echl - ) - yield get_info_str(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - with open(extract_f0_feature_log_path, "r") as f: - print(f.read()) - else: - yield get_info_str(i18n("step2a:无需提取音高")) - #######step2b:提取特征 - yield get_info_str(i18n("step2b:正在提取特征")) - gpus = gpus16.split("-") - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % ( - config.device, - leng, - idx, - n_g, - model_log_dir, - version19, - ) - yield get_info_str(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - for p in ps: - p.wait() - with open(extract_f0_feature_log_path, "r") as f: - print(f.read()) - #######step3a:训练模型 - yield get_info_str(i18n("step3a:正在训练模型")) - # 生成filelist - if if_f0_3: - f0_dir = "%s/2a_f0" % model_log_dir - f0nsf_dir = "%s/2b-f0nsf" % model_log_dir - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % model_log_dir, "w") as f: - f.write("\n".join(opt)) - yield get_info_str("write filelist done") - if gpus16: - cmd = ( - config.python_cmd - +" train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g 
%s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "", - ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "", - 1 if if_save_latest13 == True else 0, - 1 if if_cache_gpu17 == True else 0, - 1 if if_save_every_weights18 == True else 0, - version19, - ) - ) - else: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - total_epoch11, - save_epoch10, - ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "", - ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "", - 1 if if_save_latest13 == True else 0, - 1 if if_cache_gpu17 == True else 0, - 1 if if_save_every_weights18 == True else 0, - version19, - ) - ) - yield get_info_str(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log")) - #######step3b:训练索引 - npys = [] - listdir_res = list(os.listdir(feature_dir)) - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - np.save("%s/total_fea.npy" % model_log_dir, big_npy) - - # n_ivf = big_npy.shape[0] // 39 - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - yield get_info_str("%s,%s" % (big_npy.shape, n_ivf)) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - yield get_info_str("training index") - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - yield get_info_str("adding index") - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - yield get_info_str( - "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - yield get_info_str(i18n("全流程结束!")) - - -def whethercrepeornah(radio): - mango = True if radio == 'mangio-crepe' or radio == 'mangio-crepe-tiny' else False - return ({"visible": mango, "__type__": "update"}) - -# ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__]) -def change_info_(ckpt_path): - if ( - os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")) - == False - ): - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - try: - with open( - ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r" - ) as f: - info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1]) - sr, f0 = info["sample_rate"], info["if_f0"] - version = "v2" if ("version" in info and info["version"] == "v2") else "v1" - return sr, str(f0), version - except: - traceback.print_exc() - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - - -from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM - - -def export_onnx(ModelPath, ExportedPath, MoeVS=True): - cpt = torch.load(ModelPath, map_location="cpu") - 
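# --- Illustrative sketch; not part of the deleted file above -----------------
# get_vc() and export_onnx() above both rely on the layout of an RVC ".pth"
# checkpoint: a dict whose "weight" entry is the generator state_dict, whose
# "config" list ends with the target sample rate, plus optional "f0" and
# "version" flags.  The path below is a placeholder, not a file from this repo.
import torch

cpt = torch.load("weights/My-Voice.pth", map_location="cpu")   # hypothetical file
print(cpt["config"][-1])          # target sample rate, e.g. 32000 / 40000 / 48000
print(cpt.get("f0", 1))           # 1 if the model was trained with pitch guidance
print(cpt.get("version", "v1"))   # "v1" -> 256-dim features, "v2" -> 768-dim
print(len(cpt["weight"]))         # number of tensors in the generator state_dict
# -----------------------------------------------------------------------------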
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - hidden_channels = 256 if cpt.get("version","v1")=="v1"else 768#cpt["config"][-2] # hidden_channels,为768Vec做准备 - - test_phone = torch.rand(1, 200, hidden_channels) # hidden unit - test_phone_lengths = torch.tensor([200]).long() # hidden unit 长度(貌似没啥用) - test_pitch = torch.randint(size=(1, 200), low=5, high=255) # 基频(单位赫兹) - test_pitchf = torch.rand(1, 200) # nsf基频 - test_ds = torch.LongTensor([0]) # 说话人ID - test_rnd = torch.rand(1, 192, 200) # 噪声(加入随机因子) - - device = "cpu" # 导出时设备(不影响使用模型) - - - net_g = SynthesizerTrnMsNSFsidM( - *cpt["config"], is_half=False,version=cpt.get("version","v1") - ) # fp32导出(C++要支持fp16必须手动将内存重新排列所以暂时不用fp16) - net_g.load_state_dict(cpt["weight"], strict=False) - input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"] - output_names = [ - "audio", - ] - # net_g.construct_spkmixmap(n_speaker) 多角色混合轨道导出 - torch.onnx.export( - net_g, - ( - test_phone.to(device), - test_phone_lengths.to(device), - test_pitch.to(device), - test_pitchf.to(device), - test_ds.to(device), - test_rnd.to(device), - ), - ExportedPath, - dynamic_axes={ - "phone": [1], - "pitch": [1], - "pitchf": [1], - "rnd": [2], - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names, - ) - return "Finished" - -#region RVC WebUI App - -def get_presets(): - data = None - with open('../inference-presets.json', 'r') as file: - data = json.load(file) - preset_names = [] - for preset in data['presets']: - preset_names.append(preset['name']) - - return preset_names - -def change_choices2(): - audio_files=[] - for filename in os.listdir("./audios"): - if filename.endswith(('.wav','.mp3','.ogg','.flac','.m4a','.aac','.mp4')): - audio_files.append(os.path.join('./audios',filename).replace('\\', '/')) - return {"choices": sorted(audio_files), "__type__": "update"}, {"__type__": "update"} - -audio_files=[] -for filename in os.listdir("./audios"): - if filename.endswith(('.wav','.mp3','.ogg','.flac','.m4a','.aac','.mp4')): - audio_files.append(os.path.join('./audios',filename).replace('\\', '/')) - -def get_index(): - if check_for_name() != '': - chosen_model=sorted(names)[0].split(".")[0] - logs_path="./logs/"+chosen_model - if os.path.exists(logs_path): - for file in os.listdir(logs_path): - if file.endswith(".index"): - return os.path.join(logs_path, file) - return '' - else: - return '' - -def get_indexes(): - indexes_list=[] - for dirpath, dirnames, filenames in os.walk("./logs/"): - for filename in filenames: - if filename.endswith(".index"): - indexes_list.append(os.path.join(dirpath,filename)) - if len(indexes_list) > 0: - return indexes_list - else: - return '' - -def get_name(): - if len(audio_files) > 0: - return sorted(audio_files)[0] - else: - return '' - -def save_to_wav(record_button): - if record_button is None: - pass - else: - path_to_file=record_button - new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")+'.wav' - new_path='./audios/'+new_name - shutil.move(path_to_file,new_path) - return new_path - -def save_to_wav2(dropbox): - file_path=dropbox.name - shutil.move(file_path,'./audios') - return os.path.join('./audios',os.path.basename(file_path)) - -def match_index(sid0): - folder=sid0.split(".")[0] - parent_dir="./logs/"+folder - if os.path.exists(parent_dir): - for filename in os.listdir(parent_dir): - if filename.endswith(".index"): - index_path=os.path.join(parent_dir,filename) - return index_path - else: - return '' - -def 
check_for_name(): - if len(names) > 0: - return sorted(names)[0] - else: - return '' - -def download_from_url(url, model): - if url == '': - return "URL cannot be left empty." - if model =='': - return "You need to name your model. For example: My-Model" - url = url.strip() - zip_dirs = ["zips", "unzips"] - for directory in zip_dirs: - if os.path.exists(directory): - shutil.rmtree(directory) - os.makedirs("zips", exist_ok=True) - os.makedirs("unzips", exist_ok=True) - zipfile = model + '.zip' - zipfile_path = './zips/' + zipfile - try: - if "drive.google.com" in url: - subprocess.run(["gdown", url, "--fuzzy", "-O", zipfile_path]) - elif "mega.nz" in url: - m = Mega() - m.download_url(url, './zips') - else: - subprocess.run(["wget", url, "-O", zipfile_path]) - for filename in os.listdir("./zips"): - if filename.endswith(".zip"): - zipfile_path = os.path.join("./zips/",filename) - shutil.unpack_archive(zipfile_path, "./unzips", 'zip') - else: - return "No zipfile found." - for root, dirs, files in os.walk('./unzips'): - for file in files: - file_path = os.path.join(root, file) - if file.endswith(".index"): - os.mkdir(f'./logs/{model}') - shutil.copy2(file_path,f'./logs/{model}') - elif "G_" not in file and "D_" not in file and file.endswith(".pth"): - shutil.copy(file_path,f'./weights/{model}.pth') - shutil.rmtree("zips") - shutil.rmtree("unzips") - return "Success." - except: - return "There's been an error." -def success_message(face): - return f'{face.name} has been uploaded.', 'None' -def mouth(size, face, voice, faces): - if size == 'Half': - size = 2 - else: - size = 1 - if faces == 'None': - character = face.name - else: - if faces == 'Ben Shapiro': - character = '/content/wav2lip-HD/inputs/ben-shapiro-10.mp4' - elif faces == 'Andrew Tate': - character = '/content/wav2lip-HD/inputs/tate-7.mp4' - command = "python inference.py " \ - "--checkpoint_path checkpoints/wav2lip.pth " \ - f"--face {character} " \ - f"--audio {voice} " \ - "--pads 0 20 0 0 " \ - "--outfile /content/wav2lip-HD/outputs/result.mp4 " \ - "--fps 24 " \ - f"--resize_factor {size}" - process = subprocess.Popen(command, shell=True, cwd='/content/wav2lip-HD/Wav2Lip-master') - stdout, stderr = process.communicate() - return '/content/wav2lip-HD/outputs/result.mp4', 'Animation completed.' 
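# --- Illustrative sketch; not part of the deleted file above -----------------
# download_from_url() above expects the fetched zip to contain the two
# artifacts RVC needs at inference time, and routes them as follows: any
# "*.index" file goes to ./logs/<model>/, and a final "*.pth" (anything that is
# not a G_/D_ training checkpoint) becomes ./weights/<model>.pth.  The file
# names below are hypothetical.
files = ["model_final.pth", "G_2333.pth", "D_2333.pth", "added_IVF256_Flat_nprobe_1_v2.index"]
model = "My-Model"

for f in files:
    if f.endswith(".index"):
        print(f, "->", "./logs/%s/" % model)            # feature-retrieval index
    elif f.endswith(".pth") and "G_" not in f and "D_" not in f:
        print(f, "->", "./weights/%s.pth" % model)      # inference weights
    else:
        print(f, "-> skipped (training-only checkpoint)")
# -----------------------------------------------------------------------------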
-eleven_voices = ['Adam','Antoni','Josh','Arnold','Sam','Bella','Rachel','Domi','Elli'] -eleven_voices_ids=['pNInz6obpgDQGcFmaJgB','ErXwobaYiN019PkySvjV','TxGEqnHWrfWFTfGW9XjX','VR6AewLTigWG4xSOukaG','yoZ06aMxZJJ28mfd3POQ','EXAVITQu4vr4xnSDxMaL','21m00Tcm4TlvDq8ikWAM','AZnzlk1XvdvUeBnXmlld','MF3mGyEYCl7XYWbV9V6O'] -chosen_voice = dict(zip(eleven_voices, eleven_voices_ids)) - -def stoptraining(mim): - if int(mim) == 1: - try: - CSVutil('csvdb/stop.csv', 'w+', 'stop', 'True') - os.kill(PID, signal.SIGTERM) - except Exception as e: - print(f"Couldn't click due to {e}") - return ( - {"visible": False, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - ) - - -def elevenTTS(xiapi, text, id, lang): - if xiapi!= '' and id !='': - choice = chosen_voice[id] - CHUNK_SIZE = 1024 - url = f"https://api.elevenlabs.io/v1/text-to-speech/{choice}" - headers = { - "Accept": "audio/mpeg", - "Content-Type": "application/json", - "xi-api-key": xiapi - } - if lang == 'en': - data = { - "text": text, - "model_id": "eleven_monolingual_v1", - "voice_settings": { - "stability": 0.5, - "similarity_boost": 0.5 - } - } - else: - data = { - "text": text, - "model_id": "eleven_multilingual_v1", - "voice_settings": { - "stability": 0.5, - "similarity_boost": 0.5 - } - } - - response = requests.post(url, json=data, headers=headers) - with open('./temp_eleven.mp3', 'wb') as f: - for chunk in response.iter_content(chunk_size=CHUNK_SIZE): - if chunk: - f.write(chunk) - aud_path = save_to_wav('./temp_eleven.mp3') - return aud_path, aud_path - else: - tts = gTTS(text, lang=lang) - tts.save('./temp_gTTS.mp3') - aud_path = save_to_wav('./temp_gTTS.mp3') - return aud_path, aud_path - -def upload_to_dataset(files, dir): - if dir == '': - dir = './dataset' - if not os.path.exists(dir): - os.makedirs(dir) - count = 0 - for file in files: - path=file.name - shutil.copy2(path,dir) - count += 1 - return f' {count} files uploaded to {dir}.' - -def zip_downloader(model): - if not os.path.exists(f'./weights/{model}.pth'): - return {"__type__": "update"}, f'Make sure the Voice Name is correct. I could not find {model}.pth' - index_found = False - for file in os.listdir(f'./logs/{model}'): - if file.endswith('.index') and 'added' in file: - log_file = file - index_found = True - if index_found: - return [f'./weights/{model}.pth', f'./logs/{model}/{log_file}'], "Done" - else: - return f'./weights/{model}.pth', "Could not find Index file." - -with gr.Blocks(theme=gr.themes.Base(), title='Mangio-RVC-Web 💻') as app: - with gr.Tabs(): - with gr.TabItem("Интерфейс"): - gr.HTML("<h1> RVC V2 Huggingface Version </h1>") - gr.HTML("<h10> Huggingface версия созданная Clebersla </h10>") - gr.HTML("<h4> Если вы хотите использовать это помещение в частном порядке, я рекомендую продублировать его. 
</h4>") - - # Inference Preset Row - # with gr.Row(): - # mangio_preset = gr.Dropdown(label="Inference Preset", choices=sorted(get_presets())) - # mangio_preset_name_save = gr.Textbox( - # label="Your preset name" - # ) - # mangio_preset_save_btn = gr.Button('Save Preset', variant="primary") - - # Other RVC stuff - with gr.Row(): - sid0 = gr.Dropdown(label="1.Выберите свою модель.", choices=sorted(names), value=check_for_name()) - refresh_button = gr.Button("Обновить", variant="primary") - if check_for_name() != '': - get_vc(sorted(names)[0]) - vc_transform0 = gr.Number(label="Дополнительно: Здесь можно изменить высоту тона или оставить ее равной 0.", value=0) - #clean_button = gr.Button(i18n("卸载音色省显存"), variant="primary") - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("请选择说话人id"), - value=0, - visible=False, - interactive=True, - ) - #clean_button.click(fn=clean, inputs=[], outputs=[sid0]) - sid0.change( - fn=get_vc, - inputs=[sid0], - outputs=[spk_item], - ) - but0 = gr.Button("Конвертировать", variant="primary") - with gr.Row(): - with gr.Column(): - with gr.Row(): - dropbox = gr.File(label='Отправьте аудиозапись сюда и нажмите кнопку "Перезагрузка".') - with gr.Row(): - record_button=gr.Audio(source="microphone", label="Запись звука с микрофона", type="filepath") - with gr.Row(): - input_audio0 = gr.Dropdown( - label="2.Выберите аудиозапись.", - value="./audios/someguy.mp3", - choices=audio_files - ) - dropbox.upload(fn=save_to_wav2, inputs=[dropbox], outputs=[input_audio0]) - dropbox.upload(fn=change_choices2, inputs=[], outputs=[input_audio0]) - refresh_button2 = gr.Button("Обновить", variant="primary", size='sm') - record_button.change(fn=save_to_wav, inputs=[record_button], outputs=[input_audio0]) - record_button.change(fn=change_choices2, inputs=[], outputs=[input_audio0]) - with gr.Row(): - with gr.Accordion('Текст в речь', open=False): - with gr.Column(): - lang = gr.Radio(label='Китайский и японский языки в настоящее время не работают с ElevenLabs.',choices=['en','es','ru','uk','pl','fr','de','tr'], value='en') - api_box = gr.Textbox(label="Введите свой API-ключ для ElevenLabs или оставьте пустым, чтобы использовать GoogleTTS", value='', visible=False) - elevenid=gr.Dropdown(label="Голос:", choices=eleven_voices) - with gr.Column(): - tfs = gr.Textbox(label="Введите свой текст", interactive=True, value="This is a test.") - tts_button = gr.Button(value="Генерировать") - tts_button.click(fn=elevenTTS, inputs=[api_box,tfs, elevenid, lang], outputs=[record_button, input_audio0]) - with gr.Row(): - with gr.Accordion('Wav2Lip', open=False, visible=False): - with gr.Row(): - size = gr.Radio(label='Resolution:',choices=['Half','Full']) - face = gr.UploadButton("Upload A Character",type='file') - faces = gr.Dropdown(label="OR Choose one:", choices=['None','Ben Shapiro','Andrew Tate']) - with gr.Row(): - preview = gr.Textbox(label="Status:",interactive=False) - face.upload(fn=success_message,inputs=[face], outputs=[preview, faces]) - with gr.Row(): - animation = gr.Video(type='filepath') - refresh_button2.click(fn=change_choices2, inputs=[], outputs=[input_audio0, animation]) - with gr.Row(): - animate_button = gr.Button('Animate') - - with gr.Column(): - with gr.Accordion("Настройка индекса", open=False): - file_index1 = gr.Dropdown( - label="3. 
Путь к файлу added.index (если он не был найден автоматически).", - choices=get_indexes(), - value=get_index(), - interactive=True, - ) - sid0.change(fn=match_index, inputs=[sid0],outputs=[file_index1]) - refresh_button.click( - fn=change_choices, inputs=[], outputs=[sid0, file_index1] - ) - # file_big_npy1 = gr.Textbox( - # label=i18n("特征文件路径"), - # value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy", - # interactive=True, - # ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Соотношение поисковых функций (советую ставить на 0):"), - value=0.66, - interactive=True, - ) - vc_output2 = gr.Audio( - label="Выходные аудиоданные (нажмите на три точки в правом углу, чтобы загрузить)", - type='filepath', - interactive=False, - ) - animate_button.click(fn=mouth, inputs=[size, face, vc_output2, faces], outputs=[animation, preview]) - with gr.Accordion("Дополнительные настройки", open=False): - f0method0 = gr.Radio( - label='Необязательно: Изменить алгоритм извлечения высоты тона.\Методы извлечения отсортированы от "худшего качества" к "лучшему качеству".\mangio-crepe может быть лучше rmvpe или нет в случаях, когда "гладкость" более важна, но в целом rmvpe является лучшим.', - choices=["pm", "dio", "crepe-tiny", "mangio-crepe-tiny", "crepe", "harvest", "mangio-crepe", "rmvpe"], # Fork Feature. Add Crepe-Tiny - value="rmvpe", - interactive=True, - ) - - crepe_hop_length = gr.Slider( - minimum=1, - maximum=512, - step=1, - label="Mangio-Crepe Hop Length. Более высокие числа уменьшат вероятность экстремального изменения высоты тона, но более низкие числа увеличат точность. 64-192 - хороший диапазон для экспериментов.", - value=120, - interactive=True, - visible=False, - ) - f0method0.change(fn=whethercrepeornah, inputs=[f0method0], outputs=[crepe_hop_length]) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n("Если >=3: применить медианную фильтрацию к собранным результатам питча. Значение представляет собой радиус фильтрации и может уменьшить дыхание."), - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - visible=False - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Используйте огибающую громкости входа для замены или смешивания с огибающей громкости выхода. Чем ближе это соотношение к 1, тем больше используется огибающая выходного сигнала:"), - value=0.21, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n("Защита безголосых согласных и звуков дыхания для предотвращения артефактов, таких как разрывы в электронной музыке. Для отключения установите значение 0,5. 
Уменьшите значение для усиления защиты, но это может снизить точность индексирования:"), - value=0.33, - step=0.01, - interactive=True, - ) - formanting = gr.Checkbox( - value=bool(DoFormant), - label="[EXPERIMENTAL] Formant shift inference audio", - info="Used for male to female and vice-versa conversions", - interactive=True, - visible=False, - ) - - formant_preset = gr.Dropdown( - value='', - choices=get_fshift_presets(), - label="browse presets for formanting", - visible=bool(DoFormant), - ) - formant_refresh_button = gr.Button( - value='\U0001f504', - visible=bool(DoFormant), - variant='primary', - ) - #formant_refresh_button = ToolButton( elem_id='1') - #create_refresh_button(formant_preset, lambda: {"choices": formant_preset}, "refresh_list_shiftpresets") - - qfrency = gr.Slider( - value=Quefrency, - info="Default value is 1.0", - label="Quefrency for formant shifting", - minimum=0.0, - maximum=16.0, - step=0.1, - visible=bool(DoFormant), - interactive=True, - ) - tmbre = gr.Slider( - value=Timbre, - info="Default value is 1.0", - label="Timbre for formant shifting", - minimum=0.0, - maximum=16.0, - step=0.1, - visible=bool(DoFormant), - interactive=True, - ) - - formant_preset.change(fn=preset_apply, inputs=[formant_preset, qfrency, tmbre], outputs=[qfrency, tmbre]) - frmntbut = gr.Button("Apply", variant="primary", visible=bool(DoFormant)) - formanting.change(fn=formant_enabled,inputs=[formanting,qfrency,tmbre,frmntbut,formant_preset,formant_refresh_button],outputs=[formanting,qfrency,tmbre,frmntbut,formant_preset,formant_refresh_button]) - frmntbut.click(fn=formant_apply,inputs=[qfrency, tmbre], outputs=[qfrency, tmbre]) - formant_refresh_button.click(fn=update_fshift_presets,inputs=[formant_preset, qfrency, tmbre],outputs=[formant_preset, qfrency, tmbre]) - with gr.Row(): - vc_output1 = gr.Textbox("") - f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"), visible=False) - - but0.click( - vc_single, - [ - spk_item, - input_audio0, - vc_transform0, - f0_file, - f0method0, - file_index1, - # file_index2, - # file_big_npy1, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - crepe_hop_length - ], - [vc_output1, vc_output2], - ) - - with gr.Accordion("Batch Conversion",open=False, visible=False): - with gr.Row(): - with gr.Column(): - vc_transform1 = gr.Number( - label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0 - ) - opt_input = gr.Textbox(label=i18n("指定输出文件夹"), value="opt") - f0method1 = gr.Radio( - label=i18n( - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU" - ), - choices=["pm", "harvest", "crepe", "rmvpe"], - value="rmvpe", - interactive=True, - ) - filter_radius1 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index3 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=True, - ) - file_index4 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - refresh_button.click( - fn=lambda: change_choices()[1], - inputs=[], - outputs=file_index4, - ) - # file_big_npy2 = gr.Textbox( - # label=i18n("特征文件路径"), - # value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy", - # interactive=True, - # ) - index_rate2 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=1, - interactive=True, - ) - with gr.Column(): - resample_sr1 = gr.Slider( - minimum=0, - maximum=48000, - 
label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect1 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果" - ), - value=0.33, - step=0.01, - interactive=True, - ) - with gr.Column(): - dir_input = gr.Textbox( - label=i18n("输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)"), - value="E:\codes\py39\\test-20230416b\\todo-songs", - ) - inputs = gr.File( - file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹") - ) - with gr.Row(): - format1 = gr.Radio( - label=i18n("导出文件格式"), - choices=["wav", "flac", "mp3", "m4a"], - value="flac", - interactive=True, - ) - but1 = gr.Button(i18n("转换"), variant="primary") - vc_output3 = gr.Textbox(label=i18n("输出信息")) - but1.click( - vc_multi, - [ - spk_item, - dir_input, - opt_input, - inputs, - vc_transform1, - f0method1, - file_index3, - file_index4, - # file_big_npy2, - index_rate2, - filter_radius1, - resample_sr1, - rms_mix_rate1, - protect1, - format1, - crepe_hop_length, - ], - [vc_output3], - ) - but1.click(fn=lambda: easy_uploader.clear()) - with gr.TabItem("Загрузка моделей"): - with gr.Row(): - url=gr.Textbox(label="Введите URL-адрес модели:") - with gr.Row(): - model = gr.Textbox(label="Название модели:") - download_button=gr.Button("Загрузить") - with gr.Row(): - status_bar=gr.Textbox(label="") - download_button.click(fn=download_from_url, inputs=[url, model], outputs=[status_bar]) - with gr.Row(): - gr.Markdown( - """ - Made with ❤️ by [Alice Oliveira](https://github.com/aliceoq) | Hosted with ❤️ by [Mateus Elias](https://github.com/mateuseap) - """ - ) - - def has_two_files_in_pretrained_folder(): - pretrained_folder = "./pretrained/" - if not os.path.exists(pretrained_folder): - return False - - files_in_folder = os.listdir(pretrained_folder) - num_files = len(files_in_folder) - return num_files >= 2 - - if has_two_files_in_pretrained_folder(): - print("Pretrained weights are downloaded. Training tab enabled!\n-------------------------------") - with gr.TabItem("Train", visible=False): - with gr.Row(): - with gr.Column(): - exp_dir1 = gr.Textbox(label="Voice Name:", value="My-Voice") - sr2 = gr.Radio( - label=i18n("目标采样率"), - choices=["40k", "48k"], - value="40k", - interactive=True, - visible=False - ) - if_f0_3 = gr.Radio( - label=i18n("模型是否带音高指导(唱歌一定要, 语音可以不要)"), - choices=[True, False], - value=True, - interactive=True, - visible=False - ) - version19 = gr.Radio( - label="RVC version", - choices=["v1", "v2"], - value="v2", - interactive=True, - visible=False, - ) - np7 = gr.Slider( - minimum=0, - maximum=config.n_cpu, - step=1, - label="# of CPUs for data processing (Leave as it is)", - value=config.n_cpu, - interactive=True, - visible=True - ) - trainset_dir4 = gr.Textbox(label="Path to your dataset (audios, not zip):", value="./dataset") - easy_uploader = gr.Files(label='OR Drop your audios here. They will be uploaded in your dataset path above.',file_types=['audio']) - but1 = gr.Button("1. 
Process The Dataset", variant="primary") - info1 = gr.Textbox(label="Status (wait until it says 'end preprocess'):", value="") - easy_uploader.upload(fn=upload_to_dataset, inputs=[easy_uploader, trainset_dir4], outputs=[info1]) - but1.click( - preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1] - ) - with gr.Column(): - spk_id5 = gr.Slider( - minimum=0, - maximum=4, - step=1, - label=i18n("请指定说话人id"), - value=0, - interactive=True, - visible=False - ) - with gr.Accordion('GPU Settings', open=False, visible=False): - gpus6 = gr.Textbox( - label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"), - value=gpus, - interactive=True, - visible=False - ) - gpu_info9 = gr.Textbox(label=i18n("显卡信息"), value=gpu_info) - f0method8 = gr.Radio( - label=i18n( - "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢" - ), - choices=["harvest","crepe", "mangio-crepe", "rmvpe"], # Fork feature: Crepe on f0 extraction for training. - value="rmvpe", - interactive=True, - ) - - extraction_crepe_hop_length = gr.Slider( - minimum=1, - maximum=512, - step=1, - label=i18n("crepe_hop_length"), - value=128, - interactive=True, - visible=False, - ) - f0method8.change(fn=whethercrepeornah, inputs=[f0method8], outputs=[extraction_crepe_hop_length]) - but2 = gr.Button("2. Pitch Extraction", variant="primary") - info2 = gr.Textbox(label="Status(Check the Colab Notebook's cell output):", value="", max_lines=8) - but2.click( - extract_f0_feature, - [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19, extraction_crepe_hop_length], - [info2], - ) - with gr.Row(): - with gr.Column(): - total_epoch11 = gr.Slider( - minimum=1, - maximum=5000, - step=10, - label="Total # of training epochs (IF you choose a value too high, your model will sound horribly overtrained.):", - value=250, - interactive=True, - ) - butstop = gr.Button( - "Stop Training", - variant='primary', - visible=False, - ) - but3 = gr.Button("3. Train Model", variant="primary", visible=False) - - but3.click(fn=stoptraining, inputs=[gr.Number(value=0, visible=False)], outputs=[but3, butstop]) - butstop.click(fn=stoptraining, inputs=[gr.Number(value=1, visible=False)], outputs=[butstop, but3]) - - - but4 = gr.Button("4.Train Index", variant="primary") - info3 = gr.Textbox(label="Status(Check the Colab Notebook's cell output):", value="", max_lines=10) - with gr.Accordion("Training Preferences (You can leave these as they are)", open=False): - #gr.Markdown(value=i18n("step3: 填写训练设置, 开始训练模型和索引")) - with gr.Column(): - save_epoch10 = gr.Slider( - minimum=1, - maximum=200, - step=1, - label="Backup every X amount of epochs:", - value=10, - interactive=True, - ) - batch_size12 = gr.Slider( - minimum=1, - maximum=40, - step=1, - label="Batch Size (LEAVE IT unless you know what you're doing!):", - value=default_batch_size, - interactive=True, - ) - if_save_latest13 = gr.Checkbox( - label="Save only the latest '.ckpt' file to save disk space.", - value=True, - interactive=True, - ) - if_cache_gpu17 = gr.Checkbox( - label="Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement.", - value=False, - interactive=True, - ) - if_save_every_weights18 = gr.Checkbox( - label="Save a small final model to the 'weights' folder at each save point.", - value=True, - interactive=True, - ) - zip_model = gr.Button('5. 
Download Model') - zipped_model = gr.Files(label='Your Model and Index file can be downloaded here:') - zip_model.click(fn=zip_downloader, inputs=[exp_dir1], outputs=[zipped_model, info3]) - with gr.Group(): - with gr.Accordion("Base Model Locations:", open=False, visible=False): - pretrained_G14 = gr.Textbox( - label=i18n("加载预训练底模G路径"), - value="pretrained_v2/f0G40k.pth", - interactive=True, - ) - pretrained_D15 = gr.Textbox( - label=i18n("加载预训练底模D路径"), - value="pretrained_v2/f0D40k.pth", - interactive=True, - ) - gpus16 = gr.Textbox( - label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"), - value=gpus, - interactive=True, - ) - sr2.change( - change_sr2, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15, version19], - ) - version19.change( - change_version19, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15], - ) - if_f0_3.change( - change_f0, - [if_f0_3, sr2, version19], - [f0method8, pretrained_G14, pretrained_D15], - ) - but5 = gr.Button(i18n("一键训练"), variant="primary", visible=False) - but3.click( - click_train, - [ - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - ], - [ - info3, - butstop, - but3, - ], - ) - but4.click(train_index, [exp_dir1, version19], info3) - but5.click( - train1key, - [ - exp_dir1, - sr2, - if_f0_3, - trainset_dir4, - spk_id5, - np7, - f0method8, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - extraction_crepe_hop_length - ], - info3, - ) - - else: - print( - "Pretrained weights not downloaded. Disabling training tab.\n" - "Wondering how to train a voice? 
Visit here for the RVC model training guide: https://t.ly/RVC_Training_Guide\n" - "-------------------------------\n" - ) - - app.queue(concurrency_count=511, max_size=1022).launch(share=False, quiet=True) -#endregion \ No newline at end of file diff --git a/spaces/RO4DHOG/Ripper/README.md b/spaces/RO4DHOG/Ripper/README.md deleted file mode 100644 index ceab6d16bb80f5881dbf3264cac359176125928e..0000000000000000000000000000000000000000 --- a/spaces/RO4DHOG/Ripper/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ripper -emoji: 🚀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/blocks.py b/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/blocks.py deleted file mode 100644 index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/blocks.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand==True: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer4_rn = 
nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. 
- - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. 
- - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/requirements.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/requirements.py deleted file mode 100644 index f561f1f1e270666ccd74c9d61f78c9c24f5c4c99..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/requirements.py +++ /dev/null @@ -1,166 +0,0 @@ -from pip._vendor.packaging.specifiers import SpecifierSet -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name - -from pip._internal.req.req_install import InstallRequirement - -from .base import Candidate, CandidateLookup, Requirement, format_name - - -class ExplicitRequirement(Requirement): - def __init__(self, candidate: Candidate) -> None: - self.candidate = candidate - - def __str__(self) -> str: - return str(self.candidate) - - def __repr__(self) -> str: - return "{class_name}({candidate!r})".format( - class_name=self.__class__.__name__, - candidate=self.candidate, - ) - - @property - def project_name(self) -> NormalizedName: - # No need to canonicalize - the candidate did this - return self.candidate.project_name - - @property - def name(self) -> str: - # No need to canonicalize - the candidate did this - return self.candidate.name - - def format_for_error(self) -> str: - return self.candidate.format_for_error() - - def get_candidate_lookup(self) -> CandidateLookup: - return self.candidate, None - - def is_satisfied_by(self, candidate: Candidate) -> bool: - return candidate == self.candidate - - -class SpecifierRequirement(Requirement): - def __init__(self, ireq: InstallRequirement) -> None: - assert ireq.link is None, "This is a link, not a specifier" - self._ireq = ireq - self._extras = frozenset(ireq.extras) - - def __str__(self) -> str: - return str(self._ireq.req) - - def __repr__(self) -> str: - return "{class_name}({requirement!r})".format( - class_name=self.__class__.__name__, - requirement=str(self._ireq.req), - ) - - @property - def project_name(self) -> NormalizedName: - assert self._ireq.req, "Specifier-backed ireq is always PEP 508" - return canonicalize_name(self._ireq.req.name) - - @property - def name(self) -> str: - return format_name(self.project_name, self._extras) - - def format_for_error(self) -> str: - - # Convert comma-separated specifiers into "A, B, ..., F and G" - # This makes the specifier a bit more "human readable", without - # risking a change in meaning. (Hopefully! 
Not all edge cases have - # been checked) - parts = [s.strip() for s in str(self).split(",")] - if len(parts) == 0: - return "" - elif len(parts) == 1: - return parts[0] - - return ", ".join(parts[:-1]) + " and " + parts[-1] - - def get_candidate_lookup(self) -> CandidateLookup: - return None, self._ireq - - def is_satisfied_by(self, candidate: Candidate) -> bool: - assert candidate.name == self.name, ( - f"Internal issue: Candidate is not for this requirement " - f"{candidate.name} vs {self.name}" - ) - # We can safely always allow prereleases here since PackageFinder - # already implements the prerelease logic, and would have filtered out - # prerelease candidates if the user does not expect them. - assert self._ireq.req, "Specifier-backed ireq is always PEP 508" - spec = self._ireq.req.specifier - return spec.contains(candidate.version, prereleases=True) - - -class RequiresPythonRequirement(Requirement): - """A requirement representing Requires-Python metadata.""" - - def __init__(self, specifier: SpecifierSet, match: Candidate) -> None: - self.specifier = specifier - self._candidate = match - - def __str__(self) -> str: - return f"Python {self.specifier}" - - def __repr__(self) -> str: - return "{class_name}({specifier!r})".format( - class_name=self.__class__.__name__, - specifier=str(self.specifier), - ) - - @property - def project_name(self) -> NormalizedName: - return self._candidate.project_name - - @property - def name(self) -> str: - return self._candidate.name - - def format_for_error(self) -> str: - return str(self) - - def get_candidate_lookup(self) -> CandidateLookup: - if self.specifier.contains(self._candidate.version, prereleases=True): - return self._candidate, None - return None, None - - def is_satisfied_by(self, candidate: Candidate) -> bool: - assert candidate.name == self._candidate.name, "Not Python candidate" - # We can safely always allow prereleases here since PackageFinder - # already implements the prerelease logic, and would have filtered out - # prerelease candidates if the user does not expect them. - return self.specifier.contains(candidate.version, prereleases=True) - - -class UnsatisfiableRequirement(Requirement): - """A requirement that cannot be satisfied.""" - - def __init__(self, name: NormalizedName) -> None: - self._name = name - - def __str__(self) -> str: - return f"{self._name} (unavailable)" - - def __repr__(self) -> str: - return "{class_name}({name!r})".format( - class_name=self.__class__.__name__, - name=str(self._name), - ) - - @property - def project_name(self) -> NormalizedName: - return self._name - - @property - def name(self) -> str: - return self._name - - def format_for_error(self) -> str: - return str(self) - - def get_candidate_lookup(self) -> CandidateLookup: - return None, None - - def is_satisfied_by(self, candidate: Candidate) -> bool: - return False diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/msgpack/exceptions.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/msgpack/exceptions.py deleted file mode 100644 index d6d2615cfdd0b914d064cdf7eecd45761e4bcaf6..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/msgpack/exceptions.py +++ /dev/null @@ -1,48 +0,0 @@ -class UnpackException(Exception): - """Base class for some exceptions raised while unpacking. - - NOTE: unpack may raise exception other than subclass of - UnpackException. If you want to catch all error, catch - Exception instead. 
- """ - - -class BufferFull(UnpackException): - pass - - -class OutOfData(UnpackException): - pass - - -class FormatError(ValueError, UnpackException): - """Invalid msgpack format""" - - -class StackError(ValueError, UnpackException): - """Too nested""" - - -# Deprecated. Use ValueError instead -UnpackValueError = ValueError - - -class ExtraData(UnpackValueError): - """ExtraData is raised when there is trailing data. - - This exception is raised while only one-shot (not streaming) - unpack. - """ - - def __init__(self, unpacked, extra): - self.unpacked = unpacked - self.extra = extra - - def __str__(self): - return "unpack(b) received extra data." - - -# Deprecated. Use Exception instead to catch all exception during packing. -PackException = Exception -PackValueError = ValueError -PackOverflowError = OverflowError diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/table.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/table.py deleted file mode 100644 index 8fc28ef2f74cb25498314ddda14156921b6b6804..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/table.py +++ /dev/null @@ -1,996 +0,0 @@ -from dataclasses import dataclass, field, replace -from typing import ( - TYPE_CHECKING, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Sequence, - Tuple, - Union, -) - -from . import box, errors -from ._loop import loop_first_last, loop_last -from ._pick import pick_bool -from ._ratio import ratio_distribute, ratio_reduce -from .align import VerticalAlignMethod -from .jupyter import JupyterMixin -from .measure import Measurement -from .padding import Padding, PaddingDimensions -from .protocol import is_renderable -from .segment import Segment -from .style import Style, StyleType -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - JustifyMethod, - OverflowMethod, - RenderableType, - RenderResult, - ) - - -@dataclass -class Column: - """Defines a column within a ~Table. - - Args: - title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None. - caption (Union[str, Text], optional): The table caption rendered below. Defaults to None. - width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None. - min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None. - box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD. - safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True. - padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1). - collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False. - pad_edge (bool, optional): Enable padding of edge cells. Defaults to True. - expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False. - show_header (bool, optional): Show a header row. Defaults to True. - show_footer (bool, optional): Show a footer row. Defaults to False. - show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True. 
- show_lines (bool, optional): Draw lines between every row. Defaults to False. - leading (bool, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0. - style (Union[str, Style], optional): Default style for the table. Defaults to "none". - row_styles (List[Union, str], optional): Optional list of row styles, if more than one style is given then the styles will alternate. Defaults to None. - header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header". - footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer". - border_style (Union[str, Style], optional): Style of the border. Defaults to None. - title_style (Union[str, Style], optional): Style of the title. Defaults to None. - caption_style (Union[str, Style], optional): Style of the caption. Defaults to None. - title_justify (str, optional): Justify method for title. Defaults to "center". - caption_justify (str, optional): Justify method for caption. Defaults to "center". - highlight (bool, optional): Highlight cell contents (if str). Defaults to False. - """ - - header: "RenderableType" = "" - """RenderableType: Renderable for the header (typically a string)""" - - footer: "RenderableType" = "" - """RenderableType: Renderable for the footer (typically a string)""" - - header_style: StyleType = "" - """StyleType: The style of the header.""" - - footer_style: StyleType = "" - """StyleType: The style of the footer.""" - - style: StyleType = "" - """StyleType: The style of the column.""" - - justify: "JustifyMethod" = "left" - """str: How to justify text within the column ("left", "center", "right", or "full")""" - - vertical: "VerticalAlignMethod" = "top" - """str: How to vertically align content ("top", "middle", or "bottom")""" - - overflow: "OverflowMethod" = "ellipsis" - """str: Overflow method.""" - - width: Optional[int] = None - """Optional[int]: Width of the column, or ``None`` (default) to auto calculate width.""" - - min_width: Optional[int] = None - """Optional[int]: Minimum width of column, or ``None`` for no minimum. Defaults to None.""" - - max_width: Optional[int] = None - """Optional[int]: Maximum width of column, or ``None`` for no maximum. Defaults to None.""" - - ratio: Optional[int] = None - """Optional[int]: Ratio to use when calculating column width, or ``None`` (default) to adapt to column contents.""" - - no_wrap: bool = False - """bool: Prevent wrapping of text within the column. Defaults to ``False``.""" - - _index: int = 0 - """Index of column.""" - - _cells: List["RenderableType"] = field(default_factory=list) - - def copy(self) -> "Column": - """Return a copy of this Column.""" - return replace(self, _cells=[]) - - @property - def cells(self) -> Iterable["RenderableType"]: - """Get all cells in the column, not including header.""" - yield from self._cells - - @property - def flexible(self) -> bool: - """Check if this column is flexible.""" - return self.ratio is not None - - -@dataclass -class Row: - """Information regarding a row.""" - - style: Optional[StyleType] = None - """Style to apply to row.""" - - end_section: bool = False - """Indicated end of section, which will force a line beneath the row.""" - - -class _Cell(NamedTuple): - """A single cell in a table.""" - - style: StyleType - """Style to apply to cell.""" - renderable: "RenderableType" - """Cell renderable.""" - vertical: VerticalAlignMethod - """Cell vertical alignment.""" - - -class Table(JupyterMixin): - """A console renderable to draw a table. 
- - Args: - *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance. - title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None. - caption (Union[str, Text], optional): The table caption rendered below. Defaults to None. - width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None. - min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None. - box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD. - safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True. - padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1). - collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False. - pad_edge (bool, optional): Enable padding of edge cells. Defaults to True. - expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False. - show_header (bool, optional): Show a header row. Defaults to True. - show_footer (bool, optional): Show a footer row. Defaults to False. - show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True. - show_lines (bool, optional): Draw lines between every row. Defaults to False. - leading (bool, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0. - style (Union[str, Style], optional): Default style for the table. Defaults to "none". - row_styles (List[Union, str], optional): Optional list of row styles, if more than one style is given then the styles will alternate. Defaults to None. - header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header". - footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer". - border_style (Union[str, Style], optional): Style of the border. Defaults to None. - title_style (Union[str, Style], optional): Style of the title. Defaults to None. - caption_style (Union[str, Style], optional): Style of the caption. Defaults to None. - title_justify (str, optional): Justify method for title. Defaults to "center". - caption_justify (str, optional): Justify method for caption. Defaults to "center". - highlight (bool, optional): Highlight cell contents (if str). Defaults to False. 
- """ - - columns: List[Column] - rows: List[Row] - - def __init__( - self, - *headers: Union[Column, str], - title: Optional[TextType] = None, - caption: Optional[TextType] = None, - width: Optional[int] = None, - min_width: Optional[int] = None, - box: Optional[box.Box] = box.HEAVY_HEAD, - safe_box: Optional[bool] = None, - padding: PaddingDimensions = (0, 1), - collapse_padding: bool = False, - pad_edge: bool = True, - expand: bool = False, - show_header: bool = True, - show_footer: bool = False, - show_edge: bool = True, - show_lines: bool = False, - leading: int = 0, - style: StyleType = "none", - row_styles: Optional[Iterable[StyleType]] = None, - header_style: Optional[StyleType] = "table.header", - footer_style: Optional[StyleType] = "table.footer", - border_style: Optional[StyleType] = None, - title_style: Optional[StyleType] = None, - caption_style: Optional[StyleType] = None, - title_justify: "JustifyMethod" = "center", - caption_justify: "JustifyMethod" = "center", - highlight: bool = False, - ) -> None: - - self.columns: List[Column] = [] - self.rows: List[Row] = [] - self.title = title - self.caption = caption - self.width = width - self.min_width = min_width - self.box = box - self.safe_box = safe_box - self._padding = Padding.unpack(padding) - self.pad_edge = pad_edge - self._expand = expand - self.show_header = show_header - self.show_footer = show_footer - self.show_edge = show_edge - self.show_lines = show_lines - self.leading = leading - self.collapse_padding = collapse_padding - self.style = style - self.header_style = header_style or "" - self.footer_style = footer_style or "" - self.border_style = border_style - self.title_style = title_style - self.caption_style = caption_style - self.title_justify: "JustifyMethod" = title_justify - self.caption_justify: "JustifyMethod" = caption_justify - self.highlight = highlight - self.row_styles: Sequence[StyleType] = list(row_styles or []) - append_column = self.columns.append - for header in headers: - if isinstance(header, str): - self.add_column(header=header) - else: - header._index = len(self.columns) - append_column(header) - - @classmethod - def grid( - cls, - *headers: Union[Column, str], - padding: PaddingDimensions = 0, - collapse_padding: bool = True, - pad_edge: bool = False, - expand: bool = False, - ) -> "Table": - """Get a table with no lines, headers, or footer. - - Args: - *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance. - padding (PaddingDimensions, optional): Get padding around cells. Defaults to 0. - collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to True. - pad_edge (bool, optional): Enable padding around edges of table. Defaults to False. - expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False. - - Returns: - Table: A table instance. 
- """ - return cls( - *headers, - box=None, - padding=padding, - collapse_padding=collapse_padding, - show_header=False, - show_footer=False, - show_edge=False, - pad_edge=pad_edge, - expand=expand, - ) - - @property - def expand(self) -> bool: - """Setting a non-None self.width implies expand.""" - return self._expand or self.width is not None - - @expand.setter - def expand(self, expand: bool) -> None: - """Set expand.""" - self._expand = expand - - @property - def _extra_width(self) -> int: - """Get extra width to add to cell content.""" - width = 0 - if self.box and self.show_edge: - width += 2 - if self.box: - width += len(self.columns) - 1 - return width - - @property - def row_count(self) -> int: - """Get the current number of rows.""" - return len(self.rows) - - def get_row_style(self, console: "Console", index: int) -> StyleType: - """Get the current row style.""" - style = Style.null() - if self.row_styles: - style += console.get_style(self.row_styles[index % len(self.row_styles)]) - row_style = self.rows[index].style - if row_style is not None: - style += console.get_style(row_style) - return style - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> Measurement: - max_width = options.max_width - if self.width is not None: - max_width = self.width - if max_width < 0: - return Measurement(0, 0) - - extra_width = self._extra_width - max_width = sum( - self._calculate_column_widths( - console, options.update_width(max_width - extra_width) - ) - ) - _measure_column = self._measure_column - - measurements = [ - _measure_column(console, options.update_width(max_width), column) - for column in self.columns - ] - minimum_width = ( - sum(measurement.minimum for measurement in measurements) + extra_width - ) - maximum_width = ( - sum(measurement.maximum for measurement in measurements) + extra_width - if (self.width is None) - else self.width - ) - measurement = Measurement(minimum_width, maximum_width) - measurement = measurement.clamp(self.min_width) - return measurement - - @property - def padding(self) -> Tuple[int, int, int, int]: - """Get cell padding.""" - return self._padding - - @padding.setter - def padding(self, padding: PaddingDimensions) -> "Table": - """Set cell padding.""" - self._padding = Padding.unpack(padding) - return self - - def add_column( - self, - header: "RenderableType" = "", - footer: "RenderableType" = "", - *, - header_style: Optional[StyleType] = None, - footer_style: Optional[StyleType] = None, - style: Optional[StyleType] = None, - justify: "JustifyMethod" = "left", - vertical: "VerticalAlignMethod" = "top", - overflow: "OverflowMethod" = "ellipsis", - width: Optional[int] = None, - min_width: Optional[int] = None, - max_width: Optional[int] = None, - ratio: Optional[int] = None, - no_wrap: bool = False, - ) -> None: - """Add a column to the table. - - Args: - header (RenderableType, optional): Text or renderable for the header. - Defaults to "". - footer (RenderableType, optional): Text or renderable for the footer. - Defaults to "". - header_style (Union[str, Style], optional): Style for the header, or None for default. Defaults to None. - footer_style (Union[str, Style], optional): Style for the footer, or None for default. Defaults to None. - style (Union[str, Style], optional): Style for the column cells, or None for default. Defaults to None. - justify (JustifyMethod, optional): Alignment for cells. Defaults to "left". 
- vertical (VerticalAlignMethod, optional): Vertical alignment, one of "top", "middle", or "bottom". Defaults to "top". - overflow (OverflowMethod): Overflow method: "crop", "fold", "ellipsis". Defaults to "ellipsis". - width (int, optional): Desired width of column in characters, or None to fit to contents. Defaults to None. - min_width (Optional[int], optional): Minimum width of column, or ``None`` for no minimum. Defaults to None. - max_width (Optional[int], optional): Maximum width of column, or ``None`` for no maximum. Defaults to None. - ratio (int, optional): Flexible ratio for the column (requires ``Table.expand`` or ``Table.width``). Defaults to None. - no_wrap (bool, optional): Set to ``True`` to disable wrapping of this column. - """ - - column = Column( - _index=len(self.columns), - header=header, - footer=footer, - header_style=header_style or "", - footer_style=footer_style or "", - style=style or "", - justify=justify, - vertical=vertical, - overflow=overflow, - width=width, - min_width=min_width, - max_width=max_width, - ratio=ratio, - no_wrap=no_wrap, - ) - self.columns.append(column) - - def add_row( - self, - *renderables: Optional["RenderableType"], - style: Optional[StyleType] = None, - end_section: bool = False, - ) -> None: - """Add a row of renderables. - - Args: - *renderables (None or renderable): Each cell in a row must be a renderable object (including str), - or ``None`` for a blank cell. - style (StyleType, optional): An optional style to apply to the entire row. Defaults to None. - end_section (bool, optional): End a section and draw a line. Defaults to False. - - Raises: - errors.NotRenderableError: If you add something that can't be rendered. - """ - - def add_cell(column: Column, renderable: "RenderableType") -> None: - column._cells.append(renderable) - - cell_renderables: List[Optional["RenderableType"]] = list(renderables) - - columns = self.columns - if len(cell_renderables) < len(columns): - cell_renderables = [ - *cell_renderables, - *[None] * (len(columns) - len(cell_renderables)), - ] - for index, renderable in enumerate(cell_renderables): - if index == len(columns): - column = Column(_index=index) - for _ in self.rows: - add_cell(column, Text("")) - self.columns.append(column) - else: - column = columns[index] - if renderable is None: - add_cell(column, "") - elif is_renderable(renderable): - add_cell(column, renderable) - else: - raise errors.NotRenderableError( - f"unable to render {type(renderable).__name__}; a string or other renderable object is required" - ) - self.rows.append(Row(style=style, end_section=end_section)) - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - - if not self.columns: - yield Segment("\n") - return - - max_width = options.max_width - if self.width is not None: - max_width = self.width - - extra_width = self._extra_width - widths = self._calculate_column_widths( - console, options.update_width(max_width - extra_width) - ) - table_width = sum(widths) + extra_width - - render_options = options.update( - width=table_width, highlight=self.highlight, height=None - ) - - def render_annotation( - text: TextType, style: StyleType, justify: "JustifyMethod" = "center" - ) -> "RenderResult": - render_text = ( - console.render_str(text, style=style, highlight=False) - if isinstance(text, str) - else text - ) - return console.render( - render_text, options=render_options.update(justify=justify) - ) - - if self.title: - yield from render_annotation( - self.title, - 
style=Style.pick_first(self.title_style, "table.title"), - justify=self.title_justify, - ) - yield from self._render(console, render_options, widths) - if self.caption: - yield from render_annotation( - self.caption, - style=Style.pick_first(self.caption_style, "table.caption"), - justify=self.caption_justify, - ) - - def _calculate_column_widths( - self, console: "Console", options: "ConsoleOptions" - ) -> List[int]: - """Calculate the widths of each column, including padding, not including borders.""" - max_width = options.max_width - columns = self.columns - width_ranges = [ - self._measure_column(console, options, column) for column in columns - ] - widths = [_range.maximum or 1 for _range in width_ranges] - get_padding_width = self._get_padding_width - extra_width = self._extra_width - if self.expand: - ratios = [col.ratio or 0 for col in columns if col.flexible] - if any(ratios): - fixed_widths = [ - 0 if column.flexible else _range.maximum - for _range, column in zip(width_ranges, columns) - ] - flex_minimum = [ - (column.width or 1) + get_padding_width(column._index) - for column in columns - if column.flexible - ] - flexible_width = max_width - sum(fixed_widths) - flex_widths = ratio_distribute(flexible_width, ratios, flex_minimum) - iter_flex_widths = iter(flex_widths) - for index, column in enumerate(columns): - if column.flexible: - widths[index] = fixed_widths[index] + next(iter_flex_widths) - table_width = sum(widths) - - if table_width > max_width: - widths = self._collapse_widths( - widths, - [(column.width is None and not column.no_wrap) for column in columns], - max_width, - ) - table_width = sum(widths) - # last resort, reduce columns evenly - if table_width > max_width: - excess_width = table_width - max_width - widths = ratio_reduce(excess_width, [1] * len(widths), widths, widths) - table_width = sum(widths) - - width_ranges = [ - self._measure_column(console, options.update_width(width), column) - for width, column in zip(widths, columns) - ] - widths = [_range.maximum or 0 for _range in width_ranges] - - if (table_width < max_width and self.expand) or ( - self.min_width is not None and table_width < (self.min_width - extra_width) - ): - _max_width = ( - max_width - if self.min_width is None - else min(self.min_width - extra_width, max_width) - ) - pad_widths = ratio_distribute(_max_width - table_width, widths) - widths = [_width + pad for _width, pad in zip(widths, pad_widths)] - - return widths - - @classmethod - def _collapse_widths( - cls, widths: List[int], wrapable: List[bool], max_width: int - ) -> List[int]: - """Reduce widths so that the total is under max_width. - - Args: - widths (List[int]): List of widths. - wrapable (List[bool]): List of booleans that indicate if a column may shrink. - max_width (int): Maximum width to reduce to. - - Returns: - List[int]: A new list of widths. 
- """ - total_width = sum(widths) - excess_width = total_width - max_width - if any(wrapable): - while total_width and excess_width > 0: - max_column = max( - width for width, allow_wrap in zip(widths, wrapable) if allow_wrap - ) - second_max_column = max( - width if allow_wrap and width != max_column else 0 - for width, allow_wrap in zip(widths, wrapable) - ) - column_difference = max_column - second_max_column - ratios = [ - (1 if (width == max_column and allow_wrap) else 0) - for width, allow_wrap in zip(widths, wrapable) - ] - if not any(ratios) or not column_difference: - break - max_reduce = [min(excess_width, column_difference)] * len(widths) - widths = ratio_reduce(excess_width, ratios, max_reduce, widths) - - total_width = sum(widths) - excess_width = total_width - max_width - return widths - - def _get_cells( - self, console: "Console", column_index: int, column: Column - ) -> Iterable[_Cell]: - """Get all the cells with padding and optional header.""" - - collapse_padding = self.collapse_padding - pad_edge = self.pad_edge - padding = self.padding - any_padding = any(padding) - - first_column = column_index == 0 - last_column = column_index == len(self.columns) - 1 - - _padding_cache: Dict[Tuple[bool, bool], Tuple[int, int, int, int]] = {} - - def get_padding(first_row: bool, last_row: bool) -> Tuple[int, int, int, int]: - cached = _padding_cache.get((first_row, last_row)) - if cached: - return cached - top, right, bottom, left = padding - - if collapse_padding: - if not first_column: - left = max(0, left - right) - if not last_row: - bottom = max(0, top - bottom) - - if not pad_edge: - if first_column: - left = 0 - if last_column: - right = 0 - if first_row: - top = 0 - if last_row: - bottom = 0 - _padding = (top, right, bottom, left) - _padding_cache[(first_row, last_row)] = _padding - return _padding - - raw_cells: List[Tuple[StyleType, "RenderableType"]] = [] - _append = raw_cells.append - get_style = console.get_style - if self.show_header: - header_style = get_style(self.header_style or "") + get_style( - column.header_style - ) - _append((header_style, column.header)) - cell_style = get_style(column.style or "") - for cell in column.cells: - _append((cell_style, cell)) - if self.show_footer: - footer_style = get_style(self.footer_style or "") + get_style( - column.footer_style - ) - _append((footer_style, column.footer)) - - if any_padding: - _Padding = Padding - for first, last, (style, renderable) in loop_first_last(raw_cells): - yield _Cell( - style, - _Padding(renderable, get_padding(first, last)), - getattr(renderable, "vertical", None) or column.vertical, - ) - else: - for (style, renderable) in raw_cells: - yield _Cell( - style, - renderable, - getattr(renderable, "vertical", None) or column.vertical, - ) - - def _get_padding_width(self, column_index: int) -> int: - """Get extra width from padding.""" - _, pad_right, _, pad_left = self.padding - if self.collapse_padding: - if column_index > 0: - pad_left = max(0, pad_left - pad_right) - return pad_left + pad_right - - def _measure_column( - self, - console: "Console", - options: "ConsoleOptions", - column: Column, - ) -> Measurement: - """Get the minimum and maximum width of the column.""" - - max_width = options.max_width - if max_width < 1: - return Measurement(0, 0) - - padding_width = self._get_padding_width(column._index) - - if column.width is not None: - # Fixed width column - return Measurement( - column.width + padding_width, column.width + padding_width - ).with_maximum(max_width) - # Flexible column, we 
need to measure contents - min_widths: List[int] = [] - max_widths: List[int] = [] - append_min = min_widths.append - append_max = max_widths.append - get_render_width = Measurement.get - for cell in self._get_cells(console, column._index, column): - _min, _max = get_render_width(console, options, cell.renderable) - append_min(_min) - append_max(_max) - - measurement = Measurement( - max(min_widths) if min_widths else 1, - max(max_widths) if max_widths else max_width, - ).with_maximum(max_width) - measurement = measurement.clamp( - None if column.min_width is None else column.min_width + padding_width, - None if column.max_width is None else column.max_width + padding_width, - ) - return measurement - - def _render( - self, console: "Console", options: "ConsoleOptions", widths: List[int] - ) -> "RenderResult": - table_style = console.get_style(self.style or "") - - border_style = table_style + console.get_style(self.border_style or "") - _column_cells = ( - self._get_cells(console, column_index, column) - for column_index, column in enumerate(self.columns) - ) - row_cells: List[Tuple[_Cell, ...]] = list(zip(*_column_cells)) - _box = ( - self.box.substitute( - options, safe=pick_bool(self.safe_box, console.safe_box) - ) - if self.box - else None - ) - _box = _box.get_plain_headed_box() if _box and not self.show_header else _box - - new_line = Segment.line() - - columns = self.columns - show_header = self.show_header - show_footer = self.show_footer - show_edge = self.show_edge - show_lines = self.show_lines - leading = self.leading - - _Segment = Segment - if _box: - box_segments = [ - ( - _Segment(_box.head_left, border_style), - _Segment(_box.head_right, border_style), - _Segment(_box.head_vertical, border_style), - ), - ( - _Segment(_box.foot_left, border_style), - _Segment(_box.foot_right, border_style), - _Segment(_box.foot_vertical, border_style), - ), - ( - _Segment(_box.mid_left, border_style), - _Segment(_box.mid_right, border_style), - _Segment(_box.mid_vertical, border_style), - ), - ] - if show_edge: - yield _Segment(_box.get_top(widths), border_style) - yield new_line - else: - box_segments = [] - - get_row_style = self.get_row_style - get_style = console.get_style - - for index, (first, last, row_cell) in enumerate(loop_first_last(row_cells)): - header_row = first and show_header - footer_row = last and show_footer - row = ( - self.rows[index - show_header] - if (not header_row and not footer_row) - else None - ) - max_height = 1 - cells: List[List[List[Segment]]] = [] - if header_row or footer_row: - row_style = Style.null() - else: - row_style = get_style( - get_row_style(console, index - 1 if show_header else index) - ) - for width, cell, column in zip(widths, row_cell, columns): - render_options = options.update( - width=width, - justify=column.justify, - no_wrap=column.no_wrap, - overflow=column.overflow, - height=None, - ) - lines = console.render_lines( - cell.renderable, - render_options, - style=get_style(cell.style) + row_style, - ) - max_height = max(max_height, len(lines)) - cells.append(lines) - - row_height = max(len(cell) for cell in cells) - - def align_cell( - cell: List[List[Segment]], - vertical: "VerticalAlignMethod", - width: int, - style: Style, - ) -> List[List[Segment]]: - if header_row: - vertical = "bottom" - elif footer_row: - vertical = "top" - - if vertical == "top": - return _Segment.align_top(cell, width, row_height, style) - elif vertical == "middle": - return _Segment.align_middle(cell, width, row_height, style) - return 
_Segment.align_bottom(cell, width, row_height, style) - - cells[:] = [ - _Segment.set_shape( - align_cell( - cell, - _cell.vertical, - width, - get_style(_cell.style) + row_style, - ), - width, - max_height, - ) - for width, _cell, cell, column in zip(widths, row_cell, cells, columns) - ] - - if _box: - if last and show_footer: - yield _Segment( - _box.get_row(widths, "foot", edge=show_edge), border_style - ) - yield new_line - left, right, _divider = box_segments[0 if first else (2 if last else 1)] - - # If the column divider is whitespace also style it with the row background - divider = ( - _divider - if _divider.text.strip() - else _Segment( - _divider.text, row_style.background_style + _divider.style - ) - ) - for line_no in range(max_height): - if show_edge: - yield left - for last_cell, rendered_cell in loop_last(cells): - yield from rendered_cell[line_no] - if not last_cell: - yield divider - if show_edge: - yield right - yield new_line - else: - for line_no in range(max_height): - for rendered_cell in cells: - yield from rendered_cell[line_no] - yield new_line - if _box and first and show_header: - yield _Segment( - _box.get_row(widths, "head", edge=show_edge), border_style - ) - yield new_line - end_section = row and row.end_section - if _box and (show_lines or leading or end_section): - if ( - not last - and not (show_footer and index >= len(row_cells) - 2) - and not (show_header and header_row) - ): - if leading: - yield _Segment( - _box.get_row(widths, "mid", edge=show_edge) * leading, - border_style, - ) - else: - yield _Segment( - _box.get_row(widths, "row", edge=show_edge), border_style - ) - yield new_line - - if _box and show_edge: - yield _Segment(_box.get_bottom(widths), border_style) - yield new_line - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - from pip._vendor.rich.highlighter import ReprHighlighter - from pip._vendor.rich.table import Table as Table - - from ._timer import timer - - with timer("Table render"): - table = Table( - title="Star Wars Movies", - caption="Rich example table", - caption_justify="right", - ) - - table.add_column( - "Released", header_style="bright_cyan", style="cyan", no_wrap=True - ) - table.add_column("Title", style="magenta") - table.add_column("Box Office", justify="right", style="green") - - table.add_row( - "Dec 20, 2019", - "Star Wars: The Rise of Skywalker", - "$952,110,690", - ) - table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347") - table.add_row( - "Dec 15, 2017", - "Star Wars Ep. 
V111: The Last Jedi", - "$1,332,539,889", - style="on black", - end_section=True, - ) - table.add_row( - "Dec 16, 2016", - "Rogue One: A Star Wars Story", - "$1,332,439,889", - ) - - def header(text: str) -> None: - console.print() - console.rule(highlight(text)) - console.print() - - console = Console() - highlight = ReprHighlighter() - header("Example Table") - console.print(table, justify="center") - - table.expand = True - header("expand=True") - console.print(table) - - table.width = 50 - header("width=50") - - console.print(table, justify="center") - - table.width = None - table.expand = False - table.row_styles = ["dim", "none"] - header("row_styles=['dim', 'none']") - - console.print(table, justify="center") - - table.width = None - table.expand = False - table.row_styles = ["dim", "none"] - table.leading = 1 - header("leading=1, row_styles=['dim', 'none']") - console.print(table, justify="center") - - table.width = None - table.expand = False - table.row_styles = ["dim", "none"] - table.show_lines = True - table.leading = 0 - header("show_lines=True, row_styles=['dim', 'none']") - console.print(table, justify="center") diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/utils/local_correlation.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/utils/local_correlation.py deleted file mode 100644 index 603ab524333c29fbc284a73065847645f3100847..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/utils/local_correlation.py +++ /dev/null @@ -1,68 +0,0 @@ -import torch -import torch.nn.functional as F - -device = "cuda" if torch.cuda.is_available() else "cpu" - - -def local_correlation( - feature0, - feature1, - local_radius, - padding_mode="zeros", - flow=None, - sample_mode="bilinear", -): - r = local_radius - K = (2 * r + 1) ** 2 - B, c, h, w = feature0.size() - feature0 = feature0.half() - feature1 = feature1.half() - corr = torch.empty((B, K, h, w), device=feature0.device, dtype=feature0.dtype) - if flow is None: - # If flow is None, assume feature0 and feature1 are aligned - coords = torch.meshgrid( - ( - torch.linspace(-1 + 1 / h, 1 - 1 / h, h, device=device), - torch.linspace(-1 + 1 / w, 1 - 1 / w, w, device=device), - ) - ) - coords = torch.stack((coords[1], coords[0]), dim=-1)[None].expand(B, h, w, 2) - else: - coords = flow.permute(0, 2, 3, 1) # If using flow, sample around flow target. 
- local_window = torch.meshgrid( - ( - torch.linspace( - -2 * local_radius / h, 2 * local_radius / h, 2 * r + 1, device=device - ), - torch.linspace( - -2 * local_radius / w, 2 * local_radius / w, 2 * r + 1, device=device - ), - ) - ) - local_window = ( - torch.stack((local_window[1], local_window[0]), dim=-1)[None] - .expand(1, 2 * r + 1, 2 * r + 1, 2) - .reshape(1, (2 * r + 1) ** 2, 2) - ) - for _ in range(B): - with torch.no_grad(): - local_window_coords = ( - (coords[_, :, :, None] + local_window[:, None, None]) - .reshape(1, h, w * (2 * r + 1) ** 2, 2) - .float() - ) - window_feature = F.grid_sample( - feature1[_ : _ + 1].float(), - local_window_coords, - padding_mode=padding_mode, - align_corners=False, - mode=sample_mode, # - ) - window_feature = window_feature.reshape(c, h, w, (2 * r + 1) ** 2) - corr[_] = ( - (feature0[_, ..., None] / (c**0.5) * window_feature) - .sum(dim=0) - .permute(2, 0, 1) - ) - torch.cuda.empty_cache() - return corr diff --git a/spaces/Ritvik19/SudokuNet/sudokunet.py b/spaces/Ritvik19/SudokuNet/sudokunet.py deleted file mode 100644 index 5bfd8b97cb6549b8225371f10e1fe3d09d91faf7..0000000000000000000000000000000000000000 --- a/spaces/Ritvik19/SudokuNet/sudokunet.py +++ /dev/null @@ -1,55 +0,0 @@ -from huggingface_hub import from_pretrained_keras -from tensorflow.keras.utils import to_categorical -import numpy as np -import streamlit as st - -class SudokuSolver: - """Utility class for the pipeline which solves the Sudoku Puzzles - To solve Sudoku Puzzles - - initialize a Sudoku Solver object - - create an array of dimension (n, 9, 9) - where n is the number of puzzles you want to solve - also, replace the blank items with a zero. - - then, just call the sudoku solver object on the array - - Args: - model_name (str): THe name of the model to be used - - """ - - def __init__(self, model_name): - self.model = self.load_model(model_name) - - @st.cache(allow_output_mutation=True) - def load_model(self, model_name): - return from_pretrained_keras(model_name) - - def __call__(self, grids): - """This function solves quizzes. - It will fill blanks one after the other. Each time a digit is filled, - the new grid will be fed again to the solver to predict the next digit. - again and again, until there is no more blank - - Args: - grids (np.array), shape (?, 9, 9): Batch of quizzes to solve - - Returns: - grids (np.array), shape (?, 9, 9): Solved quizzes. 
- """ - grids = grids.copy() - for _ in range((grids == 0).sum((1, 2)).max()): - preds = np.array(self.model.predict(to_categorical(grids))) - probs = preds.max(2).T - values = preds.argmax(2).T + 1 - zeros = (grids == 0).reshape((grids.shape[0], 81)) - - for grid, prob, value, zero in zip(grids, probs, values, zeros): - if any(zero): - where = np.where(zero)[0] - confidence_position = where[prob[zero].argmax()] - confidence_value = value[confidence_position] - grid.flat[confidence_position] = confidence_value - return grids \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/htc.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/htc.py deleted file mode 100644 index d9efdf420fa7373f7f1d116f8d97836d73b457bf..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/htc.py +++ /dev/null @@ -1,15 +0,0 @@ -from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class HybridTaskCascade(CascadeRCNN): - """Implementation of `HTC <https://arxiv.org/abs/1901.07518>`_""" - - def __init__(self, **kwargs): - super(HybridTaskCascade, self).__init__(**kwargs) - - @property - def with_semantic(self): - """bool: whether the detector has a semantic head""" - return self.roi_head.with_semantic diff --git a/spaces/Rominn/vits-uma-genshin-honkai/app.py b/spaces/Rominn/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 226a0aaef0aeb880b1d3d52cc9c88d65eb867c06..0000000000000000000000000000000000000000 --- a/spaces/Rominn/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,123 +0,0 @@ -# coding=utf-8 -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 100: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s 
in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "# <center> VITS语音在线合成demo\n" - "<div align='center'>主要有赛马娘,原神中文,原神日语,崩坏3的音色</div>" - '<div align="center"><a><font color="#dd0000">结果有随机性,语调可能很奇怪,可多次生成取最佳效果</font></a></div>' - '<div align="center"><a><font color="#dd0000">标点符号会影响生成的结果</font></a></div>' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3]) - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() diff --git a/spaces/SRDdev/HingMaskedLM/app.py b/spaces/SRDdev/HingMaskedLM/app.py deleted file mode 100644 index 6719b1616098b991522ab927080653dc0940bae4..0000000000000000000000000000000000000000 --- a/spaces/SRDdev/HingMaskedLM/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline - -tokenizer = AutoTokenizer.from_pretrained("SRDdev/HingMaskedLM") -model = AutoModelForMaskedLM.from_pretrained("SRDdev/HingMaskedLM") - -def fill_mask(sentence): - fill = pipeline('fill-mask', model=model, tokenizer=tokenizer) - result = fill(sentence) - sequence = result[0]['sequence'] - return sequence - - -inputs = gr.inputs.Textbox(default="Aaj Pune me <mask> kesa he ?") -output = gr.outputs.Textbox(label="Filled Mask") -description = "Hinglish MaskedLM is a variant of MaskedLM for the Indian English dialect, also known as 
Hinglish. It is trained on a corpus of text written in Hinglish, with the goal of improving the performance of NLP models for this dialect" -app = gr.Interface(fn=fill_mask, inputs = inputs, outputs=output,theme="grass",description=description ,title="Hinglish Masked Language Model") -app.launch() \ No newline at end of file diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/dependency_versions_check.py b/spaces/Salesforce/EDICT/my_half_diffusers/dependency_versions_check.py deleted file mode 100644 index bbf863222a52fd60a15a95be0fbd6391acd3ba6d..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/dependency_versions_check.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import sys - -from .dependency_versions_table import deps -from .utils.versions import require_version, require_version_core - - -# define which module versions we always want to check at run time -# (usually the ones defined in `install_requires` in setup.py) -# -# order specific notes: -# - tqdm must be checked before tokenizers - -pkgs_to_check_at_runtime = "python tqdm regex requests packaging filelock numpy tokenizers".split() -if sys.version_info < (3, 7): - pkgs_to_check_at_runtime.append("dataclasses") -if sys.version_info < (3, 8): - pkgs_to_check_at_runtime.append("importlib_metadata") - -for pkg in pkgs_to_check_at_runtime: - if pkg in deps: - if pkg == "tokenizers": - # must be loaded here, or else tqdm check may fail - from .utils import is_tokenizers_available - - if not is_tokenizers_available(): - continue # not required, check version only if installed - - require_version_core(deps[pkg]) - else: - raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py") - - -def dep_version_check(pkg, hint=None): - require_version(deps[pkg], hint) diff --git a/spaces/SaulLu/test-demo/join-main-training.md b/spaces/SaulLu/test-demo/join-main-training.md deleted file mode 100644 index 9889d50cc78398d9e048b9cc680118dfe444a672..0000000000000000000000000000000000000000 --- a/spaces/SaulLu/test-demo/join-main-training.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -layout: page -title: "Join main training" -permalink: /join-main-training/ ---- \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2.py b/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2.py deleted file mode 100644 index 40d7d778af9d379a41a647c257e066877e321d25..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2.py +++ /dev/null @@ -1,240 +0,0 @@ -""" - Copyright (c) 2023, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" -import logging -import os -import time -import datetime - -import torch -import torch.nn as nn -import torch.distributed as dist -import torch.nn.functional as F - -import lavis.common.dist_utils as dist_utils -from lavis.common.dist_utils import download_cached_file -from lavis.common.utils import is_url -from lavis.common.logger import MetricLogger -from lavis.models.base_model import BaseModel -from lavis.models.blip2_models.Qformer import BertConfig, BertLMHeadModel -from lavis.models.eva_vit import create_eva_vit_g -from transformers import BertTokenizer - - -class Blip2Base(BaseModel): - @classmethod - def init_tokenizer(cls): - tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - tokenizer.add_special_tokens({"bos_token": "[DEC]"}) - return tokenizer - - @classmethod - def init_Qformer(cls, num_query_token, vision_width): - encoder_config = BertConfig.from_pretrained("bert-base-uncased") - encoder_config.encoder_width = vision_width - # insert cross-attention layer every other block - encoder_config.add_cross_attention = True - encoder_config.cross_attention_freq = 2 - encoder_config.query_length = num_query_token - Qformer = BertLMHeadModel.from_pretrained( - "bert-base-uncased", config=encoder_config - ) - query_tokens = nn.Parameter( - torch.zeros(1, num_query_token, encoder_config.hidden_size) - ) - query_tokens.data.normal_(mean=0.0, std=encoder_config.initializer_range) - return Qformer, query_tokens - - @classmethod - def init_TemporalQFormer(cls, num_of_frame): - encoder_config = BertConfig.from_pretrained("bert-base-uncased") - encoder_config.query_length = num_of_frame - Qformer = BertLMHeadModel.from_pretrained( - "bert-base-uncased", config=encoder_config - ) - query_tokens = nn.Parameter( - torch.zeros(1, num_of_frame, 1, encoder_config.hidden_size) - ) - query_tokens.data.normal_(mean=0.0, std=encoder_config.initializer_range) - return Qformer, query_tokens - - @classmethod - def init_vision_encoder( - cls, img_size, drop_path_rate, use_grad_checkpoint, precision - ): - visual_encoder = create_eva_vit_g( - img_size, drop_path_rate, use_grad_checkpoint, precision - ) - ln_vision = LayerNorm(visual_encoder.num_features) - return visual_encoder, ln_vision - - @classmethod - def init_vision_encoder_sevila( - cls, img_size, drop_path_rate, use_grad_checkpoint, precision - ): - visual_encoder = create_eva_vit_g( - img_size, drop_path_rate, use_grad_checkpoint, precision - ) - ln_vision = LayerNorm(visual_encoder.num_features) - ln_vision2 = LayerNorm(visual_encoder.num_features) - return visual_encoder, ln_vision, ln_vision2 - - def load_from_pretrained(self, url_or_filename): - if is_url(url_or_filename): - cached_file = download_cached_file( - url_or_filename, check_hash=False, progress=True - ) - checkpoint = torch.load(cached_file, map_location="cpu") - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location="cpu") - else: - raise RuntimeError("checkpoint url or path is invalid") - - state_dict = checkpoint["model"] - #print('state_dict',state_dict.keys()) - msg = self.load_state_dict(state_dict, strict=False) - - logging.info("Missing keys {}".format(msg.missing_keys)) - logging.info("load checkpoint from %s" % url_or_filename) - - return msg - - def load_qformer_loc(self): - url_or_filename = '/nas-hdd/shoubin/pretrained_model/hub/checkpoints/qformer_loc.pth' - 
checkpoint = torch.load(url_or_filename, map_location="cpu") - state_dict = checkpoint["model"] - msg = self.load_state_dict(state_dict, strict=False) - logging.info("load checkpoint from %s" % url_or_filename) - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -def compute_sim_matrix(model, data_loader, **kwargs): - k_test = kwargs.pop("k_test") - - metric_logger = MetricLogger(delimiter=" ") - header = "Evaluation:" - - logging.info("Computing features for evaluation...") - start_time = time.time() - - texts = data_loader.dataset.text - num_text = len(texts) - text_bs = 256 - text_ids = [] - text_embeds = [] - text_atts = [] - for i in range(0, num_text, text_bs): - text = texts[i : min(num_text, i + text_bs)] - text_input = model.tokenizer( - text, - padding="max_length", - truncation=True, - max_length=35, - return_tensors="pt", - ).to(model.device) - text_feat = model.forward_text(text_input) - text_embed = F.normalize(model.text_proj(text_feat)) - text_embeds.append(text_embed) - text_ids.append(text_input.input_ids) - text_atts.append(text_input.attention_mask) - - text_embeds = torch.cat(text_embeds, dim=0) - text_ids = torch.cat(text_ids, dim=0) - text_atts = torch.cat(text_atts, dim=0) - - vit_feats = [] - image_embeds = [] - for samples in data_loader: - image = samples["image"] - - image = image.to(model.device) - image_feat, vit_feat = model.forward_image(image) - image_embed = model.vision_proj(image_feat) - image_embed = F.normalize(image_embed, dim=-1) - - vit_feats.append(vit_feat.cpu()) - image_embeds.append(image_embed) - - vit_feats = torch.cat(vit_feats, dim=0) - image_embeds = torch.cat(image_embeds, dim=0) - - sims_matrix = [] - for image_embed in image_embeds: - sim_q2t = image_embed @ text_embeds.t() - sim_i2t, _ = sim_q2t.max(0) - sims_matrix.append(sim_i2t) - sims_matrix = torch.stack(sims_matrix, dim=0) - - score_matrix_i2t = torch.full( - (len(data_loader.dataset.image), len(texts)), -100.0 - ).to(model.device) - - num_tasks = dist_utils.get_world_size() - rank = dist_utils.get_rank() - step = sims_matrix.size(0) // num_tasks + 1 - start = rank * step - end = min(sims_matrix.size(0), start + step) - - for i, sims in enumerate( - metric_logger.log_every(sims_matrix[start:end], 50, header) - ): - topk_sim, topk_idx = sims.topk(k=k_test, dim=0) - image_inputs = vit_feats[start + i].repeat(k_test, 1, 1).to(model.device) - score = model.compute_itm( - image_inputs=image_inputs, - text_ids=text_ids[topk_idx], - text_atts=text_atts[topk_idx], - ).float() - score_matrix_i2t[start + i, topk_idx] = score + topk_sim - - sims_matrix = sims_matrix.t() - score_matrix_t2i = torch.full( - (len(texts), len(data_loader.dataset.image)), -100.0 - ).to(model.device) - - step = sims_matrix.size(0) // num_tasks + 1 - start = rank * step - end = min(sims_matrix.size(0), start + step) - - for i, sims in enumerate( - metric_logger.log_every(sims_matrix[start:end], 50, header) - ): - topk_sim, topk_idx = sims.topk(k=k_test, dim=0) - image_inputs = vit_feats[topk_idx.cpu()].to(model.device) - score = model.compute_itm( - image_inputs=image_inputs, - text_ids=text_ids[start + i].repeat(k_test, 1), - text_atts=text_atts[start + i].repeat(k_test, 1), - 
).float() - score_matrix_t2i[start + i, topk_idx] = score + topk_sim - - if dist_utils.is_dist_avail_and_initialized(): - dist.barrier() - torch.distributed.all_reduce( - score_matrix_i2t, op=torch.distributed.ReduceOp.SUM - ) - torch.distributed.all_reduce( - score_matrix_t2i, op=torch.distributed.ReduceOp.SUM - ) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - logging.info("Evaluation time {}".format(total_time_str)) - - return score_matrix_i2t.cpu().numpy(), score_matrix_t2i.cpu().numpy() diff --git a/spaces/SmileyTatsu/Bleh/greeting.md b/spaces/SmileyTatsu/Bleh/greeting.md deleted file mode 100644 index c9866428cbcd95d0b853f4b41e3627484539aa2e..0000000000000000000000000000000000000000 --- a/spaces/SmileyTatsu/Bleh/greeting.md +++ /dev/null @@ -1,2 +0,0 @@ -Pass: In the manga chapter "Tatsumaki's Day Off", what was the hero's name of the one who defeated all the monsters? -All together, first letter of each word capitalized \ No newline at end of file diff --git a/spaces/Sreezx/Sentzi/lib/lib.py b/spaces/Sreezx/Sentzi/lib/lib.py deleted file mode 100644 index cc681b6ac4c96d189ada78aa33a9eb5c9e9fc018..0000000000000000000000000000000000000000 --- a/spaces/Sreezx/Sentzi/lib/lib.py +++ /dev/null @@ -1,95 +0,0 @@ -# Main module for all sentzi classes - -# `Sentzi` is a web app that generates a visualized output of product reviews through sentiment analysis. - -from textblob import TextBlob -import typing - -# json and csv lib -import json -import csv - -class Sentiment: - """ Represents a sentiment object """ - - __emojiDic__ = { - 'positive' : [ - '🙂','😊','😀','👍','😄' ,'😁','😍','🥰','😘','😗' - ], - 'negative' : [ - '😞','😒','😔','👎','😟','😠','😡','😥','😧','❌' - ], - 'neutral' : [ - '😐','😶','😑' - ] - } - def __init__(self, text : str): - """ - Initializes a Sentiment object with the given text . - - - Note that the accuracy increases as number of words increases. - - `Args` - ------ - `text` to analyse . 
- """ - self.text = text - - # get sentiment - blob = TextBlob(text) - - # Analyze sentiment - sentiment = blob.sentiment - polarity = sentiment.polarity - - self.polarity = polarity - - def __repr__(self) -> str: - """ Returns a string representation of the `Sentiment` object """ - return f"""Sentiment( - score : {self.polarity} - text : {self.text} - )""" - - def get(self) -> typing.Dict[str , typing.Any]: - - # check is its positive negative or neutral - if self.polarity < 0: - # negative - data = { - 'score' : self.polarity, - 'level' : 'negative', - 'emojis' : Sentiment.__emojiDic__['negative'] - } - elif self.polarity > 0: - # positive - data = { - 'score' : self.polarity, - 'level' : 'positive', - 'emojis' : Sentiment.__emojiDic__['positive'] - } - else: - # neutral - data = { - 'score' : self.polarity, - 'level' : 'neutral', - 'emojis' : Sentiment.__emojiDic__['neutral'] - } - - return data - -def writeCSV(header : list[str],dataList : list[list[str]]): - with open(r"temp.csv", 'w', newline='') as file: - writer = csv.writer(file) - writer.writerow(header) - # write multiple rows - writer.writerows(dataList) # write content - -def writeJSON(data : dict): - with open(r"temp.json","w") as json_file: - json.dump( - data, - json_file, - indent=4, - sort_keys=True - ) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/tests/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/tests/__init__.py deleted file mode 100644 index f751f68a9de539bb48e0468f2a1c6d29a2031bd3..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/tests/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# encoding: utf-8 -__docformat__ = "restructuredtext en" -#------------------------------------------------------------------------------- -# Copyright (C) 2005 Fernando Perez <fperez@colorado.edu> -# Brian E Granger <ellisonbg@gmail.com> -# Benjamin Ragan-Kelley <benjaminrk@gmail.com> -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. 
-#------------------------------------------------------------------------------- diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/bytes/audio_bytes.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/bytes/audio_bytes.py deleted file mode 100644 index 23c6f49a4d0662f7f7a00e6316bd5838a3351d62..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/bytes/audio_bytes.py +++ /dev/null @@ -1,91 +0,0 @@ -import io -from typing import TYPE_CHECKING, Any, Tuple, Type, TypeVar - -import numpy as np -from pydantic import parse_obj_as -from pydantic.validators import bytes_validator - -from docarray.typing.abstract_type import AbstractType -from docarray.typing.proto_register import _register_proto -from docarray.typing.tensor.audio import AudioNdArray -from docarray.utils._internal.misc import import_library - -if TYPE_CHECKING: - from pydantic.fields import BaseConfig, ModelField - - from docarray.proto import NodeProto - -T = TypeVar('T', bound='AudioBytes') - - -@_register_proto(proto_type_name='audio_bytes') -class AudioBytes(bytes, AbstractType): - """ - Bytes that store an audio and that can be load into an Audio tensor - """ - - @classmethod - def validate( - cls: Type[T], - value: Any, - field: 'ModelField', - config: 'BaseConfig', - ) -> T: - value = bytes_validator(value) - return cls(value) - - @classmethod - def from_protobuf(cls: Type[T], pb_msg: T) -> T: - return parse_obj_as(cls, pb_msg) - - def _to_node_protobuf(self: T) -> 'NodeProto': - from docarray.proto import NodeProto - - return NodeProto(blob=self, type=self._proto_type_name) - - def load(self) -> Tuple[AudioNdArray, int]: - """ - Load the Audio from the [`AudioBytes`][docarray.typing.AudioBytes] into an - [`AudioNdArray`][docarray.typing.AudioNdArray]. - - --- - - ```python - from typing import Optional - from docarray import BaseDoc - from docarray.typing import AudioBytes, AudioNdArray, AudioUrl - - - class MyAudio(BaseDoc): - url: AudioUrl - tensor: Optional[AudioNdArray] - bytes_: Optional[AudioBytes] - frame_rate: Optional[float] - - - doc = MyAudio(url='https://www.kozco.com/tech/piano2.wav') - doc.bytes_ = doc.url.load_bytes() - doc.tensor, doc.frame_rate = doc.bytes_.load() - - # Note this is equivalent to do - - doc.tensor, doc.frame_rate = doc.url.load() - - assert isinstance(doc.tensor, AudioNdArray) - ``` - - --- - :return: tuple of an [`AudioNdArray`][docarray.typing.AudioNdArray] representing the - audio bytes content, and an integer representing the frame rate. 
- """ - pydub = import_library('pydub', raise_error=True) # noqa: F841 - from pydub import AudioSegment - - segment = AudioSegment.from_file(io.BytesIO(self)) - - # Convert to float32 using NumPy - samples = np.array(segment.get_array_of_samples()) - - # Normalise float32 array so that values are between -1.0 and +1.0 - samples_norm = samples / 2 ** (segment.sample_width * 8 - 1) - return parse_obj_as(AudioNdArray, samples_norm), segment.frame_rate diff --git a/spaces/TEnngal/bingo/src/components/chat-header.tsx b/spaces/TEnngal/bingo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( - <div className="flex flex-col items-center justify-center"> - <Image alt="logo" src={LogoIcon} width={60}/> - <div className="mt-8 text-4xl font-bold">欢迎使用新必应</div> - <div className="mt-4 mb-8 text-lg">由 AI 支持的网页版 Copilot</div> - </div> - ) -} diff --git a/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/plot_latent_sensitivity_bias_risk.py b/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/plot_latent_sensitivity_bias_risk.py deleted file mode 100644 index 118ac2b9dc0e8ddbf2c20d0866d19f7a9fe5b1f2..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/plot_latent_sensitivity_bias_risk.py +++ /dev/null @@ -1,109 +0,0 @@ -import os - -import matplotlib.pyplot as plt -import matplotlib as mpl -from pytorch_lightning.utilities.seed import seed_everything -import torch -from torch.utils.data import DataLoader - -from risk_biased.scene_dataset.scene import RandomSceneParams -from matplotlib.patches import Ellipse -from risk_biased.utils.callbacks import DrawCallbackParams -from risk_biased.utils.config_argparse import config_argparse - -from risk_biased.utils.load_model import load_from_config - -from scripts.scripts_utils.sample_batch_utils import repeat_and_reshape_all - - -def draw_latent_biased( - model: torch.nn.Module, device, loader: DataLoader, params: DrawCallbackParams -): - n_samples = 9 - # ped_trajs = scene.get_pedestrians_trajectories() - ( - normalized_input, - mask_input, - fut, - mask_fut, - mask_loss, - map, - mask_map, - offset, - ego_past, - ego_fut, - ) = repeat_and_reshape_all(next(iter(loader)), n_samples) - n_scenes_samples, n_agents, n_steps, features = normalized_input.shape - n_scenes = n_scenes_samples // n_samples - - risk_level = torch.ones(n_scenes, n_samples, n_agents) * torch.linspace( - 0, 1, n_samples - ).unsqueeze(0).unsqueeze(-1) - - y_sample, biased_mu, biased_log_std = model( - normalized_input, - mask_input, - map, - mask_map, - offset=offset, - x_ego=ego_past, - y_ego=ego_fut, - risk_level=risk_level.view(-1, n_agents), - ) - - biased_mu = biased_mu.view(n_scenes, n_agents, n_samples, 2).cpu().detach().numpy() - biased_std = ( - biased_log_std.view(n_scenes, n_agents, n_samples, 2) - .exp() - .cpu() - .detach() - .numpy() - ) - risk_level = risk_level.permute(0, 2, 1).cpu().detach().numpy() - - fig, ax = plt.subplots(3, 3) - cmap = plt.get_cmap("RdBu_r") - for s in range(n_samples): - for i in range(n_scenes): - for a in range(n_agents): - ii = s // 3 - jj = s % 3 - ellipse = Ellipse( - (biased_mu[i, a, s, 0], biased_mu[i, a, s, 1]), - width=biased_std[i, a, s, 0] * 2, - height=biased_std[i, a, s, 1] * 
2, - facecolor="none", - edgecolor=(*cmap(risk_level[i, a, s])[:-1], 0.05), - ) - ax[ii][jj].add_patch(ellipse) - ax[ii][jj].axis([-3, 3, -3, 3]) - - norm = mpl.colors.Normalize(vmin=0, vmax=1, clip=True) - sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm) - fig.colorbar(sm, ax=ax.ravel().tolist(), label="Desired risk levels") - plt.show() - - -if __name__ == "__main__": - # Draws 9 plots in the latent space. In each plot a constant risk level is used (0 for plot 0 and 1 for plot 8). - # Each plot superposes ellipses representing the encoded distributions for a batch of x input. - # Each plot represents the latent distribution for the same batch of input but at different risk levels. - working_dir = os.path.dirname(os.path.realpath(__file__)) - config_path = os.path.join( - working_dir, "..", "..", "risk_biased", "config", "learning_config.py" - ) - cfg = config_argparse(config_path) - - cfg.batch_size = cfg.datasets_sizes["val"] - model, loaders, cfg = load_from_config(cfg) - assert ( - cfg.latent_dim == 2 - and "The latent dimension of the model must be exactly 2 to be plotted (no dimensionality reduction capabilities)" - ) - scene_params = RandomSceneParams.from_config(cfg) - scene = loaders.val_dataloader() - draw_params = DrawCallbackParams.from_config(cfg) - if cfg.seed is not None: - seed_everything(cfg.seed) - - draw_latent_biased(model.model, model.device, scene, draw_params) diff --git a/spaces/TabPFN/TabPFNPrediction/TabPFN/notebook_utils.py b/spaces/TabPFN/TabPFNPrediction/TabPFN/notebook_utils.py deleted file mode 100644 index c085c98f9865f993163bc1cfd29fd895f6a27b27..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNPrediction/TabPFN/notebook_utils.py +++ /dev/null @@ -1,32 +0,0 @@ -import os -from pathlib import Path - -import io -import torch -import pickle - -def print_models(base_path, model_string): - print(model_string) - - for i in range(80): - for e in range(50): - exists = Path(os.path.join(base_path, f'models_diff/prior_diff_real_checkpoint{model_string}_n_{i}_epoch_{e}.cpkt')).is_file() - if exists: - print(os.path.join(base_path, f'models_diff/prior_diff_real_checkpoint{model_string}_n_{i}_epoch_{e}.cpkt')) - print() - -class CustomUnpickler(pickle.Unpickler): - def find_class(self, module, name): - if name == 'Manager': - from settings import Manager - return Manager - try: - return self.find_class_cpu(module, name) - except: - return None - - def find_class_cpu(self, module, name): - if module == 'torch.storage' and name == '_load_from_bytes': - return lambda b: torch.load(io.BytesIO(b), map_location='cpu') - else: - return super().find_class(module, name) \ No newline at end of file diff --git a/spaces/Tetel/chat/SydneyGPT/conversation_style.py b/spaces/Tetel/chat/SydneyGPT/conversation_style.py deleted file mode 100644 index 94aa4c1bc77e57c86598df6438f29fb899c53694..0000000000000000000000000000000000000000 --- a/spaces/Tetel/chat/SydneyGPT/conversation_style.py +++ /dev/null @@ -1,68 +0,0 @@ -from enum import Enum - -try: - from typing import Literal, Union -except ImportError: - from typing_extensions import Literal -from typing import Optional - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "iycapbing", - "iyxapbing", - "uquopt", - "authsndfdbk", - "refpromptv1", - "enuaug", - "dagslnv1nr", - "dv3sugg", - "iyoloxap", - "iyoloneutral", - "h3imaginative", - "clgalileo", - "eredirecturl", - "gencontentv3", - 
"travelansgnd", - "nojbfedge", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "h3precise", - "clgalileo", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] diff --git a/spaces/TheStinger/Ilaria_RVC/utils.py b/spaces/TheStinger/Ilaria_RVC/utils.py deleted file mode 100644 index 62be8d03a8e8b839f8747310ef0ec0e82fb8ff0a..0000000000000000000000000000000000000000 --- a/spaces/TheStinger/Ilaria_RVC/utils.py +++ /dev/null @@ -1,151 +0,0 @@ -import ffmpeg -import numpy as np - -# import praatio -# import praatio.praat_scripts -import os -import sys - -import random - -import csv - -platform_stft_mapping = { - "linux": "stftpitchshift", - "darwin": "stftpitchshift", - "win32": "stftpitchshift.exe", -} - -stft = platform_stft_mapping.get(sys.platform) -# praatEXE = join('.',os.path.abspath(os.getcwd()) + r"\Praat.exe") - - -def CSVutil(file, rw, type, *args): - if type == "formanting": - if rw == "r": - with open(file) as fileCSVread: - csv_reader = list(csv.reader(fileCSVread)) - return ( - (csv_reader[0][0], csv_reader[0][1], csv_reader[0][2]) - if csv_reader is not None - else (lambda: exec('raise ValueError("No data")'))() - ) - else: - if args: - doformnt = args[0] - else: - doformnt = False - qfr = args[1] if len(args) > 1 else 1.0 - tmb = args[2] if len(args) > 2 else 1.0 - with open(file, rw, newline="") as fileCSVwrite: - csv_writer = csv.writer(fileCSVwrite, delimiter=",") - csv_writer.writerow([doformnt, qfr, tmb]) - elif type == "stop": - stop = args[0] if args else False - with open(file, rw, newline="") as fileCSVwrite: - csv_writer = csv.writer(fileCSVwrite, delimiter=",") - csv_writer.writerow([stop]) - - -def load_audio(file, sr, DoFormant, Quefrency, Timbre): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() diff --git a/spaces/ThomasSimonini/SnowballFight/index.html b/spaces/ThomasSimonini/SnowballFight/index.html deleted file mode 100644 index aaa7243870b7c2b230d6d4fdff2f4e4aca8d3a94..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/SnowballFight/index.html +++ /dev/null @@ -1,138 +0,0 @@ -<!DOCTYPE html> -<html lang="en-us"> - <head> - <meta charset="utf-8"> - <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"> - <title>Snowball Fight Beta - - - - - - - - - - - - - -
        - -
        - - - - - - - - - diff --git a/spaces/Tony1810/FootballPosition/README.md b/spaces/Tony1810/FootballPosition/README.md deleted file mode 100644 index a1ac4ddca4f4392860172412c366029845ba9419..0000000000000000000000000000000000000000 --- a/spaces/Tony1810/FootballPosition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FootballPosition -emoji: 🏢 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vegecken/sovits4dzl/inference/infer_tool_grad.py b/spaces/Vegecken/sovits4dzl/inference/infer_tool_grad.py deleted file mode 100644 index b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/inference/infer_tool_grad.py +++ /dev/null @@ -1,160 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path -import io -import librosa -import maad -import numpy as np -from inference import slicer -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = utils.get_hubert_model() - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = 
utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2) - audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - return audio, audio.shape[-1] - - def inference(self,srcaudio,chara,tran,slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate,audio) diff --git a/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/README.md b/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/README.md deleted file mode 100644 index 2fd2870bef9c579ab20b33fdd09aea238aeb1f1d..0000000000000000000000000000000000000000 --- a/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: sayashi/vits-uma-genshin-honkai ---- diff --git a/spaces/XzJosh/Lumi-Bert-VITS2/text/english.py b/spaces/XzJosh/Lumi-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Lumi-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = 
os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/data_utils.py b/spaces/XzJosh/ShanBao-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os 
-import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, 
self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, 
:language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git 
a/spaces/XzJosh/maimai-Bert-VITS2/data_utils.py b/spaces/XzJosh/maimai-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/maimai-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if 
self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = 
text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if 
self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/YUANAI/DiffspeechResearch/utils/commons/tensor_utils.py b/spaces/YUANAI/DiffspeechResearch/utils/commons/tensor_utils.py deleted file mode 100644 index be4b69a4f135b95fcf18618668ed909314f24871..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/utils/commons/tensor_utils.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import torch.distributed as dist - - -def reduce_tensors(metrics): - new_metrics = {} - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - dist.all_reduce(v) - v = v / dist.get_world_size() - if type(v) is dict: - v = reduce_tensors(v) - new_metrics[k] = v - return new_metrics - - -def tensors_to_scalars(tensors): - if isinstance(tensors, torch.Tensor): - tensors = tensors.item() - return tensors - elif isinstance(tensors, dict): - new_tensors = {} - for k, v in tensors.items(): - v = tensors_to_scalars(v) - new_tensors[k] = v - return new_tensors - elif isinstance(tensors, list): - return [tensors_to_scalars(v) for v in tensors] - else: - return tensors - - -def tensors_to_np(tensors): - if isinstance(tensors, dict): - new_np = {} - for k, v in tensors.items(): - if isinstance(v, torch.Tensor): - v = v.cpu().numpy() - if type(v) is dict: - v = tensors_to_np(v) - new_np[k] = v - elif isinstance(tensors, list): - new_np = [] - for v in tensors: - if isinstance(v, torch.Tensor): - v = v.cpu().numpy() - if type(v) is dict: - v = tensors_to_np(v) - new_np.append(v) - elif isinstance(tensors, torch.Tensor): - v = tensors - if isinstance(v, torch.Tensor): - v = v.cpu().numpy() - if type(v) is dict: - v = tensors_to_np(v) - new_np = v - else: - raise Exception(f'tensors_to_np does not support type {type(tensors)}.') - return new_np - - -def move_to_cpu(tensors): - ret = {} - for k, v in tensors.items(): - if isinstance(v, torch.Tensor): - v = v.cpu() - if type(v) is dict: - v = move_to_cpu(v) - ret[k] = v - return ret - - -def move_to_cuda(batch, gpu_id=0): - # base case: object can be directly moved using `cuda` or `to` - if callable(getattr(batch, 'cuda', None)): - return batch.cuda(gpu_id, non_blocking=True) - elif callable(getattr(batch, 'to', None)): - return batch.to(torch.device('cuda', gpu_id), non_blocking=True) - elif isinstance(batch, list): - for i, x in enumerate(batch): - batch[i] = move_to_cuda(x, gpu_id) - return batch - elif isinstance(batch, tuple): - batch = list(batch) - for i, x in enumerate(batch): - batch[i] = move_to_cuda(x, gpu_id) - return tuple(batch) - elif isinstance(batch, dict): - for k, v in batch.items(): - batch[k] = move_to_cuda(v, gpu_id) - return batch - return batch diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/spec_gen.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/spec_gen.py deleted file mode 100644 index 85ad3188ac93aaef7b1b1d7dbbe47d358f4b0da6..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/spec_gen.py +++ /dev/null @@ -1,22 +0,0 @@ -from data_utils import TextAudioSpeakerLoader, EvalDataLoader -import json -from tqdm import tqdm - -from utils import HParams - -config_path = 'configs/config.json' -with open(config_path, "r") as f: - data = f.read() -config = json.loads(data) -hps = HParams(**config) - -train_dataset = 
TextAudioSpeakerLoader("filelists/train.txt", hps) -test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps) -eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps) - -for _ in tqdm(train_dataset): - pass -for _ in tqdm(eval_dataset): - pass -for _ in tqdm(test_dataset): - pass \ No newline at end of file diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/resnet_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/resnet_flax.py deleted file mode 100644 index 632780378ee0e8fa49404ecae470146250270ce5..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/resnet_flax.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import flax.linen as nn -import jax -import jax.numpy as jnp - - -class FlaxUpsample2D(nn.Module): - out_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv = nn.Conv( - self.out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - def __call__(self, hidden_states): - batch, height, width, channels = hidden_states.shape - hidden_states = jax.image.resize( - hidden_states, - shape=(batch, height * 2, width * 2, channels), - method="nearest", - ) - hidden_states = self.conv(hidden_states) - return hidden_states - - -class FlaxDownsample2D(nn.Module): - out_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv = nn.Conv( - self.out_channels, - kernel_size=(3, 3), - strides=(2, 2), - padding=((1, 1), (1, 1)), # padding="VALID", - dtype=self.dtype, - ) - - def __call__(self, hidden_states): - # pad = ((0, 0), (0, 1), (0, 1), (0, 0)) # pad height and width dim - # hidden_states = jnp.pad(hidden_states, pad_width=pad) - hidden_states = self.conv(hidden_states) - return hidden_states - - -class FlaxResnetBlock2D(nn.Module): - in_channels: int - out_channels: int = None - dropout_prob: float = 0.0 - use_nin_shortcut: bool = None - dtype: jnp.dtype = jnp.float32 - - def setup(self): - out_channels = self.in_channels if self.out_channels is None else self.out_channels - - self.norm1 = nn.GroupNorm(num_groups=32, epsilon=1e-5) - self.conv1 = nn.Conv( - out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - self.time_emb_proj = nn.Dense(out_channels, dtype=self.dtype) - - self.norm2 = nn.GroupNorm(num_groups=32, epsilon=1e-5) - self.dropout = nn.Dropout(self.dropout_prob) - self.conv2 = nn.Conv( - out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - use_nin_shortcut = self.in_channels != out_channels if self.use_nin_shortcut is None else self.use_nin_shortcut - - self.conv_shortcut = None - if use_nin_shortcut: - self.conv_shortcut = nn.Conv( - out_channels, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - - def 
__call__(self, hidden_states, temb, deterministic=True): - residual = hidden_states - hidden_states = self.norm1(hidden_states) - hidden_states = nn.swish(hidden_states) - hidden_states = self.conv1(hidden_states) - - temb = self.time_emb_proj(nn.swish(temb)) - temb = jnp.expand_dims(jnp.expand_dims(temb, 1), 1) - hidden_states = hidden_states + temb - - hidden_states = self.norm2(hidden_states) - hidden_states = nn.swish(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - residual = self.conv_shortcut(residual) - - return hidden_states + residual diff --git a/spaces/Yiqin/ChatVID/model/fastchat/client/__init__.py b/spaces/Yiqin/ChatVID/model/fastchat/client/__init__.py deleted file mode 100644 index ff1f3f146bb9eee8644c0223aca34506a0b714fa..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/client/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from fastchat.client.api import ChatCompletion, set_baseurl - -__all__ = ["ChatCompletion", "set_baseurl"] diff --git a/spaces/Yunshansongbai/SVC-Nahida/vdecoder/hifigan/utils.py b/spaces/Yunshansongbai/SVC-Nahida/vdecoder/hifigan/utils.py deleted file mode 100644 index 1063b035664d2c61bb03a607db7f0ef5e1b99e07..0000000000000000000000000000000000000000 --- a/spaces/Yunshansongbai/SVC-Nahida/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,67 +0,0 @@ -import glob -import os -import matplotlib -import paddle -from paddle.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight = paddle.normal(mean, std, m.weight.shape) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = paddle.load(filepath)  # paddle.load takes no map_location argument; `device` is unused - print("Done.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - paddle.save(obj, filepath) - print("Done.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than latest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] diff --git a/spaces/Zaixi/ICLR_FLAG/utils/docking.py b/spaces/Zaixi/ICLR_FLAG/utils/docking.py deleted file mode 100644 index f8f3ff7388c3b23f147694cb97eaf1fafeb434d5..0000000000000000000000000000000000000000 --- a/spaces/Zaixi/ICLR_FLAG/utils/docking.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import subprocess 
-import random -import string -from easydict import EasyDict -from rdkit import Chem -from rdkit.Chem.rdForceFieldHelpers import UFFOptimizeMolecule - -from .reconstruct import reconstruct_from_generated - - -def get_random_id(length=30): - letters = string.ascii_lowercase - return ''.join(random.choice(letters) for i in range(length)) - - -def load_pdb(path): - with open(path, 'r') as f: - return f.read() - - -def parse_qvina_outputs(docked_sdf_path): - - suppl = Chem.SDMolSupplier(docked_sdf_path) - results = [] - for i, mol in enumerate(suppl): - if mol is None: - continue - line = mol.GetProp('REMARK').splitlines()[0].split()[2:] - results.append(EasyDict({ - 'rdmol': mol, - 'mode_id': i, - 'affinity': float(line[0]), - 'rmsd_lb': float(line[1]), - 'rmsd_ub': float(line[2]), - })) - - return results - -class BaseDockingTask(object): - - def __init__(self, pdb_block, ligand_rdmol): - super().__init__() - self.pdb_block = pdb_block - self.ligand_rdmol = ligand_rdmol - - def run(self): - raise NotImplementedError() - - def get_results(self): - raise NotImplementedError() - - -class QVinaDockingTask(BaseDockingTask): - - @classmethod - def from_generated_data(cls, data, protein_root='./data/crossdocked', **kwargs): - protein_fn = os.path.join( - os.path.dirname(data.ligand_filename), - os.path.basename(data.ligand_filename)[:10] + '.pdb' - ) - protein_path = os.path.join(protein_root, protein_fn) - with open(protein_path, 'r') as f: - pdb_block = f.read() - ligand_rdmol = reconstruct_from_generated(data) - return cls(pdb_block, ligand_rdmol, **kwargs) - - @classmethod - def from_original_data(cls, data, ligand_root='./data/crossdocked_pocket10', protein_root='./data/crossdocked', **kwargs): - protein_fn = os.path.join( - os.path.dirname(data.ligand_filename), - os.path.basename(data.ligand_filename)[:10] + '.pdb' - ) - protein_path = os.path.join(protein_root, protein_fn) - with open(protein_path, 'r') as f: - pdb_block = f.read() - - ligand_path = os.path.join(ligand_root, data.ligand_filename) - ligand_rdmol = next(iter(Chem.SDMolSupplier(ligand_path))) - return cls(pdb_block, ligand_rdmol, **kwargs) - - def __init__(self, pdb_block, ligand_rdmol, conda_env='adt', tmp_dir='./tmp', use_uff=True, center=None): - super().__init__(pdb_block, ligand_rdmol) - self.conda_env = conda_env - self.tmp_dir = os.path.realpath(tmp_dir) - os.makedirs(tmp_dir, exist_ok=True) - - self.task_id = get_random_id() - self.receptor_id = self.task_id + '_receptor' - self.ligand_id = self.task_id + '_ligand' - - self.receptor_path = os.path.join(self.tmp_dir, self.receptor_id + '.pdb') - self.ligand_path = os.path.join(self.tmp_dir, self.ligand_id + '.sdf') - - with open(self.receptor_path, 'w') as f: - f.write(pdb_block) - - ligand_rdmol = Chem.AddHs(ligand_rdmol, addCoords=True) - if use_uff: - UFFOptimizeMolecule(ligand_rdmol) - sdf_writer = Chem.SDWriter(self.ligand_path) - sdf_writer.write(ligand_rdmol) - sdf_writer.close() - self.ligand_rdmol = ligand_rdmol - - pos = ligand_rdmol.GetConformer(0).GetPositions() - if center is None: - self.center = (pos.max(0) + pos.min(0)) / 2 - else: - self.center = center - - self.proc = None - self.results = None - self.output = None - self.docked_sdf_path = None - - def run(self, exhaustiveness=16): - commands = """ -eval "$(conda shell.bash hook)" -conda activate {env} -cd {tmp} -# Prepare receptor (PDB->PDBQT) -prepare_receptor4.py -r {receptor_id}.pdb -# Prepare ligand -obabel {ligand_id}.sdf -O{ligand_id}.pdbqt -qvina2.1 \ - --receptor {receptor_id}.pdbqt \ - 
--ligand {ligand_id}.pdbqt \ - --center_x {center_x:.4f} \ - --center_y {center_y:.4f} \ - --center_z {center_z:.4f} \ - --size_x 20 --size_y 20 --size_z 20 \ - --exhaustiveness {exhaust} -obabel {ligand_id}_out.pdbqt -O{ligand_id}_out.sdf -h - """.format( - receptor_id = self.receptor_id, - ligand_id = self.ligand_id, - env = self.conda_env, - tmp = self.tmp_dir, - exhaust = exhaustiveness, - center_x = self.center[0], - center_y = self.center[1], - center_z = self.center[2], - ) - - self.docked_sdf_path = os.path.join(self.tmp_dir, '%s_out.sdf' % self.ligand_id) - - self.proc = subprocess.Popen( - '/bin/bash', - shell=False, - stdin=subprocess.PIPE, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE - ) - - self.proc.stdin.write(commands.encode('utf-8')) - self.proc.stdin.close() - - # return commands - - def run_sync(self): - self.run() - while self.get_results() is None: - pass - results = self.get_results() - print('Best affinity:', results[0]['affinity']) - return results - - def get_results(self): - if self.proc is None: # Not started - return None - elif self.proc.poll() is None: # In progress - return None - else: - if self.output is None: - self.output = self.proc.stdout.readlines() - try: - self.results = parse_qvina_outputs(self.docked_sdf_path) - except: - print('[Error] Vina output error: %s' % self.docked_sdf_path) - return [] - return self.results - diff --git a/spaces/Zannriell/TextChatBot/app.py b/spaces/Zannriell/TextChatBot/app.py deleted file mode 100644 index 56ccd1f8848f61354044dcda4e0a7ecdf45b8279..0000000000000000000000000000000000000000 --- a/spaces/Zannriell/TextChatBot/app.py +++ /dev/null @@ -1,157 +0,0 @@ -from pathlib import Path -from typing import List, Dict, Tuple -import matplotlib.colors as mpl_colors - -import pandas as pd -import seaborn as sns -import shinyswatch - -import shiny.experimental as x -from shiny import App, Inputs, Outputs, Session, reactive, render, req, ui - -sns.set_theme() - -www_dir = Path(__file__).parent.resolve() / "www" - -df = pd.read_csv(Path(__file__).parent / "penguins.csv", na_values="NA") -numeric_cols: List[str] = df.select_dtypes(include=["float64"]).columns.tolist() -species: List[str] = df["Species"].unique().tolist() -species.sort() - -app_ui = x.ui.page_fillable( - shinyswatch.theme.minty(), - x.ui.layout_sidebar( - x.ui.sidebar( - # Artwork by @allison_horst - ui.input_selectize( - "xvar", - "X variable", - numeric_cols, - selected="Bill Length (mm)", - ), - ui.input_selectize( - "yvar", - "Y variable", - numeric_cols, - selected="Bill Depth (mm)", - ), - ui.input_checkbox_group( - "species", "Filter by species", species, selected=species - ), - ui.hr(), - ui.input_switch("by_species", "Show species", value=True), - ui.input_switch("show_margins", "Show marginal plots", value=True), - ), - ui.output_ui("value_boxes"), - x.ui.output_plot("scatter", fill=True), - ui.help_text( - "Artwork by ", - ui.a("@allison_horst", href="https://twitter.com/allison_horst"), - class_="text-end", - ), - fill=True, - fillable=True, - ), -) - - -def server(input: Inputs, output: Outputs, session: Session): - @reactive.Calc - def filtered_df() -> pd.DataFrame: - """Returns a Pandas data frame that includes only the desired rows""" - - # This calculation "req"uires that at least one species is selected - req(len(input.species()) > 0) - - # Filter the rows so we only include the desired species - return df[df["Species"].isin(input.species())] - - @output - @render.plot - def scatter(): - """Generates a plot for Shiny to display to the 
user""" - - # The plotting function to use depends on whether margins are desired - plotfunc = sns.jointplot if input.show_margins() else sns.scatterplot - - plotfunc( - data=filtered_df(), - x=input.xvar(), - y=input.yvar(), - palette=palette, - hue="Species" if input.by_species() else None, - hue_order=species, - legend=False, - ) - - @output - @render.ui - def value_boxes(): - df = filtered_df() - - def penguin_value_box(title: str, count: int, bgcol: str, showcase_img: str): - return x.ui.value_box( - title, - count, - {"class_": "pt-1 pb-0"}, - showcase=x.ui.bind_fill_role( - ui.tags.img( - {"style": "object-fit:contain;"}, - src=showcase_img, - ), - item=True, - ), - theme_color=None, - style=f"background-color: {bgcol};", - height="90px", - full_screen=True, - ) - - if not input.by_species(): - return penguin_value_box( - "Penguins", - len(df.index), - bg_palette["default"], - # Artwork by @allison_horst - showcase_img="penguins.png", - ) - - value_boxes = [ - penguin_value_box( - name, - len(df[df["Species"] == name]), - bg_palette[name], - # Artwork by @allison_horst - showcase_img=f"{name}.png", - ) - for name in species - # Only include boxes for _selected_ species - if name in input.species() - ] - - return x.ui.layout_column_wrap(1 / len(value_boxes), *value_boxes) - - -# "darkorange", "purple", "cyan4" -colors = [[255, 140, 0], [160, 32, 240], [0, 139, 139]] -colors = [(r / 255.0, g / 255.0, b / 255.0) for r, g, b in colors] - -palette: Dict[str, Tuple[float, float, float]] = { - "Adelie": colors[0], - "Chinstrap": colors[1], - "Gentoo": colors[2], - "default": sns.color_palette()[0], # type: ignore -} - -bg_palette = {} -# Use `sns.set_style("whitegrid")` to help find approx alpha value -for name, col in palette.items(): - # Adjusted n_colors until `axe` accessibility did not complain about color contrast - bg_palette[name] = mpl_colors.to_hex(sns.light_palette(col, n_colors=7)[1]) # type: ignore - - -app = App( - app_ui, - server, - static_assets=str(www_dir), -) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/version_utils.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/version_utils.py deleted file mode 100644 index 963c45a2e8a86a88413ab6c18c22481fb9831985..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/version_utils.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import subprocess -import warnings - -from packaging.version import parse - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). 
- """ - assert 'parrots' not in version_str - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) - else: - release.extend([0, 0]) - return tuple(release) - - -def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen( - cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - -def get_git_hash(fallback='unknown', digits=None): - """Get the git hash of the current repo. - - Args: - fallback (str, optional): The fallback string when git hash is - unavailable. Defaults to 'unknown'. - digits (int, optional): kept digits of the hash. Defaults to None, - meaning all digits are kept. - - Returns: - str: Git commit hash. - """ - - if digits is not None and not isinstance(digits, int): - raise TypeError('digits must be None or an integer') - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - if digits is not None: - sha = sha[:digits] - except OSError: - sha = fallback - - return sha diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/cross_entropy_loss.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/cross_entropy_loss.py deleted file mode 100644 index 57994157960eeae5530bd983b8b86263de31d0ff..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,214 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None): - """Calculate the CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. 
- - Returns: - torch.Tensor: The calculated loss - """ - # element-wise losses - loss = F.cross_entropy(pred, label, weight=class_weight, reduction='none') - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, label_channels): - bin_labels = labels.new_full((labels.size(0), label_channels), 0) - inds = torch.nonzero( - (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze() - if inds.numel() > 0: - bin_labels[inds, labels[inds]] = 1 - - if label_weights is None: - bin_label_weights = None - else: - bin_label_weights = label_weights.view(-1, 1).expand( - label_weights.size(0), label_channels) - - return bin_labels, bin_label_weights - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1). - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - - Returns: - torch.Tensor: The calculated loss - """ - if pred.dim() != label.dim(): - label, weight = _expand_onehot_labels(label, weight, pred.size(-1)) - - # weighted element-wise losses - if weight is not None: - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C, *), C is the - number of classes. The trailing * indicates arbitrary shape. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. 
- - Returns: - torch.Tensor: The calculated loss - - Example: - >>> N, C = 3, 11 - >>> H, W = 2, 2 - >>> pred = torch.randn(N, C, H, W) * 1000 - >>> target = torch.rand(N, H, W) - >>> label = torch.randint(0, C, size=(N,)) - >>> reduction = 'mean' - >>> avg_factor = None - >>> class_weights = None - >>> loss = mask_cross_entropy(pred, target, label, reduction, - >>> avg_factor, class_weights) - >>> assert loss.shape == (1,) - """ - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - loss_weight=1.0): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". - class_weight (list[float], optional): Weight of each class. - Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - """ - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = class_weight - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - cls_score (torch.Tensor): The prediction. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". 
- Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor( - self.class_weight, device=cls_score.device) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/necks/nas_fpn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/necks/nas_fpn.py deleted file mode 100644 index 8e333ce65d4d06c47c29af489526ba3142736ad7..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/necks/nas_fpn.py +++ /dev/null @@ -1,160 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, caffe2_xavier_init -from mmcv.ops.merge_cells import GlobalPoolingCell, SumCell - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFPN(nn.Module): - """NAS-FPN. - - Implementation of `NAS-FPN: Learning Scalable Feature Pyramid Architecture - for Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None): - super(NASFPN, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) # num of input feature levels - self.num_outs = num_outs # num of output feature levels - self.stack_times = stack_times - self.norm_cfg = norm_cfg - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # add lateral connections - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - norm_cfg=norm_cfg, - act_cfg=None) - self.lateral_convs.append(l_conv) - - # add extra downsample layers (stride-2 pooling or conv) - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_conv = ConvModule( - out_channels, out_channels, 1, norm_cfg=norm_cfg, act_cfg=None) - self.extra_downsamples.append( - nn.Sequential(extra_conv, nn.MaxPool2d(2, 2))) - - # add NAS FPN connections - self.fpn_stages = nn.ModuleList() - for _ in range(self.stack_times): - stage = nn.ModuleDict() - # gp(p6, p4) -> p4_1 - stage['gp_64_4'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_1, p4) -> p4_2 - stage['sum_44_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p4_2, p3) -> p3_out - stage['sum_43_3'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p3_out, p4_2) -> p4_out - stage['sum_34_4'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - stage['gp_43_5'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_55_5'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - stage['gp_54_7'] = GlobalPoolingCell(with_out_conv=False) - stage['sum_77_7'] = SumCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - # gp(p7_out, p5_out) -> p6_out - stage['gp_75_6'] = GlobalPoolingCell( - in_channels=out_channels, - out_channels=out_channels, - out_norm_cfg=norm_cfg) - self.fpn_stages.append(stage) - - def init_weights(self): - """Initialize the weights of module.""" - for m in self.modules(): - if isinstance(m, nn.Conv2d): - caffe2_xavier_init(m) - - def forward(self, inputs): - """Forward function.""" - # build P3-P5 - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # build P6-P7 on top of P5 - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - p3, p4, p5, p6, p7 = feats - - for stage in self.fpn_stages: - # gp(p6, p4) -> p4_1 - p4_1 = stage['gp_64_4'](p6, p4, out_size=p4.shape[-2:]) - # sum(p4_1, p4) -> p4_2 - p4_2 = stage['sum_44_4'](p4_1, p4, out_size=p4.shape[-2:]) - # sum(p4_2, p3) -> 
p3_out - p3 = stage['sum_43_3'](p4_2, p3, out_size=p3.shape[-2:]) - # sum(p3_out, p4_2) -> p4_out - p4 = stage['sum_34_4'](p3, p4_2, out_size=p4.shape[-2:]) - # sum(p5, gp(p4_out, p3_out)) -> p5_out - p5_tmp = stage['gp_43_5'](p4, p3, out_size=p5.shape[-2:]) - p5 = stage['sum_55_5'](p5, p5_tmp, out_size=p5.shape[-2:]) - # sum(p7, gp(p5_out, p4_2)) -> p7_out - p7_tmp = stage['gp_54_7'](p5, p4_2, out_size=p7.shape[-2:]) - p7 = stage['sum_77_7'](p7, p7_tmp, out_size=p7.shape[-2:]) - # gp(p7_out, p5_out) -> p6_out - p6 = stage['gp_75_6'](p7, p5, out_size=p6.shape[-2:]) - - return p3, p4, p5, p6, p7 diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/cityscapes.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/cityscapes.py deleted file mode 100644 index 81e47a914a1aa2e5458e18669d65ffb742f46fc6..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/cityscapes.py +++ /dev/null @@ -1,217 +0,0 @@ -import os.path as osp -import tempfile - -import annotator.uniformer.mmcv as mmcv -import numpy as np -from annotator.uniformer.mmcv.utils import print_log -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CityscapesDataset(CustomDataset): - """Cityscapes dataset. - - The ``img_suffix`` is fixed to '_leftImg8bit.png' and ``seg_map_suffix`` is - fixed to '_gtFine_labelTrainIds.png' for Cityscapes dataset. - """ - - CLASSES = ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], - [0, 80, 100], [0, 0, 230], [119, 11, 32]] - - def __init__(self, **kwargs): - super(CityscapesDataset, self).__init__( - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtFine_labelTrainIds.png', - **kwargs) - - @staticmethod - def _convert_to_label_id(result): - """Convert trainId to id for cityscapes.""" - if isinstance(result, str): - result = np.load(result) - import cityscapesscripts.helpers.labels as CSLabels - result_copy = result.copy() - for trainId, label in CSLabels.trainId2label.items(): - result_copy[result == trainId] = label.id - - return result_copy - - def results2img(self, results, imgfile_prefix, to_label_id): - """Write the segmentation results to images. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. 
- """ - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - if to_label_id: - result = self._convert_to_label_id(result) - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - output = Image.fromarray(result.astype(np.uint8)).convert('P') - import cityscapesscripts.helpers.labels as CSLabels - palette = np.zeros((len(CSLabels.id2label), 3), dtype=np.uint8) - for label_id, label in CSLabels.id2label.items(): - palette[label_id] = label.color - - output.putpalette(palette) - output.save(png_filename) - result_files.append(png_filename) - prog_bar.update() - - return result_files - - def format_results(self, results, imgfile_prefix=None, to_label_id=True): - """Format the results into dir (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str | None): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - to_label_id (bool): whether convert output to label_id for - submission. Default: False - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. - """ - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: ' - f'{len(results)} != {len(self)}') - - if imgfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - imgfile_prefix = tmp_dir.name - else: - tmp_dir = None - result_files = self.results2img(results, imgfile_prefix, to_label_id) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='mIoU', - logger=None, - imgfile_prefix=None, - efficient_test=False): - """Evaluation in Cityscapes/default protocol. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file, - for cityscapes evaluation only. It includes the file path and - the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with cityscapes protocol, it would be - the prefix of output png files. The output files would be - png images under folder "a/b/prefix/xxx.png", where "xxx" is - the image name of cityscapes. If not specified, a temp file - will be created for evaluation. - Default: None. - - Returns: - dict[str, float]: Cityscapes/default metrics. - """ - - eval_results = dict() - metrics = metric.copy() if isinstance(metric, list) else [metric] - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, logger, imgfile_prefix)) - metrics.remove('cityscapes') - if len(metrics) > 0: - eval_results.update( - super(CityscapesDataset, - self).evaluate(results, metrics, logger, efficient_test)) - - return eval_results - - def _evaluate_cityscapes(self, results, logger, imgfile_prefix): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. 
- logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file - - Returns: - dict[str: float]: Cityscapes evaluation results. - """ - try: - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install cityscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, imgfile_prefix) - - if tmp_dir is None: - result_dir = imgfile_prefix - else: - result_dir = tmp_dir.name - - eval_results = dict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - CSEval.args.evalInstLevelScore = True - CSEval.args.predictionPath = osp.abspath(result_dir) - CSEval.args.evalPixelAccuracy = True - CSEval.args.JSONOutput = False - - seg_map_list = [] - pred_list = [] - - # when evaluating with official cityscapesscripts, - # **_gtFine_labelIds.png is used - for seg_map in mmcv.scandir( - self.ann_dir, 'gtFine_labelIds.png', recursive=True): - seg_map_list.append(osp.join(self.ann_dir, seg_map)) - pred_list.append(CSEval.getPrediction(CSEval.args, seg_map)) - - eval_results.update( - CSEval.evaluateImgLists(pred_list, seg_map_list, CSEval.args)) - - if tmp_dir is not None: - tmp_dir.cleanup() - - return eval_results diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/drive.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/drive.py deleted file mode 100644 index 3cbfda8ae74bdf26c5aef197ff2866a7c7ad0cfd..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/drive.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class DRIVEDataset(CustomDataset): - """DRIVE dataset. - - In segmentation map annotation for DRIVE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_manual1.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(DRIVEDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_manual1.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/hrf.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/hrf.py deleted file mode 100644 index 923203b51377f9344277fc561803d7a78bd2c684..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/hrf.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/wmf.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/wmf.py deleted file mode 100644 index fedbd089f693c89c1734e205553f1a7197de41c9..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/wmf.py +++ /dev/null @@ -1,852 +0,0 @@ -import os -import platform -import warnings - -from pyglet import image -from pyglet.libs.win32 import _kernel32 as kernel32 -from pyglet.libs.win32 import _ole32 as ole32 -from pyglet.libs.win32 import com -from pyglet.libs.win32.constants import * -from pyglet.libs.win32.types import * -from pyglet.media import Source -from pyglet.media.codecs import AudioFormat, AudioData, VideoFormat, MediaDecoder, StaticSource -from pyglet.util import debug_print, DecodeException - -_debug = debug_print('debug_media') - -try: - mfreadwrite = 'mfreadwrite' - mfplat = 'mfplat' - - # System32 and SysWOW64 folders are opposite perception in Windows x64. - # System32 = x64 dll's | SysWOW64 = x86 dlls - # By default ctypes only seems to look in system32 regardless of Python architecture, which has x64 dlls. - if platform.architecture()[0] == '32bit': - if platform.machine().endswith('64'): # Machine is 64 bit, Python is 32 bit. - mfreadwrite = os.path.join(os.environ['WINDIR'], 'SysWOW64', 'mfreadwrite.dll') - mfplat = os.path.join(os.environ['WINDIR'], 'SysWOW64', 'mfplat.dll') - - mfreadwrite_lib = ctypes.windll.LoadLibrary(mfreadwrite) - mfplat_lib = ctypes.windll.LoadLibrary(mfplat) -except OSError: - # Doesn't exist? Should stop import of library. 
- raise ImportError('Could not load WMF library.') - -MF_SOURCE_READERF_ERROR = 0x00000001 -MF_SOURCE_READERF_ENDOFSTREAM = 0x00000002 -MF_SOURCE_READERF_NEWSTREAM = 0x00000004 -MF_SOURCE_READERF_NATIVEMEDIATYPECHANGED = 0x00000010 -MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED = 0x00000020 -MF_SOURCE_READERF_STREAMTICK = 0x00000100 - -# Audio attributes -MF_LOW_LATENCY = com.GUID(0x9c27891a, 0xed7a, 0x40e1, 0x88, 0xe8, 0xb2, 0x27, 0x27, 0xa0, 0x24, 0xee) - -# Audio information -MF_MT_ALL_SAMPLES_INDEPENDENT = com.GUID(0xc9173739, 0x5e56, 0x461c, 0xb7, 0x13, 0x46, 0xfb, 0x99, 0x5c, 0xb9, 0x5f) -MF_MT_FIXED_SIZE_SAMPLES = com.GUID(0xb8ebefaf, 0xb718, 0x4e04, 0xb0, 0xa9, 0x11, 0x67, 0x75, 0xe3, 0x32, 0x1b) -MF_MT_SAMPLE_SIZE = com.GUID(0xdad3ab78, 0x1990, 0x408b, 0xbc, 0xe2, 0xeb, 0xa6, 0x73, 0xda, 0xcc, 0x10) -MF_MT_COMPRESSED = com.GUID(0x3afd0cee, 0x18f2, 0x4ba5, 0xa1, 0x10, 0x8b, 0xea, 0x50, 0x2e, 0x1f, 0x92) -MF_MT_WRAPPED_TYPE = com.GUID(0x4d3f7b23, 0xd02f, 0x4e6c, 0x9b, 0xee, 0xe4, 0xbf, 0x2c, 0x6c, 0x69, 0x5d) -MF_MT_AUDIO_NUM_CHANNELS = com.GUID(0x37e48bf5, 0x645e, 0x4c5b, 0x89, 0xde, 0xad, 0xa9, 0xe2, 0x9b, 0x69, 0x6a) -MF_MT_AUDIO_SAMPLES_PER_SECOND = com.GUID(0x5faeeae7, 0x0290, 0x4c31, 0x9e, 0x8a, 0xc5, 0x34, 0xf6, 0x8d, 0x9d, 0xba) -MF_MT_AUDIO_FLOAT_SAMPLES_PER_SECOND = com.GUID(0xfb3b724a, 0xcfb5, 0x4319, 0xae, 0xfe, 0x6e, 0x42, 0xb2, 0x40, 0x61, 0x32) -MF_MT_AUDIO_AVG_BYTES_PER_SECOND = com.GUID(0x1aab75c8, 0xcfef, 0x451c, 0xab, 0x95, 0xac, 0x03, 0x4b, 0x8e, 0x17, 0x31) -MF_MT_AUDIO_BLOCK_ALIGNMENT = com.GUID(0x322de230, 0x9eeb, 0x43bd, 0xab, 0x7a, 0xff, 0x41, 0x22, 0x51, 0x54, 0x1d) -MF_MT_AUDIO_BITS_PER_SAMPLE = com.GUID(0xf2deb57f, 0x40fa, 0x4764, 0xaa, 0x33, 0xed, 0x4f, 0x2d, 0x1f, 0xf6, 0x69) -MF_MT_AUDIO_VALID_BITS_PER_SAMPLE = com.GUID(0xd9bf8d6a, 0x9530, 0x4b7c, 0x9d, 0xdf, 0xff, 0x6f, 0xd5, 0x8b, 0xbd, 0x06) -MF_MT_AUDIO_SAMPLES_PER_BLOCK = com.GUID(0xaab15aac, 0xe13a, 0x4995, 0x92, 0x22, 0x50, 0x1e, 0xa1, 0x5c, 0x68, 0x77) -MF_MT_AUDIO_CHANNEL_MASK = com.GUID(0x55fb5765, 0x644a, 0x4caf, 0x84, 0x79, 0x93, 0x89, 0x83, 0xbb, 0x15, 0x88) -MF_PD_DURATION = com.GUID(0x6c990d33, 0xbb8e, 0x477a, 0x85, 0x98, 0xd, 0x5d, 0x96, 0xfc, 0xd8, 0x8a) - - -# Media types categories -MF_MT_MAJOR_TYPE = com.GUID(0x48eba18e, 0xf8c9, 0x4687, 0xbf, 0x11, 0x0a, 0x74, 0xc9, 0xf9, 0x6a, 0x8f) -MF_MT_SUBTYPE = com.GUID(0xf7e34c9a, 0x42e8, 0x4714, 0xb7, 0x4b, 0xcb, 0x29, 0xd7, 0x2c, 0x35, 0xe5) - -# Major types -MFMediaType_Audio = com.GUID(0x73647561, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) -MFMediaType_Video = com.GUID(0x73646976, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xAA, 0x00, 0x38, 0x9B, 0x71) -MFMediaType_Protected = com.GUID(0x7b4b6fe6, 0x9d04, 0x4494, 0xbe, 0x14, 0x7e, 0x0b, 0xd0, 0x76, 0xc8, 0xe4) -MFMediaType_Image = com.GUID(0x72178C23, 0xE45B, 0x11D5, 0xBC, 0x2A, 0x00, 0xB0, 0xD0, 0xF3, 0xF4, 0xAB) -MFMediaType_HTML = com.GUID(0x72178C24, 0xE45B, 0x11D5, 0xBC, 0x2A, 0x00, 0xB0, 0xD0, 0xF3, 0xF4, 0xAB) -MFMediaType_Subtitle = com.GUID(0xa6d13581, 0xed50, 0x4e65, 0xae, 0x08, 0x26, 0x06, 0x55, 0x76, 0xaa, 0xcc) - -# Video subtypes, attributes, and enums (Uncompressed) -D3DFMT_X8R8G8B8 = 22 -D3DFMT_P8 = 41 -D3DFMT_A8R8G8B8 = 21 -MFVideoFormat_RGB32 = com.GUID(D3DFMT_X8R8G8B8, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) -MFVideoFormat_RGB8 = com.GUID(D3DFMT_P8, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) -MFVideoFormat_ARGB32 = com.GUID(D3DFMT_A8R8G8B8, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) - 
-MFVideoInterlace_Progressive = 2 -MF_MT_INTERLACE_MODE = com.GUID(0xe2724bb8, 0xe676, 0x4806, 0xb4, 0xb2, 0xa8, 0xd6, 0xef, 0xb4, 0x4c, 0xcd) -MF_MT_FRAME_SIZE = com.GUID(0x1652c33d, 0xd6b2, 0x4012, 0xb8, 0x34, 0x72, 0x03, 0x08, 0x49, 0xa3, 0x7d) -MF_MT_FRAME_RATE = com.GUID(0xc459a2e8, 0x3d2c, 0x4e44, 0xb1, 0x32, 0xfe, 0xe5, 0x15, 0x6c, 0x7b, 0xb0) -MF_MT_PIXEL_ASPECT_RATIO = com.GUID(0xc6376a1e, 0x8d0a, 0x4027, 0xbe, 0x45, 0x6d, 0x9a, 0x0a, 0xd3, 0x9b, 0xb6) -MF_MT_DRM_FLAGS = com.GUID(0x8772f323, 0x355a, 0x4cc7, 0xbb, 0x78, 0x6d, 0x61, 0xa0, 0x48, 0xae, 0x82) -MF_MT_DEFAULT_STRIDE = com.GUID(0x644b4e48, 0x1e02, 0x4516, 0xb0, 0xeb, 0xc0, 0x1c, 0xa9, 0xd4, 0x9a, 0xc6) - -# Audio Subtypes (Uncompressed) -WAVE_FORMAT_PCM = 1 -WAVE_FORMAT_IEEE_FLOAT = 3 -MFAudioFormat_PCM = com.GUID(WAVE_FORMAT_PCM, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) -MFAudioFormat_Float = com.GUID(WAVE_FORMAT_IEEE_FLOAT, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) - -# Image subtypes. -MFImageFormat_RGB32 = com.GUID(0x00000016, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71) -MFImageFormat_JPEG = com.GUID(0x19e4a5aa, 0x5662, 0x4fc5, 0xa0, 0xc0, 0x17, 0x58, 0x02, 0x8e, 0x10, 0x57) - -# Video attributes -# Enables hardware decoding -MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS = com.GUID(0xa634a91c, 0x822b, 0x41b9, 0xa4, 0x94, 0x4d, 0xe4, 0x64, 0x36, 0x12, - 0xb0) -# Enable video decoding -MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING = com.GUID(0xfb394f3d, 0xccf1, 0x42ee, 0xbb, 0xb3, 0xf9, 0xb8, 0x45, 0xd5, - 0x68, 0x1d) -MF_SOURCE_READER_D3D_MANAGER = com.GUID(0xec822da2, 0xe1e9, 0x4b29, 0xa0, 0xd8, 0x56, 0x3c, 0x71, 0x9f, 0x52, 0x69) -MF_MEDIA_ENGINE_DXGI_MANAGER = com.GUID(0x065702da, 0x1094, 0x486d, 0x86, 0x17, 0xee, 0x7c, 0xc4, 0xee, 0x46, 0x48) -MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING = com.GUID(0xf81da2c, 0xb537, 0x4672, 0xa8, 0xb2, 0xa6, 0x81, 0xb1, - 0x73, 0x7, 0xa3) - -# Some common errors -MF_E_INVALIDSTREAMNUMBER = -1072875853 # 0xC00D36B3 -MF_E_UNSUPPORTED_BYTESTREAM_TYPE = -1072875836 # 0xC00D36C4 -MF_E_NO_MORE_TYPES = 0xC00D36B9 -MF_E_TOPO_CODEC_NOT_FOUND = -1072868846 # 0xC00D5212 - - -VT_I8 = 20 # Only enum we care about: https://docs.microsoft.com/en-us/windows/win32/api/wtypes/ne-wtypes-varenum - - -def timestamp_from_wmf(timestamp): # 100-nanoseconds - return float(timestamp) / 10000000 - - -def timestamp_to_wmf(timestamp): # 100-nanoseconds - return int(timestamp * 10000000) - - -class IMFAttributes(com.pIUnknown): - _methods_ = [ - ('GetItem', - com.STDMETHOD()), - ('GetItemType', - com.STDMETHOD()), - ('CompareItem', - com.STDMETHOD()), - ('Compare', - com.STDMETHOD()), - ('GetUINT32', - com.STDMETHOD(com.REFIID, POINTER(c_uint32))), - ('GetUINT64', - com.STDMETHOD(com.REFIID, POINTER(c_uint64))), - ('GetDouble', - com.STDMETHOD()), - ('GetGUID', - com.STDMETHOD(com.REFIID, POINTER(com.GUID))), - ('GetStringLength', - com.STDMETHOD()), - ('GetString', - com.STDMETHOD()), - ('GetAllocatedString', - com.STDMETHOD()), - ('GetBlobSize', - com.STDMETHOD()), - ('GetBlob', - com.STDMETHOD()), - ('GetAllocatedBlob', - com.STDMETHOD()), - ('GetUnknown', - com.STDMETHOD()), - ('SetItem', - com.STDMETHOD()), - ('DeleteItem', - com.STDMETHOD()), - ('DeleteAllItems', - com.STDMETHOD()), - ('SetUINT32', - com.STDMETHOD(com.REFIID, c_uint32)), - ('SetUINT64', - com.STDMETHOD()), - ('SetDouble', - com.STDMETHOD()), - ('SetGUID', - com.STDMETHOD(com.REFIID, com.REFIID)), - ('SetString', - com.STDMETHOD()), - ('SetBlob', - com.STDMETHOD()), - 
('SetUnknown', - com.STDMETHOD(com.REFIID, com.pIUnknown)), - ('LockStore', - com.STDMETHOD()), - ('UnlockStore', - com.STDMETHOD()), - ('GetCount', - com.STDMETHOD()), - ('GetItemByIndex', - com.STDMETHOD()), - ('CopyAllItems', - com.STDMETHOD(c_void_p)), # IMFAttributes - ] - - -class IMFMediaBuffer(com.pIUnknown): - _methods_ = [ - ('Lock', - com.STDMETHOD(POINTER(POINTER(BYTE)), POINTER(DWORD), POINTER(DWORD))), - ('Unlock', - com.STDMETHOD()), - ('GetCurrentLength', - com.STDMETHOD(POINTER(DWORD))), - ('SetCurrentLength', - com.STDMETHOD(DWORD)), - ('GetMaxLength', - com.STDMETHOD(POINTER(DWORD))) - ] - - -class IMFSample(IMFAttributes, com.pIUnknown): - _methods_ = [ - ('GetSampleFlags', - com.STDMETHOD()), - ('SetSampleFlags', - com.STDMETHOD()), - ('GetSampleTime', - com.STDMETHOD()), - ('SetSampleTime', - com.STDMETHOD()), - ('GetSampleDuration', - com.STDMETHOD(POINTER(c_ulonglong))), - ('SetSampleDuration', - com.STDMETHOD(DWORD, IMFMediaBuffer)), - ('GetBufferCount', - com.STDMETHOD(POINTER(DWORD))), - ('GetBufferByIndex', - com.STDMETHOD(DWORD, IMFMediaBuffer)), - ('ConvertToContiguousBuffer', - com.STDMETHOD(POINTER(IMFMediaBuffer))), # out - ('AddBuffer', - com.STDMETHOD(POINTER(DWORD))), - ('RemoveBufferByIndex', - com.STDMETHOD()), - ('RemoveAllBuffers', - com.STDMETHOD()), - ('GetTotalLength', - com.STDMETHOD(POINTER(DWORD))), - ('CopyToBuffer', - com.STDMETHOD()), - ] - - -class IMFMediaType(IMFAttributes, com.pIUnknown): - _methods_ = [ - ('GetMajorType', - com.STDMETHOD()), - ('IsCompressedFormat', - com.STDMETHOD()), - ('IsEqual', - com.STDMETHOD()), - ('GetRepresentation', - com.STDMETHOD()), - ('FreeRepresentation', - com.STDMETHOD()), - ] - - -class IMFByteStream(com.pIUnknown): - _methods_ = [ - ('GetCapabilities', - com.STDMETHOD()), - ('GetLength', - com.STDMETHOD()), - ('SetLength', - com.STDMETHOD()), - ('GetCurrentPosition', - com.STDMETHOD()), - ('SetCurrentPosition', - com.STDMETHOD(c_ulonglong)), - ('IsEndOfStream', - com.STDMETHOD()), - ('Read', - com.STDMETHOD()), - ('BeginRead', - com.STDMETHOD()), - ('EndRead', - com.STDMETHOD()), - ('Write', - com.STDMETHOD(POINTER(BYTE), ULONG, POINTER(ULONG))), - ('BeginWrite', - com.STDMETHOD()), - ('EndWrite', - com.STDMETHOD()), - ('Seek', - com.STDMETHOD()), - ('Flush', - com.STDMETHOD()), - ('Close', - com.STDMETHOD()), - ] - - -class IMFSourceReader(com.pIUnknown): - _methods_ = [ - ('GetStreamSelection', - com.STDMETHOD(DWORD, POINTER(BOOL))), # in, out - ('SetStreamSelection', - com.STDMETHOD(DWORD, BOOL)), - ('GetNativeMediaType', - com.STDMETHOD(DWORD, DWORD, POINTER(IMFMediaType))), - ('GetCurrentMediaType', - com.STDMETHOD(DWORD, POINTER(IMFMediaType))), - ('SetCurrentMediaType', - com.STDMETHOD(DWORD, POINTER(DWORD), IMFMediaType)), - ('SetCurrentPosition', - com.STDMETHOD(com.REFIID, POINTER(PROPVARIANT))), - ('ReadSample', - com.STDMETHOD(DWORD, DWORD, POINTER(DWORD), POINTER(DWORD), POINTER(c_longlong), POINTER(IMFSample))), - ('Flush', - com.STDMETHOD(DWORD)), # in - ('GetServiceForStream', - com.STDMETHOD()), - ('GetPresentationAttribute', - com.STDMETHOD(DWORD, com.REFIID, POINTER(PROPVARIANT))), - ] - - -class WAVEFORMATEX(ctypes.Structure): - _fields_ = [ - ('wFormatTag', WORD), - ('nChannels', WORD), - ('nSamplesPerSec', DWORD), - ('nAvgBytesPerSec', DWORD), - ('nBlockAlign', WORD), - ('wBitsPerSample', WORD), - ('cbSize', WORD), - ] - - def __repr__(self): - return 'WAVEFORMATEX(wFormatTag={}, nChannels={}, nSamplesPerSec={}, nAvgBytesPersec={}' \ - ', nBlockAlign={}, wBitsPerSample={}, 
cbSize={})'.format( - self.wFormatTag, self.nChannels, self.nSamplesPerSec, - self.nAvgBytesPerSec, self.nBlockAlign, self.wBitsPerSample, - self.cbSize) - - -# Stream constants -MF_SOURCE_READER_ALL_STREAMS = 0xfffffffe -MF_SOURCE_READER_ANY_STREAM = 4294967294 # 0xfffffffe -MF_SOURCE_READER_FIRST_AUDIO_STREAM = 4294967293 # 0xfffffffd -MF_SOURCE_READER_FIRST_VIDEO_STREAM = 0xfffffffc -MF_SOURCE_READER_MEDIASOURCE = 0xffffffff - -# Version calculation -if WINDOWS_7_OR_GREATER: - MF_SDK_VERSION = 0x0002 -else: - MF_SDK_VERSION = 0x0001 - -MF_API_VERSION = 0x0070 # Only used in Vista. - -MF_VERSION = (MF_SDK_VERSION << 16 | MF_API_VERSION) - -MFStartup = mfplat_lib.MFStartup -MFStartup.restype = HRESULT -MFStartup.argtypes = [LONG, DWORD] - -MFShutdown = mfplat_lib.MFShutdown -MFShutdown.restype = HRESULT -MFShutdown.argtypes = [] - -MFCreateAttributes = mfplat_lib.MFCreateAttributes -MFCreateAttributes.restype = HRESULT -MFCreateAttributes.argtypes = [POINTER(IMFAttributes), c_uint32] # attributes, cInitialSize - -MFCreateSourceReaderFromURL = mfreadwrite_lib.MFCreateSourceReaderFromURL -MFCreateSourceReaderFromURL.restype = HRESULT -MFCreateSourceReaderFromURL.argtypes = [LPCWSTR, IMFAttributes, POINTER(IMFSourceReader)] - -MFCreateSourceReaderFromByteStream = mfreadwrite_lib.MFCreateSourceReaderFromByteStream -MFCreateSourceReaderFromByteStream.restype = HRESULT -MFCreateSourceReaderFromByteStream.argtypes = [IMFByteStream, IMFAttributes, POINTER(IMFSourceReader)] - -if WINDOWS_7_OR_GREATER: - MFCreateMFByteStreamOnStream = mfplat_lib.MFCreateMFByteStreamOnStream - MFCreateMFByteStreamOnStream.restype = HRESULT - MFCreateMFByteStreamOnStream.argtypes = [c_void_p, POINTER(IMFByteStream)] - -MFCreateTempFile = mfplat_lib.MFCreateTempFile -MFCreateTempFile.restype = HRESULT -MFCreateTempFile.argtypes = [UINT, UINT, UINT, POINTER(IMFByteStream)] - -MFCreateMediaType = mfplat_lib.MFCreateMediaType -MFCreateMediaType.restype = HRESULT -MFCreateMediaType.argtypes = [POINTER(IMFMediaType)] - -MFCreateWaveFormatExFromMFMediaType = mfplat_lib.MFCreateWaveFormatExFromMFMediaType -MFCreateWaveFormatExFromMFMediaType.restype = HRESULT -MFCreateWaveFormatExFromMFMediaType.argtypes = [IMFMediaType, POINTER(POINTER(WAVEFORMATEX)), POINTER(c_uint32), c_uint32] - - -class WMFSource(Source): - low_latency = True # Quicker latency but possible quality loss. - - decode_audio = True - decode_video = True - - def __init__(self, filename, file=None): - assert any([self.decode_audio, self.decode_video]), "Source must decode audio, video, or both, not none." - self._current_audio_sample = None - self._current_audio_buffer = None - self._current_video_sample = None - self._current_video_buffer = None - self._timestamp = 0 - self._attributes = None - self._stream_obj = None - self._imf_bytestream = None - self._wfx = None - self._stride = None - - self.set_config_attributes() - - # Create SourceReader - self._source_reader = IMFSourceReader() - - # If it's a file, we need to load it as a stream. - if file is not None: - data = file.read() - - self._imf_bytestream = IMFByteStream() - - data_len = len(data) - - if WINDOWS_7_OR_GREATER: - # Stole code from GDIPlus for older IStream support. 
- hglob = kernel32.GlobalAlloc(GMEM_MOVEABLE, data_len) - ptr = kernel32.GlobalLock(hglob) - ctypes.memmove(ptr, data, data_len) - kernel32.GlobalUnlock(hglob) - - # Create IStream - self._stream_obj = com.pIUnknown() - ole32.CreateStreamOnHGlobal(hglob, True, ctypes.byref(self._stream_obj)) - - # MFCreateMFByteStreamOnStreamEx for future async operations exists, however Windows 8+ only. Requires new interface - # (Also unsure how/if new Windows async functions and callbacks work with ctypes.) - MFCreateMFByteStreamOnStream(self._stream_obj, ctypes.byref(self._imf_bytestream)) - else: - # Vista does not support MFCreateMFByteStreamOnStream. - # HACK: Create file in Windows temp folder to write our byte data to. - # (Will be automatically deleted when IMFByteStream is Released.) - MFCreateTempFile(MF_ACCESSMODE_READWRITE, - MF_OPENMODE_DELETE_IF_EXIST, - MF_FILEFLAGS_NONE, - ctypes.byref(self._imf_bytestream)) - - wrote_length = ULONG() - data_ptr = cast(data, POINTER(BYTE)) - self._imf_bytestream.Write(data_ptr, data_len, ctypes.byref(wrote_length)) - self._imf_bytestream.SetCurrentPosition(0) - - if wrote_length.value != data_len: - raise DecodeException("Could not write all of the data to the bytestream file.") - - try: - MFCreateSourceReaderFromByteStream(self._imf_bytestream, self._attributes, ctypes.byref(self._source_reader)) - except OSError as err: - raise DecodeException(err) from None - else: - # We can just load from filename if no file object specified.. - try: - MFCreateSourceReaderFromURL(filename, self._attributes, ctypes.byref(self._source_reader)) - except OSError as err: - raise DecodeException(err) from None - - if self.decode_audio: - self._load_audio() - - if self.decode_video: - self._load_video() - - assert self.audio_format or self.video_format, "Source was decoded, but no video or audio streams were found." - - # Get duration of the media file after everything has been ok to decode. - try: - prop = PROPVARIANT() - self._source_reader.GetPresentationAttribute(MF_SOURCE_READER_MEDIASOURCE, - ctypes.byref(MF_PD_DURATION), - ctypes.byref(prop)) - - self._duration = timestamp_from_wmf(prop.llVal) - ole32.PropVariantClear(ctypes.byref(prop)) - except OSError: - warnings.warn("Could not determine duration of media file: '{}'.".format(filename)) - - def _load_audio(self, stream=MF_SOURCE_READER_FIRST_AUDIO_STREAM): - """ Prepares the audio stream for playback by detecting if it's compressed and attempting to decompress to PCM. - Default: Only get the first available audio stream. - """ - # Will be an audio file. - self._audio_stream_index = stream - - # Get what the native/real media type is (audio only) - imfmedia = IMFMediaType() - - try: - self._source_reader.GetNativeMediaType(self._audio_stream_index, 0, ctypes.byref(imfmedia)) - except OSError as err: - if err.winerror == MF_E_INVALIDSTREAMNUMBER: - assert _debug('WMFAudioDecoder: No audio stream found.') - return - - # Get Major media type (Audio, Video, etc) - # TODO: Make GUID take no arguments for a null version: - guid_audio_type = com.GUID(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) - - imfmedia.GetGUID(MF_MT_MAJOR_TYPE, ctypes.byref(guid_audio_type)) - - if guid_audio_type == MFMediaType_Audio: - assert _debug('WMFAudioDecoder: Found Audio Stream.') - - # Deselect any other streams if we don't need them. (Small speedup) - if not self.decode_video: - self._source_reader.SetStreamSelection(MF_SOURCE_READER_ANY_STREAM, False) - - # Select first audio stream. 
- self._source_reader.SetStreamSelection(MF_SOURCE_READER_FIRST_AUDIO_STREAM, True) - - # Check sub media type, AKA what kind of codec - guid_compressed = com.GUID(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) - imfmedia.GetGUID(MF_MT_SUBTYPE, ctypes.byref(guid_compressed)) - - if guid_compressed == MFAudioFormat_PCM or guid_compressed == MFAudioFormat_Float: - assert _debug('WMFAudioDecoder: Found Uncompressed Audio:', guid_compressed) - else: - assert _debug('WMFAudioDecoder: Found Compressed Audio:', guid_compressed) - # If audio is compressed, attempt to decompress it by forcing source reader to use PCM - mf_mediatype = IMFMediaType() - - MFCreateMediaType(ctypes.byref(mf_mediatype)) - mf_mediatype.SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio) - mf_mediatype.SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM) - - try: - self._source_reader.SetCurrentMediaType(self._audio_stream_index, None, mf_mediatype) - except OSError as err: # Can't decode codec. - raise DecodeException(err) from None - - # Current media type should now be properly decoded at this point. - decoded_media_type = IMFMediaType() # Maybe reusing older IMFMediaType will work? - self._source_reader.GetCurrentMediaType(self._audio_stream_index, ctypes.byref(decoded_media_type)) - - wfx_length = ctypes.c_uint32() - wfx = POINTER(WAVEFORMATEX)() - - MFCreateWaveFormatExFromMFMediaType(decoded_media_type, - ctypes.byref(wfx), - ctypes.byref(wfx_length), - 0) - - self._wfx = wfx.contents - self.audio_format = AudioFormat(channels=self._wfx.nChannels, - sample_size=self._wfx.wBitsPerSample, - sample_rate=self._wfx.nSamplesPerSec) - else: - assert _debug('WMFAudioDecoder: Audio stream not found') - - def get_format(self): - """Returns the WAVEFORMATEX data which has more information thah audio_format""" - return self._wfx - - def _load_video(self, stream=MF_SOURCE_READER_FIRST_VIDEO_STREAM): - self._video_stream_index = stream - - # Get what the native/real media type is (video only) - imfmedia = IMFMediaType() - - try: - self._source_reader.GetCurrentMediaType(self._video_stream_index, ctypes.byref(imfmedia)) - except OSError as err: - if err.winerror == MF_E_INVALIDSTREAMNUMBER: - assert _debug('WMFVideoDecoder: No video stream found.') - return - - assert _debug('WMFVideoDecoder: Found Video Stream') - - # All video is basically compressed, try to decompress. - uncompressed_mt = IMFMediaType() - MFCreateMediaType(ctypes.byref(uncompressed_mt)) - - imfmedia.CopyAllItems(uncompressed_mt) - - imfmedia.Release() - - uncompressed_mt.SetGUID(MF_MT_SUBTYPE, MFVideoFormat_ARGB32) - uncompressed_mt.SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive) - uncompressed_mt.SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, 1) - - try: - self._source_reader.SetCurrentMediaType(self._video_stream_index, None, uncompressed_mt) - except OSError as err: # Can't decode codec. - raise DecodeException(err) from None - - height, width = self._get_attribute_size(uncompressed_mt, MF_MT_FRAME_SIZE) - - self.video_format = VideoFormat(width=width, height=height) - assert _debug('WMFVideoDecoder: Frame width: {} height: {}'.format(width, height)) - - # Frame rate - den, num = self._get_attribute_size(uncompressed_mt, MF_MT_FRAME_RATE) - self.video_format.frame_rate = num / den - assert _debug('WMFVideoDecoder: Frame Rate: {} / {} = {}'.format(num, den, self.video_format.frame_rate)) - - # Sometimes it can return negative? Variable bit rate? Needs further tests and examples. 
- if self.video_format.frame_rate < 0: - self.video_format.frame_rate = 30000 / 1001 - assert _debug('WARNING: Negative frame rate, attempting to use default, but may experience issues.') - - # Pixel ratio - den, num = self._get_attribute_size(uncompressed_mt, MF_MT_PIXEL_ASPECT_RATIO) - self.video_format.sample_aspect = num / den - assert _debug('WMFVideoDecoder: Pixel Ratio: {} / {} = {}'.format(num, den, self.video_format.sample_aspect)) - - def get_audio_data(self, num_bytes, compensation_time=0.0): - flags = DWORD() - timestamp = ctypes.c_longlong() - audio_data_length = DWORD() - - # If we have an audio sample already in use and we call this again, release the memory of buffer and sample. - # Can only release after the data is played or else glitches and pops can be heard. - if self._current_audio_sample: - self._current_audio_buffer.Release() - self._current_audio_sample.Release() - - self._current_audio_sample = IMFSample() - self._current_audio_buffer = IMFMediaBuffer() - - while True: - self._source_reader.ReadSample(self._audio_stream_index, 0, None, ctypes.byref(flags), - ctypes.byref(timestamp), ctypes.byref(self._current_audio_sample)) - - if flags.value & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED: - assert _debug('WMFAudioDecoder: Data is no longer valid.') - break - - if flags.value & MF_SOURCE_READERF_ENDOFSTREAM: - assert _debug('WMFAudioDecoder: End of data from stream source.') - break - - if not self._current_audio_sample: - assert _debug('WMFAudioDecoder: No sample.') - continue - - # Convert to single buffer as a sample could potentially(rarely) have multiple buffers. - self._current_audio_sample.ConvertToContiguousBuffer(ctypes.byref(self._current_audio_buffer)) - - audio_data_ptr = POINTER(BYTE)() - - self._current_audio_buffer.Lock(ctypes.byref(audio_data_ptr), None, ctypes.byref(audio_data_length)) - self._current_audio_buffer.Unlock() - - audio_data = create_string_buffer(audio_data_length.value) - memmove(audio_data, audio_data_ptr, audio_data_length.value) - - return AudioData(audio_data, - audio_data_length.value, - timestamp_from_wmf(timestamp.value), - audio_data_length.value / self.audio_format.sample_rate, - []) - - return None - - def get_next_video_frame(self, skip_empty_frame=True): - video_data_length = DWORD() - flags = DWORD() - timestamp = ctypes.c_longlong() - - if self._current_video_sample: - self._current_video_buffer.Release() - self._current_video_sample.Release() - - self._current_video_sample = IMFSample() - self._current_video_buffer = IMFMediaBuffer() - - while True: - self._source_reader.ReadSample(self._video_stream_index, 0, None, ctypes.byref(flags), - ctypes.byref(timestamp), ctypes.byref(self._current_video_sample)) - - if flags.value & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED: - assert _debug('WMFVideoDecoder: Data is no longer valid.') - - # Get Major media type (Audio, Video, etc) - new = IMFMediaType() - self._source_reader.GetCurrentMediaType(self._video_stream_index, ctypes.byref(new)) - - # Sometimes this happens once. I think this only - # changes if the stride is added/changed before playback? 
- stride = ctypes.c_uint32() - new.GetUINT32(MF_MT_DEFAULT_STRIDE, ctypes.byref(stride)) - - self._stride = stride.value - - if flags.value & MF_SOURCE_READERF_ENDOFSTREAM: - self._timestamp = None - assert _debug('WMFVideoDecoder: End of data from stream source.') - break - - if not self._current_video_sample: - assert _debug('WMFVideoDecoder: No sample.') - continue - - self._current_video_buffer = IMFMediaBuffer() - - # Convert to single buffer as a sample could potentially have multiple buffers. - self._current_video_sample.ConvertToContiguousBuffer(ctypes.byref(self._current_video_buffer)) - - video_data = POINTER(BYTE)() - - self._current_video_buffer.Lock(ctypes.byref(video_data), None, ctypes.byref(video_data_length)) - - width = self.video_format.width - height = self.video_format.height - - # buffer = ctypes.create_string_buffer(size) - self._timestamp = timestamp_from_wmf(timestamp.value) - - self._current_video_buffer.Unlock() - - # This is made with the assumption that the video frame will be blitted into the player texture immediately - # after, and then cleared next frame attempt. - return image.ImageData(width, height, 'BGRA', video_data, self._stride) - - return None - - def get_next_video_timestamp(self): - return self._timestamp - - def seek(self, timestamp): - timestamp = min(timestamp, self._duration) if self._duration else timestamp - - prop = PROPVARIANT() - prop.vt = VT_I8 - prop.llVal = timestamp_to_wmf(timestamp) - - pos_com = com.GUID(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) - try: - self._source_reader.SetCurrentPosition(pos_com, prop) - except OSError as err: - warnings.warn(str(err)) - - ole32.PropVariantClear(ctypes.byref(prop)) - - @staticmethod - def _get_attribute_size(attributes, guidKey): - """ Convert int64 attributes to int32""" # HI32/LOW32 - - size = ctypes.c_uint64() - attributes.GetUINT64(guidKey, size) - lParam = size.value - - x = ctypes.c_int32(lParam).value - y = ctypes.c_int32(lParam >> 32).value - return x, y - - def set_config_attributes(self): - """ Here we set user specified attributes, by default we try to set low latency mode. (Win7+)""" - if self.low_latency or self.decode_video: - self._attributes = IMFAttributes() - - MFCreateAttributes(ctypes.byref(self._attributes), 3) - - if self.low_latency and WINDOWS_7_OR_GREATER: - self._attributes.SetUINT32(ctypes.byref(MF_LOW_LATENCY), 1) - - assert _debug('WMFAudioDecoder: Setting configuration attributes.') - - # If it's a video we need to enable the streams to be accessed. 
- if self.decode_video: - self._attributes.SetUINT32(ctypes.byref(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS), 1) - self._attributes.SetUINT32(ctypes.byref(MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING), 1) - - assert _debug('WMFVideoDecoder: Setting configuration attributes.') - - def __del__(self): - if self._stream_obj: - self._stream_obj.Release() - - if self._imf_bytestream: - self._imf_bytestream.Release() - - if self._current_audio_sample: - self._current_audio_buffer.Release() - self._current_audio_sample.Release() - - if self._current_video_sample: - self._current_video_buffer.Release() - self._current_video_sample.Release() - - -######################################### -# Decoder class: -######################################### - -class WMFDecoder(MediaDecoder): - def __init__(self): - self.MFShutdown = None - - try: - MFStartup(MF_VERSION, 0) - except OSError as err: - raise ImportError('WMF could not startup:', err.strerror) - - self.extensions = self._build_decoder_extensions() - - self.MFShutdown = MFShutdown - - assert _debug('Windows Media Foundation: Initialized.') - - @staticmethod - def _build_decoder_extensions(): - """Extension support varies depending on OS version.""" - extensions = [] - if WINDOWS_VISTA_OR_GREATER: - extensions.extend(['.asf', '.wma', '.wmv', - '.mp3', - '.sami', '.smi', - ]) - - if WINDOWS_7_OR_GREATER: - extensions.extend(['.3g2', '.3gp', '.3gp2', '.3gp', - '.aac', '.adts', - '.avi', - '.m4a', '.m4v', '.mov', '.mp4', - # '.wav' # Can do wav, but we have a WAVE decoder. - ]) - - if WINDOWS_10_ANNIVERSARY_UPDATE_OR_GREATER: - extensions.extend(['.flac']) - - return extensions - - def get_file_extensions(self): - return self.extensions - - def decode(self, filename, file, streaming=True): - if streaming: - return WMFSource(filename, file) - else: - return StaticSource(WMFSource(filename, file)) - - def __del__(self): - if self.MFShutdown is not None: - self.MFShutdown() - - -def get_decoders(): - return [WMFDecoder()] - - -def get_encoders(): - return [] diff --git a/spaces/akdeniz27/contract-understanding-atticus-dataset-demo/predict.py b/spaces/akdeniz27/contract-understanding-atticus-dataset-demo/predict.py deleted file mode 100644 index 386d3c6ee35fd34c841f51d45e3acecab24e6b24..0000000000000000000000000000000000000000 --- a/spaces/akdeniz27/contract-understanding-atticus-dataset-demo/predict.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -import time -from torch.utils.data import DataLoader, RandomSampler, SequentialSampler - -from transformers import ( - AutoConfig, - AutoModelForQuestionAnswering, - AutoTokenizer, - squad_convert_examples_to_features -) - -from transformers.data.processors.squad import SquadResult, SquadV2Processor, SquadExample -from transformers.data.metrics.squad_metrics import compute_predictions_logits - -def run_prediction(question_texts, context_text, model_path): - ### Setting hyperparameters - max_seq_length = 512 - doc_stride = 256 - n_best_size = 1 - max_query_length = 64 - max_answer_length = 512 - do_lower_case = False - null_score_diff_threshold = 0.0 - - # model_name_or_path = "../cuad-models/roberta-base/" - - def to_list(tensor): - return tensor.detach().cpu().tolist() - - config_class, model_class, tokenizer_class = ( - AutoConfig, AutoModelForQuestionAnswering, AutoTokenizer) - config = config_class.from_pretrained(model_path) - tokenizer = tokenizer_class.from_pretrained( - model_path, do_lower_case=True, use_fast=False) - model = model_class.from_pretrained(model_path, config=config) - - device = 
torch.device("cuda" if torch.cuda.is_available() else "cpu") - model.to(device) - - processor = SquadV2Processor() - examples = [] - - for i, question_text in enumerate(question_texts): - example = SquadExample( - qas_id=str(i), - question_text=question_text, - context_text=context_text, - answer_text=None, - start_position_character=None, - title="Predict", - answers=None, - ) - - examples.append(example) - - features, dataset = squad_convert_examples_to_features( - examples=examples, - tokenizer=tokenizer, - max_seq_length=max_seq_length, - doc_stride=doc_stride, - max_query_length=max_query_length, - is_training=False, - return_dataset="pt", - threads=1, - ) - - eval_sampler = SequentialSampler(dataset) - eval_dataloader = DataLoader(dataset, sampler=eval_sampler, batch_size=10) - - all_results = [] - - for batch in eval_dataloader: - model.eval() - batch = tuple(t.to(device) for t in batch) - - with torch.no_grad(): - inputs = { - "input_ids": batch[0], - "attention_mask": batch[1], - "token_type_ids": batch[2], - } - - example_indices = batch[3] - - outputs = model(**inputs) - - for i, example_index in enumerate(example_indices): - eval_feature = features[example_index.item()] - unique_id = int(eval_feature.unique_id) - - output = [to_list(output[i]) for output in outputs.to_tuple()] - - start_logits, end_logits = output - result = SquadResult(unique_id, start_logits, end_logits) - all_results.append(result) - - final_predictions = compute_predictions_logits( - all_examples=examples, - all_features=features, - all_results=all_results, - n_best_size=n_best_size, - max_answer_length=max_answer_length, - do_lower_case=do_lower_case, - output_prediction_file=None, - output_nbest_file=None, - output_null_log_odds_file=None, - verbose_logging=False, - version_2_with_negative=True, - null_score_diff_threshold=null_score_diff_threshold, - tokenizer=tokenizer - ) - - return final_predictions \ No newline at end of file diff --git a/spaces/akhaliq/GPEN/segmentation2face.py b/spaces/akhaliq/GPEN/segmentation2face.py deleted file mode 100644 index 68668fd6ab991460e484ab42fdeb2ff99ec9c05c..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/segmentation2face.py +++ /dev/null @@ -1,47 +0,0 @@ -''' -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -''' -import os -import cv2 -import glob -import time -import numpy as np -from PIL import Image -import __init_paths -from face_model.face_gan import FaceGAN - -class Segmentation2Face(object): - def __init__(self, base_dir='./', size=1024, model=None, channel_multiplier=2, narrow=1, is_norm=True): - self.facegan = FaceGAN(base_dir, size, model, channel_multiplier, narrow, is_norm) - - # make sure the face image is well aligned. 
Please refer to face_enhancement.py - def process(self, segf): - # from segmentations to faces - out = self.facegan.process(segf) - - return out - - -if __name__=='__main__': - model = {'name':'GPEN-Seg2face-512', 'size':512} - - indir = 'examples/segs' - outdir = 'examples/outs-seg2face' - os.makedirs(outdir, exist_ok=True) - - seg2face = Segmentation2Face(size=model['size'], model=model['name'], channel_multiplier=2, is_norm=False) - - files = sorted(glob.glob(os.path.join(indir, '*.*g'))) - for n, file in enumerate(files[:]): - filename = os.path.basename(file) - - segf = cv2.imread(file, cv2.IMREAD_COLOR) - - realf = seg2face.process(segf) - - segf = cv2.resize(segf, realf.shape[:2]) - cv2.imwrite(os.path.join(outdir, '.'.join(filename.split('.')[:-1])+'.jpg'), np.hstack((segf, realf))) - - if n%10==0: print(n, file) - diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/README.md b/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/README.md deleted file mode 100644 index b16159add8b0c1ce4ca42a47f832134c5cce7d69..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# InfiniBatch - -To view the documentation, please clone the repository and go to docs/infinibatch/index.html - -To run unit tests, run the following command. -``` -python -m unittest discover -s test -``` - -When working on the documentation, install pdoc: -``` -pip install pdoc3 -``` -You can then start a local http server that dynamically updates the documentation: -``` -pdoc --template-dir docs --http : infinibatch -``` - -We currently haven't set up the CI to automatically generate the documentation. -Before you merge anything into master, please delete the existing documentation in docs/infinibatch and run -``` -pdoc -o docs --template-dir docs --html infinibatch -``` \ No newline at end of file diff --git a/spaces/akhaliq/papercutcraft-v1/README.md b/spaces/akhaliq/papercutcraft-v1/README.md deleted file mode 100644 index 8c3b70cd6f5038ed2cd5c1f8854d825520787759..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/papercutcraft-v1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Papercutcraft V1 -emoji: 🏃 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/filter.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/filter.py deleted file mode 100644 index 85b4829878f54ce932303a26aec7278c962d53b3..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/filter.py +++ /dev/null @@ -1,71 +0,0 @@ -""" - pygments.filter - ~~~~~~~~~~~~~~~ - - Module that implements the default filter. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - - -def apply_filters(stream, filters, lexer=None): - """ - Use this method to apply an iterable of filters to - a stream. If lexer is given it's forwarded to the - filter, otherwise the filter receives `None`. 
- """ - def _apply(filter_, stream): - yield from filter_.filter(lexer, stream) - for filter_ in filters: - stream = _apply(filter_, stream) - return stream - - -def simplefilter(f): - """ - Decorator that converts a function into a filter:: - - @simplefilter - def lowercase(self, lexer, stream, options): - for ttype, value in stream: - yield ttype, value.lower() - """ - return type(f.__name__, (FunctionFilter,), { - '__module__': getattr(f, '__module__'), - '__doc__': f.__doc__, - 'function': f, - }) - - -class Filter: - """ - Default filter. Subclass this class or use the `simplefilter` - decorator to create own filters. - """ - - def __init__(self, **options): - self.options = options - - def filter(self, lexer, stream): - raise NotImplementedError() - - -class FunctionFilter(Filter): - """ - Abstract class used by `simplefilter` to create simple - function filters on the fly. The `simplefilter` decorator - automatically creates subclasses of this class for - functions passed to it. - """ - function = None - - def __init__(self, **options): - if not hasattr(self, 'function'): - raise TypeError('%r used without bound function' % - self.__class__.__name__) - Filter.__init__(self, **options) - - def filter(self, lexer, stream): - # pylint: disable=not-callable - yield from self.function(lexer, stream, self.options) diff --git a/spaces/ali-ghamdan/deoldify/fastai/gen_doc/docstrings.py b/spaces/ali-ghamdan/deoldify/fastai/gen_doc/docstrings.py deleted file mode 100644 index dad705823c3570f0968b12ce25931c227f0735f9..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/gen_doc/docstrings.py +++ /dev/null @@ -1,142 +0,0 @@ -# https://github.com/openstack/rally/blob/master/rally/common/plugin/info.py -# Copyright 2015: Mirantis Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -import re -import sys - -__all__ = ['parse_docstring'] - - -FIELDS = 'param|val' # supported fields -PARAM_OR_RETURN_REGEX = re.compile(f":(?:{FIELDS}|return)") -RETURN_REGEX = re.compile(":return: (?P.*)", re.S) -NEW_REGEX = re.compile(f":(?P{FIELDS}) (?P[\*\w]+): (?P.*?)" - f"(?:(?=:(?:{FIELDS}|return|raises))|\Z)", re.S) - -def trim(docstring): - """trim function from PEP-257""" - if not docstring: - return "" - # Convert tabs to spaces (following the normal Python rules) - # and split into a list of lines: - lines = docstring.expandtabs().splitlines() - # Determine minimum indentation (first line doesn't count): - indent = sys.maxsize - for line in lines[1:]: - stripped = line.lstrip() - if stripped: - indent = min(indent, len(line) - len(stripped)) - # Remove indentation (first line is special): - trimmed = [lines[0].strip()] - if indent < sys.maxsize: - for line in lines[1:]: - trimmed.append(line[indent:].rstrip()) - # Strip off trailing and leading blank lines: - while trimmed and not trimmed[-1]: - trimmed.pop() - while trimmed and not trimmed[0]: - trimmed.pop(0) - - # Current code/unittests expects a line return at - # end of multiline docstrings - # workaround expected behavior from unittests - if "\n" in docstring: - trimmed.append("") - - # Return a single string: - return "\n".join(trimmed) - - -def reindent(string): - return "\n".join(l.strip() for l in string.strip().split("\n")) - - -def parse_docstring(docstring): - """Parse the docstring into its components. - - :return: a dictionary of form - { - "short_description": ..., - "long_description": ..., - "params": [{"name": ..., "doc": ...}, ...], - "vals": [{"name": ..., "doc": ...}, ...], - "return": ... - } - """ - - short_description = long_description = return_str = "" - args = [] - - if docstring: - docstring = trim(docstring.lstrip("\n")) - - lines = docstring.split("\n", 1) - short_description = lines[0] - - if len(lines) > 1: - long_description = lines[1].strip() - - params_return_desc = None - - match = PARAM_OR_RETURN_REGEX.search(long_description) - if match: - long_desc_end = match.start() - params_return_desc = long_description[long_desc_end:].strip() - long_description = long_description[:long_desc_end].rstrip() - - if params_return_desc: - args = [ - {"name": name, "doc": trim(doc), "field": field} - for field, name, doc in NEW_REGEX.findall(params_return_desc) - ] - match = RETURN_REGEX.search(params_return_desc) - if match: - return_str = reindent(match.group("doc")) - comments = {p['name']: p['doc'] for p in args} - return { - "short_description": short_description, - "long_description": long_description, - "args": args, - "comments": comments, - "return": return_str - } - - -class InfoMixin(object): - - @classmethod - def _get_doc(cls): - """Return documentary of class - - By default it returns docstring of class, but it can be overridden - for example for cases like merging own docstring with parent - """ - return cls.__doc__ - - @classmethod - def get_info(cls): - doc = parse_docstring(cls._get_doc()) - - return { - "name": cls.get_name(), - "platform": cls.get_platform(), - "module": cls.__module__, - "title": doc["short_description"], - "description": doc["long_description"], - "parameters": doc["params"], - "schema": getattr(cls, "CONFIG_SCHEMA", None), - "return": doc["return"] - } diff --git a/spaces/alphunt/diffdock-alphunt-demo/confidence/confidence_train.py b/spaces/alphunt/diffdock-alphunt-demo/confidence/confidence_train.py deleted file mode 100644 index 
8130ee85f2607635a8a0db71f226896f4b4a690b..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/confidence/confidence_train.py +++ /dev/null @@ -1,319 +0,0 @@ -import gc -import math -import os - -import shutil - -from argparse import Namespace, ArgumentParser, FileType -import torch.nn.functional as F - -import wandb -import torch -from sklearn.metrics import roc_auc_score -from torch_geometric.loader import DataListLoader, DataLoader -from tqdm import tqdm - -from confidence.dataset import ConfidenceDataset -from utils.training import AverageMeter - -torch.multiprocessing.set_sharing_strategy('file_system') - -import yaml -from utils.utils import save_yaml_file, get_optimizer_and_scheduler, get_model - - -parser = ArgumentParser() -parser.add_argument('--config', type=FileType(mode='r'), default=None) -parser.add_argument('--original_model_dir', type=str, default='workdir', help='Path to folder with trained model and hyperparameters') -parser.add_argument('--restart_dir', type=str, default=None, help='') -parser.add_argument('--use_original_model_cache', action='store_true', default=False, help='If this is true, the same dataset as in the original model will be used. Otherwise, the dataset parameters are used.') -parser.add_argument('--data_dir', type=str, default='data/PDBBind_processed/', help='Folder containing original structures') -parser.add_argument('--ckpt', type=str, default='best_model.pt', help='Checkpoint to use inside the folder') -parser.add_argument('--model_save_frequency', type=int, default=0, help='Frequency with which to save the last model. If 0, then only the early stopping criterion best model is saved and overwritten.') -parser.add_argument('--best_model_save_frequency', type=int, default=0, help='Frequency with which to save the best model. If 0, then only the early stopping criterion best model is saved and overwritten.') -parser.add_argument('--run_name', type=str, default='test_confidence', help='') -parser.add_argument('--project', type=str, default='diffdock_confidence', help='') -parser.add_argument('--split_train', type=str, default='data/splits/timesplit_no_lig_overlap_train', help='Path of file defining the split') -parser.add_argument('--split_val', type=str, default='data/splits/timesplit_no_lig_overlap_val', help='Path of file defining the split') -parser.add_argument('--split_test', type=str, default='data/splits/timesplit_test', help='Path of file defining the split') - -# Inference parameters for creating the positions and rmsds that the confidence predictor will be trained on. -parser.add_argument('--cache_path', type=str, default='data/cacheNew', help='Folder from where to load/restore cached dataset') -parser.add_argument('--cache_ids_to_combine', nargs='+', type=str, default=None, help='RMSD value below which a prediction is considered a postitive. 
This can also be multiple cutoffs.')
-parser.add_argument('--cache_creation_id', type=int, default=None, help='number of times that inference is run on the full dataset before concatenating it and coming up with the full confidence dataset')
-parser.add_argument('--wandb', action='store_true', default=False, help='')
-parser.add_argument('--inference_steps', type=int, default=2, help='Number of denoising steps')
-parser.add_argument('--samples_per_complex', type=int, default=3, help='')
-parser.add_argument('--balance', action='store_true', default=False, help='If this is true then we do not force the samples seen during training to be the same number of negatives as positives')
-parser.add_argument('--rmsd_prediction', action='store_true', default=False, help='')
-parser.add_argument('--rmsd_classification_cutoff', nargs='+', type=float, default=2, help='RMSD value below which a prediction is considered a positive. This can also be multiple cutoffs.')
-
-parser.add_argument('--log_dir', type=str, default='workdir', help='')
-parser.add_argument('--main_metric', type=str, default='accuracy', help='Metric to track for early stopping. Mostly [loss, accuracy, ROC AUC]')
-parser.add_argument('--main_metric_goal', type=str, default='max', help='Can be [min, max]')
-parser.add_argument('--transfer_weights', action='store_true', default=False, help='')
-parser.add_argument('--batch_size', type=int, default=5, help='')
-parser.add_argument('--lr', type=float, default=1e-3, help='')
-parser.add_argument('--w_decay', type=float, default=0.0, help='')
-parser.add_argument('--scheduler', type=str, default='plateau', help='')
-parser.add_argument('--scheduler_patience', type=int, default=20, help='')
-parser.add_argument('--n_epochs', type=int, default=5, help='')
-
-# Dataset
-parser.add_argument('--limit_complexes', type=int, default=0, help='')
-parser.add_argument('--all_atoms', action='store_true', default=True, help='')
-parser.add_argument('--multiplicity', type=int, default=1, help='')
-parser.add_argument('--chain_cutoff', type=float, default=10, help='')
-parser.add_argument('--receptor_radius', type=float, default=30, help='')
-parser.add_argument('--c_alpha_max_neighbors', type=int, default=10, help='')
-parser.add_argument('--atom_radius', type=float, default=5, help='')
-parser.add_argument('--atom_max_neighbors', type=int, default=8, help='')
-parser.add_argument('--matching_popsize', type=int, default=20, help='')
-parser.add_argument('--matching_maxiter', type=int, default=20, help='')
-parser.add_argument('--max_lig_size', type=int, default=None, help='Maximum number of heavy atoms')
-parser.add_argument('--remove_hs', action='store_true', default=False, help='remove Hs')
-parser.add_argument('--num_conformers', type=int, default=1, help='')
-parser.add_argument('--esm_embeddings_path', type=str, default=None, help='If this is set then the LM embeddings at that path will be used for the receptor features')
-parser.add_argument('--no_torsion', action='store_true', default=False, help='')
-
-# Model
-parser.add_argument('--num_conv_layers', type=int, default=2, help='Number of interaction layers')
-parser.add_argument('--max_radius', type=float, default=5.0, help='Radius cutoff for geometric graph')
-parser.add_argument('--scale_by_sigma', action='store_true', default=True, help='Whether to normalise the score')
-parser.add_argument('--ns', type=int, default=16, help='Number of hidden features per node of order 0')
-parser.add_argument('--nv', type=int, default=4, help='Number of hidden 
features per node of order >0') -parser.add_argument('--distance_embed_dim', type=int, default=32, help='') -parser.add_argument('--cross_distance_embed_dim', type=int, default=32, help='') -parser.add_argument('--no_batch_norm', action='store_true', default=False, help='If set, it removes the batch norm') -parser.add_argument('--use_second_order_repr', action='store_true', default=False, help='Whether to use only up to first order representations or also second') -parser.add_argument('--cross_max_distance', type=float, default=80, help='') -parser.add_argument('--dynamic_max_cross', action='store_true', default=False, help='') -parser.add_argument('--dropout', type=float, default=0.0, help='MLP dropout') -parser.add_argument('--embedding_type', type=str, default="sinusoidal", help='') -parser.add_argument('--sigma_embed_dim', type=int, default=32, help='') -parser.add_argument('--embedding_scale', type=int, default=10000, help='') -parser.add_argument('--confidence_no_batchnorm', action='store_true', default=False, help='') -parser.add_argument('--confidence_dropout', type=float, default=0.0, help='MLP dropout in confidence readout') - -args = parser.parse_args() -if args.config: - config_dict = yaml.load(args.config, Loader=yaml.FullLoader) - arg_dict = args.__dict__ - for key, value in config_dict.items(): - if isinstance(value, list): - for v in value: - arg_dict[key].append(v) - else: - arg_dict[key] = value - args.config = args.config.name -assert(args.main_metric_goal == 'max' or args.main_metric_goal == 'min') - -def train_epoch(model, loader, optimizer, rmsd_prediction): - model.train() - meter = AverageMeter(['confidence_loss']) - - for data in tqdm(loader, total=len(loader)): - if device.type == 'cuda' and len(data) % torch.cuda.device_count() == 1 or device.type == 'cpu' and data.num_graphs == 1: - print("Skipping batch of size 1 since otherwise batchnorm would not work.") - optimizer.zero_grad() - try: - pred = model(data) - if rmsd_prediction: - labels = torch.cat([graph.rmsd for graph in data]).to(device) if isinstance(data, list) else data.rmsd - confidence_loss = F.mse_loss(pred, labels) - else: - if isinstance(args.rmsd_classification_cutoff, list): - labels = torch.cat([graph.y_binned for graph in data]).to(device) if isinstance(data, list) else data.y_binned - confidence_loss = F.cross_entropy(pred, labels) - else: - labels = torch.cat([graph.y for graph in data]).to(device) if isinstance(data, list) else data.y - confidence_loss = F.binary_cross_entropy_with_logits(pred, labels) - confidence_loss.backward() - optimizer.step() - meter.add([confidence_loss.cpu().detach()]) - except RuntimeError as e: - if 'out of memory' in str(e): - print('| WARNING: ran out of memory, skipping batch') - for p in model.parameters(): - if p.grad is not None: - del p.grad # free some memory - torch.cuda.empty_cache() - gc.collect() - continue - else: - raise e - - return meter.summary() - -def test_epoch(model, loader, rmsd_prediction): - model.eval() - meter = AverageMeter(['loss'], unpooled_metrics=True) if rmsd_prediction else AverageMeter(['confidence_loss', 'accuracy', 'ROC AUC'], unpooled_metrics=True) - all_labels = [] - all_affinities = [] - for data in tqdm(loader, total=len(loader)): - try: - with torch.no_grad(): - pred = model(data) - affinity_loss = torch.tensor(0.0, dtype=torch.float, device=pred[0].device) - accuracy = torch.tensor(0.0, dtype=torch.float, device=pred[0].device) - if rmsd_prediction: - labels = torch.cat([graph.rmsd for graph in data]).to(device) if 
isinstance(data, list) else data.rmsd - confidence_loss = F.mse_loss(pred, labels) - meter.add([confidence_loss.cpu().detach()]) - else: - if isinstance(args.rmsd_classification_cutoff, list): - labels = torch.cat([graph.y_binned for graph in data]).to(device) if isinstance(data,list) else data.y_binned - confidence_loss = F.cross_entropy(pred, labels) - else: - labels = torch.cat([graph.y for graph in data]).to(device) if isinstance(data, list) else data.y - confidence_loss = F.binary_cross_entropy_with_logits(pred, labels) - accuracy = torch.mean((labels == (pred > 0).float()).float()) - try: - roc_auc = roc_auc_score(labels.detach().cpu().numpy(), pred.detach().cpu().numpy()) - except ValueError as e: - if 'Only one class present in y_true. ROC AUC score is not defined in that case.' in str(e): - roc_auc = 0 - else: - raise e - meter.add([confidence_loss.cpu().detach(), accuracy.cpu().detach(), torch.tensor(roc_auc)]) - all_labels.append(labels) - - except RuntimeError as e: - if 'out of memory' in str(e): - print('| WARNING: ran out of memory, skipping batch') - for p in model.parameters(): - if p.grad is not None: - del p.grad # free some memory - torch.cuda.empty_cache() - continue - else: - raise e - - all_labels = torch.cat(all_labels) - - if rmsd_prediction: - baseline_metric = ((all_labels - all_labels.mean()).abs()).mean() - else: - baseline_metric = all_labels.sum() / len(all_labels) - results = meter.summary() - results.update({'baseline_metric': baseline_metric}) - return meter.summary(), baseline_metric - - -def train(args, model, optimizer, scheduler, train_loader, val_loader, run_dir): - best_val_metric = math.inf if args.main_metric_goal == 'min' else 0 - best_epoch = 0 - - print("Starting training...") - for epoch in range(args.n_epochs): - logs = {} - train_metrics = train_epoch(model, train_loader, optimizer, args.rmsd_prediction) - print("Epoch {}: Training loss {:.4f}".format(epoch, train_metrics['confidence_loss'])) - - val_metrics, baseline_metric = test_epoch(model, val_loader, args.rmsd_prediction) - if args.rmsd_prediction: - print("Epoch {}: Validation loss {:.4f}".format(epoch, val_metrics['confidence_loss'])) - else: - print("Epoch {}: Validation loss {:.4f} accuracy {:.4f}".format(epoch, val_metrics['confidence_loss'], val_metrics['accuracy'])) - - if args.wandb: - logs.update({'valinf_' + k: v for k, v in val_metrics.items()}, step=epoch + 1) - logs.update({'train_' + k: v for k, v in train_metrics.items()}, step=epoch + 1) - logs.update({'mean_rmsd' if args.rmsd_prediction else 'fraction_positives': baseline_metric, - 'current_lr': optimizer.param_groups[0]['lr']}) - wandb.log(logs, step=epoch + 1) - - if scheduler: - scheduler.step(val_metrics[args.main_metric]) - - state_dict = model.module.state_dict() if device.type == 'cuda' else model.state_dict() - - if args.main_metric_goal == 'min' and val_metrics[args.main_metric] < best_val_metric or \ - args.main_metric_goal == 'max' and val_metrics[args.main_metric] > best_val_metric: - best_val_metric = val_metrics[args.main_metric] - best_epoch = epoch - torch.save(state_dict, os.path.join(run_dir, 'best_model.pt')) - if args.model_save_frequency > 0 and (epoch + 1) % args.model_save_frequency == 0: - torch.save(state_dict, os.path.join(run_dir, f'model_epoch{epoch+1}.pt')) - if args.best_model_save_frequency > 0 and (epoch + 1) % args.best_model_save_frequency == 0: - shutil.copyfile(os.path.join(run_dir, 'best_model.pt'), os.path.join(run_dir, f'best_model_epoch{epoch+1}.pt')) - - torch.save({ - 'epoch': 
epoch, - 'model': state_dict, - 'optimizer': optimizer.state_dict(), - }, os.path.join(run_dir, 'last_model.pt')) - - print("Best Validation accuracy {} on Epoch {}".format(best_val_metric, best_epoch)) - - -def construct_loader_confidence(args, device): - common_args = {'cache_path': args.cache_path, 'original_model_dir': args.original_model_dir, 'device': device, - 'inference_steps': args.inference_steps, 'samples_per_complex': args.samples_per_complex, - 'limit_complexes': args.limit_complexes, 'all_atoms': args.all_atoms, 'balance': args.balance, 'rmsd_classification_cutoff': args.rmsd_classification_cutoff, - 'use_original_model_cache': args.use_original_model_cache, 'cache_creation_id': args.cache_creation_id, "cache_ids_to_combine": args.cache_ids_to_combine} - loader_class = DataListLoader if torch.cuda.is_available() else DataLoader - - exception_flag = False - try: - train_dataset = ConfidenceDataset(split="train", args=args, **common_args) - train_loader = loader_class(dataset=train_dataset, batch_size=args.batch_size, shuffle=True) - except Exception as e: - if 'The generated ligand positions with cache_id do not exist:' in str(e): - print("HAPPENING | Encountered the following exception when loading the confidence train dataset:") - print(str(e)) - print("HAPPENING | We are still continuing because we want to try to generate the validation dataset if it has not been created yet:") - exception_flag = True - else: raise e - - val_dataset = ConfidenceDataset(split="val", args=args, **common_args) - val_loader = loader_class(dataset=val_dataset, batch_size=args.batch_size, shuffle=True) - - if exception_flag: raise Exception('We encountered the exception during train dataset loading: ', e) - return train_loader, val_loader - - -if __name__ == '__main__': - device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - with open(f'{args.original_model_dir}/model_parameters.yml') as f: - score_model_args = Namespace(**yaml.full_load(f)) - - # construct loader - train_loader, val_loader = construct_loader_confidence(args, device) - model = get_model(score_model_args if args.transfer_weights else args, device, t_to_sigma=None, confidence_mode=True) - optimizer, scheduler = get_optimizer_and_scheduler(args, model, scheduler_mode=args.main_metric_goal) - - if args.transfer_weights: - print("HAPPENING | Transferring weights from original_model_dir to the new model after using original_model_dir's arguments to construct the new model.") - checkpoint = torch.load(os.path.join(args.original_model_dir,args.ckpt), map_location=device) - model_state_dict = model.state_dict() - transfer_weights_dict = {k: v for k, v in checkpoint.items() if k in list(model_state_dict.keys())} - model_state_dict.update(transfer_weights_dict) # update the layers with the pretrained weights - model.load_state_dict(model_state_dict) - - elif args.restart_dir: - dict = torch.load(f'{args.restart_dir}/last_model.pt', map_location=torch.device('cpu')) - model.module.load_state_dict(dict['model'], strict=True) - optimizer.load_state_dict(dict['optimizer']) - print("Restarting from epoch", dict['epoch']) - - numel = sum([p.numel() for p in model.parameters()]) - print('Model with', numel, 'parameters') - - if args.wandb: - wandb.init( - entity='entity', - settings=wandb.Settings(start_method="fork"), - project=args.project, - name=args.run_name, - config=args - ) - wandb.log({'numel': numel}) - - # record parameters - run_dir = os.path.join(args.log_dir, args.run_name) - yaml_file_name = os.path.join(run_dir, 
'model_parameters.yml') - save_yaml_file(yaml_file_name, args.__dict__) - args.device = device - - train(args, model, optimizer, scheduler, train_loader, val_loader, run_dir) diff --git a/spaces/anonymous-pits/pits/yin.py b/spaces/anonymous-pits/pits/yin.py deleted file mode 100644 index 0511d1372a20ef5f88f20b8943babfd9d7b3ebc0..0000000000000000000000000000000000000000 --- a/spaces/anonymous-pits/pits/yin.py +++ /dev/null @@ -1,166 +0,0 @@ -# remove np from https://github.com/dhchoi99/NANSY/blob/master/models/yin.py -# adapted from https://github.com/patriceguyot/Yin -# https://github.com/NVIDIA/mellotron/blob/master/yin.py - -import torch -import torch.nn.functional as F -from math import log2, ceil - - -def differenceFunction(x, N, tau_max): - """ - Compute difference function of data x. This corresponds to equation (6) in [1] - This solution is implemented directly with torch rfft. - - - :param x: audio data (Tensor) - :param N: length of data - :param tau_max: integration window size - :return: difference function - :rtype: list - """ - - #x = np.array(x, np.float64) #[B,T] - assert x.dim() == 2 - b, w = x.shape - if w < tau_max: - x = F.pad(x, (tau_max - w - (tau_max - w) // 2, (tau_max - w) // 2), - 'constant', - mode='reflect') - w = tau_max - #x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum())) - x_cumsum = torch.cat( - [torch.zeros([b, 1], device=x.device), (x * x).cumsum(dim=1)], dim=1) - size = w + tau_max - p2 = (size // 32).bit_length() - #p2 = ceil(log2(size+1 // 32)) - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(n * 2**p2 for n in nice_numbers if n * 2**p2 >= size) - fc = torch.fft.rfft(x, size_pad) #[B,F] - conv = torch.fft.irfft(fc * fc.conj())[:, :tau_max] - return x_cumsum[:, w:w - tau_max: - -1] + x_cumsum[:, w] - x_cumsum[:, :tau_max] - 2 * conv - - -def differenceFunction_np(x, N, tau_max): - """ - Compute difference function of data x. This corresponds to equation (6) in [1] - This solution is implemented directly with Numpy fft. - - - :param x: audio data - :param N: length of data - :param tau_max: integration window size - :return: difference function - :rtype: list - """ - - x = np.array(x, np.float64) - w = x.size - tau_max = min(tau_max, w) - x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum())) - size = w + tau_max - p2 = (size // 32).bit_length() - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(x * 2**p2 for x in nice_numbers if x * 2**p2 >= size) - fc = np.fft.rfft(x, size_pad) - conv = np.fft.irfft(fc * fc.conjugate())[:tau_max] - return x_cumsum[w:w - - tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - 2 * conv - - -def cumulativeMeanNormalizedDifferenceFunction(df, N, eps=1e-8): - """ - Compute cumulative mean normalized difference function (CMND). 
- - This corresponds to equation (8) in [1] - - :param df: Difference function - :param N: length of data - :return: cumulative mean normalized difference function - :rtype: list - """ - #np.seterr(divide='ignore', invalid='ignore') - # scipy method, assert df>0 for all element - # cmndf = df[1:] * np.asarray(list(range(1, N))) / (np.cumsum(df[1:]).astype(float) + eps) - B, _ = df.shape - cmndf = df[:, - 1:] * torch.arange(1, N, device=df.device, dtype=df.dtype).view( - 1, -1) / (df[:, 1:].cumsum(dim=-1) + eps) - return torch.cat( - [torch.ones([B, 1], device=df.device, dtype=df.dtype), cmndf], dim=-1) - - -def differenceFunctionTorch(xs: torch.Tensor, N, tau_max) -> torch.Tensor: - """pytorch backend batch-wise differenceFunction - has 1e-4 level error with input shape of (32, 22050*1.5) - Args: - xs: - N: - tau_max: - - Returns: - - """ - xs = xs.double() - w = xs.shape[-1] - tau_max = min(tau_max, w) - zeros = torch.zeros((xs.shape[0], 1)) - x_cumsum = torch.cat((torch.zeros((xs.shape[0], 1), device=xs.device), - (xs * xs).cumsum(dim=-1, dtype=torch.double)), - dim=-1) # B x w - size = w + tau_max - p2 = (size // 32).bit_length() - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(x * 2**p2 for x in nice_numbers if x * 2**p2 >= size) - - fcs = torch.fft.rfft(xs, n=size_pad, dim=-1) - convs = torch.fft.irfft(fcs * fcs.conj())[:, :tau_max] - y1 = torch.flip(x_cumsum[:, w - tau_max + 1:w + 1], dims=[-1]) - y = y1 + x_cumsum[:, w].unsqueeze(-1) - x_cumsum[:, :tau_max] - 2 * convs - return y - - -def cumulativeMeanNormalizedDifferenceFunctionTorch(dfs: torch.Tensor, - N, - eps=1e-8) -> torch.Tensor: - arange = torch.arange(1, N, device=dfs.device, dtype=torch.float64) - cumsum = torch.cumsum(dfs[:, 1:], dim=-1, - dtype=torch.float64).to(dfs.device) - - cmndfs = dfs[:, 1:] * arange / (cumsum + eps) - cmndfs = torch.cat( - (torch.ones(cmndfs.shape[0], 1, device=dfs.device), cmndfs), dim=-1) - return cmndfs - - -if __name__ == '__main__': - wav = torch.randn(32, int(22050 * 1.5)).cuda() - wav_numpy = wav.detach().cpu().numpy() - x = wav_numpy[0] - - w_len = 2048 - w_step = 256 - tau_max = 2048 - W = 2048 - - startFrames = list(range(0, x.shape[-1] - w_len, w_step)) - startFrames = np.asarray(startFrames) - # times = startFrames / sr - frames = [x[..., t:t + W] for t in startFrames] - frames = np.asarray(frames) - frames_torch = torch.from_numpy(frames).cuda() - - cmndfs0 = [] - for idx, frame in enumerate(frames): - df = differenceFunction(frame, frame.shape[-1], tau_max) - cmndf = cumulativeMeanNormalizedDifferenceFunction(df, tau_max) - cmndfs0.append(cmndf) - cmndfs0 = np.asarray(cmndfs0) - - dfs = differenceFunctionTorch(frames_torch, frames_torch.shape[-1], - tau_max) - cmndfs1 = cumulativeMeanNormalizedDifferenceFunctionTorch( - dfs, tau_max).detach().cpu().numpy() - print(cmndfs0.shape, cmndfs1.shape) - print(np.sum(np.abs(cmndfs0 - cmndfs1))) diff --git a/spaces/arborvitae/GalaxiCode.ai/app.py b/spaces/arborvitae/GalaxiCode.ai/app.py deleted file mode 100644 index 978ae0aa31f1be23a6d74cb9a8043103483f8694..0000000000000000000000000000000000000000 --- a/spaces/arborvitae/GalaxiCode.ai/app.py +++ /dev/null @@ -1,271 +0,0 @@ -from typing import Iterator - -import gradio as gr -import torch - -from model import get_input_token_length, run - -DEFAULT_SYSTEM_PROMPT = """\ -You are a software engineer reporting to a senior software engineer. Reply with highest quality, PhD level, detailed, logical, precise, clean answers. 
-""" -MAX_MAX_NEW_TOKENS = 2048 -DEFAULT_MAX_NEW_TOKENS = 1024 -MAX_INPUT_TOKEN_LENGTH = 4000 - -DESCRIPTION = """ -""" - -LICENSE = """ -

        ---- -""" - -if not torch.cuda.is_available(): - DESCRIPTION += '\n

        Running on CPU.

        ' - - -def clear_and_save_textbox(message: str) -> tuple[str, str]: - return '', message - - -def display_input(message: str, - history: list[tuple[str, str]]) -> list[tuple[str, str]]: - history.append((message, '')) - return history - - -def delete_prev_fn( - history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]: - try: - message, _ = history.pop() - except IndexError: - message = '' - return history, message or '' - - -def generate( - message: str, - history_with_input: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int, - temperature: float, - top_p: float, - top_k: int, -) -> Iterator[list[tuple[str, str]]]: - if max_new_tokens > MAX_MAX_NEW_TOKENS: - raise ValueError - - history = history_with_input[:-1] - generator = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k) - try: - first_response = next(generator) - yield history + [(message, first_response)] - except StopIteration: - yield history + [(message, '')] - for response in generator: - yield history + [(message, response)] - - -def process_example(message: str) -> tuple[str, list[tuple[str, str]]]: - generator = generate(message, [], DEFAULT_SYSTEM_PROMPT, 1024, 1, 0.95, 50) - for x in generator: - pass - return '', x - - -def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None: - input_token_length = get_input_token_length(message, chat_history, system_prompt) - if input_token_length > MAX_INPUT_TOKEN_LENGTH: - raise gr.Error(f'The accumulated input is too long ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH}). Clear your chat history and try again.') - - -with gr.Blocks(css='JohnSmith9982/small_and_pretty') as demo: - - gr.Markdown( - """ - # GalaxiCode.ai - Coding Smarter not Harder. - """) - with gr.Group(): - chatbot = gr.Chatbot(label='Chatbot') - with gr.Row(): - textbox = gr.Textbox( - container=False, - show_label=False, - placeholder='Type a message...', - scale=10, - ) - submit_button = gr.Button('Submit', - variant='primary', - scale=1, - min_width=0) - with gr.Row(): - retry_button = gr.Button('🔄 Retry', variant='secondary') - undo_button = gr.Button('↩️ Undo', variant='secondary') - clear_button = gr.Button('🗑️ Clear', variant='secondary') - - saved_input = gr.State() - - with gr.Accordion(label='Advanced options', open=False): - system_prompt = gr.Textbox(label='System prompt', - value=DEFAULT_SYSTEM_PROMPT, - lines=6) - max_new_tokens = gr.Slider( - label='Max new tokens', - minimum=1, - maximum=MAX_MAX_NEW_TOKENS, - step=1, - value=DEFAULT_MAX_NEW_TOKENS, - ) - temperature = gr.Slider( - label='Temperature', - minimum=0.1, - maximum=4.0, - step=0.1, - value=1.0, - ) - top_p = gr.Slider( - label='Top-p (nucleus sampling)', - minimum=0.05, - maximum=1.0, - step=0.05, - value=0.95, - ) - top_k = gr.Slider( - label='Top-k', - minimum=1, - maximum=1000, - step=1, - value=50, - ) - - gr.Examples( - examples=[ - "X_train, y_train, X_test, y_test = train_test_split(X, y, test_size=0.1)\n\n# Train a logistic regression model, predict the labels on the test set and compute the accuracy score", - "// Returns every other value in the array as a new array.\nfunction everyOther(arr) {", - "Poor English: She no went to the market. 
Corrected English:", - "def alternating(list1, list2):\n results = []\n for i in range(min(len(list1), len(list2))):\n results.append(list1[i])\n results.append(list2[i])\n if len(list1) > len(list2):\n \n else:\n results.extend(list2[i+1:])\n return results", - "def remove_non_ascii(s: str) -> str:\n \"\"\" \nprint(remove_non_ascii('afkdj$$('))", - ], - inputs=textbox, - outputs=[textbox, chatbot], - fn=process_example, - cache_examples=True, - ) - - gr.Markdown(LICENSE) - - textbox.submit( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - button_event_preprocess = submit_button.click( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - retry_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - undo_button.click( - - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=lambda x: x, - inputs=[saved_input], - outputs=textbox, - api_name=False, - queue=False, - ) - - clear_button.click( - fn=lambda: ([], ''), - outputs=[chatbot, saved_input], - queue=False, - api_name=False, - ) - -demo.queue(max_size=20).launch() \ No newline at end of file diff --git a/spaces/arnavkartikeya/SCRIPture-final/train_nlvr.py b/spaces/arnavkartikeya/SCRIPture-final/train_nlvr.py deleted file mode 100644 index 84b247bda2334c1fd894b6c11d33ef48c8e7df28..0000000000000000000000000000000000000000 --- a/spaces/arnavkartikeya/SCRIPture-final/train_nlvr.py +++ /dev/null @@ -1,213 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path -import json -import pickle - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -import torch.backends.cudnn as cudnn -import torch.distributed as dist - -from models.blip_nlvr import blip_nlvr - -import utils -from utils import cosine_lr_schedule, warmup_lr_schedule -from data import create_dataset, create_sampler, create_loader - -def train(model, data_loader, optimizer, epoch, device, config): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=50, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=50, fmt='{value:.4f}')) - - header = 'Train Epoch: [{}]'.format(epoch) - print_freq = 50 - step_size = 10 - - for i,(image0, image1, text, targets) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - - images = torch.cat([image0, image1], dim=0) - images, targets = images.to(device), targets.to(device) - - loss = model(images, text, targets=targets, train=True) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - metric_logger.update(loss=loss.item()) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluate(model, data_loader, device, config): - # test - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - - header = 'Evaluation:' - print_freq = 50 - - for image0, image1, text, targets in metric_logger.log_every(data_loader, print_freq, header): - images = torch.cat([image0, image1], dim=0) - images, targets = images.to(device), targets.to(device) - - prediction = model(images, text, targets=targets, train=False) - - _, pred_class = prediction.max(1) - accuracy = (targets==pred_class).sum() / targets.size(0) - - metric_logger.meters['acc'].update(accuracy.item(), n=image0.size(0)) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating dataset") - datasets = create_dataset('nlvr', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler(datasets, [True,False,False], num_tasks, global_rank) - else: - samplers = [None, None, None] - - batch_size=[config['batch_size_train'],config['batch_size_test'],config['batch_size_test']] - train_loader, val_loader, test_loader = create_loader(datasets,samplers,batch_size=batch_size, - num_workers=[4,4,4],is_trains=[True,False,False], - 
collate_fns=[None,None,None]) - - #### Model #### - print("Creating model") - model = blip_nlvr(pretrained=config['pretrained'], image_size=config['image_size'], - vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - print("Start training") - start_time = time.time() - best = 0 - best_epoch = 0 - - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device, config) - - val_stats = evaluate(model, val_loader, device, config) - test_stats = evaluate(model, test_loader, device, config) - - if utils.is_main_process(): - if args.evaluate: - log_stats = {**{f'val_{k}': v for k, v in val_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - else: - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'val_{k}': v for k, v in val_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - 'epoch': epoch, - } - - if float(val_stats['acc'])>best: - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth')) - best = float(val_stats['acc']) - best_epoch = epoch - - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - if args.evaluate: - break - - dist.barrier() - - if utils.is_main_process(): - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write("best epoch: %d"%best_epoch) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/nlvr.yaml') - parser.add_argument('--output_dir', default='output/NLVR') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/html.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/html.py deleted file mode 100644 index 1aa507e9cbea224ad5a11994997e5b8db526ad54..0000000000000000000000000000000000000000 --- 
a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/html.py +++ /dev/null @@ -1,236 +0,0 @@ -import json -import jinja2 - - -HTML_TEMPLATE = jinja2.Template( - """ -{%- if fullhtml -%} - - - -{%- endif %} - -{%- if not requirejs %} - - {%- if mode == 'vega-lite' %} - - {%- endif %} - -{%- endif %} -{%- if fullhtml %} -{%- if requirejs %} - - -{%- endif %} - - -{%- endif %} -
        - -{%- if fullhtml %} - - -{%- endif %} -""" -) - - -HTML_TEMPLATE_UNIVERSAL = jinja2.Template( - """ -
        - -""" -) - - -TEMPLATES = { - "standard": HTML_TEMPLATE, - "universal": HTML_TEMPLATE_UNIVERSAL, -} - - -def spec_to_html( - spec, - mode, - vega_version, - vegaembed_version, - vegalite_version=None, - base_url="https://cdn.jsdelivr.net/npm/", - output_div="vis", - embed_options=None, - json_kwds=None, - fullhtml=True, - requirejs=False, - template="standard", -): - """Embed a Vega/Vega-Lite spec into an HTML page - - Parameters - ---------- - spec : dict - a dictionary representing a vega-lite plot spec. - mode : string {'vega' | 'vega-lite'} - The rendering mode. This value is overridden by embed_options['mode'], - if it is present. - vega_version : string - For html output, the version of vega.js to use. - vegalite_version : string - For html output, the version of vegalite.js to use. - vegaembed_version : string - For html output, the version of vegaembed.js to use. - base_url : string (optional) - The base url from which to load the javascript libraries. - output_div : string (optional) - The id of the div element where the plot will be shown. - embed_options : dict (optional) - Dictionary of options to pass to the vega-embed script. Default - entry is {'mode': mode}. - json_kwds : dict (optional) - Dictionary of keywords to pass to json.dumps(). - fullhtml : boolean (optional) - If True (default) then return a full html page. If False, then return - an HTML snippet that can be embedded into an HTML page. - requirejs : boolean (optional) - If False (default) then load libraries from base_url using - - - - - - - -
Real-Time Latent Consistency Model
ControlNet

This demo showcases the LCM Image to Image pipeline using Diffusers with an MJPEG stream server.

There are 0 user(s) sharing the same GPU, affecting real-time performance. Maximum queue size is 4. Duplicate and run it on your own GPU.

Prompt
Change the prompt to generate different images; accepts Compel syntax.

Advanced Options (slider defaults: 4, 50, 8.0, 0.5, 0.8, 0.0, 1.0, 0.1, 0.2)
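A minimal Diffusers sketch of the image-to-image call a page like this typically drives, assuming the SimianLuo/LCM_Dreamshaper_v7 checkpoint and the slider defaults listed above; the helper name and parameter values are illustrative assumptions, not the Space's actual code, and the ControlNet conditioning named in the page title is omitted.

# Illustrative sketch only: model id, defaults and helper are assumptions;
# ControlNet conditioning is left out for brevity.
import torch
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

def lcm_img2img(prompt, init_image, steps=4, guidance_scale=8.0, strength=0.5):
    # LCM converges in a handful of steps; the defaults mirror the sliders above.
    out = pipe(prompt=prompt, image=init_image, num_inference_steps=steps,
               guidance_scale=guidance_scale, strength=strength)
    return out.images[0]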
        - - - \ No newline at end of file diff --git a/spaces/leogabraneth/text-generation-webui-main/cmd_macos.sh b/spaces/leogabraneth/text-generation-webui-main/cmd_macos.sh deleted file mode 100644 index 1b052e5c34bd43b7e898858d7993dd5f6a7a6f08..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/cmd_macos.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -cd "$(dirname "${BASH_SOURCE[0]}")" - -if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi - -# deactivate existing conda envs as needed to avoid conflicts -{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null - -# config -CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda" -INSTALL_ENV_DIR="$(pwd)/installer_files/env" - -# environment isolation -export PYTHONNOUSERSITE=1 -unset PYTHONPATH -unset PYTHONHOME -export CUDA_PATH="$INSTALL_ENV_DIR" -export CUDA_HOME="$CUDA_PATH" - -# activate env -source $CONDA_ROOT_PREFIX/etc/profile.d/conda.sh -conda activate $INSTALL_ENV_DIR -exec bash --norc diff --git a/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/pre_net.py b/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/pre_net.py deleted file mode 100644 index 886646a154c68298deeec09dbad736d617f73155..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/pre_net.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - -class PreNet(nn.Module): - def __init__(self, in_dims, fc1_dims=256, fc2_dims=128, dropout=0.5): - super().__init__() - self.fc1 = nn.Linear(in_dims, fc1_dims) - self.fc2 = nn.Linear(fc1_dims, fc2_dims) - self.p = dropout - - def forward(self, x): - """forward - - Args: - x (3D tensor with size `[batch_size, num_chars, tts_embed_dims]`): input texts list - - Returns: - 3D tensor with size `[batch_size, num_chars, encoder_dims]` - - """ - x = self.fc1(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - x = self.fc2(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - return x diff --git a/spaces/lewiswu1209/MockingBird/synthesizer/utils/text.py b/spaces/lewiswu1209/MockingBird/synthesizer/utils/text.py deleted file mode 100644 index 29372174aec95cd2eac1ea40096fcc148f532b07..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer/utils/text.py +++ /dev/null @@ -1,74 +0,0 @@ -from .symbols import symbols -from . import cleaners -import re - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r"(.*?)\{(.+?)\}(.*)") - - -def text_to_sequence(text, cleaner_names): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." 
- - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [] - - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - sequence += _symbols_to_sequence(_clean_text(text, cleaner_names)) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - # Append EOS token - sequence.append(_symbol_to_id["~"]) - return sequence - - -def sequence_to_text(sequence): - """Converts a sequence of IDs back to a string""" - result = "" - for symbol_id in sequence: - if symbol_id in _id_to_symbol: - s = _id_to_symbol[symbol_id] - # Enclose ARPAbet back in curly braces: - if len(s) > 1 and s[0] == "@": - s = "{%s}" % s[1:] - result += s - return result.replace("}{", " ") - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception("Unknown cleaner: %s" % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(["@" + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s not in ("_", "~") diff --git a/spaces/lightli/bingo-newbing/src/components/tone-selector.tsx b/spaces/lightli/bingo-newbing/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
[Stripped JSX: a "选择对话样式" ("choose a conversation style") label, then ToneList.map(tone => ...) rendering one clickable option per tone that calls onChange?.(tone.type).]
        - ) -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Animation Composer The Most Handy Motion Presets.epub.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Animation Composer The Most Handy Motion Presets.epub.md deleted file mode 100644 index 5d447f1d113cd472f4c8ec7ded96fd3b944379bd..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Animation Composer The Most Handy Motion Presets.epub.md +++ /dev/null @@ -1,8 +0,0 @@ - -

Open the ActionScript 3.0 Preferences window by choosing File>Preferences or. Click the Animation Panel menu bar item to open the Animation Presets (code. No matter how many motion presets there.2). Adobe After Effects.

The most common animation presets in the production of an audio. The abstract periodization available in After Effects. It is the most basic and essential concept of Motion graphics - but how can you.

Animation Composer The Most Handy Motion Presets.epub

Download https://byltly.com/2uGx6D

Surround Shakes
Often used for creating motion graphics. Create a Shake effect that can be used to animate a.

• Shape: Rectangle
• Intensity: High
• Duration: 4 frames
• Scale: Original
• Skew: Horizontal
• Perspective: None
• Rotation: None
• Transition: None

A new look at how Fit to Screen can be used to create a. The preset is meant to quickly. For example, you can have one of the default shapes and set the. Create some simple animations and always hit the key to see the.
Adding a template to your Kdenlive project is a.

Toggle between all the motion presets.

Create animations of trees. Quickly create a. Create animations without composing presets. Customize other filters.

The Ultimate Quick Export Checker/Monitor for Final Cut Pro X and Avid Media Composer (Advanced Motion), LINKEDIN_CR4680. Snapping, movement. Word in the Human Interface: Quick and Customizable Presets for Motion, PORTFOLIO_CR4792.

899543212b
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Annabelle 2014 Extras 1080p BluRay X264 Dual Audio English 51 Hindi 51 TBI.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Annabelle 2014 Extras 1080p BluRay X264 Dual Audio English 51 Hindi 51 TBI.md deleted file mode 100644 index f9c0db40bd0fd34693c194f6b239216f5dbdc1a6..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Annabelle 2014 Extras 1080p BluRay X264 Dual Audio English 51 Hindi 51 TBI.md +++ /dev/null @@ -1,6 +0,0 @@ -

Annabelle 2014 Extras 1080p BluRay X264 Dual Audio English 51 Hindi 51 TBI

Download Zip https://bytlly.com/2uGyEw

d5da3c52bf

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Gunday _TOP_ Full Movie 2014 Bengali Version.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Gunday _TOP_ Full Movie 2014 Bengali Version.md deleted file mode 100644 index 954d4bcce6030b13f7d0221b004c0351a224a131..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Gunday _TOP_ Full Movie 2014 Bengali Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

Gunday Full Movie 2014 Bengali Version

Download ○○○ https://bytlly.com/2uGwbg

Download Gunday (2014) Hindi Movie - www. ... The fully fledged trailer of Yash Raj Films' Gunday is here — it's fun, retro and it's action packed, and it looks like ... 1fdad05405

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Ham Radio Deluxe 6.6.0.236 !!HOT!! Keygen [Full].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Ham Radio Deluxe 6.6.0.236 !!HOT!! Keygen [Full].md deleted file mode 100644 index e92fcb0160958fde727ebb61c7665a5104c961ed..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Ham Radio Deluxe 6.6.0.236 !!HOT!! Keygen [Full].md +++ /dev/null @@ -1,6 +0,0 @@ -

Ham Radio Deluxe 6.6.0.236 Keygen [Full]

Download Zip ✫✫✫ https://bytlly.com/2uGw3N

Full.ISO.and.Keygen.Torrent.. Wic Reset Utility Key Generator Free ... Ham Radio Deluxe version 6.6.0.236 is now available for download. 4d29de3e1b

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Makhi Software For Odesk !LINK!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Makhi Software For Odesk !LINK!.md deleted file mode 100644 index f4f9a04a73a5896df949cace1563b5f61e7b6315..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Makhi Software For Odesk !LINK!.md +++ /dev/null @@ -1,8 +0,0 @@ -

Makhi Software For Odesk

Download ✒ ✒ ✒ https://bytlly.com/2uGvPQ

Makhi Software For Odesk full version free download. Makhi Software For Odesk is a program with which you can automatically create templates for working with advertising agencies, furniture manufacturers and other companies. The program interface is intuitive, and the learning process takes a few minutes. After installing Makhi Software For Odesk, you can create and edit templates that will be used to work with advertising agencies, furniture manufacturers, representatives of IT companies and many others. 8a78ff9644

        diff --git a/spaces/lingbionlp/PhenoTagger-Demo/src/abbre_resolution.py b/spaces/lingbionlp/PhenoTagger-Demo/src/abbre_resolution.py deleted file mode 100644 index b5cbc2dd6f8c6db106b25c576a66b31d9d232ad9..0000000000000000000000000000000000000000 --- a/spaces/lingbionlp/PhenoTagger-Demo/src/abbre_resolution.py +++ /dev/null @@ -1,434 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Tue Aug 11 16:52:40 2020 - -@author: luol2 -""" - -import logging -import regex -import sys -import io - -""" -A Python 3 refactoring of Vincent Van Asch's Python 2 code at - -http://www.cnts.ua.ac.be/~vincent/scripts/abbreviations.py - -Based on - -A Simple Algorithm for Identifying Abbreviations Definitions in Biomedical Text -A. Schwartz and M. Hearst -Biocomputing, 2003, pp 451-462. - -""" - -logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) -log = logging.getLogger('Abbre') - - -class Candidate(str): - def __init__(self, value): - super().__init__() - self.start = 0 - self.stop = 0 - - def set_position(self, start, stop): - self.start = start - self.stop = stop - - -def yield_lines_from_file(file_path): - with open(file_path, 'rb') as f: - for line in f: - try: - line = line.decode('utf-8') - except UnicodeDecodeError: - line = line.decode('latin-1').encode('utf-8').decode('utf-8') - line = line.strip() - yield line - f.close() - - -def yield_lines_from_doc(doc_text): - for line in doc_text.split("\n"): - yield line.strip() - - -def best_candidates(sentence): - """ - :param sentence: line read from input file - :return: a Candidate iterator - """ - - if '(' in sentence: - # Check some things first - if sentence.count('(') != sentence.count(')'): - raise ValueError("Unbalanced parentheses: {}".format(sentence)) - - if sentence.find('(') > sentence.find(')'): - raise ValueError("First parentheses is right: {}".format(sentence)) - - closeindex = -1 - while 1: - # Look for open parenthesis - openindex = sentence.find('(', closeindex + 1) - - if openindex == -1: break - - # Look for closing parentheses - closeindex = openindex + 1 - open = 1 - skip = False - while open: - try: - char = sentence[closeindex] - except IndexError: - # We found an opening bracket but no associated closing bracket - # Skip the opening bracket - skip = True - break - if char == '(': - open += 1 - elif char in [')', ';', ':']: - open -= 1 - closeindex += 1 - - if skip: - closeindex = openindex + 1 - continue - - # Output if conditions are met - start = openindex + 1 - stop = closeindex - 1 - candidate = sentence[start:stop] - - # Take into account whitespace that should be removed - start = start + len(candidate) - len(candidate.lstrip()) - stop = stop - len(candidate) + len(candidate.rstrip()) - candidate = sentence[start:stop] - - if conditions(candidate): - new_candidate = Candidate(candidate) - new_candidate.set_position(start, stop) - yield new_candidate - - -def conditions(candidate): - """ - Based on Schwartz&Hearst - - 2 <= len(str) <= 10 - len(tokens) <= 2 - re.search('\p{L}', str) - str[0].isalnum() - - and extra: - if it matches (\p{L}\.?\s?){2,} - it is a good candidate. 
- - :param candidate: candidate abbreviation - :return: True if this is a good candidate - """ - viable = True - if regex.match('(\p{L}\.?\s?){2,}', candidate.lstrip()): - viable = True - if len(candidate) < 2 or len(candidate) > 10: - viable = False - if len(candidate.split()) > 2: - viable = False - if not regex.search('\p{L}', candidate): - viable = False - if not candidate[0].isalnum(): - viable = False - - return viable - - -def get_definition(candidate, sentence): - """ - Takes a candidate and a sentence and returns the definition candidate. - - The definintion candidate is the set of tokens (in front of the candidate) - that starts with a token starting with the first character of the candidate - - :param candidate: candidate abbreviation - :param sentence: current sentence (single line from input file) - :return: candidate definition for this abbreviation - """ - # Take the tokens in front of the candidate - tokens = regex.split(r'[\s\-]+', sentence[:candidate.start - 2].lower()) - #print(tokens) - # the char that we are looking for - key = candidate[0].lower() - - # Count the number of tokens that start with the same character as the candidate -# print(tokens) - firstchars = [t[0] for t in tokens] -# print(firstchars) - definition_freq = firstchars.count(key) - candidate_freq = candidate.lower().count(key) - - # Look for the list of tokens in front of candidate that - # have a sufficient number of tokens starting with key - if candidate_freq <= definition_freq: - # we should at least have a good number of starts - count = 0 - start = 0 - startindex = len(firstchars) - 1 - - while count < candidate_freq: - if abs(start) > len(firstchars): - raise ValueError("candiate {} not found".format(candidate)) - start -= 1 - # Look up key in the definition - try: - startindex = firstchars.index(key, len(firstchars) + start) - except ValueError: - pass - - # Count the number of keys in definition - count = firstchars[startindex:].count(key) - - # We found enough keys in the definition so return the definition as a definition candidate - start = len(' '.join(tokens[:startindex])) - stop = candidate.start - 1 - candidate = sentence[start:stop] - - # Remove whitespace - start = start + len(candidate) - len(candidate.lstrip()) - stop = stop - len(candidate) + len(candidate.rstrip()) - candidate = sentence[start:stop] - - new_candidate = Candidate(candidate) - new_candidate.set_position(start, stop) - #print('new_candidate:') - #print(new_candidate,start,stop) - return new_candidate - - else: - raise ValueError('There are less keys in the tokens in front of candidate than there are in the candidate') - - -def select_definition(definition, abbrev): - """ - Takes a definition candidate and an abbreviation candidate - and returns True if the chars in the abbreviation occur in the definition - - Based on - A simple algorithm for identifying abbreviation definitions in biomedical texts, Schwartz & Hearst - :param definition: candidate definition - :param abbrev: candidate abbreviation - :return: - """ - - - if len(definition) < len(abbrev): - raise ValueError('Abbreviation is longer than definition') - - if abbrev in definition.split(): - raise ValueError('Abbreviation is full word of definition') - - sindex = -1 - lindex = -1 - - while 1: - try: - longchar = definition[lindex].lower() - except IndexError: - raise - - shortchar = abbrev[sindex].lower() - - if not shortchar.isalnum(): - sindex -= 1 - - if sindex == -1 * len(abbrev): - if shortchar == longchar: - if lindex == -1 * len(definition) or not 
definition[lindex - 1].isalnum(): - break - else: - lindex -= 1 - else: - lindex -= 1 - if lindex == -1 * (len(definition) + 1): - raise ValueError("definition {} was not found in {}".format(abbrev, definition)) - - else: - if shortchar == longchar: - sindex -= 1 - lindex -= 1 - else: - lindex -= 1 -# print('lindex:',lindex,len(definition),definition[lindex:len(definition)]) - new_candidate = Candidate(definition[lindex:len(definition)]) - new_candidate.set_position(definition.start+lindex+len(definition), definition.stop) - definition = new_candidate - - tokens = len(definition.split()) - length = len(abbrev) - - if tokens > min([length + 5, length * 2]): - raise ValueError("did not meet min(|A|+5, |A|*2) constraint") - - # Do not return definitions that contain unbalanced parentheses - if definition.count('(') != definition.count(')'): - raise ValueError("Unbalanced parentheses not allowed in a definition") -# print('select:') -# print(definition,definition.start, definition.stop) - new_definition_dict={'definition':definition,'start':definition.start,'stop':definition.stop} - return new_definition_dict - - -def extract_abbreviation_definition_pairs(file_path=None, doc_text=None): - abbrev_map = [] - omit = 0 - written = 0 - if file_path: - sentence_iterator = enumerate(yield_lines_from_file(file_path)) - elif doc_text: - sentence_iterator = enumerate(yield_lines_from_doc(doc_text)) - else: - return abbrev_map - - for i, sentence in sentence_iterator: - #print(sentence) - try: - for candidate in best_candidates(sentence): - #print(candidate) - try: - #print('begin get definition') - definition = get_definition(candidate, sentence) - #print('get_definition:') - #print(definition) - - except (ValueError, IndexError) as e: - #log.debug("{} Omitting candidate {}. Reason: {}".format(i, candidate, e.args[0])) - omit += 1 - else: - try: - definition_dict = select_definition(definition, candidate) - except (ValueError, IndexError) as e: - #log.debug("{} Omitting definition {} for candidate {}. 
Reason: {}".format(i, definition_dict, candidate, e.args[0])) - omit += 1 - else: - definition_dict['abbre']=candidate - abbrev_map.append(definition_dict) - written += 1 - except (ValueError, IndexError) as e: - log.debug("{} Error processing sentence {}: {}".format(i, sentence, e.args[0])) - log.debug("{} abbreviations detected and kept ({} omitted)".format(written, omit)) - return abbrev_map - -def postprocess_abbr(ner_result,ori_text): - - final_result={} - if len(ner_result)==0: - return [] - # abbr recognition - abbr_result=extract_abbreviation_definition_pairs(doc_text=ori_text) - - # read ner results - nor_loc_list={} #{entity_name_location:entity_information} - - for ele in ner_result: - nor_loc_list[str(ele[0])+' '+str(ele[1])]=ele - final_result['\t'.join(ele)]=[int(ele[0]),int(ele[1])] - - #abbr matching - for abbr in abbr_result: - abbr_index=str(abbr['start'])+' '+str(abbr['stop']) - if abbr_index in nor_loc_list.keys(): - - line=ori_text - abbr_text=abbr['abbre'] - abbr_eid=0 - while line.find(abbr_text)>=0: - abbr_sid=line.find(abbr_text)+abbr_eid - abbr_eid=abbr_sid+len(abbr_text) - # print(abbr_sid,abbr_eid) - if abbr_sid>0 and abbr_eid0 and abbr_eid==len(ori_text): - if ori_text[abbr_sid-1].isalnum()==False : - final_result[str(abbr_sid)+'\t'+str(abbr_eid)+'\t'+nor_loc_list[abbr_index][2]+'\t'+nor_loc_list[abbr_index][3]]=[abbr_sid,abbr_eid] - line=ori_text[abbr_eid:] - # print(final_result) - sorted_final_result=sorted(final_result.items(), key=lambda kv:(kv[1]), reverse=False) - final_result=[] - for ele in sorted_final_result: - final_result.append(ele[0].split('\t')) - return final_result - -def ner_abbr(ner_result,abbr_result,ori_text): - # read ner results - nor_name_list={} #{entity_name:entity_information} - nor_loc_list={} #{entity_name_location:entity_information} - final_result={} #{entity_information:location} use to sort - for ele in ner_result: - temp_seg=ele.split('\t') - nor_loc_list[temp_seg[0]+' '+temp_seg[1]]=temp_seg - nor_name_list[temp_seg[2].lower()]=temp_seg - final_result['\t'.join(temp_seg[0:4])]=[int(temp_seg[0]),int(temp_seg[1])] - - #abbr matching - for abbr in abbr_result: - abbr_index=str(abbr['start'])+' '+str(abbr['stop']) - if abbr_index in nor_loc_list.keys(): - - line=ori_text - abbr_text=abbr['abbre'] - abbr_eid=0 - while line.find(abbr_text)>=0: - abbr_sid=line.find(abbr_text)+abbr_eid - abbr_eid=abbr_sid+len(abbr_text) - # print(abbr_sid,abbr_eid) - if abbr_sid>0 and abbr_eid0 and abbr_eid==len(ori_text): - if ori_text[abbr_sid-1].isalnum()==False : - final_result[str(abbr_sid)+'\t'+str(abbr_eid)+'\t'+abbr_text+'\t'+nor_loc_list[abbr_index][3]]=[abbr_sid,abbr_eid] - line=ori_text[abbr_eid:] - # print(final_result) - final_result=sorted(final_result.items(), key=lambda kv:(kv[1]), reverse=False) - - return final_result - - - - -if __name__ == '__main__': - path='//panfs/pan1/bionlp/lulab/luoling/HPO_project/diseaseTag/data/test/results/' - fin=open(path+'NCBI_test_phecr_95.tsv','r',encoding='utf-8') - context=fin.read().strip().split('\n\n') - fin.close() - fout=open(path+'NCBI_test_phecr_abbre_95.tsv','w',encoding='utf-8') - for doc in context: - lines=doc.split('\n') - ori_text=lines[1] - # print(ori_text) - fout.write(lines[0]+'\n'+lines[1]+'\n') - if len(lines)>2: - abbr_result=extract_abbreviation_definition_pairs(doc_text=ori_text) - print(abbr_result) - abbr_out=ner_abbr(lines[2:],abbr_result,ori_text) - else: - abbr_out=[] - # print('final:',abbr_out) - for ele in abbr_out: - fout.write(ele[0]+'\n') - fout.write('\n') - # 
sys.exit() - fout.close() - #last_out=combine_ml_dict_fn(abbr_out,infile) - #print(last_out) - - diff --git a/spaces/lixq/bingo61/src/lib/bots/bing/tts.ts b/spaces/lixq/bingo61/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? 
null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/lj1995/vocal2guitar/train/utils.py b/spaces/lj1995/vocal2guitar/train/utils.py deleted file mode 100644 index b7d294a72e17972503719d91465b892f54f1efa0..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/train/utils.py +++ /dev/null @@ -1,483 +0,0 @@ -import os, traceback -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - - ################## - def go(model, bkey): - saved_state_dict = checkpoint_dict[bkey] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): # 模型需要的shape - try: - new_state_dict[k] = saved_state_dict[k] - if saved_state_dict[k].shape != state_dict[k].shape: - print( - "shape-%s-mismatch|need-%s|get-%s" - % (k, state_dict[k].shape, saved_state_dict[k].shape) - ) # - raise KeyError - except: - # logger.info(traceback.format_exc()) - logger.info("%s is not in the checkpoint" % k) # pretrain缺失的 - new_state_dict[k] = v # 模型自带的随机值 - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - - go(combd, "combd") - go(sbd, "sbd") - ############# - logger.info("Loaded model weights") - - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None and load_opt == 1 - ): ###加载不了,如果是空的的话,重新初始化,可能还会影响lr时间表的更新,因此在train文件最外围catch - # try: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - # except: - # traceback.print_exc() - logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -# def load_checkpoint(checkpoint_path, model, optimizer=None): -# assert os.path.isfile(checkpoint_path) -# checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') -# iteration = checkpoint_dict['iteration'] -# learning_rate = checkpoint_dict['learning_rate'] -# if optimizer is not None: -# optimizer.load_state_dict(checkpoint_dict['optimizer']) -# # print(1111) -# saved_state_dict = checkpoint_dict['model'] -# # print(1111) -# -# if hasattr(model, 'module'): -# state_dict = model.module.state_dict() -# else: -# state_dict = model.state_dict() -# new_state_dict= {} -# for k, v in state_dict.items(): -# try: -# new_state_dict[k] = saved_state_dict[k] -# except: -# logger.info("%s is not in the checkpoint" % k) -# new_state_dict[k] = v -# if hasattr(model, 'module'): -# model.module.load_state_dict(new_state_dict) -# else: -# model.load_state_dict(new_state_dict) -# logger.info("Loaded checkpoint '{}' (epoch {})" .format( -# checkpoint_path, iteration)) -# return model, optimizer, learning_rate, iteration -def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = 
torch.load(checkpoint_path, map_location="cpu") - - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): # 模型需要的shape - try: - new_state_dict[k] = saved_state_dict[k] - if saved_state_dict[k].shape != state_dict[k].shape: - print( - "shape-%s-mismatch|need-%s|get-%s" - % (k, state_dict[k].shape, saved_state_dict[k].shape) - ) # - raise KeyError - except: - # logger.info(traceback.format_exc()) - logger.info("%s is not in the checkpoint" % k) # pretrain缺失的 - new_state_dict[k] = v # 模型自带的随机值 - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - logger.info("Loaded model weights") - - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None and load_opt == 1 - ): ###加载不了,如果是空的的话,重新初始化,可能还会影响lr时间表的更新,因此在train文件最外围catch - # try: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - # except: - # traceback.print_exc() - logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at epoch {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at epoch {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(combd, "module"): - state_dict_combd = combd.module.state_dict() - else: - state_dict_combd = combd.state_dict() - if hasattr(sbd, "module"): - state_dict_sbd = sbd.module.state_dict() - else: - state_dict_sbd = sbd.state_dict() - torch.save( - { - "combd": state_dict_combd, - "sbd": state_dict_sbd, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def summarize( - writer, - global_step, - scalars={}, - histograms={}, - images={}, - audios={}, - audio_sampling_rate=22050, -): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - 
plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow( - alignment.transpose(), aspect="auto", origin="lower", interpolation="none" - ) - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding="utf-8") as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - """ - todo: - 结尾七人组: - 保存频率、总epoch done - bs done - pretrainG、pretrainD done - 卡号:os.en["CUDA_VISIBLE_DEVICES"] done - if_latest done - 模型:if_f0 done - 采样率:自动选择config done - 是否缓存数据集进GPU:if_cache_data_in_gpu done - - -m: - 自动决定training_files路径,改掉train_nsf_load_pretrain.py里的hps.data.training_files done - -c不要了 - """ - parser = argparse.ArgumentParser() - # parser.add_argument('-c', '--config', type=str, default="configs/40k.json",help='JSON file for configuration') - parser.add_argument( - "-se", - "--save_every_epoch", - type=int, - required=True, - help="checkpoint save frequency (epoch)", - ) - parser.add_argument( - "-te", "--total_epoch", type=int, required=True, help="total_epoch" - ) - parser.add_argument( - "-pg", "--pretrainG", type=str, default="", help="Pretrained Discriminator path" - ) - parser.add_argument( - "-pd", "--pretrainD", type=str, default="", help="Pretrained Generator path" - ) - parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -") - parser.add_argument( - "-bs", "--batch_size", type=int, required=True, help="batch size" - ) - parser.add_argument( - "-e", "--experiment_dir", type=str, required=True, help="experiment dir" - ) # -m - parser.add_argument( - "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k" - ) - parser.add_argument( - "-sw", - "--save_every_weights", - type=str, - default="0", - help="save the extracted model in weights directory when saving checkpoints", - ) - parser.add_argument( - "-v", "--version", type=str, required=True, help="model version" - ) - parser.add_argument( - "-f0", - "--if_f0", - type=int, - required=True, - help="use f0 as one of the inputs of the model, 1 or 0", - ) - parser.add_argument( - "-l", - "--if_latest", - type=int, - required=True, - help="if only save the latest G/D pth file, 1 or 0", - ) - parser.add_argument( - "-c", - "--if_cache_data_in_gpu", - type=int, - required=True, - help="if caching the dataset in GPU memory, 1 or 0", - ) - - args = parser.parse_args() - name = args.experiment_dir - experiment_dir 
= os.path.join("./logs", args.experiment_dir) - - if not os.path.exists(experiment_dir): - os.makedirs(experiment_dir) - - config_path = "configs/%s.json" % args.sample_rate - config_save_path = os.path.join(experiment_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = hparams.experiment_dir = experiment_dir - hparams.save_every_epoch = args.save_every_epoch - hparams.name = name - hparams.total_epoch = args.total_epoch - hparams.pretrainG = args.pretrainG - hparams.pretrainD = args.pretrainD - hparams.version = args.version - hparams.gpus = args.gpus - hparams.train.batch_size = args.batch_size - hparams.sample_rate = args.sample_rate - hparams.if_f0 = args.if_f0 - hparams.if_latest = args.if_latest - hparams.save_every_weights = args.save_every_weights - hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu - hparams.data.training_files = "%s/filelist.txt" % experiment_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/transform.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/transform.h deleted file mode 100644 index b70333093fd48b6c23fa2e8ec3ab20a8e51cad9f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/transform.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a fill of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the transform.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch transform - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_TRANSFORM_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/transform.h> -#include __THRUST_HOST_SYSTEM_TRANSFORM_HEADER -#undef __THRUST_HOST_SYSTEM_TRANSFORM_HEADER - -#define __THRUST_DEVICE_SYSTEM_TRANSFORM_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/transform.h> -#include __THRUST_DEVICE_SYSTEM_TRANSFORM_HEADER -#undef __THRUST_DEVICE_SYSTEM_TRANSFORM_HEADER - diff --git a/spaces/matthoffner/chatbot/utils/app/settings.ts b/spaces/matthoffner/chatbot/utils/app/settings.ts deleted file mode 100644 index acf371e92be468294267ee61d9a913bca60bf5a6..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/utils/app/settings.ts +++ /dev/null @@ -1,23 +0,0 @@ -import { Settings } from '@/types/settings'; - -const STORAGE_KEY = 'settings'; - -export const getSettings = (): Settings => { - let settings: Settings = { - theme: 'dark', - }; - const settingsJson = localStorage.getItem(STORAGE_KEY); - if (settingsJson) { - try { - let savedSettings = JSON.parse(settingsJson) as Settings; - settings = Object.assign(settings, savedSettings); - } catch (e) { - console.error(e); - } - } - return settings; -}; - -export const saveSettings = (settings: Settings) => { - localStorage.setItem(STORAGE_KEY, JSON.stringify(settings)); -}; diff --git a/spaces/mehdidc/ae_gen/README.md b/spaces/mehdidc/ae_gen/README.md deleted file mode 100644 index 2ceea8db77fff46a86ff87ea99f7f97934508070..0000000000000000000000000000000000000000 --- a/spaces/mehdidc/ae_gen/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ae Gen -emoji: 💻 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merve/anonymization/server-side/fill-in-the-blank/node/split-post-cache.js b/spaces/merve/anonymization/server-side/fill-in-the-blank/node/split-post-cache.js deleted file mode 100644 index 5ffee3bb3d706d2eb01fefb71d3e5d7ae997bd53..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/server-side/fill-in-the-blank/node/split-post-cache.js +++ /dev/null @@ -1,23 +0,0 @@ -import urlSlug from 'url-slug' - -import ss from 'scrape-stl' -var {d3, jp, fs, io, _} = ss - -import { URL } from 'url' -var __dirname = new URL('.', import.meta.url).pathname - - -var datadir = __dirname + '/../../../source/fill-in-the-blank/data/' -var postCache = io.readDataSync(datadir + 'post-cache.json') - -var cacheKey2filename = {} -Object.entries(postCache).forEach(([key, value]) => { - var filename = urlSlug(key) + '.json' - io.writeDataSync(datadir + filename, value) - cacheKey2filename[key] = filename -}) - -fs.writeFileSync( - datadir + 'cachekey2filename.js', - `window.cacheKey2filename = ${JSON.stringify(cacheKey2filename, null, 2)}` -) diff --git a/spaces/merve/fill-in-the-blank/public/base-rate/script.js b/spaces/merve/fill-in-the-blank/public/base-rate/script.js deleted file mode 100644 index efc40861466afc2bb19cee8d3ef6cd5a98d80ddc..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/base-rate/script.js +++ /dev/null @@ -1,317 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - - -console.clear() -var ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -window.renderFns = [] - -window.m = (function(){ - var rv = {b: .7, tpr: .8, fnr: .5, update, str: 'kids', titleStr: 'Children',} - - function update(obj={}){ - Object.assign(rv, obj) - window.renderFns.forEach(d => d()) - } - - return rv -})() - -window.f = (function(){ - var rv = {b: .3, tpr: .8, fnr: .5, update, str: 'adults', titleStr: 'Adults'} - - function update(obj={}){ - window.renderFns.forEach(d => d()) - } - - return rv -})() - - -var wLarge = d3.clamp(0, innerWidth/2 - 30, 300) - -d3.select('#big-matrix').html('') - .appendMany('div.big-container', [{w: wLarge, s: f, isText: 1}, {w: wLarge, s: m, isText: 1}]) - .each(drawMatrix) - - -addPattern(10, `pattern-${wLarge}-`) -addPattern(5, 'pattern-50-') - -function addPattern(s, str){ - var cColors = [colors.sick, colors.sick, colors.well, colors.well, lcolors.sick, lcolors.sick, lcolors.well, lcolors.well] - var rColors = [lcolors.sick, lcolors.well, lcolors.sick, lcolors.well, llcolors.sick, llcolors.well, llcolors.sick, llcolors.well] - - d3.select('#big-matrix') - .append('svg') - .st({height: 0, position: 'absolute'}) - .append('defs').appendMany('pattern', d3.range(8)) - .at({ id: i => str + i, width: s, height: s}) - .attr('patternUnits', 'userSpaceOnUse') - .append('rect') - .at({width: s, height: s, fill: i => rColors[i]}) - .parent().append('circle') - .at({r: s == 10 ? 2.5 : 1.5, cx: s/2, cy: s/2, fill: i => cColors[i]}) -} - - -var scale = d3.clamp(0, ((innerWidth - 50) / 3)/280, 1) -var isScaled = scale != 1 - -d3.select('#metrics').html('').st({height: 350*scale + 30}) - .appendMany('div', [0, 1, 2]) - .st({width: 280*scale, display: 'inline-block'}) - .append('div') - .st({transform: `scale(${scale})`, transformOrigin: '0% 0%'}) - .append('div.metrics-container').st({width: 280}) - .each(drawMetric) - -d3.selectAll('rect.drag') - .on('mouseover.style', d => d3.selectAll('rect.' + d).st({strokeWidth: 3, stroke: '#000'})) - .on('mouseout.style', d => d3.selectAll('rect.' + d).st({strokeWidth: 0})) - -function drawMetric(i){ - var sel = d3.select(this) - - var text = [ - // 'Percentage of sick people
<br> who test positive', - 'Percentage of sick people<br>
        who test positive', - 'Percentage of positive tests<br>
        who are actually sick', - 'Percentage of well people<br>
        who test negative', - ][i] - - var percentFn = [ - s => s.tpr, - s => s.b*s.tpr/(s.b*s.tpr + (1 - s.b)*(s.fnr)), - s => 1 - s.fnr, - ][i] - - var colors = [ - ['#f0f', '#fcf', '#fff', '#fff'], - ['#f0f', '#fff', '#fcf', '#fff'], - ['#fff', '#fff', '#fcf', '#f0f'], - ][i] - - sel.append('h3').st({marginBottom: 20, fontSize: isScaled ? 30 : 20}).html(isScaled ? text.replace('
        ', '') : text) - - var h = 200 - var width = 100 - - var fDiv = sel.append('div').st({position: 'relative', top: -h + 7}) - .datum({w: 50, s: f, isText: 0, colors}).each(drawMatrix) - - var svg = sel.append('svg') - .at({width, height: h}) - .st({fontSize: 14, fontFamily: 'monospace'}) - - svg.append('path').at({stroke: '#ccc', d: `M ${width/2 + .5} 0 V ${h}`}) - - var errorSel = svg.append('path') - .translate(width/2 + .5, 0) - .at({stroke: 'orange', strokeWidth: 3}) - - var fSel = svg.append('g') - var mSel = svg.append('g') - - mSel.append('circle').at({r: 4, cx: width/2 + .5, fill: 'none', stroke: '#000'}) - fSel.append('circle').at({r: 4, cx: width/2 + .5, fill: 'none', stroke: '#000'}) - - var fTextSel = fSel.append('text').text('23%') - .at({dy: '.33em', textAnchor: 'middle', x: width/4 - 3, fontSize: isScaled ? 20 : 16}) - var mTextSel = mSel.append('text').text('23%') - .at({dy: '.33em', textAnchor: 'middle', x: width/4*3 + 5, fontSize: isScaled ? 20 : 16}) - - fSel.append('text').text('Adults').st({fontSize: isScaled ? 18 : 12}) - .at({textAnchor: 'middle', x: -23, y: -30}) - mSel.append('text').text('Children').st({fontSize: isScaled ? 18 : 12}) - .at({textAnchor: 'middle', x: 124, y: -30}) - - var mDiv = sel.append('div').st({position: 'relative', top: -h + 7}) - .datum({w: 50, s: m, isText: 0, colors}).each(drawMatrix) - - - renderFns.push(() => { - var fPercent = percentFn(f) - fSel.translate(h - h*fPercent, 1) - fTextSel.text(d3.format('.0%')(fPercent)) - - var mPercent = percentFn(m) - mSel.translate(h - h*mPercent, 1) - mTextSel.text(d3.format('.0%')(mPercent)) - - fDiv.translate(h - h*fPercent, 1) - mDiv.translate(h - h*mPercent, 1) - - errorSel.at({d: 'M 0 ' + (h - h*fPercent) + ' V ' + (h - h*mPercent) }) - }) -} - -function drawMatrix({s, w, isText, colors}){ - var svg = d3.select(this).append('svg') - .at({width: w, height: w}) - - - svg.append('rect').at({width: w + 1, height: w + 1}) - - if (!colors) colors = ['#000', '#000', '#000', '#000'] - - var rects = [ - {n: 'tp', x: 0, y: 0, width: _ => s.b*w, height: _ => s.tpr*w}, - {n: 'fn', x: 0, y: _ => 1 + s.tpr*w, width: _ => s.b*w, height: _ => w - s.tpr*w}, - {n: 'fp', x: _ => 1 + s.b*w, y: 0, width: _ => w - s.b*w, height: _ => s.fnr*w}, - {n: 'tn', x: _ => 1 + s.b*w, y: _ => 1 + s.fnr*w, width: _ => w - s.b*w, height: _ => w - s.fnr*w}, - ] - rects.forEach((d, i) => d.i = i) - - var rectSel = svg.appendMany('rect', rects) - .at({fill: d => `url(#pattern-${w}-${d.i}`}) - // .at({opacity: d => colors[d.i] == '#fff' ? .5 : 1}) - // .at({fill: d => `url(#pattern-${w}-${d.i + (colors[d.i] == '#ccc' ? 4 : 0)})`}) - // .at({fill: d => colors[d.i] == '#ccc' ? '#000' : `url(#pattern-${w}-${d.i + (colors[d.i] == '#ccc' ? 
4 : 0)})`}) - .each(function(d){ d.sel = d3.select(this) }) - rectSel.filter(d => colors[d.i] == '#fff').at({fill: '#eee'}) - - var bh = .5 - svg.append('rect.tpr').at({height: bh}).translate(-bh/2, 1) - .datum('tpr') - - svg.append('rect.fnr').at({height: bh}).translate(-bh/2, 1) - .datum('fnr') - - svg.append('rect.b').at({width: bh, height: w}).translate(-bh/2, 0) - .datum('b') - - var bh = 20 - svg.append('rect.drag.tpr').at({height: bh}).translate(-bh/2, 1) - .call(makeDrag('tpr', 1)).datum('tpr').call(d3.attachTooltip).on('mouseover', ttFormat) - - svg.append('rect.drag.fnr').at({height: bh}).translate(-bh/2, 1) - .call(makeDrag('fnr', 1)).datum('fnr').call(d3.attachTooltip).on('mouseover', ttFormat) - - svg.append('rect.drag.b').at({width: bh, height: w}).translate(-bh/2, 0) - .call(makeDrag('b', 0)).datum('b').call(d3.attachTooltip).on('mouseover', ttFormat) - - - var tprRect = svg.selectAll('rect.tpr') - var fnrRect = svg.selectAll('rect.fnr') - var bRect = svg.selectAll('rect.b') - - function ttFormat(str){ - var html = '' - if (str == 'tpr') html = `${d3.format('.0%')(s.tpr)} of sick ${s.titleStr.toLowerCase()} test positive` - if (str == 'fnr') html = `${d3.format('.0%')(s.fnr)} of well ${s.titleStr.toLowerCase()} test negative` - if (str == 'b') html = `${d3.format('.0%')(s.b)} of ${s.titleStr.toLowerCase()} are sick` - ttSel.html(html) - } - - function makeDrag(str, index){ - - return d3.drag() - .on('drag', function(){ - var percent = d3.mouse(this)[index]/w - s[str] = d3.clamp(.15, percent, .85) - - window.basetimer.stop() - s.update() - - ttMove() - ttFormat(str) - }) - .on('start', _ => svg.classed('dragging', 1)) - .on('end', _ => svg.classed('dragging', 0)) - } - - renderFns.push(() => { - rectSel.each(d => d.sel.at(d)) - - tprRect.at({width: w*s.b, y: w*s.tpr}) - fnrRect.at({x: w*s.b, width: w - w*s.b, y: w*s.fnr}) - bRect.at({x: w*s.b}) - - // s => s.tpr, - // s => s.b*s.tpr/(s.b*s.tpr + (1 - s.b)*(s.fnr)), - // s => 1 - s.fnr, - if (!isText) return - }) - - - if (!isText) return - - svg.append('text').text(s.titleStr).at({textAnchor: 'middle', x: w/2, y: -8, fontSize: 20}) - - if (innerWidth < 800) return - // if (true) - - svg.appendMany('text', d3.range(4)).each(function(i){ - var isSick = i < 2 - var isPos = i % 2 - - var pad = 5 - d3.select(this) - .translate([isSick ? pad : w - pad, isPos ? 13 : w - 23]) - .at({ - textAnchor: isSick ? 'start' : 'end', - fill: '#000', - fontSize: 12, - fontFamily: 'monospace', - pointerEvents: 'none', - }) - .tspans([ - ' test : ' + (isPos ? 'sick' : 'well'), - 'truth: ' + (isSick ? 'sick' : 'well')]) - }) -} - - -if (window.basetimer) window.basetimer.stop() -window.basetimer = d3.timer(t => { - - var val = t/1000 % (Math.PI*4) - - if (val < Math.PI*2){ - m.b = (Math.sin(val + Math.PI/2))/4 + .4 - } else if (Math.PI*3 < val && val < Math.PI*5 || true){ - f.tpr = (Math.sin(val + Math.PI/2))/4 + .4 - } - m.update() -}) - - - - - -m.update() - - - -function ttMove(d){ - if (!ttSel.size()) return; - - var e = d3.event.sourceEvent, - x = e.clientX, - y = e.clientY, - bb = ttSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight > y + 20 + bb.height ? 
y + 20 : y - bb.height - 20; - - ttSel - .style('left', left +'px') - .style('top', top + 'px'); -} - diff --git a/spaces/merve/fill-in-the-blank/public/third_party/alea.js b/spaces/merve/fill-in-the-blank/public/third_party/alea.js deleted file mode 100644 index 9effe485ca14df5d6923e20adefaa794b939ee26..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/third_party/alea.js +++ /dev/null @@ -1,3 +0,0 @@ -// https://github.com/davidbau/seedrandom Copyright 2019 David Bau - -!function(n,t,e){function u(n){var t=this,e=function(){var s=4022871197;return function(n){n=String(n);for(var t=0;t>>0,s=(e*=s)>>>0,s+=4294967296*(e-=s)}return 2.3283064365386963e-10*(s>>>0)}}();t.next=function(){var n=2091639*t.s0+2.3283064365386963e-10*t.c;return t.s0=t.s1,t.s1=t.s2,t.s2=n-(t.c=0|n)},t.c=1,t.s0=e(" "),t.s1=e(" "),t.s2=e(" "),t.s0-=e(n),t.s0<0&&(t.s0+=1),t.s1-=e(n),t.s1<0&&(t.s1+=1),t.s2-=e(n),t.s2<0&&(t.s2+=1),e=null}function o(n,t){return t.c=n.c,t.s0=n.s0,t.s1=n.s1,t.s2=n.s2,t}function s(n,t){var e=new u(n),s=t&&t.state,r=e.next;return r.int32=function(){return 4294967296*e.next()|0},r.double=function(){return r()+11102230246251565e-32*(2097152*r()|0)},r.quick=r,s&&("object"==typeof s&&o(s,e),r.state=function(){return o(e,{})}),r}t&&t.exports?t.exports=s:e&&e.amd?e(function(){return s}):this.alea=s}(0,"object"==typeof module&&module,"function"==typeof define&&define); \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/public/uncertainty-calibration/weatherdata.js b/spaces/merve/measuring-fairness/public/uncertainty-calibration/weatherdata.js deleted file mode 100644 index 9fb29abd04cf81496773adb6fbab7a1b9cb513e0..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/uncertainty-calibration/weatherdata.js +++ /dev/null @@ -1,255 +0,0 @@ -var weatherdata = [{'h': 0, -'id': 0, -'label': 0, -'original_score': 0.12433152687398698, -'score': 0.12433152687398698}, -{'h': 1, -'id': 1, -'label': 0, -'original_score': 0.2014203772169771, -'score': 0.2014203772169771}, -{'h': 2, -'id': 2, -'label': 1, -'original_score': 0.2626685491019668, -'score': 0.2626685491019668}, -{'h': 3, -'id': 3, -'label': 0, -'original_score': 0.10619382887946915, -'score': 0.10619382887946915}, -{'h': 4, -'id': 4, -'label': 0, -'original_score': 0.1536112957212682, -'score': 0.1536112957212682}, -{'h': 5, -'id': 5, -'label': 0, -'original_score': 0.2660219680553572, -'score': 0.2660219680553572}, -{'h': 6, -'id': 6, -'label': 0, -'original_score': 0.1886698681338711, -'score': 0.1886698681338711}, -{'h': 7, -'id': 7, -'label': 0, -'original_score': 0.302266784816097, -'score': 0.302266784816097}, -{'h': 8, -'id': 8, -'label': 0, -'original_score': 0.15496114380196338, -'score': 0.15496114380196338}, -{'h': 9, -'id': 9, -'label': 0, -'original_score': 0.19763504609985533, -'score': 0.19763504609985533}, -{'h': 0, -'id': 10, -'label': 0, -'original_score': 0.38247000184830054, -'score': 0.38247000184830054}, -{'h': 1, -'id': 11, -'label': 1, -'original_score': 0.3363518147573557, -'score': 0.3363518147573557}, -{'h': 2, -'id': 12, -'label': 1, -'original_score': 0.4947967422959128, -'score': 0.4947967422959128}, -{'h': 3, -'id': 13, -'label': 0, -'original_score': 0.38675988136018435, -'score': 0.38675988136018435}, -{'h': 4, -'id': 14, -'label': 0, -'original_score': 0.3755618748258325, -'score': 0.3755618748258325}, -{'h': 5, -'id': 15, -'label': 0, -'original_score': 0.39394252133526547, -'score': 0.39394252133526547}, -{'h': 6, -'id': 16, 
-'label': 1, -'original_score': 0.47996692559311144, -'score': 0.47996692559311144}, -{'h': 7, -'id': 17, -'label': 0, -'original_score': 0.4520919890835573, -'score': 0.4520919890835573}, -{'h': 8, -'id': 18, -'label': 0, -'original_score': 0.49128398887598235, -'score': 0.49128398887598235}, -{'h': 9, -'id': 19, -'label': 0, -'original_score': 0.4934231460040127, -'score': 0.4934231460040127}, -{'h': 0, -'id': 20, -'label': 1, -'original_score': 0.6023370616966761, -'score': 0.6023370616966761}, -{'h': 1, -'id': 21, -'label': 0, -'original_score': 0.5588319919664324, -'score': 0.5588319919664324}, -{'h': 2, -'id': 22, -'label': 1, -'original_score': 0.5372993269470902, -'score': 0.5372993269470902}, -{'h': 3, -'id': 23, -'label': 1, -'original_score': 0.6056881032306126, -'score': 0.6056881032306126}, -{'h': 4, -'id': 24, -'label': 1, -'original_score': 0.5777333354677878, -'score': 0.5777333354677878}, -{'h': 5, -'id': 25, -'label': 0, -'original_score': 0.5684077659316352, -'score': 0.5684077659316352}, -{'h': 6, -'id': 26, -'label': 0, -'original_score': 0.5583886351009575, -'score': 0.5583886351009575}, -{'h': 7, -'id': 27, -'label': 0, -'original_score': 0.585107016245853, -'score': 0.585107016245853}, -{'h': 4, -'id': 28, -'label': 0, -'original_score': 0.5024398267017434, -'score': 0.5024398267017434}, -{'h': 7, -'id': 29, -'label': 1, -'original_score': 0.5119051369645927, -'score': 0.5119051369645927}, -{'h': 0, -'id': 30, -'label': 1, -'original_score': 0.6874421886689279, -'score': 0.6874421886689279}, -{'h': 1, -'id': 31, -'label': 1, -'original_score': 0.7622939478182656, -'score': 0.7622939478182656}, -{'h': 2, -'id': 32, -'label': 1, -'original_score': 0.8240376576917314, -'score': 0.8240376576917314}, -{'h': 3, -'id': 33, -'label': 0, -'original_score': 0.8491598185092843, -'score': 0.8491598185092843}, -{'h': 4, -'id': 34, -'label': 1, -'original_score': 0.7585879921321647, -'score': 0.7585879921321647}, -{'h': 5, -'id': 35, -'label': 0, -'original_score': 0.76396242565466, -'score': 0.76396242565466}, -{'h': 6, -'id': 36, -'label': 1, -'original_score': 0.7498984213509621, -'score': 0.7498984213509621}, -{'h': 7, -'id': 37, -'label': 1, -'original_score': 0.6642342379293016, -'score': 0.6642342379293016}, -{'h': 8, -'id': 38, -'label': 0, -'original_score': 0.7594027841393808, -'score': 0.7594027841393808}, -{'h': 9, -'id': 39, -'label': 1, -'original_score': 0.816737760918518, -'score': 0.816737760918518}, -{'h': 0, -'id': 40, -'label': 1, -'original_score': 0.8926172493334218, -'score': 0.8926172493334218}, -{'h': 1, -'id': 41, -'label': 0, -'original_score': 0.9194132577983325, -'score': 0.9194132577983325}, -{'h': 2, -'id': 42, -'label': 1, -'original_score': 0.8603862951854552, -'score': 0.8603862951854552}, -{'h': 3, -'id': 43, -'label': 1, -'original_score': 0.9093601089110575, -'score': 0.9093601089110575}, -{'h': 4, -'id': 44, -'label': 1, -'original_score': 0.9442430043437404, -'score': 0.9442430043437404}, -{'h': 5, -'id': 45, -'label': 1, -'original_score': 0.8778942613680896, -'score': 0.8778942613680896}, -{'h': 6, -'id': 46, -'label': 1, -'original_score': 0.8873305075007553, -'score': 0.8873305075007553}, -{'h': 7, -'id': 47, -'label': 1, -'original_score': 0.8786043110234295, -'score': 0.8786043110234295}, -{'h': 8, -'id': 48, -'label': 1, -'original_score': 0.8682870444345626, -'score': 0.8682870444345626}, -{'h': 9, -'id': 49, -'label': 1, -'original_score': 0.8698959578262738, -'score': 0.8698959578262738}] - - -weatherdata.forEach(d => { - 
d.is_filter = d.label && Math.random() < .6 -}) \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js b/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js deleted file mode 100644 index ee7c8a4f14939e8d09185fd47b2b43c8e3c37b11..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js +++ /dev/null @@ -1,200 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - -console.clear() - -window.init = function(){ - var initFns = [window.initUtil, window.initScatter, window.initPair] - if (!initFns.every(d => d)) return - - window.util = initUtil() - - function parseTidy(csvStr, sentences){ - var tidy = d3.csvParse(csvStr, d => { - return { - e0: +d.e0, - e1: +d.e1, - i0: +d.i0, - i1: +d.i1, - tokenIndex: +d.tokenIndex, - sentenceIndex: +d.sentenceIndex, - } - }) - - var bySentence = d3.nestBy(tidy, d => d.sentenceIndex) - bySentence.forEach(sent => { - sent.sentenceIndex = +sent.key - sent.s0 = sentences[sent.sentenceIndex].s0 - sent.s1 = sentences[sent.sentenceIndex].s1 - sent.orig = sentences[sent.sentenceIndex].orig - - sent.corr = ss.sampleCorrelation( - sent.map(d => Math.min(d.i0, 300)), - sent.map(d => Math.min(d.i1, 300)) - ) - // sent.corr = ss.sampleCorrelation(sent.map(d => d.e0), sent.map(d => d.e1)) - }) - - return bySentence - } - - var bySentenceA = parseTidy(python_data.tidyCSV_A, python_data.sentences_A) - var bySentenceB = parseTidy(python_data.tidyCSV_B, python_data.sentences_B) - var bySentence = bySentenceA.map((a, i) => { - var b = bySentenceB[i] - var orig = a.orig - .replace('in 1918, ', '') - .replace('in texas, ', '') - .replace('in texas, ', '') - - return {a, b, orig} - }) - - var sel = d3.select('.container').html(` -
        -
        -
        -
        -
        -
        -
        -
        -
        - `) - .st({width: 1400}) - d3.selectAll('.list,.scatter').st({width: 430, display: 'inline-block', verticalAlign: 'top'}) - - d3.selectAll('.pair-a,.pair-b,.pair-ab').st({width: 400, display: 'inline-block', verticalAlign: 'top'}) - - function initScatter(bySentence, sel){ - var c = d3.conventions({ - sel: sel.st({width: 350}), - height: 100, - width: 300, - height: 300, - margin: {left: 40, top: 17, bottom: 60} - }) - - var domain = d3.extent(bySentence.map(d => d.a.corr).concat(bySentence.map(d => d.b.corr))) - - - c.x.domain(domain).nice() - c.y.domain(domain).nice() - c.xAxis.ticks(5) - c.yAxis.ticks(5) - d3.drawAxis(c) - c.svg.selectAll('.tick').st({display: 'block'}) - - util.ggPlotBg(c) - util.addAxisLabel(c, - python_data.slug_A + ' coefficients (avg ' + util.corrFmt(d3.mean(bySentence, d => d.a.corr)) + ')', - python_data.slug_B + ' coefficients (avg ' + util.corrFmt(d3.mean(bySentence, d => d.b.corr)) + ')', - ) - - - c.svg.append('path').at({d: `M 0 ${c.height} L ${c.width} 0`, stroke: '#fff', strokeWidth: 2}) - - c.svg.appendMany('circle.sentence', bySentence) - .translate(d => [c.x(d.a.corr), c.y(d.b.corr)]) - .at({ - r: 3, - fill: 'none', - stroke: '#000' - }) - .on('mouseover', setSentenceAsPair) - } - initScatter(bySentence, d3.select('.scatter')) - - - function initList(bySentence, sel){ - var tableSel = sel - .st({height: 300 + 17, overflowY: 'scroll', cursor: 'default', position: 'relative'}) - .append('table') - .st({fontSize: 12}) - - tableSel.append('tr.header') - .html(` - ${python_data.slug_A} - ${python_data.slug_B} - template - `) - - var rowSel = tableSel - .appendMany('tr.sentence', _.sortBy(bySentence, d => d.a.corr)) - .on('mouseover', setSentenceAsPair) - .st({padding: 2, fontSize: 12}) - .html(d => ` - ${util.corrFmt(d.a.corr)} - ${util.corrFmt(d.b.corr)} - ${d.orig.replace('[', '').replace(']', '')} - `) - - } - initList(bySentence, d3.select('.list')) - - - function setSentenceAsPair(s){ - function drawScatter(type){ - var st = s - if (type.length == 2){ - st.e0 = s.a.e0.map((e0, i) => e0 - s.a.e1[i]) - st.e1 = s.b.e0.map((e0, i) => e0 - s.b.e1[i]) - - st.label0 = python_data.slug_A + ' dif' - st.label1 = python_data.slug_B + ' dif' - st.isDifference = false - st.count = (python_settings.count || 150)*2 - } else { - st = s[type] - st.e0 = d3.range(python_data.vocab.length).map(d => -Infinity) - st.e1 = d3.range(python_data.vocab.length).map(d => -Infinity) - st.forEach(d => { - st.e0[d.tokenIndex] = d.e0 - st.e1[d.tokenIndex] = d.e1 - }) - - st.label0 = st.s0 - st.label1 = st.s1 - - st.isDifference = python_settings.isDifference - st.count = python_settings.count || 150 - - st.topLabel = type == 'a' ? 
python_data.slug_A : python_data.slug_B - } - - st.vocab = python_data.vocab - - var sel = d3.select('.pair-' + type).html('').st({width: 400, marginRight: 40}) - initPair(st, sel.append('div')) - } - drawScatter('b') - drawScatter('a') - drawScatter('ab') - - d3.selectAll('.sentence').classed('active', d => d == s) - - d3.selectAll('tr.sentence').filter(d => d == s) - .each(function(){ - this.scrollIntoView({ block: 'nearest', inline: 'nearest'}) - }) - } - setSentenceAsPair(bySentence[0]) - -} - - - -window.init() - diff --git a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/draw_slides.js b/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/draw_slides.js deleted file mode 100644 index 17ab651b01bc454c7168d55d28d5d8b42b26379b..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/draw_slides.js +++ /dev/null @@ -1,160 +0,0 @@ -window.drawSlides = function(){ - var slides = [ - { - id: 'intro', - visible_threshold: 0, //Also sets pointerEvents - visible_tmessage: 0, - visible_calibration: 0, - constant_model_score: 0, - }, - { - id: 'thresholding', - visible_threshold: 1, - visible_tmessage: 0, - visible_calibration: 0, - constant_model_score: 0, - // target_thresholds: [0, 0.25, 0.35, 0.6, 0.7, 1] - target_threshold: .4 - }, - { - id: 'adjustable_thresholding', - visible_threshold: 1, - visible_tmessage: 1, - visible_calibration: 0, - constant_model_score: 0, - target_threshold: .47 - // target_thresholds: [0, 0.25, 0.35, 0.6, 0.7, 1] - }, - { - id: 'calibration', - visible_threshold: 0, - visible_tmessage: 0, - visible_calibration: 1, - constant_model_score: 0, - target_thresholds: [0, 0.2, 0.4, 0.6, 0.8, 1] - }, - { - id: 'adjusting_calibration', - visible_threshold: 0, - visible_tmessage: 0, - visible_calibration: 1, - constant_model_score: 0, - target_thresholds: [0, 0.15, 0.45, 0.55, 0.83, 1] - }, - // { - // id: 'improving_calibration', - // visible_threshold: 0, - // visible_calibration: 1, - // constant_model_score: 1, - // target_thresholds: [0, 0.305, 0.407, 0.503, 0.649, 1], - // }, - { - id: 'shifting_data', - visible_threshold: 0, - visible_tmessage: 0, - visible_calibration: 1, - constant_model_score: 1, - filter_rain: true - }, - { - id: 'beyond_calibration', - visible_threshold: 0, - visible_tmessage: 0, - visible_calibration: 1, - constant_model_score: 1, - target_thresholds: [0, .02, .04, .96, .98, 1], - }, - - ] - - var prevSlide = null; - - var gs = d3.graphScroll() - .container(d3.select('#container')) - .graph(d3.selectAll('#container #graph')) - .eventId('uniqueId1') // namespace for scroll and resize events - .sections(d3.selectAll('#container #sections > div')) - .offset(window.isMobile ? 300 : 200) - .on('active', function(i){ - try{ - var slide = slides.slide = slides[i] - - if (!slide) return console.log(`missing slide ${i}`) - - // if(slide.id != 'slide1'){ - // weatherGraph.prediction_sel.at({opacity:0}); - // } - - // if(slide.constant_model_score){ - // weatherGraph.icon_sel.transition().duration(500) - // .at({y: constant_score}) - // } - // else { - // weatherGraph.icon_sel.transition().duration(500) - // .at({y: d => c.y(d.h)}) - // } - - //weatherGraph.threshold_sel.classed('temp') - - var transition_duration = prevSlide ? 500 : 0; - - // Animate threshold and thresholds between slides - var durationScale = 1 - if (prevSlide){ - durationScale = prevSlide.visible_calibration == slide.visible_calibration ? 
1 : 3 - } - if (slide.target_thresholds){ - weatherGraph.setThresholds(slide.target_thresholds, transition_duration*durationScale) - } - if (slide.target_threshold){ - weatherGraph.setThreshold(slide.target_threshold, transition_duration*durationScale) - } - - calibrationCurve.renderBuckets() - - - weatherGraph.thresholdSel - .st({pointerEvents: slide.visible_threshold ? 'all' : 'none'}) - .transition().duration(transition_duration) - .st({opacity: slide.visible_threshold}); - - weatherGraph.messageSel - .transition().duration(transition_duration) - .st({opacity: slide.visible_tmessage}); - - weatherGraph.predictionSel - .transition().duration(transition_duration) - .at({strokeOpacity: slide.visible_threshold ? 1: 0}); - - weatherGraph.weatherGroupSel - .transition().duration(transition_duration) - .ease(d3.easeBounce).delay((d, i) => Math.random()*transition_duration) - .st({opacity: d => slide.filter_rain && d.is_filter ? 0 : 1}) - - weatherGraph.thresholdsGroupSel - .st({pointerEvents: slide.visible_calibration ? 'all' : 'none'}) - .transition().duration(transition_duration) - .st({opacity: slide.visible_calibration}) - - calibrationCurve.c.svg - .transition().duration(transition_duration) - .st({opacity: slide.visible_calibration}) - - - prevSlide = slide; - } catch (e){ - console.log(e) - } - }) - - return slides -} - -if (window.init) window.init() - - -/* - - - -*/ \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/source/data-leak/style.css b/spaces/merve/uncertainty-calibration/source/data-leak/style.css deleted file mode 100644 index f6d1cf1c23de849148d5754c19b5aafe77c63595..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/data-leak/style.css +++ /dev/null @@ -1,176 +0,0 @@ -body{ - -} - - -p{ - margin-left: 0px auto; - margin-right: 0px auto; - margin: 0px auto; - margin-top: 1em; - margin-bottom: 1em; -} -h3, .post-summary, h1x, p{ - max-width: 650px; -} - -#recirc{ - max-width: 760px; -} - - -.white{ - stroke: #fff; - fill: none; - stroke-width: 1; -} - -.player{ - cursor: pointer; - stroke: #000; - stroke-width: 2; -} - -.button{ - border: .5px solid #000; - /*border-bottom-width: 4px;*/ - /*border-right-width: 4px;*/ - border-radius: 8px; - padding: 4px; - margin: 2px; - cursor: pointer; - display: inline-block; - /*font-family: monospace;*/ - /*font-family: 'Roboto Slab', serif;*/ - /*font-size: 16px;*/ - user-select: none; - font-family: 'Google Sans', sans-serif; - font-family: 'Roboto', Helvetica, sans-serif; - - /*font-weight: 300;*/ -} - -@media (min-width: 800px){ - .button{ - margin-bottom: -100px; - } -} - -.inline-button{ - display: inline; -} - -.button:hover{ - background: #eee !important; -} - -.button:active{ -} - -canvas{ - opacity: .9; -} - -svg{ - overflow: visible; -} - -.axis{ - font-size: 12px; - -} -.axis{ - color: #000; -} -.axis text{ - fill: #999; - font-family: 'Roboto', Helvetica, sans-serif; -} -.axis text.chart-title{ - fill: #000; - font-size: 16px; -} -.axis line{ - stroke: #ccc; - display: none; -} - -.domain{ - stroke: #ccc; - display: none; -} - -text, .chart-title{ - user-select: none; - /*pointer-events: none;*/ -} - - -.field{ - font-family: 'Google Sans', sans-serif; - font-family: 'Roboto', Helvetica, sans-serif; - margin-top: 10px; -} - -.chart-title span{ - padding: 4px; -} - -.chart-title span:last-child{ - color: #fff; -} - -.chart-title span:first-child{ - color: #000; -} - -#field-regression .white, #field-regression-leak .white{ - stroke: #ccc; -} - -#field-grass 
.button, #field-prediction .button{ - display: none; -} - -.face-container{ - max-width: 400px; - - margin: 0px auto; -} -.face-container img{ - width: 100%; -} - -.post-summary { - margin-bottom: 40px; -} - -p { - margin: 10 auto; -} - - - -.pointer{ - height: 0px; - position: relative; -} -.pointer div { - overflow: visible; - content: ""; - background-image: url(https://pair-code.github.io/interpretability/bert-tree/pointer.svg); - width: 27px; - height: 27px; - position: absolute; - left: -35px; - top: 0px; -} - - -.face-container:after{ - content: "M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in CCS, 2015."; - font-size: 12px; - color: #888; - line-height: 14px; - display: block; -} \ No newline at end of file diff --git a/spaces/mikeee/radiobee-aligner/docs/build/html/_static/underscore-1.13.1.js b/spaces/mikeee/radiobee-aligner/docs/build/html/_static/underscore-1.13.1.js deleted file mode 100644 index ffd77af9648a47d389f2d6976d4aa1c44d7ce7ce..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/docs/build/html/_static/underscore-1.13.1.js +++ /dev/null @@ -1,2042 +0,0 @@ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define('underscore', factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, (function () { - var current = global._; - var exports = global._ = factory(); - exports.noConflict = function () { global._ = current; return exports; }; - }())); -}(this, (function () { - // Underscore.js 1.13.1 - // https://underscorejs.org - // (c) 2009-2021 Jeremy Ashkenas, Julian Gonggrijp, and DocumentCloud and Investigative Reporters & Editors - // Underscore may be freely distributed under the MIT license. - - // Current version. - var VERSION = '1.13.1'; - - // Establish the root object, `window` (`self`) in the browser, `global` - // on the server, or `this` in some virtual machines. We use `self` - // instead of `window` for `WebWorker` support. - var root = typeof self == 'object' && self.self === self && self || - typeof global == 'object' && global.global === global && global || - Function('return this')() || - {}; - - // Save bytes in the minified (but not gzipped) version: - var ArrayProto = Array.prototype, ObjProto = Object.prototype; - var SymbolProto = typeof Symbol !== 'undefined' ? Symbol.prototype : null; - - // Create quick reference variables for speed access to core prototypes. - var push = ArrayProto.push, - slice = ArrayProto.slice, - toString = ObjProto.toString, - hasOwnProperty = ObjProto.hasOwnProperty; - - // Modern feature detection. - var supportsArrayBuffer = typeof ArrayBuffer !== 'undefined', - supportsDataView = typeof DataView !== 'undefined'; - - // All **ECMAScript 5+** native function implementations that we hope to use - // are declared here. - var nativeIsArray = Array.isArray, - nativeKeys = Object.keys, - nativeCreate = Object.create, - nativeIsView = supportsArrayBuffer && ArrayBuffer.isView; - - // Create references to these builtin functions because we override them. - var _isNaN = isNaN, - _isFinite = isFinite; - - // Keys in IE < 9 that won't be iterated by `for key in ...` and thus missed. 
- var hasEnumBug = !{toString: null}.propertyIsEnumerable('toString'); - var nonEnumerableProps = ['valueOf', 'isPrototypeOf', 'toString', - 'propertyIsEnumerable', 'hasOwnProperty', 'toLocaleString']; - - // The largest integer that can be represented exactly. - var MAX_ARRAY_INDEX = Math.pow(2, 53) - 1; - - // Some functions take a variable number of arguments, or a few expected - // arguments at the beginning and then a variable number of values to operate - // on. This helper accumulates all remaining arguments past the function’s - // argument length (or an explicit `startIndex`), into an array that becomes - // the last argument. Similar to ES6’s "rest parameter". - function restArguments(func, startIndex) { - startIndex = startIndex == null ? func.length - 1 : +startIndex; - return function() { - var length = Math.max(arguments.length - startIndex, 0), - rest = Array(length), - index = 0; - for (; index < length; index++) { - rest[index] = arguments[index + startIndex]; - } - switch (startIndex) { - case 0: return func.call(this, rest); - case 1: return func.call(this, arguments[0], rest); - case 2: return func.call(this, arguments[0], arguments[1], rest); - } - var args = Array(startIndex + 1); - for (index = 0; index < startIndex; index++) { - args[index] = arguments[index]; - } - args[startIndex] = rest; - return func.apply(this, args); - }; - } - - // Is a given variable an object? - function isObject(obj) { - var type = typeof obj; - return type === 'function' || type === 'object' && !!obj; - } - - // Is a given value equal to null? - function isNull(obj) { - return obj === null; - } - - // Is a given variable undefined? - function isUndefined(obj) { - return obj === void 0; - } - - // Is a given value a boolean? - function isBoolean(obj) { - return obj === true || obj === false || toString.call(obj) === '[object Boolean]'; - } - - // Is a given value a DOM element? - function isElement(obj) { - return !!(obj && obj.nodeType === 1); - } - - // Internal function for creating a `toString`-based type tester. - function tagTester(name) { - var tag = '[object ' + name + ']'; - return function(obj) { - return toString.call(obj) === tag; - }; - } - - var isString = tagTester('String'); - - var isNumber = tagTester('Number'); - - var isDate = tagTester('Date'); - - var isRegExp = tagTester('RegExp'); - - var isError = tagTester('Error'); - - var isSymbol = tagTester('Symbol'); - - var isArrayBuffer = tagTester('ArrayBuffer'); - - var isFunction = tagTester('Function'); - - // Optimize `isFunction` if appropriate. Work around some `typeof` bugs in old - // v8, IE 11 (#1621), Safari 8 (#1929), and PhantomJS (#2236). - var nodelist = root.document && root.document.childNodes; - if (typeof /./ != 'function' && typeof Int8Array != 'object' && typeof nodelist != 'function') { - isFunction = function(obj) { - return typeof obj == 'function' || false; - }; - } - - var isFunction$1 = isFunction; - - var hasObjectTag = tagTester('Object'); - - // In IE 10 - Edge 13, `DataView` has string tag `'[object Object]'`. - // In IE 11, the most common among them, this problem also applies to - // `Map`, `WeakMap` and `Set`. - var hasStringTagBug = ( - supportsDataView && hasObjectTag(new DataView(new ArrayBuffer(8))) - ), - isIE11 = (typeof Map !== 'undefined' && hasObjectTag(new Map)); - - var isDataView = tagTester('DataView'); - - // In IE 10 - Edge 13, we need a different heuristic - // to determine whether an object is a `DataView`. 
- function ie10IsDataView(obj) { - return obj != null && isFunction$1(obj.getInt8) && isArrayBuffer(obj.buffer); - } - - var isDataView$1 = (hasStringTagBug ? ie10IsDataView : isDataView); - - // Is a given value an array? - // Delegates to ECMA5's native `Array.isArray`. - var isArray = nativeIsArray || tagTester('Array'); - - // Internal function to check whether `key` is an own property name of `obj`. - function has$1(obj, key) { - return obj != null && hasOwnProperty.call(obj, key); - } - - var isArguments = tagTester('Arguments'); - - // Define a fallback version of the method in browsers (ahem, IE < 9), where - // there isn't any inspectable "Arguments" type. - (function() { - if (!isArguments(arguments)) { - isArguments = function(obj) { - return has$1(obj, 'callee'); - }; - } - }()); - - var isArguments$1 = isArguments; - - // Is a given object a finite number? - function isFinite$1(obj) { - return !isSymbol(obj) && _isFinite(obj) && !isNaN(parseFloat(obj)); - } - - // Is the given value `NaN`? - function isNaN$1(obj) { - return isNumber(obj) && _isNaN(obj); - } - - // Predicate-generating function. Often useful outside of Underscore. - function constant(value) { - return function() { - return value; - }; - } - - // Common internal logic for `isArrayLike` and `isBufferLike`. - function createSizePropertyCheck(getSizeProperty) { - return function(collection) { - var sizeProperty = getSizeProperty(collection); - return typeof sizeProperty == 'number' && sizeProperty >= 0 && sizeProperty <= MAX_ARRAY_INDEX; - } - } - - // Internal helper to generate a function to obtain property `key` from `obj`. - function shallowProperty(key) { - return function(obj) { - return obj == null ? void 0 : obj[key]; - }; - } - - // Internal helper to obtain the `byteLength` property of an object. - var getByteLength = shallowProperty('byteLength'); - - // Internal helper to determine whether we should spend extensive checks against - // `ArrayBuffer` et al. - var isBufferLike = createSizePropertyCheck(getByteLength); - - // Is a given value a typed array? - var typedArrayPattern = /\[object ((I|Ui)nt(8|16|32)|Float(32|64)|Uint8Clamped|Big(I|Ui)nt64)Array\]/; - function isTypedArray(obj) { - // `ArrayBuffer.isView` is the most future-proof, so use it when available. - // Otherwise, fall back on the above regular expression. - return nativeIsView ? (nativeIsView(obj) && !isDataView$1(obj)) : - isBufferLike(obj) && typedArrayPattern.test(toString.call(obj)); - } - - var isTypedArray$1 = supportsArrayBuffer ? isTypedArray : constant(false); - - // Internal helper to obtain the `length` property of an object. - var getLength = shallowProperty('length'); - - // Internal helper to create a simple lookup structure. - // `collectNonEnumProps` used to depend on `_.contains`, but this led to - // circular imports. `emulatedSet` is a one-off solution that only works for - // arrays of strings. - function emulatedSet(keys) { - var hash = {}; - for (var l = keys.length, i = 0; i < l; ++i) hash[keys[i]] = true; - return { - contains: function(key) { return hash[key]; }, - push: function(key) { - hash[key] = true; - return keys.push(key); - } - }; - } - - // Internal helper. Checks `keys` for the presence of keys in IE < 9 that won't - // be iterated by `for key in ...` and thus missed. Extends `keys` in place if - // needed. 
- function collectNonEnumProps(obj, keys) { - keys = emulatedSet(keys); - var nonEnumIdx = nonEnumerableProps.length; - var constructor = obj.constructor; - var proto = isFunction$1(constructor) && constructor.prototype || ObjProto; - - // Constructor is a special case. - var prop = 'constructor'; - if (has$1(obj, prop) && !keys.contains(prop)) keys.push(prop); - - while (nonEnumIdx--) { - prop = nonEnumerableProps[nonEnumIdx]; - if (prop in obj && obj[prop] !== proto[prop] && !keys.contains(prop)) { - keys.push(prop); - } - } - } - - // Retrieve the names of an object's own properties. - // Delegates to **ECMAScript 5**'s native `Object.keys`. - function keys(obj) { - if (!isObject(obj)) return []; - if (nativeKeys) return nativeKeys(obj); - var keys = []; - for (var key in obj) if (has$1(obj, key)) keys.push(key); - // Ahem, IE < 9. - if (hasEnumBug) collectNonEnumProps(obj, keys); - return keys; - } - - // Is a given array, string, or object empty? - // An "empty" object has no enumerable own-properties. - function isEmpty(obj) { - if (obj == null) return true; - // Skip the more expensive `toString`-based type checks if `obj` has no - // `.length`. - var length = getLength(obj); - if (typeof length == 'number' && ( - isArray(obj) || isString(obj) || isArguments$1(obj) - )) return length === 0; - return getLength(keys(obj)) === 0; - } - - // Returns whether an object has a given set of `key:value` pairs. - function isMatch(object, attrs) { - var _keys = keys(attrs), length = _keys.length; - if (object == null) return !length; - var obj = Object(object); - for (var i = 0; i < length; i++) { - var key = _keys[i]; - if (attrs[key] !== obj[key] || !(key in obj)) return false; - } - return true; - } - - // If Underscore is called as a function, it returns a wrapped object that can - // be used OO-style. This wrapper holds altered versions of all functions added - // through `_.mixin`. Wrapped objects may be chained. - function _$1(obj) { - if (obj instanceof _$1) return obj; - if (!(this instanceof _$1)) return new _$1(obj); - this._wrapped = obj; - } - - _$1.VERSION = VERSION; - - // Extracts the result from a wrapped and chained object. - _$1.prototype.value = function() { - return this._wrapped; - }; - - // Provide unwrapping proxies for some methods used in engine operations - // such as arithmetic and JSON stringification. - _$1.prototype.valueOf = _$1.prototype.toJSON = _$1.prototype.value; - - _$1.prototype.toString = function() { - return String(this._wrapped); - }; - - // Internal function to wrap or shallow-copy an ArrayBuffer, - // typed array or DataView to a new view, reusing the buffer. - function toBufferView(bufferSource) { - return new Uint8Array( - bufferSource.buffer || bufferSource, - bufferSource.byteOffset || 0, - getByteLength(bufferSource) - ); - } - - // We use this string twice, so give it a name for minification. - var tagDataView = '[object DataView]'; - - // Internal recursive comparison function for `_.isEqual`. - function eq(a, b, aStack, bStack) { - // Identical objects are equal. `0 === -0`, but they aren't identical. - // See the [Harmony `egal` proposal](https://wiki.ecmascript.org/doku.php?id=harmony:egal). - if (a === b) return a !== 0 || 1 / a === 1 / b; - // `null` or `undefined` only equal to itself (strict comparison). - if (a == null || b == null) return false; - // `NaN`s are equivalent, but non-reflexive. 
- if (a !== a) return b !== b; - // Exhaust primitive checks - var type = typeof a; - if (type !== 'function' && type !== 'object' && typeof b != 'object') return false; - return deepEq(a, b, aStack, bStack); - } - - // Internal recursive comparison function for `_.isEqual`. - function deepEq(a, b, aStack, bStack) { - // Unwrap any wrapped objects. - if (a instanceof _$1) a = a._wrapped; - if (b instanceof _$1) b = b._wrapped; - // Compare `[[Class]]` names. - var className = toString.call(a); - if (className !== toString.call(b)) return false; - // Work around a bug in IE 10 - Edge 13. - if (hasStringTagBug && className == '[object Object]' && isDataView$1(a)) { - if (!isDataView$1(b)) return false; - className = tagDataView; - } - switch (className) { - // These types are compared by value. - case '[object RegExp]': - // RegExps are coerced to strings for comparison (Note: '' + /a/i === '/a/i') - case '[object String]': - // Primitives and their corresponding object wrappers are equivalent; thus, `"5"` is - // equivalent to `new String("5")`. - return '' + a === '' + b; - case '[object Number]': - // `NaN`s are equivalent, but non-reflexive. - // Object(NaN) is equivalent to NaN. - if (+a !== +a) return +b !== +b; - // An `egal` comparison is performed for other numeric values. - return +a === 0 ? 1 / +a === 1 / b : +a === +b; - case '[object Date]': - case '[object Boolean]': - // Coerce dates and booleans to numeric primitive values. Dates are compared by their - // millisecond representations. Note that invalid dates with millisecond representations - // of `NaN` are not equivalent. - return +a === +b; - case '[object Symbol]': - return SymbolProto.valueOf.call(a) === SymbolProto.valueOf.call(b); - case '[object ArrayBuffer]': - case tagDataView: - // Coerce to typed array so we can fall through. - return deepEq(toBufferView(a), toBufferView(b), aStack, bStack); - } - - var areArrays = className === '[object Array]'; - if (!areArrays && isTypedArray$1(a)) { - var byteLength = getByteLength(a); - if (byteLength !== getByteLength(b)) return false; - if (a.buffer === b.buffer && a.byteOffset === b.byteOffset) return true; - areArrays = true; - } - if (!areArrays) { - if (typeof a != 'object' || typeof b != 'object') return false; - - // Objects with different constructors are not equivalent, but `Object`s or `Array`s - // from different frames are. - var aCtor = a.constructor, bCtor = b.constructor; - if (aCtor !== bCtor && !(isFunction$1(aCtor) && aCtor instanceof aCtor && - isFunction$1(bCtor) && bCtor instanceof bCtor) - && ('constructor' in a && 'constructor' in b)) { - return false; - } - } - // Assume equality for cyclic structures. The algorithm for detecting cyclic - // structures is adapted from ES 5.1 section 15.12.3, abstract operation `JO`. - - // Initializing stack of traversed objects. - // It's done here since we only need them for objects and arrays comparison. - aStack = aStack || []; - bStack = bStack || []; - var length = aStack.length; - while (length--) { - // Linear search. Performance is inversely proportional to the number of - // unique nested structures. - if (aStack[length] === a) return bStack[length] === b; - } - - // Add the first object to the stack of traversed objects. - aStack.push(a); - bStack.push(b); - - // Recursively compare objects and arrays. - if (areArrays) { - // Compare array lengths to determine if a deep comparison is necessary. 
- length = a.length; - if (length !== b.length) return false; - // Deep compare the contents, ignoring non-numeric properties. - while (length--) { - if (!eq(a[length], b[length], aStack, bStack)) return false; - } - } else { - // Deep compare objects. - var _keys = keys(a), key; - length = _keys.length; - // Ensure that both objects contain the same number of properties before comparing deep equality. - if (keys(b).length !== length) return false; - while (length--) { - // Deep compare each member - key = _keys[length]; - if (!(has$1(b, key) && eq(a[key], b[key], aStack, bStack))) return false; - } - } - // Remove the first object from the stack of traversed objects. - aStack.pop(); - bStack.pop(); - return true; - } - - // Perform a deep comparison to check if two objects are equal. - function isEqual(a, b) { - return eq(a, b); - } - - // Retrieve all the enumerable property names of an object. - function allKeys(obj) { - if (!isObject(obj)) return []; - var keys = []; - for (var key in obj) keys.push(key); - // Ahem, IE < 9. - if (hasEnumBug) collectNonEnumProps(obj, keys); - return keys; - } - - // Since the regular `Object.prototype.toString` type tests don't work for - // some types in IE 11, we use a fingerprinting heuristic instead, based - // on the methods. It's not great, but it's the best we got. - // The fingerprint method lists are defined below. - function ie11fingerprint(methods) { - var length = getLength(methods); - return function(obj) { - if (obj == null) return false; - // `Map`, `WeakMap` and `Set` have no enumerable keys. - var keys = allKeys(obj); - if (getLength(keys)) return false; - for (var i = 0; i < length; i++) { - if (!isFunction$1(obj[methods[i]])) return false; - } - // If we are testing against `WeakMap`, we need to ensure that - // `obj` doesn't have a `forEach` method in order to distinguish - // it from a regular `Map`. - return methods !== weakMapMethods || !isFunction$1(obj[forEachName]); - }; - } - - // In the interest of compact minification, we write - // each string in the fingerprints only once. - var forEachName = 'forEach', - hasName = 'has', - commonInit = ['clear', 'delete'], - mapTail = ['get', hasName, 'set']; - - // `Map`, `WeakMap` and `Set` each have slightly different - // combinations of the above sublists. - var mapMethods = commonInit.concat(forEachName, mapTail), - weakMapMethods = commonInit.concat(mapTail), - setMethods = ['add'].concat(commonInit, forEachName, hasName); - - var isMap = isIE11 ? ie11fingerprint(mapMethods) : tagTester('Map'); - - var isWeakMap = isIE11 ? ie11fingerprint(weakMapMethods) : tagTester('WeakMap'); - - var isSet = isIE11 ? ie11fingerprint(setMethods) : tagTester('Set'); - - var isWeakSet = tagTester('WeakSet'); - - // Retrieve the values of an object's properties. - function values(obj) { - var _keys = keys(obj); - var length = _keys.length; - var values = Array(length); - for (var i = 0; i < length; i++) { - values[i] = obj[_keys[i]]; - } - return values; - } - - // Convert an object into a list of `[key, value]` pairs. - // The opposite of `_.object` with one argument. - function pairs(obj) { - var _keys = keys(obj); - var length = _keys.length; - var pairs = Array(length); - for (var i = 0; i < length; i++) { - pairs[i] = [_keys[i], obj[_keys[i]]]; - } - return pairs; - } - - // Invert the keys and values of an object. The values must be serializable. 
- function invert(obj) { - var result = {}; - var _keys = keys(obj); - for (var i = 0, length = _keys.length; i < length; i++) { - result[obj[_keys[i]]] = _keys[i]; - } - return result; - } - - // Return a sorted list of the function names available on the object. - function functions(obj) { - var names = []; - for (var key in obj) { - if (isFunction$1(obj[key])) names.push(key); - } - return names.sort(); - } - - // An internal function for creating assigner functions. - function createAssigner(keysFunc, defaults) { - return function(obj) { - var length = arguments.length; - if (defaults) obj = Object(obj); - if (length < 2 || obj == null) return obj; - for (var index = 1; index < length; index++) { - var source = arguments[index], - keys = keysFunc(source), - l = keys.length; - for (var i = 0; i < l; i++) { - var key = keys[i]; - if (!defaults || obj[key] === void 0) obj[key] = source[key]; - } - } - return obj; - }; - } - - // Extend a given object with all the properties in passed-in object(s). - var extend = createAssigner(allKeys); - - // Assigns a given object with all the own properties in the passed-in - // object(s). - // (https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) - var extendOwn = createAssigner(keys); - - // Fill in a given object with default properties. - var defaults = createAssigner(allKeys, true); - - // Create a naked function reference for surrogate-prototype-swapping. - function ctor() { - return function(){}; - } - - // An internal function for creating a new object that inherits from another. - function baseCreate(prototype) { - if (!isObject(prototype)) return {}; - if (nativeCreate) return nativeCreate(prototype); - var Ctor = ctor(); - Ctor.prototype = prototype; - var result = new Ctor; - Ctor.prototype = null; - return result; - } - - // Creates an object that inherits from the given prototype object. - // If additional properties are provided then they will be added to the - // created object. - function create(prototype, props) { - var result = baseCreate(prototype); - if (props) extendOwn(result, props); - return result; - } - - // Create a (shallow-cloned) duplicate of an object. - function clone(obj) { - if (!isObject(obj)) return obj; - return isArray(obj) ? obj.slice() : extend({}, obj); - } - - // Invokes `interceptor` with the `obj` and then returns `obj`. - // The primary purpose of this method is to "tap into" a method chain, in - // order to perform operations on intermediate results within the chain. - function tap(obj, interceptor) { - interceptor(obj); - return obj; - } - - // Normalize a (deep) property `path` to array. - // Like `_.iteratee`, this function can be customized. - function toPath$1(path) { - return isArray(path) ? path : [path]; - } - _$1.toPath = toPath$1; - - // Internal wrapper for `_.toPath` to enable minification. - // Similar to `cb` for `_.iteratee`. - function toPath(path) { - return _$1.toPath(path); - } - - // Internal function to obtain a nested property in `obj` along `path`. - function deepGet(obj, path) { - var length = path.length; - for (var i = 0; i < length; i++) { - if (obj == null) return void 0; - obj = obj[path[i]]; - } - return length ? obj : void 0; - } - - // Get the value of the (deep) property on `path` from `object`. - // If any property in `path` does not exist or if the value is - // `undefined`, return `defaultValue` instead. - // The `path` is normalized through `_.toPath`. 
- function get(object, path, defaultValue) { - var value = deepGet(object, toPath(path)); - return isUndefined(value) ? defaultValue : value; - } - - // Shortcut function for checking if an object has a given property directly on - // itself (in other words, not on a prototype). Unlike the internal `has` - // function, this public version can also traverse nested properties. - function has(obj, path) { - path = toPath(path); - var length = path.length; - for (var i = 0; i < length; i++) { - var key = path[i]; - if (!has$1(obj, key)) return false; - obj = obj[key]; - } - return !!length; - } - - // Keep the identity function around for default iteratees. - function identity(value) { - return value; - } - - // Returns a predicate for checking whether an object has a given set of - // `key:value` pairs. - function matcher(attrs) { - attrs = extendOwn({}, attrs); - return function(obj) { - return isMatch(obj, attrs); - }; - } - - // Creates a function that, when passed an object, will traverse that object’s - // properties down the given `path`, specified as an array of keys or indices. - function property(path) { - path = toPath(path); - return function(obj) { - return deepGet(obj, path); - }; - } - - // Internal function that returns an efficient (for current engines) version - // of the passed-in callback, to be repeatedly applied in other Underscore - // functions. - function optimizeCb(func, context, argCount) { - if (context === void 0) return func; - switch (argCount == null ? 3 : argCount) { - case 1: return function(value) { - return func.call(context, value); - }; - // The 2-argument case is omitted because we’re not using it. - case 3: return function(value, index, collection) { - return func.call(context, value, index, collection); - }; - case 4: return function(accumulator, value, index, collection) { - return func.call(context, accumulator, value, index, collection); - }; - } - return function() { - return func.apply(context, arguments); - }; - } - - // An internal function to generate callbacks that can be applied to each - // element in a collection, returning the desired result — either `_.identity`, - // an arbitrary callback, a property matcher, or a property accessor. - function baseIteratee(value, context, argCount) { - if (value == null) return identity; - if (isFunction$1(value)) return optimizeCb(value, context, argCount); - if (isObject(value) && !isArray(value)) return matcher(value); - return property(value); - } - - // External wrapper for our callback generator. Users may customize - // `_.iteratee` if they want additional predicate/iteratee shorthand styles. - // This abstraction hides the internal-only `argCount` argument. - function iteratee(value, context) { - return baseIteratee(value, context, Infinity); - } - _$1.iteratee = iteratee; - - // The function we call internally to generate a callback. It invokes - // `_.iteratee` if overridden, otherwise `baseIteratee`. - function cb(value, context, argCount) { - if (_$1.iteratee !== iteratee) return _$1.iteratee(value, context); - return baseIteratee(value, context, argCount); - } - - // Returns the results of applying the `iteratee` to each element of `obj`. - // In contrast to `_.map` it returns an object. 
- function mapObject(obj, iteratee, context) {
- iteratee = cb(iteratee, context);
- var _keys = keys(obj),
- length = _keys.length,
- results = {};
- for (var index = 0; index < length; index++) {
- var currentKey = _keys[index];
- results[currentKey] = iteratee(obj[currentKey], currentKey, obj);
- }
- return results;
- }
-
- // Predicate-generating function. Often useful outside of Underscore.
- function noop(){}
-
- // Generates a function for a given object that returns a given property.
- function propertyOf(obj) {
- if (obj == null) return noop;
- return function(path) {
- return get(obj, path);
- };
- }
-
- // Run a function **n** times.
- function times(n, iteratee, context) {
- var accum = Array(Math.max(0, n));
- iteratee = optimizeCb(iteratee, context, 1);
- for (var i = 0; i < n; i++) accum[i] = iteratee(i);
- return accum;
- }
-
- // Return a random integer between `min` and `max` (inclusive).
- function random(min, max) {
- if (max == null) {
- max = min;
- min = 0;
- }
- return min + Math.floor(Math.random() * (max - min + 1));
- }
-
- // A (possibly faster) way to get the current timestamp as an integer.
- var now = Date.now || function() {
- return new Date().getTime();
- };
-
- // Internal helper to generate functions for escaping and unescaping strings
- // to/from HTML interpolation.
- function createEscaper(map) {
- var escaper = function(match) {
- return map[match];
- };
- // Regexes for identifying a key that needs to be escaped.
- var source = '(?:' + keys(map).join('|') + ')';
- var testRegexp = RegExp(source);
- var replaceRegexp = RegExp(source, 'g');
- return function(string) {
- string = string == null ? '' : '' + string;
- return testRegexp.test(string) ? string.replace(replaceRegexp, escaper) : string;
- };
- }
-
- // Internal list of HTML entities for escaping.
- var escapeMap = {
- '&': '&amp;',
- '<': '&lt;',
- '>': '&gt;',
- '"': '&quot;',
- "'": '&#x27;',
- '`': '&#x60;'
- };
-
- // Function for escaping strings to HTML interpolation.
- var _escape = createEscaper(escapeMap);
-
- // Internal list of HTML entities for unescaping.
- var unescapeMap = invert(escapeMap);
-
- // Function for unescaping strings from HTML interpolation.
- var _unescape = createEscaper(unescapeMap);
-
- // By default, Underscore uses ERB-style template delimiters. Change the
- // following template settings to use alternative delimiters.
- var templateSettings = _$1.templateSettings = {
- evaluate: /<%([\s\S]+?)%>/g,
- interpolate: /<%=([\s\S]+?)%>/g,
- escape: /<%-([\s\S]+?)%>/g
- };
-
- // When customizing `_.templateSettings`, if you don't want to define an
- // interpolation, evaluation or escaping regex, we need one that is
- // guaranteed not to match.
- var noMatch = /(.)^/;
-
- // Certain characters need to be escaped so that they can be put into a
- // string literal.
- var escapes = {
- "'": "'",
- '\\': '\\',
- '\r': 'r',
- '\n': 'n',
- '\u2028': 'u2028',
- '\u2029': 'u2029'
- };
-
- var escapeRegExp = /\\|'|\r|\n|\u2028|\u2029/g;
-
- function escapeChar(match) {
- return '\\' + escapes[match];
- }
-
- // In order to prevent third-party code injection through
- // `_.templateSettings.variable`, we test it against the following regular
- // expression. It is intentionally a bit more liberal than just matching valid
- // identifiers, but still prevents possible loopholes through defaults or
- // destructuring assignment.
- var bareIdentifier = /^\s*(\w|\$)+\s*$/;
-
- // JavaScript micro-templating, similar to John Resig's implementation. 
- // Underscore templating handles arbitrary delimiters, preserves whitespace, - // and correctly escapes quotes within interpolated code. - // NB: `oldSettings` only exists for backwards compatibility. - function template(text, settings, oldSettings) { - if (!settings && oldSettings) settings = oldSettings; - settings = defaults({}, settings, _$1.templateSettings); - - // Combine delimiters into one regular expression via alternation. - var matcher = RegExp([ - (settings.escape || noMatch).source, - (settings.interpolate || noMatch).source, - (settings.evaluate || noMatch).source - ].join('|') + '|$', 'g'); - - // Compile the template source, escaping string literals appropriately. - var index = 0; - var source = "__p+='"; - text.replace(matcher, function(match, escape, interpolate, evaluate, offset) { - source += text.slice(index, offset).replace(escapeRegExp, escapeChar); - index = offset + match.length; - - if (escape) { - source += "'+\n((__t=(" + escape + "))==null?'':_.escape(__t))+\n'"; - } else if (interpolate) { - source += "'+\n((__t=(" + interpolate + "))==null?'':__t)+\n'"; - } else if (evaluate) { - source += "';\n" + evaluate + "\n__p+='"; - } - - // Adobe VMs need the match returned to produce the correct offset. - return match; - }); - source += "';\n"; - - var argument = settings.variable; - if (argument) { - // Insure against third-party code injection. (CVE-2021-23358) - if (!bareIdentifier.test(argument)) throw new Error( - 'variable is not a bare identifier: ' + argument - ); - } else { - // If a variable is not specified, place data values in local scope. - source = 'with(obj||{}){\n' + source + '}\n'; - argument = 'obj'; - } - - source = "var __t,__p='',__j=Array.prototype.join," + - "print=function(){__p+=__j.call(arguments,'');};\n" + - source + 'return __p;\n'; - - var render; - try { - render = new Function(argument, '_', source); - } catch (e) { - e.source = source; - throw e; - } - - var template = function(data) { - return render.call(this, data, _$1); - }; - - // Provide the compiled source as a convenience for precompilation. - template.source = 'function(' + argument + '){\n' + source + '}'; - - return template; - } - - // Traverses the children of `obj` along `path`. If a child is a function, it - // is invoked with its parent as context. Returns the value of the final - // child, or `fallback` if any child is undefined. - function result(obj, path, fallback) { - path = toPath(path); - var length = path.length; - if (!length) { - return isFunction$1(fallback) ? fallback.call(obj) : fallback; - } - for (var i = 0; i < length; i++) { - var prop = obj == null ? void 0 : obj[path[i]]; - if (prop === void 0) { - prop = fallback; - i = length; // Ensure we don't continue iterating. - } - obj = isFunction$1(prop) ? prop.call(obj) : prop; - } - return obj; - } - - // Generate a unique integer id (unique within the entire client session). - // Useful for temporary DOM ids. - var idCounter = 0; - function uniqueId(prefix) { - var id = ++idCounter + ''; - return prefix ? prefix + id : id; - } - - // Start chaining a wrapped Underscore object. - function chain(obj) { - var instance = _$1(obj); - instance._chain = true; - return instance; - } - - // Internal function to execute `sourceFunc` bound to `context` with optional - // `args`. Determines whether to execute a function as a constructor or as a - // normal function. 
- function executeBound(sourceFunc, boundFunc, context, callingContext, args) { - if (!(callingContext instanceof boundFunc)) return sourceFunc.apply(context, args); - var self = baseCreate(sourceFunc.prototype); - var result = sourceFunc.apply(self, args); - if (isObject(result)) return result; - return self; - } - - // Partially apply a function by creating a version that has had some of its - // arguments pre-filled, without changing its dynamic `this` context. `_` acts - // as a placeholder by default, allowing any combination of arguments to be - // pre-filled. Set `_.partial.placeholder` for a custom placeholder argument. - var partial = restArguments(function(func, boundArgs) { - var placeholder = partial.placeholder; - var bound = function() { - var position = 0, length = boundArgs.length; - var args = Array(length); - for (var i = 0; i < length; i++) { - args[i] = boundArgs[i] === placeholder ? arguments[position++] : boundArgs[i]; - } - while (position < arguments.length) args.push(arguments[position++]); - return executeBound(func, bound, this, this, args); - }; - return bound; - }); - - partial.placeholder = _$1; - - // Create a function bound to a given object (assigning `this`, and arguments, - // optionally). - var bind = restArguments(function(func, context, args) { - if (!isFunction$1(func)) throw new TypeError('Bind must be called on a function'); - var bound = restArguments(function(callArgs) { - return executeBound(func, bound, context, this, args.concat(callArgs)); - }); - return bound; - }); - - // Internal helper for collection methods to determine whether a collection - // should be iterated as an array or as an object. - // Related: https://people.mozilla.org/~jorendorff/es6-draft.html#sec-tolength - // Avoids a very nasty iOS 8 JIT bug on ARM-64. #2094 - var isArrayLike = createSizePropertyCheck(getLength); - - // Internal implementation of a recursive `flatten` function. - function flatten$1(input, depth, strict, output) { - output = output || []; - if (!depth && depth !== 0) { - depth = Infinity; - } else if (depth <= 0) { - return output.concat(input); - } - var idx = output.length; - for (var i = 0, length = getLength(input); i < length; i++) { - var value = input[i]; - if (isArrayLike(value) && (isArray(value) || isArguments$1(value))) { - // Flatten current level of array or arguments object. - if (depth > 1) { - flatten$1(value, depth - 1, strict, output); - idx = output.length; - } else { - var j = 0, len = value.length; - while (j < len) output[idx++] = value[j++]; - } - } else if (!strict) { - output[idx++] = value; - } - } - return output; - } - - // Bind a number of an object's methods to that object. Remaining arguments - // are the method names to be bound. Useful for ensuring that all callbacks - // defined on an object belong to it. - var bindAll = restArguments(function(obj, keys) { - keys = flatten$1(keys, false, false); - var index = keys.length; - if (index < 1) throw new Error('bindAll must be passed function names'); - while (index--) { - var key = keys[index]; - obj[key] = bind(obj[key], obj); - } - return obj; - }); - - // Memoize an expensive function by storing its results. - function memoize(func, hasher) { - var memoize = function(key) { - var cache = memoize.cache; - var address = '' + (hasher ? 
hasher.apply(this, arguments) : key); - if (!has$1(cache, address)) cache[address] = func.apply(this, arguments); - return cache[address]; - }; - memoize.cache = {}; - return memoize; - } - - // Delays a function for the given number of milliseconds, and then calls - // it with the arguments supplied. - var delay = restArguments(function(func, wait, args) { - return setTimeout(function() { - return func.apply(null, args); - }, wait); - }); - - // Defers a function, scheduling it to run after the current call stack has - // cleared. - var defer = partial(delay, _$1, 1); - - // Returns a function, that, when invoked, will only be triggered at most once - // during a given window of time. Normally, the throttled function will run - // as much as it can, without ever going more than once per `wait` duration; - // but if you'd like to disable the execution on the leading edge, pass - // `{leading: false}`. To disable execution on the trailing edge, ditto. - function throttle(func, wait, options) { - var timeout, context, args, result; - var previous = 0; - if (!options) options = {}; - - var later = function() { - previous = options.leading === false ? 0 : now(); - timeout = null; - result = func.apply(context, args); - if (!timeout) context = args = null; - }; - - var throttled = function() { - var _now = now(); - if (!previous && options.leading === false) previous = _now; - var remaining = wait - (_now - previous); - context = this; - args = arguments; - if (remaining <= 0 || remaining > wait) { - if (timeout) { - clearTimeout(timeout); - timeout = null; - } - previous = _now; - result = func.apply(context, args); - if (!timeout) context = args = null; - } else if (!timeout && options.trailing !== false) { - timeout = setTimeout(later, remaining); - } - return result; - }; - - throttled.cancel = function() { - clearTimeout(timeout); - previous = 0; - timeout = context = args = null; - }; - - return throttled; - } - - // When a sequence of calls of the returned function ends, the argument - // function is triggered. The end of a sequence is defined by the `wait` - // parameter. If `immediate` is passed, the argument function will be - // triggered at the beginning of the sequence instead of at the end. - function debounce(func, wait, immediate) { - var timeout, previous, args, result, context; - - var later = function() { - var passed = now() - previous; - if (wait > passed) { - timeout = setTimeout(later, wait - passed); - } else { - timeout = null; - if (!immediate) result = func.apply(context, args); - // This check is needed because `func` can recursively invoke `debounced`. - if (!timeout) args = context = null; - } - }; - - var debounced = restArguments(function(_args) { - context = this; - args = _args; - previous = now(); - if (!timeout) { - timeout = setTimeout(later, wait); - if (immediate) result = func.apply(context, args); - } - return result; - }); - - debounced.cancel = function() { - clearTimeout(timeout); - timeout = args = context = null; - }; - - return debounced; - } - - // Returns the first function passed as an argument to the second, - // allowing you to adjust arguments, run code before and after, and - // conditionally execute the original function. - function wrap(func, wrapper) { - return partial(wrapper, func); - } - - // Returns a negated version of the passed-in predicate. 
- function negate(predicate) { - return function() { - return !predicate.apply(this, arguments); - }; - } - - // Returns a function that is the composition of a list of functions, each - // consuming the return value of the function that follows. - function compose() { - var args = arguments; - var start = args.length - 1; - return function() { - var i = start; - var result = args[start].apply(this, arguments); - while (i--) result = args[i].call(this, result); - return result; - }; - } - - // Returns a function that will only be executed on and after the Nth call. - function after(times, func) { - return function() { - if (--times < 1) { - return func.apply(this, arguments); - } - }; - } - - // Returns a function that will only be executed up to (but not including) the - // Nth call. - function before(times, func) { - var memo; - return function() { - if (--times > 0) { - memo = func.apply(this, arguments); - } - if (times <= 1) func = null; - return memo; - }; - } - - // Returns a function that will be executed at most one time, no matter how - // often you call it. Useful for lazy initialization. - var once = partial(before, 2); - - // Returns the first key on an object that passes a truth test. - function findKey(obj, predicate, context) { - predicate = cb(predicate, context); - var _keys = keys(obj), key; - for (var i = 0, length = _keys.length; i < length; i++) { - key = _keys[i]; - if (predicate(obj[key], key, obj)) return key; - } - } - - // Internal function to generate `_.findIndex` and `_.findLastIndex`. - function createPredicateIndexFinder(dir) { - return function(array, predicate, context) { - predicate = cb(predicate, context); - var length = getLength(array); - var index = dir > 0 ? 0 : length - 1; - for (; index >= 0 && index < length; index += dir) { - if (predicate(array[index], index, array)) return index; - } - return -1; - }; - } - - // Returns the first index on an array-like that passes a truth test. - var findIndex = createPredicateIndexFinder(1); - - // Returns the last index on an array-like that passes a truth test. - var findLastIndex = createPredicateIndexFinder(-1); - - // Use a comparator function to figure out the smallest index at which - // an object should be inserted so as to maintain order. Uses binary search. - function sortedIndex(array, obj, iteratee, context) { - iteratee = cb(iteratee, context, 1); - var value = iteratee(obj); - var low = 0, high = getLength(array); - while (low < high) { - var mid = Math.floor((low + high) / 2); - if (iteratee(array[mid]) < value) low = mid + 1; else high = mid; - } - return low; - } - - // Internal function to generate the `_.indexOf` and `_.lastIndexOf` functions. - function createIndexFinder(dir, predicateFind, sortedIndex) { - return function(array, item, idx) { - var i = 0, length = getLength(array); - if (typeof idx == 'number') { - if (dir > 0) { - i = idx >= 0 ? idx : Math.max(idx + length, i); - } else { - length = idx >= 0 ? Math.min(idx + 1, length) : idx + length + 1; - } - } else if (sortedIndex && idx && length) { - idx = sortedIndex(array, item); - return array[idx] === item ? idx : -1; - } - if (item !== item) { - idx = predicateFind(slice.call(array, i, length), isNaN$1); - return idx >= 0 ? idx + i : -1; - } - for (idx = dir > 0 ? i : length - 1; idx >= 0 && idx < length; idx += dir) { - if (array[idx] === item) return idx; - } - return -1; - }; - } - - // Return the position of the first occurrence of an item in an array, - // or -1 if the item is not included in the array. 
- // If the array is large and already in sort order, pass `true` - // for **isSorted** to use binary search. - var indexOf = createIndexFinder(1, findIndex, sortedIndex); - - // Return the position of the last occurrence of an item in an array, - // or -1 if the item is not included in the array. - var lastIndexOf = createIndexFinder(-1, findLastIndex); - - // Return the first value which passes a truth test. - function find(obj, predicate, context) { - var keyFinder = isArrayLike(obj) ? findIndex : findKey; - var key = keyFinder(obj, predicate, context); - if (key !== void 0 && key !== -1) return obj[key]; - } - - // Convenience version of a common use case of `_.find`: getting the first - // object containing specific `key:value` pairs. - function findWhere(obj, attrs) { - return find(obj, matcher(attrs)); - } - - // The cornerstone for collection functions, an `each` - // implementation, aka `forEach`. - // Handles raw objects in addition to array-likes. Treats all - // sparse array-likes as if they were dense. - function each(obj, iteratee, context) { - iteratee = optimizeCb(iteratee, context); - var i, length; - if (isArrayLike(obj)) { - for (i = 0, length = obj.length; i < length; i++) { - iteratee(obj[i], i, obj); - } - } else { - var _keys = keys(obj); - for (i = 0, length = _keys.length; i < length; i++) { - iteratee(obj[_keys[i]], _keys[i], obj); - } - } - return obj; - } - - // Return the results of applying the iteratee to each element. - function map(obj, iteratee, context) { - iteratee = cb(iteratee, context); - var _keys = !isArrayLike(obj) && keys(obj), - length = (_keys || obj).length, - results = Array(length); - for (var index = 0; index < length; index++) { - var currentKey = _keys ? _keys[index] : index; - results[index] = iteratee(obj[currentKey], currentKey, obj); - } - return results; - } - - // Internal helper to create a reducing function, iterating left or right. - function createReduce(dir) { - // Wrap code that reassigns argument variables in a separate function than - // the one that accesses `arguments.length` to avoid a perf hit. (#1991) - var reducer = function(obj, iteratee, memo, initial) { - var _keys = !isArrayLike(obj) && keys(obj), - length = (_keys || obj).length, - index = dir > 0 ? 0 : length - 1; - if (!initial) { - memo = obj[_keys ? _keys[index] : index]; - index += dir; - } - for (; index >= 0 && index < length; index += dir) { - var currentKey = _keys ? _keys[index] : index; - memo = iteratee(memo, obj[currentKey], currentKey, obj); - } - return memo; - }; - - return function(obj, iteratee, memo, context) { - var initial = arguments.length >= 3; - return reducer(obj, optimizeCb(iteratee, context, 4), memo, initial); - }; - } - - // **Reduce** builds up a single result from a list of values, aka `inject`, - // or `foldl`. - var reduce = createReduce(1); - - // The right-associative version of reduce, also known as `foldr`. - var reduceRight = createReduce(-1); - - // Return all the elements that pass a truth test. - function filter(obj, predicate, context) { - var results = []; - predicate = cb(predicate, context); - each(obj, function(value, index, list) { - if (predicate(value, index, list)) results.push(value); - }); - return results; - } - - // Return all the elements for which a truth test fails. - function reject(obj, predicate, context) { - return filter(obj, negate(cb(predicate)), context); - } - - // Determine whether all of the elements pass a truth test. 
- function every(obj, predicate, context) { - predicate = cb(predicate, context); - var _keys = !isArrayLike(obj) && keys(obj), - length = (_keys || obj).length; - for (var index = 0; index < length; index++) { - var currentKey = _keys ? _keys[index] : index; - if (!predicate(obj[currentKey], currentKey, obj)) return false; - } - return true; - } - - // Determine if at least one element in the object passes a truth test. - function some(obj, predicate, context) { - predicate = cb(predicate, context); - var _keys = !isArrayLike(obj) && keys(obj), - length = (_keys || obj).length; - for (var index = 0; index < length; index++) { - var currentKey = _keys ? _keys[index] : index; - if (predicate(obj[currentKey], currentKey, obj)) return true; - } - return false; - } - - // Determine if the array or object contains a given item (using `===`). - function contains(obj, item, fromIndex, guard) { - if (!isArrayLike(obj)) obj = values(obj); - if (typeof fromIndex != 'number' || guard) fromIndex = 0; - return indexOf(obj, item, fromIndex) >= 0; - } - - // Invoke a method (with arguments) on every item in a collection. - var invoke = restArguments(function(obj, path, args) { - var contextPath, func; - if (isFunction$1(path)) { - func = path; - } else { - path = toPath(path); - contextPath = path.slice(0, -1); - path = path[path.length - 1]; - } - return map(obj, function(context) { - var method = func; - if (!method) { - if (contextPath && contextPath.length) { - context = deepGet(context, contextPath); - } - if (context == null) return void 0; - method = context[path]; - } - return method == null ? method : method.apply(context, args); - }); - }); - - // Convenience version of a common use case of `_.map`: fetching a property. - function pluck(obj, key) { - return map(obj, property(key)); - } - - // Convenience version of a common use case of `_.filter`: selecting only - // objects containing specific `key:value` pairs. - function where(obj, attrs) { - return filter(obj, matcher(attrs)); - } - - // Return the maximum element (or element-based computation). - function max(obj, iteratee, context) { - var result = -Infinity, lastComputed = -Infinity, - value, computed; - if (iteratee == null || typeof iteratee == 'number' && typeof obj[0] != 'object' && obj != null) { - obj = isArrayLike(obj) ? obj : values(obj); - for (var i = 0, length = obj.length; i < length; i++) { - value = obj[i]; - if (value != null && value > result) { - result = value; - } - } - } else { - iteratee = cb(iteratee, context); - each(obj, function(v, index, list) { - computed = iteratee(v, index, list); - if (computed > lastComputed || computed === -Infinity && result === -Infinity) { - result = v; - lastComputed = computed; - } - }); - } - return result; - } - - // Return the minimum element (or element-based computation). - function min(obj, iteratee, context) { - var result = Infinity, lastComputed = Infinity, - value, computed; - if (iteratee == null || typeof iteratee == 'number' && typeof obj[0] != 'object' && obj != null) { - obj = isArrayLike(obj) ? 
obj : values(obj); - for (var i = 0, length = obj.length; i < length; i++) { - value = obj[i]; - if (value != null && value < result) { - result = value; - } - } - } else { - iteratee = cb(iteratee, context); - each(obj, function(v, index, list) { - computed = iteratee(v, index, list); - if (computed < lastComputed || computed === Infinity && result === Infinity) { - result = v; - lastComputed = computed; - } - }); - } - return result; - } - - // Sample **n** random values from a collection using the modern version of the - // [Fisher-Yates shuffle](https://en.wikipedia.org/wiki/Fisher–Yates_shuffle). - // If **n** is not specified, returns a single random element. - // The internal `guard` argument allows it to work with `_.map`. - function sample(obj, n, guard) { - if (n == null || guard) { - if (!isArrayLike(obj)) obj = values(obj); - return obj[random(obj.length - 1)]; - } - var sample = isArrayLike(obj) ? clone(obj) : values(obj); - var length = getLength(sample); - n = Math.max(Math.min(n, length), 0); - var last = length - 1; - for (var index = 0; index < n; index++) { - var rand = random(index, last); - var temp = sample[index]; - sample[index] = sample[rand]; - sample[rand] = temp; - } - return sample.slice(0, n); - } - - // Shuffle a collection. - function shuffle(obj) { - return sample(obj, Infinity); - } - - // Sort the object's values by a criterion produced by an iteratee. - function sortBy(obj, iteratee, context) { - var index = 0; - iteratee = cb(iteratee, context); - return pluck(map(obj, function(value, key, list) { - return { - value: value, - index: index++, - criteria: iteratee(value, key, list) - }; - }).sort(function(left, right) { - var a = left.criteria; - var b = right.criteria; - if (a !== b) { - if (a > b || a === void 0) return 1; - if (a < b || b === void 0) return -1; - } - return left.index - right.index; - }), 'value'); - } - - // An internal function used for aggregate "group by" operations. - function group(behavior, partition) { - return function(obj, iteratee, context) { - var result = partition ? [[], []] : {}; - iteratee = cb(iteratee, context); - each(obj, function(value, index) { - var key = iteratee(value, index, obj); - behavior(result, value, key); - }); - return result; - }; - } - - // Groups the object's values by a criterion. Pass either a string attribute - // to group by, or a function that returns the criterion. - var groupBy = group(function(result, value, key) { - if (has$1(result, key)) result[key].push(value); else result[key] = [value]; - }); - - // Indexes the object's values by a criterion, similar to `_.groupBy`, but for - // when you know that your index values will be unique. - var indexBy = group(function(result, value, key) { - result[key] = value; - }); - - // Counts instances of an object that group by a certain criterion. Pass - // either a string attribute to count by, or a function that returns the - // criterion. - var countBy = group(function(result, value, key) { - if (has$1(result, key)) result[key]++; else result[key] = 1; - }); - - // Split a collection into two arrays: one whose elements all pass the given - // truth test, and one whose elements all do not pass the truth test. - var partition = group(function(result, value, pass) { - result[pass ? 0 : 1].push(value); - }, true); - - // Safely create a real, live array from anything iterable. 
- var reStrSymbol = /[^\ud800-\udfff]|[\ud800-\udbff][\udc00-\udfff]|[\ud800-\udfff]/g; - function toArray(obj) { - if (!obj) return []; - if (isArray(obj)) return slice.call(obj); - if (isString(obj)) { - // Keep surrogate pair characters together. - return obj.match(reStrSymbol); - } - if (isArrayLike(obj)) return map(obj, identity); - return values(obj); - } - - // Return the number of elements in a collection. - function size(obj) { - if (obj == null) return 0; - return isArrayLike(obj) ? obj.length : keys(obj).length; - } - - // Internal `_.pick` helper function to determine whether `key` is an enumerable - // property name of `obj`. - function keyInObj(value, key, obj) { - return key in obj; - } - - // Return a copy of the object only containing the allowed properties. - var pick = restArguments(function(obj, keys) { - var result = {}, iteratee = keys[0]; - if (obj == null) return result; - if (isFunction$1(iteratee)) { - if (keys.length > 1) iteratee = optimizeCb(iteratee, keys[1]); - keys = allKeys(obj); - } else { - iteratee = keyInObj; - keys = flatten$1(keys, false, false); - obj = Object(obj); - } - for (var i = 0, length = keys.length; i < length; i++) { - var key = keys[i]; - var value = obj[key]; - if (iteratee(value, key, obj)) result[key] = value; - } - return result; - }); - - // Return a copy of the object without the disallowed properties. - var omit = restArguments(function(obj, keys) { - var iteratee = keys[0], context; - if (isFunction$1(iteratee)) { - iteratee = negate(iteratee); - if (keys.length > 1) context = keys[1]; - } else { - keys = map(flatten$1(keys, false, false), String); - iteratee = function(value, key) { - return !contains(keys, key); - }; - } - return pick(obj, iteratee, context); - }); - - // Returns everything but the last entry of the array. Especially useful on - // the arguments object. Passing **n** will return all the values in - // the array, excluding the last N. - function initial(array, n, guard) { - return slice.call(array, 0, Math.max(0, array.length - (n == null || guard ? 1 : n))); - } - - // Get the first element of an array. Passing **n** will return the first N - // values in the array. The **guard** check allows it to work with `_.map`. - function first(array, n, guard) { - if (array == null || array.length < 1) return n == null || guard ? void 0 : []; - if (n == null || guard) return array[0]; - return initial(array, array.length - n); - } - - // Returns everything but the first entry of the `array`. Especially useful on - // the `arguments` object. Passing an **n** will return the rest N values in the - // `array`. - function rest(array, n, guard) { - return slice.call(array, n == null || guard ? 1 : n); - } - - // Get the last element of an array. Passing **n** will return the last N - // values in the array. - function last(array, n, guard) { - if (array == null || array.length < 1) return n == null || guard ? void 0 : []; - if (n == null || guard) return array[array.length - 1]; - return rest(array, Math.max(0, array.length - n)); - } - - // Trim out all falsy values from an array. - function compact(array) { - return filter(array, Boolean); - } - - // Flatten out an array, either recursively (by default), or up to `depth`. - // Passing `true` or `false` as `depth` means `1` or `Infinity`, respectively. - function flatten(array, depth) { - return flatten$1(array, depth, false); - } - - // Take the difference between one array and a number of other arrays. - // Only the elements present in just the first array will remain. 
- var difference = restArguments(function(array, rest) { - rest = flatten$1(rest, true, true); - return filter(array, function(value){ - return !contains(rest, value); - }); - }); - - // Return a version of the array that does not contain the specified value(s). - var without = restArguments(function(array, otherArrays) { - return difference(array, otherArrays); - }); - - // Produce a duplicate-free version of the array. If the array has already - // been sorted, you have the option of using a faster algorithm. - // The faster algorithm will not work with an iteratee if the iteratee - // is not a one-to-one function, so providing an iteratee will disable - // the faster algorithm. - function uniq(array, isSorted, iteratee, context) { - if (!isBoolean(isSorted)) { - context = iteratee; - iteratee = isSorted; - isSorted = false; - } - if (iteratee != null) iteratee = cb(iteratee, context); - var result = []; - var seen = []; - for (var i = 0, length = getLength(array); i < length; i++) { - var value = array[i], - computed = iteratee ? iteratee(value, i, array) : value; - if (isSorted && !iteratee) { - if (!i || seen !== computed) result.push(value); - seen = computed; - } else if (iteratee) { - if (!contains(seen, computed)) { - seen.push(computed); - result.push(value); - } - } else if (!contains(result, value)) { - result.push(value); - } - } - return result; - } - - // Produce an array that contains the union: each distinct element from all of - // the passed-in arrays. - var union = restArguments(function(arrays) { - return uniq(flatten$1(arrays, true, true)); - }); - - // Produce an array that contains every item shared between all the - // passed-in arrays. - function intersection(array) { - var result = []; - var argsLength = arguments.length; - for (var i = 0, length = getLength(array); i < length; i++) { - var item = array[i]; - if (contains(result, item)) continue; - var j; - for (j = 1; j < argsLength; j++) { - if (!contains(arguments[j], item)) break; - } - if (j === argsLength) result.push(item); - } - return result; - } - - // Complement of zip. Unzip accepts an array of arrays and groups - // each array's elements on shared indices. - function unzip(array) { - var length = array && max(array, getLength).length || 0; - var result = Array(length); - - for (var index = 0; index < length; index++) { - result[index] = pluck(array, index); - } - return result; - } - - // Zip together multiple lists into a single array -- elements that share - // an index go together. - var zip = restArguments(unzip); - - // Converts lists into objects. Pass either a single array of `[key, value]` - // pairs, or two parallel arrays of the same length -- one of keys, and one of - // the corresponding values. Passing by pairs is the reverse of `_.pairs`. - function object(list, values) { - var result = {}; - for (var i = 0, length = getLength(list); i < length; i++) { - if (values) { - result[list[i]] = values[i]; - } else { - result[list[i][0]] = list[i][1]; - } - } - return result; - } - - // Generate an integer Array containing an arithmetic progression. A port of - // the native Python `range()` function. See - // [the Python documentation](https://docs.python.org/library/functions.html#range). - function range(start, stop, step) { - if (stop == null) { - stop = start || 0; - start = 0; - } - if (!step) { - step = stop < start ? 
-1 : 1; - } - - var length = Math.max(Math.ceil((stop - start) / step), 0); - var range = Array(length); - - for (var idx = 0; idx < length; idx++, start += step) { - range[idx] = start; - } - - return range; - } - - // Chunk a single array into multiple arrays, each containing `count` or fewer - // items. - function chunk(array, count) { - if (count == null || count < 1) return []; - var result = []; - var i = 0, length = array.length; - while (i < length) { - result.push(slice.call(array, i, i += count)); - } - return result; - } - - // Helper function to continue chaining intermediate results. - function chainResult(instance, obj) { - return instance._chain ? _$1(obj).chain() : obj; - } - - // Add your own custom functions to the Underscore object. - function mixin(obj) { - each(functions(obj), function(name) { - var func = _$1[name] = obj[name]; - _$1.prototype[name] = function() { - var args = [this._wrapped]; - push.apply(args, arguments); - return chainResult(this, func.apply(_$1, args)); - }; - }); - return _$1; - } - - // Add all mutator `Array` functions to the wrapper. - each(['pop', 'push', 'reverse', 'shift', 'sort', 'splice', 'unshift'], function(name) { - var method = ArrayProto[name]; - _$1.prototype[name] = function() { - var obj = this._wrapped; - if (obj != null) { - method.apply(obj, arguments); - if ((name === 'shift' || name === 'splice') && obj.length === 0) { - delete obj[0]; - } - } - return chainResult(this, obj); - }; - }); - - // Add all accessor `Array` functions to the wrapper. - each(['concat', 'join', 'slice'], function(name) { - var method = ArrayProto[name]; - _$1.prototype[name] = function() { - var obj = this._wrapped; - if (obj != null) obj = method.apply(obj, arguments); - return chainResult(this, obj); - }; - }); - - // Named Exports - - var allExports = { - __proto__: null, - VERSION: VERSION, - restArguments: restArguments, - isObject: isObject, - isNull: isNull, - isUndefined: isUndefined, - isBoolean: isBoolean, - isElement: isElement, - isString: isString, - isNumber: isNumber, - isDate: isDate, - isRegExp: isRegExp, - isError: isError, - isSymbol: isSymbol, - isArrayBuffer: isArrayBuffer, - isDataView: isDataView$1, - isArray: isArray, - isFunction: isFunction$1, - isArguments: isArguments$1, - isFinite: isFinite$1, - isNaN: isNaN$1, - isTypedArray: isTypedArray$1, - isEmpty: isEmpty, - isMatch: isMatch, - isEqual: isEqual, - isMap: isMap, - isWeakMap: isWeakMap, - isSet: isSet, - isWeakSet: isWeakSet, - keys: keys, - allKeys: allKeys, - values: values, - pairs: pairs, - invert: invert, - functions: functions, - methods: functions, - extend: extend, - extendOwn: extendOwn, - assign: extendOwn, - defaults: defaults, - create: create, - clone: clone, - tap: tap, - get: get, - has: has, - mapObject: mapObject, - identity: identity, - constant: constant, - noop: noop, - toPath: toPath$1, - property: property, - propertyOf: propertyOf, - matcher: matcher, - matches: matcher, - times: times, - random: random, - now: now, - escape: _escape, - unescape: _unescape, - templateSettings: templateSettings, - template: template, - result: result, - uniqueId: uniqueId, - chain: chain, - iteratee: iteratee, - partial: partial, - bind: bind, - bindAll: bindAll, - memoize: memoize, - delay: delay, - defer: defer, - throttle: throttle, - debounce: debounce, - wrap: wrap, - negate: negate, - compose: compose, - after: after, - before: before, - once: once, - findKey: findKey, - findIndex: findIndex, - findLastIndex: findLastIndex, - sortedIndex: sortedIndex, - 
indexOf: indexOf, - lastIndexOf: lastIndexOf, - find: find, - detect: find, - findWhere: findWhere, - each: each, - forEach: each, - map: map, - collect: map, - reduce: reduce, - foldl: reduce, - inject: reduce, - reduceRight: reduceRight, - foldr: reduceRight, - filter: filter, - select: filter, - reject: reject, - every: every, - all: every, - some: some, - any: some, - contains: contains, - includes: contains, - include: contains, - invoke: invoke, - pluck: pluck, - where: where, - max: max, - min: min, - shuffle: shuffle, - sample: sample, - sortBy: sortBy, - groupBy: groupBy, - indexBy: indexBy, - countBy: countBy, - partition: partition, - toArray: toArray, - size: size, - pick: pick, - omit: omit, - first: first, - head: first, - take: first, - initial: initial, - last: last, - rest: rest, - tail: rest, - drop: rest, - compact: compact, - flatten: flatten, - without: without, - uniq: uniq, - unique: uniq, - union: union, - intersection: intersection, - difference: difference, - unzip: unzip, - transpose: unzip, - zip: zip, - object: object, - range: range, - chunk: chunk, - mixin: mixin, - 'default': _$1 - }; - - // Default Export - - // Add all of the Underscore functions to the wrapper object. - var _ = mixin(allExports); - // Legacy Node.js API. - _._ = _; - - return _; - -}))); -//# sourceMappingURL=underscore-umd.js.map diff --git a/spaces/mindspore-ai/Wukong-Huahua/utils.py b/spaces/mindspore-ai/Wukong-Huahua/utils.py deleted file mode 100644 index fff07e8be353de01d71765cf9d29b509b92e7e57..0000000000000000000000000000000000000000 --- a/spaces/mindspore-ai/Wukong-Huahua/utils.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import requests - - -def get_token(): - username = os.environ["username"] - domain_name = os.environ["domain_name"] - domain_pwd = os.environ["domain_pwd"] - url = os.environ["token_url"] - - requests_json = { - "auth": { - "identity": { - "methods": ["password"], - "password": { - "user": { - "name": username, - "password": domain_pwd, - "domain": { - "name": domain_name - } - } - } - }, - "scope": { - "project": { - "name": "cn-central-221" - } - } - } - } - - headers = { - "Content-Type": "text/plain" - } - - response = requests.post(url, json=requests_json, headers=headers) - - assert response.status_code == 201 - - result = response.headers - print("token success") - - return result['X-Subject-Token'] - - -if __name__ == "__main__": - get_token() diff --git a/spaces/mjdolan/Holiday-StyleGAN-NADA/op/conv2d_gradfix.py b/spaces/mjdolan/Holiday-StyleGAN-NADA/op/conv2d_gradfix.py deleted file mode 100644 index bb2f94bbcb8132299fd4d538972d32bd7ff6e7d6..0000000000000000000000000000000000000000 --- a/spaces/mjdolan/Holiday-StyleGAN-NADA/op/conv2d_gradfix.py +++ /dev/null @@ -1,227 +0,0 @@ -import contextlib -import warnings - -import torch -from torch import autograd -from torch.nn import functional as F - -enabled = True -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if could_use_op(input): - return conv2d_gradfix( - transpose=False, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=0, - dilation=dilation, - groups=groups, - ).apply(input, weight, bias) - - return F.conv2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, 
- dilation=dilation, - groups=groups, - ) - - -def conv_transpose2d( - input, - weight, - bias=None, - stride=1, - padding=0, - output_padding=0, - groups=1, - dilation=1, -): - if could_use_op(input): - return conv2d_gradfix( - transpose=True, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=output_padding, - groups=groups, - dilation=dilation, - ).apply(input, weight, bias) - - return F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - output_padding=output_padding, - dilation=dilation, - groups=groups, - ) - - -def could_use_op(input): - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - - if input.device.type != "cuda": - return False - - if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]): - return True - - warnings.warn( - f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()." - ) - - return False - - -def ensure_tuple(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - - return xs - - -conv2d_gradfix_cache = dict() - - -def conv2d_gradfix( - transpose, weight_shape, stride, padding, output_padding, dilation, groups -): - ndim = 2 - weight_shape = tuple(weight_shape) - stride = ensure_tuple(stride, ndim) - padding = ensure_tuple(padding, ndim) - output_padding = ensure_tuple(output_padding, ndim) - dilation = ensure_tuple(dilation, ndim) - - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in conv2d_gradfix_cache: - return conv2d_gradfix_cache[key] - - common_kwargs = dict( - stride=stride, padding=padding, dilation=dilation, groups=groups - ) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - class Conv2d(autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - if not transpose: - out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - else: - out = F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - output_padding=output_padding, - **common_kwargs, - ) - - ctx.save_for_backward(input, weight) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input, grad_weight, grad_bias = None, None, None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, weight, None) - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum((0, 2, 3)) - - return grad_input, grad_weight, grad_bias - - class Conv2dGradWeight(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation( - "aten::cudnn_convolution_backward_weight" - if not transpose - else "aten::cudnn_convolution_transpose_backward_weight" - ) - flags = [ - torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, - torch.backends.cudnn.allow_tf32, - ] - grad_weight = op( - weight_shape, - grad_output, - input, - padding, - stride, - dilation, - groups, - *flags, - ) - 
ctx.save_for_backward(grad_output, input) - - return grad_weight - - @staticmethod - def backward(ctx, grad_grad_weight): - grad_output, input = ctx.saved_tensors - grad_grad_output, grad_grad_input = None, None - - if ctx.needs_input_grad[0]: - grad_grad_output = Conv2d.apply(input, grad_grad_weight, None) - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, grad_grad_weight, None) - - return grad_grad_output, grad_grad_input - - conv2d_gradfix_cache[key] = Conv2d - - return Conv2d diff --git a/spaces/mlpc-lab/BLIVA/bliva/common/optims.py b/spaces/mlpc-lab/BLIVA/bliva/common/optims.py deleted file mode 100644 index d10dd3f9811d8d122616fadb41b31ec894979d34..0000000000000000000000000000000000000000 --- a/spaces/mlpc-lab/BLIVA/bliva/common/optims.py +++ /dev/null @@ -1,117 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import math - -from bliva.common.registry import registry - - -@registry.register_lr_scheduler("linear_warmup_step_lr") -class LinearWarmupStepLRScheduler: - def __init__( - self, - optimizer, - max_epoch, - min_lr, - init_lr, - decay_rate=1, - warmup_start_lr=-1, - warmup_steps=0, - **kwargs - ): - self.optimizer = optimizer - - self.max_epoch = max_epoch - self.min_lr = min_lr - - self.decay_rate = decay_rate - - self.init_lr = init_lr - self.warmup_steps = warmup_steps - self.warmup_start_lr = warmup_start_lr if warmup_start_lr >= 0 else init_lr - - def step(self, cur_epoch, cur_step): - if cur_epoch == 0: - warmup_lr_schedule( - step=cur_step, - optimizer=self.optimizer, - max_step=self.warmup_steps, - init_lr=self.warmup_start_lr, - max_lr=self.init_lr, - ) - else: - step_lr_schedule( - epoch=cur_epoch, - optimizer=self.optimizer, - init_lr=self.init_lr, - min_lr=self.min_lr, - decay_rate=self.decay_rate, - ) - - -@registry.register_lr_scheduler("linear_warmup_cosine_lr") -class LinearWarmupCosineLRScheduler: - def __init__( - self, - optimizer, - max_epoch, - min_lr, - init_lr, - warmup_steps=0, - warmup_start_lr=-1, - **kwargs - ): - self.optimizer = optimizer - - self.max_epoch = max_epoch - self.min_lr = min_lr - - self.init_lr = init_lr - self.warmup_steps = warmup_steps - self.warmup_start_lr = warmup_start_lr if warmup_start_lr >= 0 else init_lr - - def step(self, cur_epoch, cur_step): - # assuming the warmup iters less than one epoch - if cur_epoch == 0: - warmup_lr_schedule( - step=cur_step, - optimizer=self.optimizer, - max_step=self.warmup_steps, - init_lr=self.warmup_start_lr, - max_lr=self.init_lr, - ) - else: - cosine_lr_schedule( - epoch=cur_epoch, - optimizer=self.optimizer, - max_epoch=self.max_epoch, - init_lr=self.init_lr, - min_lr=self.min_lr, - ) - - -def cosine_lr_schedule(optimizer, epoch, max_epoch, init_lr, min_lr): - """Decay the learning rate""" - lr = (init_lr - min_lr) * 0.5 * ( - 1.0 + math.cos(math.pi * epoch / max_epoch) - ) + min_lr - for param_group in optimizer.param_groups: - param_group["lr"] = lr - - -def warmup_lr_schedule(optimizer, step, max_step, init_lr, max_lr): - """Warmup the learning rate""" - lr = min(max_lr, init_lr + (max_lr - init_lr) * step / max(max_step, 1)) - for param_group in optimizer.param_groups: - 
param_group["lr"] = lr - - -def step_lr_schedule(optimizer, epoch, init_lr, min_lr, decay_rate): - """Decay the learning rate""" - lr = max(min_lr, init_lr * (decay_rate**epoch)) - for param_group in optimizer.param_groups: - param_group["lr"] = lr diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/third_party/CLIP/clip/model.py b/spaces/mmlab-ntu/Segment-Any-RGBD/third_party/CLIP/clip/model.py deleted file mode 100644 index 8ea730a2cc8a992f9180428bd1fec7fc96aa89dd..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/Segment-Any-RGBD/third_party/CLIP/clip/model.py +++ /dev/null @@ -1,613 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved -# Modified by Feng Liang from https://github.com/openai/CLIP/blob/main/clip/model.py - -from collections import OrderedDict -from typing import Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential( - OrderedDict( - [ - ("-1", nn.AvgPool2d(stride)), - ( - "0", - nn.Conv2d( - inplanes, - planes * self.expansion, - 1, - stride=1, - bias=False, - ), - ), - ("1", nn.BatchNorm2d(planes * self.expansion)), - ] - ) - ) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__( - self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None - ): - super().__init__() - self.positional_embedding = nn.Parameter( - torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5 - ) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - self.grid_size = spacial_dim - - def forward(self, x, mask=None, return_cls=True): - b, c, gh, gw = x.shape - # remove irrelated feature - if mask is not None: - mask = F.interpolate(mask[:, None, ...], size=(gh, gw)).squeeze( - 1 - ) # [N,H,W] -> [N,grid,grid] - mask = (mask > 0.5).reshape(mask.shape[0], -1) - mask = torch.cat([mask, mask.new_ones(mask.shape[0], 1)], dim=1) - if x.size()[0] == 1: - x = x.expand(mask.shape[0], c, gh, gw) - - x = x.reshape(x.shape[0], c, gh * gw).permute(2, 0, 1) # NCHW -> (HW)NC - - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - positional_embedding = 
self.positional_embedding - if not (self.positional_embedding.shape[0] == x.shape[0]): - cls_pos = positional_embedding[0:1, :] - per_pos_embedding = ( - F.interpolate( - positional_embedding[1:, :] - .permute(1, 0) - .view(1, -1, self.grid_size, self.grid_size), - size=(gh, gw), - mode="bicubic", - ) - .reshape(-1, gh * gw) - .permute(1, 0) - ) - positional_embedding = torch.cat([cls_pos, per_pos_embedding]) - - x = x + positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, - key=x, - value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat( - [self.q_proj.bias, self.k_proj.bias, self.v_proj.bias] - ), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False, - key_padding_mask=mask, - ) - - if return_cls: - return x[0] - else: - return x - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. - - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d( - 3, width // 2, kernel_size=3, stride=2, padding=1, bias=False - ) - self.bn1 = nn.BatchNorm2d(width // 2) - self.conv2 = nn.Conv2d( - width // 2, width // 2, kernel_size=3, padding=1, bias=False - ) - self.bn2 = nn.BatchNorm2d(width // 2) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.avgpool = nn.AvgPool2d(2) - self.relu = nn.ReLU(inplace=True) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d( - input_resolution // 32, embed_dim, heads, output_dim - ) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x, mask: torch.Tensor = None, return_cls=True): - def stem(x): - for conv, bn in [ - (self.conv1, self.bn1), - (self.conv2, self.bn2), - (self.conv3, self.bn3), - ]: - x = self.relu(bn(conv(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) # 1/4,1/4 - x = self.layer1(x) - x = self.layer2(x) # 1/8,1/8 - x = self.layer3(x) # 1/16,1/16 - x = self.layer4(x) # 1/32,1/32 - b, c, gh, gw = x.shape - x = self.attnpool(x, mask, return_cls) - if not return_cls: - return x[1:].permute(1, 0, 2).reshape(b, gh, 
gw, x.shape[-1]) # N,L,C - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential( - OrderedDict( - [ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)), - ] - ) - ) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor, **kwargs): - self.attn_mask = ( - self.attn_mask.to(dtype=x.dtype, device=x.device) - if self.attn_mask is not None - else None - ) - return self.attn( - x, x, x, need_weights=False, attn_mask=self.attn_mask, **kwargs - )[0] - - def forward(self, x: torch.Tensor, **kwargs): - x = x + self.attention(self.ln_1(x), **kwargs) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None - ): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential( - *[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)] - ) - - def forward(self, x: torch.Tensor, **kwargs): - for block in self.resblocks: - x = block(x, **kwargs) - return x - - -class VisionTransformer(nn.Module): - def __init__( - self, - input_resolution: int, - patch_size: int, - mask_prompt_depth: int, - width: int, - layers: int, - heads: int, - output_dim: int, - ): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - self.conv1 = nn.Conv2d( - in_channels=3, - out_channels=width, - kernel_size=patch_size, - stride=patch_size, - bias=False, - ) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter( - scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width) - ) - self.grid_size = input_resolution // patch_size - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - self.mask_pool = nn.AvgPool2d(patch_size, stride=patch_size) - self.mask_prompt_depth = mask_prompt_depth - self.mask_embedding = nn.Parameter(torch.zeros(self.mask_prompt_depth, self.grid_size * self.grid_size, width)) - - def forward(self, x: torch.Tensor, m: torch.Tensor = None): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - if m is not None: - m = self.mask_pool(m.to(torch.float).squeeze()).reshape(m.shape[0], -1).unsqueeze(-1) - m = torch.ceil(m) - if self.mask_embedding.shape[1] == 1: - mask_embedding = self.mask_embedding.to(x.dtype).repeat(1, x.shape[1], 1) - else: - mask_embedding = self.mask_embedding.to(x.dtype) - x = x * m + mask_embedding[0].unsqueeze(0) * (1 - m) - - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 
2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - if m is not None: - for i, blk in enumerate(self.transformer.resblocks): - d = i + 1 - x = blk(x) - if d < self.mask_prompt_depth: - masked_x = x[1:, :, :] * m.permute(1, 0, 2) + \ - mask_embedding[d].unsqueeze(0).permute(1, 0, 2) * (1 - m.permute(1, 0, 2)) - x = torch.cat([x[:1, :, :], masked_x], dim=0) - else: - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - - -class CLIP(nn.Module): - def __init__( - self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - mask_prompt_depth: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int, - ): - super().__init__() - - self.context_length = context_length - - if isinstance(vision_layers, (tuple, list)): - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width, - ) - else: - vision_heads = vision_width // 64 - self.visual = VisionTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - mask_prompt_depth=mask_prompt_depth, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim, - ) - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask(), - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter( - torch.empty(self.context_length, transformer_width) - ) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - if isinstance(self.visual, ModifiedResNet): - if self.visual.attnpool is not None: - std = self.visual.attnpool.c_proj.in_features ** -0.5 - nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std) - - for resnet_block in [ - self.visual.layer1, - self.visual.layer2, - self.visual.layer3, - self.visual.layer4, - ]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - proj_std = (self.transformer.width ** -0.5) * ( - (2 * self.transformer.layers) ** -0.5 - ) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def 
build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - def encode_image(self, image, **kwargs): - return self.visual(image.type(self.dtype), **kwargs) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - - # normalized features - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - text_features = text_features / text_features.norm(dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_image = logit_scale * image_features @ text_features.t() - logits_per_text = logit_scale * text_features @ image_features.t() - - # shape = [global_batch_size, global_batch_size] - return logits_per_image, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [ - *[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], - "in_proj_bias", - "bias_k", - "bias_v", - ]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict, mask_prompt_depth: int = 0): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len( - [ - k - for k in state_dict.keys() - if k.startswith("visual.") and k.endswith(".attn.in_proj_weight") - ] - ) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round( - (state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5 - ) - image_resolution = vision_patch_size * grid_size - else: - assert mask_prompt_depth == 0, 'ResNets do not support mask prompt tuning' - counts: list = [ - len( - set( - k.split(".")[2] - for k in state_dict - if k.startswith(f"visual.layer{b}") - ) - ) - for b in [1, 2, 3, 4] - ] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round( - (state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5 - ) - vision_patch_size = None - assert ( - output_width ** 2 + 1 - == state_dict["visual.attnpool.positional_embedding"].shape[0] - ) - image_resolution = output_width * 32 - - embed_dim = 
state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len( - set( - k.split(".")[2] - for k in state_dict - if k.startswith(f"transformer.resblocks") - ) - ) - - model = CLIP( - embed_dim, - image_resolution, - vision_layers, - vision_width, - vision_patch_size, - mask_prompt_depth, - context_length, - vocab_size, - transformer_width, - transformer_heads, - transformer_layers, - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - if key in state_dict: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict, strict=False) - return model.eval() diff --git a/spaces/momegas/megabots/tests/test_api.py b/spaces/momegas/megabots/tests/test_api.py deleted file mode 100644 index d9a6d36228b95ce1f491f0ba8cb21c125abb9d24..0000000000000000000000000000000000000000 --- a/spaces/momegas/megabots/tests/test_api.py +++ /dev/null @@ -1,21 +0,0 @@ -import json -from fastapi.testclient import TestClient -from megabots import bot, create_api - -qnabot = bot("qna-over-docs", index="./examples/files") -app = create_api(qnabot) - -client = TestClient(app) - - -def test_successful_response(): - response = client.get("/v1/ask/What is your name?") - assert response.status_code == 200 - assert "answer" in response.json() - assert isinstance(response.json()["answer"], str) - - -def test_missing_question_parameter(): - response = client.get("/v1/ask/") - assert response.status_code == 404 - assert response.json() == {"detail": "Not Found"} diff --git a/spaces/monra/freegpt-webui/client/js/sidebar-toggler.js b/spaces/monra/freegpt-webui/client/js/sidebar-toggler.js deleted file mode 100644 index b23f94e3bfba5bac53432e1b557765736dabbab4..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/client/js/sidebar-toggler.js +++ /dev/null @@ -1,34 +0,0 @@ -const sidebar = document.querySelector(".sidebar"); -const menuButton = document.querySelector(".menu-button"); - -function toggleSidebar(event) { - if (sidebar.classList.contains("shown")) { - hideSidebar(event.target); - } else { - showSidebar(event.target); - } - window.scrollTo(0, 0); -} - -function showSidebar(target) { - sidebar.classList.add("shown"); - target.classList.add("rotated"); - document.body.style.overflow = "hidden"; -} - -function hideSidebar(target) { - sidebar.classList.remove("shown"); - target.classList.remove("rotated"); - document.body.style.overflow = "auto"; -} - -menuButton.addEventListener("click", toggleSidebar); - -document.body.addEventListener('click', function(event) { - if (event.target.matches('.conversation-title')) { - const menuButtonStyle = window.getComputedStyle(menuButton); - if (menuButtonStyle.display !== 'none') { - hideSidebar(menuButton); - } - } -}); diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py b/spaces/mshukor/UnIVAL/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py deleted file mode 100644 index b74bdfd456f9b7c546ce528173c77431b4f57ac1..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.tasks.translation import TranslationTask -from fairseq.tasks.language_modeling import LanguageModelingTask -from fairseq import checkpoint_utils -import argparse -from fairseq.tasks import register_task -import torch - - -@register_task("noisy_channel_translation") -class NoisyChannelTranslation(TranslationTask): - """ - Rescore the top k candidates from each beam using noisy channel modeling - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - TranslationTask.add_args(parser) - # fmt: off - parser.add_argument('--channel-model', metavar='FILE', - help='path to P(S|T) model. P(S|T) and P(T|S) must share source and target dictionaries.') - parser.add_argument('--combine-method', default='lm_only', - choices=['lm_only', 'noisy_channel'], - help="""method for combining direct and channel model scores. - lm_only: decode with P(T|S)P(T) - noisy_channel: decode with 1/t P(T|S) + 1/s(P(S|T)P(T))""") - parser.add_argument('--normalize-lm-scores-by-tgt-len', action='store_true', default=False, - help='normalize lm score by target length instead of source length') - parser.add_argument('--channel-scoring-type', default='log_norm', choices=['unnormalized', 'log_norm', 'k2_separate', 'src_vocab', 'src_vocab_batched'], - help="Normalize bw scores with log softmax or return bw scores without log softmax") - parser.add_argument('--top-k-vocab', default=0, type=int, - help='top k vocab IDs to use with `src_vocab` in channel model scoring') - parser.add_argument('--k2', default=50, type=int, - help='the top k2 candidates to rescore with the noisy channel model for each beam') - parser.add_argument('--ch-wt', default=1, type=float, - help='weight for the channel model') - parser.add_argument('--lm-model', metavar='FILE', - help='path to lm model file, to model P(T). P(T) must share the same vocab as the direct model on the target side') - parser.add_argument('--lm-data', metavar='FILE', - help='path to lm model training data for target language, used to properly load LM with correct dictionary') - parser.add_argument('--lm-wt', default=1, type=float, - help='the weight of the lm in joint decoding') - # fmt: on - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None - ): - if getattr(args, "score_reference", False): - raise NotImplementedError() - else: - from .noisy_channel_sequence_generator import NoisyChannelSequenceGenerator - use_cuda = torch.cuda.is_available() and not self.args.cpu - assert self.args.lm_model is not None, '--lm-model required for noisy channel generation!' 
- assert self.args.lm_data is not None, '--lm-data required for noisy channel generation to map between LM and bitext vocabs' - if self.args.channel_model is not None: - import copy - ch_args_task = copy.deepcopy(self.args) - tmp = ch_args_task.source_lang - ch_args_task.source_lang = ch_args_task.target_lang - ch_args_task.target_lang = tmp - ch_args_task._name = 'translation' - channel_task = TranslationTask.setup_task(ch_args_task) - - arg_dict = {} - arg_dict['task'] = 'language_modeling' - arg_dict['sample_break_mode'] = 'eos' - arg_dict['data'] = self.args.lm_data - arg_dict['output_dictionary_size'] = -1 - lm_args = argparse.Namespace(**arg_dict) - lm_task = LanguageModelingTask.setup_task(lm_args) - lm_dict = lm_task.output_dictionary - - if self.args.channel_model is not None: - channel_models, _ = checkpoint_utils.load_model_ensemble(self.args.channel_model.split(':'), task=channel_task) - - for model in channel_models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if self.args.fp16: - model.half() - if use_cuda: - model.cuda() - else: - channel_models = None - - lm_models, _ = checkpoint_utils.load_model_ensemble(self.args.lm_model.split(':'), task=lm_task) - - for model in lm_models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if self.args.fp16: - model.half() - if use_cuda: - model.cuda() - return NoisyChannelSequenceGenerator( - combine_method=self.args.combine_method, - tgt_dict=self.target_dictionary, - src_dict=self.source_dictionary, - beam_size=getattr(args, 'beam', 5), - max_len_a=getattr(args, 'max_len_a', 0), - max_len_b=getattr(args, 'max_len_b', 200), - min_len=getattr(args, 'min_len', 1), - len_penalty=getattr(args, 'lenpen', 1), - unk_penalty=getattr(args, 'unkpen', 0), - temperature=getattr(args, 'temperature', 1.), - match_source_len=getattr(args, 'match_source_len', False), - no_repeat_ngram_size=getattr(args, 'no_repeat_ngram_size', 0), - normalize_scores=(not getattr(args, 'unnormalized', False)), - channel_models=channel_models, - k2=getattr(self.args, 'k2', 50), - ch_weight=getattr(self.args, 'ch_wt', 1), - channel_scoring_type=self.args.channel_scoring_type, - top_k_vocab=self.args.top_k_vocab, - lm_models=lm_models, - lm_dict=lm_dict, - lm_weight=getattr(self.args, 'lm_wt', 1), - normalize_lm_scores_by_tgt_len=getattr(self.args, 'normalize_lm_scores_by_tgt_len', False), - ) diff --git a/spaces/mthsk/sovits-models-misc/onnxexport/model_onnx.py b/spaces/mthsk/sovits-models-misc/onnxexport/model_onnx.py deleted file mode 100644 index e28bae95ec1e53aa05d06fc784ff86d55f228d60..0000000000000000000000000000000000000000 --- a/spaces/mthsk/sovits-models-misc/onnxexport/model_onnx.py +++ /dev/null @@ -1,335 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - 
self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, z=None): - x = x + self.f0_emb(f0).transpose(1, 2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + z * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = 
norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if spk_emb is not None: - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = 
gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - self.predict_f0 = False - - def forward(self, c, f0, mel2ph, uv, noise=None, g=None): - - decoder_inp = F.pad(c, [0, 0, 1, 0]) - mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]]) - c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2) # [B, T, H] - - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2) - - if self.predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/math/mathjax2.js b/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/math/mathjax2.js deleted file mode 100644 index daebe7e86b37eca79ba9bee285503194acb4cd6d..0000000000000000000000000000000000000000 --- a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/math/mathjax2.js +++ /dev/null @@ -1,89 +0,0 @@ -/** - * A plugin which enables rendering of math equations inside - * of reveal.js slides. Essentially a thin wrapper for MathJax. 
- * - * @author Hakim El Hattab - */ -export const MathJax2 = () => { - - // The reveal.js instance this plugin is attached to - let deck; - - let defaultOptions = { - messageStyle: 'none', - tex2jax: { - inlineMath: [ [ '$', '$' ], [ '\\(', '\\)' ] ], - skipTags: [ 'script', 'noscript', 'style', 'textarea', 'pre' ] - }, - skipStartupTypeset: true - }; - - function loadScript( url, callback ) { - - let head = document.querySelector( 'head' ); - let script = document.createElement( 'script' ); - script.type = 'text/javascript'; - script.src = url; - - // Wrapper for callback to make sure it only fires once - let finish = () => { - if( typeof callback === 'function' ) { - callback.call(); - callback = null; - } - } - - script.onload = finish; - - // IE - script.onreadystatechange = () => { - if ( this.readyState === 'loaded' ) { - finish(); - } - } - - // Normal browsers - head.appendChild( script ); - - } - - return { - id: 'mathjax2', - - init: function( reveal ) { - - deck = reveal; - - let revealOptions = deck.getConfig().mathjax2 || deck.getConfig().math || {}; - - let options = { ...defaultOptions, ...revealOptions }; - let mathjax = options.mathjax || 'https://cdn.jsdelivr.net/npm/mathjax@2/MathJax.js'; - let config = options.config || 'TeX-AMS_HTML-full'; - let url = mathjax + '?config=' + config; - - options.tex2jax = { ...defaultOptions.tex2jax, ...revealOptions.tex2jax }; - - options.mathjax = options.config = null; - - loadScript( url, function() { - - MathJax.Hub.Config( options ); - - // Typeset followed by an immediate reveal.js layout since - // the typesetting process could affect slide height - MathJax.Hub.Queue( [ 'Typeset', MathJax.Hub, deck.getRevealElement() ] ); - MathJax.Hub.Queue( deck.layout ); - - // Reprocess equations in slides when they turn visible - deck.on( 'slidechanged', function( event ) { - - MathJax.Hub.Queue( [ 'Typeset', MathJax.Hub, event.currentSlide ] ); - - } ); - - } ); - - } - } - -}; diff --git a/spaces/nakamura196/yolov5-ndl-layout/README.md b/spaces/nakamura196/yolov5-ndl-layout/README.md deleted file mode 100644 index f19bf3d46bf8c627ed62f3a1908e4b30a7c823bd..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Yolov5 Ndl Layout -emoji: 🐢 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/utils/__init__.py b/spaces/nakas/MusicGenDemucs/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/naver/PUMP/datasets/utils.py b/spaces/naver/PUMP/datasets/utils.py deleted file mode 100644 index 857ac080d94e57106b4656d3c0bf65b6fc51b092..0000000000000000000000000000000000000000 --- a/spaces/naver/PUMP/datasets/utils.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright 2022-present NAVER Corp. 
-# CC BY-NC-SA 4.0 -# Available only for non-commercial use - -from pdb import set_trace as bb -import numpy as np -import torch - - -class DatasetWithRng: - """ Make sure that RNG is distributed properly when torch.dataloader() is used - """ - - def __init__(self, seed=None): - self.seed = seed - self.rng = np.random.default_rng(seed) - self._rng_children = set() - - def with_same_rng(self, dataset=None): - if dataset is not None: - assert isinstance(dataset, DatasetWithRng) and hasattr(dataset, 'rng'), bb() - self._rng_children.add( dataset ) - - # update all registered children - for db in self._rng_children: - db.rng = self.rng - db.with_same_rng() # recursive call - return dataset - - def init_worker(self, tid): - if self.seed is None: - self.rng = np.random.default_rng() - else: - self.rng = np.random.default_rng(self.seed + tid) - - -class WorkerWithRngInit: - " Dataset inherits from datasets.DatasetWithRng() and has an init_worker() function " - def __call__(self, tid): - torch.utils.data.get_worker_info().dataset.init_worker(tid) - - -def corres_from_homography(homography, W, H, grid=64): - s = max(1, min(W, H) // grid) # at least `grid` points in smallest dim - sx, sy = [slice(s//2, l, s) for l in (W, H)] - grid1 = np.mgrid[sy, sx][::-1].reshape(2,-1).T # (x1,y1) grid - - grid2 = applyh(homography, grid1) - scale = np.sqrt(np.abs(np.linalg.det(jacobianh(homography, grid1).T))) - - corres = np.c_[grid1, grid2, np.ones_like(scale), np.zeros_like(scale), scale] - return corres - - -def invh( H ): - return np.linalg.inv(H) - - -def applyh(H, p, ncol=2, norm=True): - """ Apply the homography to a list of 2d points in homogeneous coordinates. - - H: Homography (...x3x3 matrix/tensor) - p: numpy/torch/tuple of coordinates. Shape must be (...,2) or (...,3) - - Returns an array of projected 2d points. - """ - if isinstance(H, np.ndarray): - p = np.asarray(p) - elif isinstance(H, torch.Tensor): - p = torch.as_tensor(p, dtype=H.dtype) - - if p.shape[-1]+1 == H.shape[-1]: - H = H.swapaxes(-1,-2) # transpose H - p = p @ H[...,:-1,:] + H[...,-1:,:] - else: - p = H @ p.T - if p.ndim >= 2: p = p.swapaxes(-1,-2) - - if norm: - p /= p[...,-1:] - return p[...,:ncol] - - -def jacobianh(H, p): - """ H is an homography that maps: f_H(x,y) --> (f_1, f_2) - So the Jacobian J_H evaluated at p=(x,y) is a 2x2 matrix - Output shape = (2, 2, N) = (f_, xy, N) - - Example of derivative: - numx a*X + b*Y + c*Z - since x = ----- = --------------- - denom u*X + v*Y + w*Z - - numx' * denom - denom' * numx a*denom - u*numx - dx/dX = ----------------------------- = ---------------- - denom**2 denom**2 - """ - (a, b, c), (d, e, f), (u, v, w) = H - numx, numy, denom = applyh(H, p, ncol=3, norm=False).T - - # column x column x - J = np.float32(((a*denom - u*numx, b*denom - v*numx), # row f_1 - (d*denom - u*numy, e*denom - v*numy))) # row f_2 - return J / np.where(denom, denom*denom, np.nan) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CDock 3.0.8 MacOS [Full] !!HOT!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CDock 3.0.8 MacOS [Full] !!HOT!!.md deleted file mode 100644 index e2768a7737dffa275a1ab6aaba681c5351751dad..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/CDock 3.0.8 MacOS [Full] !!HOT!!.md +++ /dev/null @@ -1,19 +0,0 @@ -
        -

        cDock 3.0.8 MacOS [Full] - The Ultimate Dock Customization Tool for Mac Users

        -

        If you are looking for a way to personalize and enhance your Mac's Dock, look no further than cDock 3.0.8 MacOS [Full]. This powerful application gives you full control over the Dock's appearance and functionality, allowing you to create a Dock that suits your style and needs.

        -

        cDock 3.0.8 MacOS [Full] is the latest version of cDock, a popular Dock customization tool that has been around since 2014. cDock is designed to make theming your Dock easy and robust, with tons of options and features to choose from. You can change the Dock's shape, color, transparency, icons, indicators, badges, animations, reflections, shadows, and more. You can even enable some awesome hidden features that Apple doesn't want you to see, such as the 2D Dock, the Launchpad grid, the Dashboard overlay, and the Mission Control blur.

        -

        cDock 3.0.8 MacOS [Full]


DOWNLOAD: https://urlcod.com/2uIbJC



        -

        cDock 3.0.8 MacOS [Full] is compatible with macOS 10.10 or later, and supports both Intel and Apple Silicon Macs. It is very easy to install and use, with a simple interface that lets you preview and apply your changes instantly. You can also save and load your custom themes, or download and install themes from other users online.

        -

        cDock 3.0.8 MacOS [Full] is more than just a Dock customization tool. It is a way to express your creativity and personality on your Mac. Whether you want a minimalist Dock, a colorful Dock, a futuristic Dock, or anything in between, cDock 3.0.8 MacOS [Full] can help you achieve it.

        -

        So what are you waiting for? Download cDock 3.0.8 MacOS [Full] today and discover what your Dock has been missing!

        - -

        One of the best features of cDock 3.0.8 MacOS [Full] is that it is very lightweight and fast. Unlike some other Dock customization tools that may slow down your Mac or cause conflicts with other apps, cDock 3.0.8 MacOS [Full] runs smoothly and seamlessly in the background, without affecting your Mac's performance or stability. You can enjoy your customized Dock without any worries or hassles.

        -

        Another great feature of cDock 3.0.8 MacOS [Full] is that it is constantly updated and improved by its developer, Wolfgang Baird. He listens to the feedback and suggestions of the users and adds new features and fixes regularly. He also provides excellent customer support and responds to any issues or questions promptly. You can rest assured that you are getting a quality product that is well-maintained and supported.

        -

        cDock 3.0.8 MacOS [Full] is not only a tool for customizing your Dock, but also a way to support the Mac community and the development of Mac apps. cDock 3.0.8 MacOS [Full] is part of the macEnhance project, which aims to provide a platform for Mac developers and users to share and discover new apps and plugins that enhance the Mac experience. By downloading cDock 3.0.8 MacOS [Full], you are also supporting this project and helping it grow.

        - -

        If you are ready to take your Dock to the next level, don't miss this opportunity to get cDock 3.0.8 MacOS [Full] for free. Yes, you read that right. cDock 3.0.8 MacOS [Full] is currently available for free for a limited time only. This is a special offer that you won't find anywhere else. All you have to do is visit the official website of cDock and download the app from there. No registration, no subscription, no hidden fees. Just download and enjoy.

        -

        But hurry up, because this offer won't last forever. cDock 3.0.8 MacOS [Full] is normally priced at $9.99, which is already a bargain for such a powerful and versatile app. But for a short period of time, you can get it for absolutely nothing. This is a rare chance to save money and get more value for your Mac.

        -

        -

        So what are you waiting for? Click the link below and get cDock 3.0.8 MacOS [Full] today before it's too late. You won't regret it.

        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Masti Hindi Dubbed Movie Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Masti Hindi Dubbed Movie Download.md deleted file mode 100644 index 5bc1ff5b7c5d0ea76401049d450796d86ba2b041..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Masti Hindi Dubbed Movie Download.md +++ /dev/null @@ -1,19 +0,0 @@ -
        -

        How to Download Masti Hindi Dubbed Movie for Free

        -

        Masti is a 2004 Bollywood comedy film directed by Indra Kumar and starring Vivek Oberoi, Ritesh Deshmukh, Aftab Shivdasani, Ajay Devgn, Lara Dutta, Amrita Rao, Tara Sharma and Genelia D'Souza. The film revolves around three married friends who decide to have an extramarital affair to spice up their lives, but end up in a series of hilarious troubles.

        -

        If you are looking for a way to download Masti hindi dubbed movie for free, you have come to the right place. In this article, we will tell you how to find and download Masti hindi dubbed movie from various sources on the internet.

        -

        Masti hindi dubbed movie download


Download Zip: https://urlcod.com/2uIb6p



        -

        Download Masti Hindi Dubbed Movie from YouTube

        -

        One of the easiest ways to download Masti hindi dubbed movie for free is to use YouTube. YouTube is a popular video-sharing platform that hosts millions of videos, including movies, trailers, songs, clips and more. You can find Masti hindi dubbed movie on YouTube by searching for keywords like "Masti hindi dubbed movie", "Masti full movie in hindi", "Masti 2004 hindi dubbed movie" etc. You can also use filters like duration, upload date, quality and features to narrow down your search results.

        -

        Once you find the video of Masti hindi dubbed movie that you want to download, you can use a third-party tool like 4K Video Downloader, Y2Mate, SaveFrom.net or VidMate to download it to your device. These tools allow you to download videos from YouTube in various formats and resolutions. You can also choose to download only the audio or subtitles of the video. However, be careful while using these tools as they may contain ads or malware that can harm your device or data.

        -

        Download Masti Hindi Dubbed Movie from Filmyzilla

        -

        Another way to download Masti hindi dubbed movie for free is to use Filmyzilla. Filmyzilla is a notorious website that leaks Bollywood, Hollywood and South Indian movies in various languages and qualities. You can find Masti hindi dubbed movie on Filmyzilla by searching for keywords like "Masti hindi dubbed movie download", "Masti 2004 hindi dubbed movie download", "Masti full movie download in hindi" etc. You can also browse through categories like comedy, romance, action and more to find the movie.

        -

        Once you find the link of Masti hindi dubbed movie that you want to download, you can click on it and follow the instructions on the website to download it to your device. However, be aware that Filmyzilla is an illegal website that violates the copyright laws and may face legal actions from the authorities. Downloading movies from Filmyzilla may also expose your device or data to viruses or hackers.

        -

        Download Masti Hindi Dubbed Movie from Archive.org

        -

        A third way to download Masti hindi dubbed movie for free is to use Archive.org. Archive.org is a non-profit digital library that offers free access to millions of books, movies, music, software and more. You can find Masti hindi dubbed movie on Archive.org by searching for keywords like "Masti hindi dubbed movie", "Masti 2004 hindi dubbed movie", "Masti full movie in hindi" etc. You can also use filters like media type, year, language and collection to refine your search results.

        -

        Once you find the file of Masti hindi dubbed movie that you want to download, you can click on it and choose from various options like streaming online, downloading in different formats or adding to your library. You can also read reviews or comments from other users about the file. However, be mindful that Archive.org may not have the best quality or latest version of the movie as it depends on the uploads from its users.

        -

        Conclusion

        -

Masti is a hilarious comedy film that will make you laugh out loud with its witty dialogues and funny situations. If you want to watch Masti hindi dubbed movie for free, you can try any of the methods mentioned above. However, we recommend you watch the movie legally from official sources like Netflix, Amazon Prime Video or Hotstar, as they offer high-quality streaming and downloading options with subtitles and other features. Moreover, watching the movie legally supports the filmmakers and keeps your device safe from the risks that come with pirated downloads.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Official Samsung Galaxy Fold SM-F900U Stock Rom.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Official Samsung Galaxy Fold SM-F900U Stock Rom.md deleted file mode 100644 index 78ec847e50b51cdc560be346d2eafa2448970b07..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Official Samsung Galaxy Fold SM-F900U Stock Rom.md +++ /dev/null @@ -1,29 +0,0 @@ - -

        How to Download and Install Official Samsung Galaxy Fold SM-F900U Stock Rom

        -

        If you own a Samsung Galaxy Fold SM-F900U device and want to restore it to its original factory settings, you will need to download and install the official stock firmware for your device. The stock firmware is also useful if you want to upgrade or downgrade your device's software version, fix any software issues, or unbrick your device if it is stuck in a bootloop or hard brick.

        -

        In this article, we will show you how to download and install the official Samsung Galaxy Fold SM-F900U stock rom using Odin flash tool. Odin is a Windows-based program that allows you to flash firmware files on Samsung devices. You will also need to download and install the Samsung USB drivers and the latest version of Odin on your computer before proceeding.

        -

        Official Samsung Galaxy Fold SM-F900U Stock Rom


        Download Zip ———>>> https://urlcod.com/2uIbJ9



        -

        Download Official Samsung Galaxy Fold SM-F900U Stock Rom

        -

        The official Samsung Galaxy Fold SM-F900U stock rom is available for download from various sources on the internet. However, not all sources are reliable and safe, so you should always download the firmware file from a trusted website. One such website is FirmwareFile.com[^1^], which provides the official link to download the Samsung SM-F900U stock firmware rom (flash file) on your computer. The firmware file comes in a zip package containing flash file, flash tool, USB driver, and how-to flash manual.

        -

        To download the official Samsung Galaxy Fold SM-F900U stock rom from FirmwareFile.com[^1^], follow these steps:

        -
1. Go to https://firmwarefile.com/samsung-sm-f900u on your browser.
2. Scroll down and find the file name: SM-F900U_F900USQS6HWA1_F900UOYN6HWA1_ATT_4file.zip.
3. Click on either Mirror 1 (Free) or Mirror 2 (Paid) to start downloading the firmware file.
4. Wait for the download to complete and save the zip file on your computer.
-

        The file size of the firmware file is 5.56 GB and the Android version is 12. You can also check the country, flash tool, and how to flash tutorial on the same page.

        -

        Install Official Samsung Galaxy Fold SM-F900U Stock Rom

        -

After downloading the official Samsung Galaxy Fold SM-F900U stock rom, you need to use the Odin flash tool to install it on your device. As mentioned above, make sure the Samsung USB drivers and the latest version of Odin are already installed on your computer before proceeding.

        -

        To install the official Samsung Galaxy Fold SM-F900U stock rom using Odin flash tool, follow these steps:

1. Extract the zip file containing the firmware on your computer using any extraction tool like WinRAR or 7-Zip (a scripted alternative is sketched after this list).
2. You will get four files: AP, BL, CP, and CSC. These are the files that you will flash on your device using Odin.
3. Download and install the Samsung USB drivers on your computer from https://developer.samsung.com/mobile/android-usb-driver.html.
4. Download and extract the latest version of the Odin flash tool on your computer from https://odindownload.com/.
5. Run the Odin.exe file as administrator on your computer.
6. Power off your Samsung Galaxy Fold SM-F900U device and boot it into download mode by pressing and holding Volume Down + Bixby + Power buttons together for a few seconds.
7. Connect your device to your computer using a USB cable. You should see an ID:COM port number and a blue or green box in Odin indicating that your device is detected.
8. In Odin, click the BL, AP, CP, and CSC buttons and load the matching files from step 2, then click Start. Wait until Odin reports PASS and the device reboots before disconnecting the cable.
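For step 1, if you prefer to script the extraction instead of using WinRAR or 7-Zip, here is a minimal Python sketch. It assumes the zip from the download section sits in the current directory and that the package uses the usual BL_/AP_/CP_/CSC_ *.tar.md5 naming for the Odin files; adjust the paths and patterns to what you actually see after extraction.

```python
# Minimal sketch: extract the firmware zip and locate the four Odin files.
# Assumes the zip name from the download section and the common *.tar.md5 naming.
import zipfile
from pathlib import Path

FIRMWARE_ZIP = "SM-F900U_F900USQS6HWA1_F900UOYN6HWA1_ATT_4file.zip"
OUT_DIR = Path("firmware")

with zipfile.ZipFile(FIRMWARE_ZIP) as zf:
    zf.extractall(OUT_DIR)

# Odin expects one file per slot (BL, AP, CP, CSC).
for slot in ("BL", "AP", "CP", "CSC"):
    matches = sorted(OUT_DIR.glob(f"{slot}_*.tar.md5"))
    print(slot, "->", matches[0].name if matches else "not found")
```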

          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rational Acoustics Smaart 7.4 Cracked ((EXCLUSIVE)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rational Acoustics Smaart 7.4 Cracked ((EXCLUSIVE)).md deleted file mode 100644 index 1646a0dff38267f9cdf828604236e4d2de03d6ab..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rational Acoustics Smaart 7.4 Cracked ((EXCLUSIVE)).md +++ /dev/null @@ -1,26 +0,0 @@ - -

          Rational Acoustics Smaart 7.4: A Powerful Tool for Audio Analysis and Optimization


          Rational Acoustics Smaart 7.4 is a software application that allows users to measure, analyze, and optimize the sound quality of any audio system. It is widely used by audio professionals such as sound engineers, acousticians, consultants, and installers for live sound, studio recording, broadcast, and architectural acoustics applications.



          Smaart 7.4 provides a comprehensive set of features and functions for real-time spectrum analysis, transfer function measurement, impulse response measurement, signal generator, SPL metering, delay finder, and more. It also supports multiple inputs and outputs, allowing users to compare and align multiple audio signals simultaneously.


However, Smaart 7.4 is not free software. It requires a license key to activate and use, and the official price is $895 USD for a single-user license. This may be too expensive for some users who want to try or use the software for personal or educational purposes.


          That is why some users resort to downloading cracked versions of Smaart 7.4 from various websites or torrents. These cracked versions claim to bypass the license verification process and allow users to use the software without paying for it.


          However, downloading and using cracked versions of Smaart 7.4 is not recommended for several reasons:

- It is illegal and unethical. Downloading and using cracked software violates the intellectual property rights of the software developer and may result in legal consequences.
- It is unsafe and unreliable. Cracked software may contain viruses, malware, spyware, or other harmful programs that can damage your computer or compromise your privacy and security.
- It is outdated and unsupported. Cracked software may not be compatible with the latest operating systems, drivers, or hardware devices. It may also lack the latest updates, bug fixes, or enhancements that the official software provides.
- It is ineffective and inefficient. Cracked software may not perform as well as the official software in terms of accuracy, stability, speed, or usability. It may also cause errors, crashes, or glitches that can affect your audio measurements or optimizations.

          Therefore, if you want to use Smaart 7.4 for your audio analysis and optimization needs, it is better to purchase a legitimate license from the official website of Rational Acoustics[^1^]. You can also download a free trial version of Smaart 7.4 from their website[^2^] that allows you to use the software for 30 days with full functionality.


          By purchasing a legitimate license of Smaart 7.4, you will not only support the development and improvement of the software, but also enjoy the following benefits:

- You will get access to the latest version of Smaart 7.4 with all the features and functions available.
- You will get access to technical support and customer service from Rational Acoustics in case you encounter any issues or problems with the software.
- You will get access to online resources and tutorials from Rational Acoustics that can help you learn how to use the software effectively and efficiently.
- You will get access to discounts and offers from Rational Acoustics for future upgrades or purchases of other products or services.

          In conclusion, Rational Acoustics Smaart 7.4 is a powerful tool for audio analysis and optimization that can help you achieve better sound quality for any audio system. However, downloading and using cracked versions of Smaart 7.4 is not advisable as it can pose legal, security, compatibility, performance, and quality risks. Therefore, it is better to purchase a legitimate license of Smaart 7.4 from the official website of Rational Acoustics or download a free trial version of Smaart 7.4 from their website.

          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Silent Hunter 3 Crack 1.4 Bl ((TOP)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Silent Hunter 3 Crack 1.4 Bl ((TOP)).md deleted file mode 100644 index ca5b9247f34d757c596e9338b305e7b139e927de..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Silent Hunter 3 Crack 1.4 Bl ((TOP)).md +++ /dev/null @@ -1,21 +0,0 @@ - -

          How to Install Silent Hunter 3 v1.4b Patch (Legacy Digital Download)


          Silent Hunter 3 is a submarine simulation game that lets you experience the thrill of commanding a U-boat in World War II. However, if you have downloaded the game from a digital distribution platform, you may need to install a patch to fix some bugs and improve the game performance. Here is how to install the Silent Hunter 3 v1.4b patch (legacy digital download) for your game.

1. Download the patch file from here if you have the North American version of the game, or from here if you have the international version of the game.
2. Extract the contents of the patch file to your main SH3 program folder, usually C:\Program Files\Silent Hunter 3.
3. Run the SP2 Rez fix.bat file in the SH3 program folder.
4. Edit the d3d9.cfg file in the SH3 program folder to set your desired resolution. For example, to play at 1280 x 960, set the lines resX=1280 and resY=960 (a scripted example follows this list).
5. Start your game and enjoy SH3 as it was meant to be.
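Step 4 can also be scripted. The sketch below is only an illustration: it assumes d3d9.cfg is a plain text file with simple resX=/resY= lines, as the step describes, and the path shown is the default install folder from step 2. Back the file up before running anything like this.

```python
# Minimal sketch: rewrite the resX/resY lines in d3d9.cfg.
# The path is the default install folder from step 2; point it at your own SH3 folder.
from pathlib import Path

cfg = Path(r"C:\Program Files\Silent Hunter 3\d3d9.cfg")
width, height = 1280, 960  # desired resolution

updated = []
for line in cfg.read_text().splitlines():
    if line.startswith("resX="):
        updated.append(f"resX={width}")
    elif line.startswith("resY="):
        updated.append(f"resY={height}")
    else:
        updated.append(line)

cfg.write_text("\n".join(updated) + "\n")
print(f"d3d9.cfg set to {width}x{height}")
```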

          The Silent Hunter 3 v1.4b patch (legacy digital download) fixes some issues with campaign radio messages, multiplayer version, and sound effects. It also adds new features such as improved graphics, dynamic loading of mods, and more realistic damage models. If you want to enhance your SH3 experience even further, you can also check out some of the mods available for the game, such as Silent Hunter Resolution Fix or The Living Silent Hunter 3 2022 Edition.



          Silent Hunter 3 is not the only submarine simulation game that you can play on your PC. There are other games in the Silent Hunter series, such as Silent Hunter 4: Wolves of the Pacific and Silent Hunter 5: Battle of the Atlantic, that offer different scenarios and features. You can also try other games that simulate naval warfare, such as UBOAT, Cold Waters, or Wolfpack. These games will challenge your skills as a submarine commander and immerse you in the historical or fictional settings of naval combat.


          If you are a fan of submarine simulation games, you should not miss the opportunity to play Silent Hunter 3 with the latest patch and mods. This game will give you hours of fun and excitement as you hunt down enemy ships, evade detection, and survive the harsh conditions of the sea. Silent Hunter 3 is a classic game that deserves a place in your game library.


          Conclusion


Silent Hunter 3 is a submarine simulation game that lets you experience the thrill of commanding a U-boat in World War II. It has stood the test of time and still offers realistic and immersive gameplay. If you downloaded the game from a digital distribution platform, you may need to install the Silent Hunter 3 v1.4b patch (legacy digital download) to fix some bugs and improve performance, and you can install mods to enhance your SH3 experience even further. It is a game that you should not miss if you are a fan of submarine simulation games.


          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Structural Steel Design 5th Edition Solution Manual Pdf.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Structural Steel Design 5th Edition Solution Manual Pdf.md deleted file mode 100644 index a0fbc9d5ecac5d9ef2ba95f0dffaae119e974e71..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Structural Steel Design 5th Edition Solution Manual Pdf.md +++ /dev/null @@ -1,24 +0,0 @@ - -

          How to Download Structural Steel Design 5th Edition Solution Manual Pdf for Free


          If you are looking for a comprehensive and practical guide to structural steel design, you may want to check out the Structural Steel Design 5th Edition Solution Manual Pdf. This manual provides detailed solutions to the end-of-chapter problems in the textbook Structural Steel Design 5th Edition by Jack C. McCormac and Stephen F. Csernak. The manual covers topics such as design of tension members, beams, columns, beam-columns, connections, plate girders, composite construction, and seismic design.


          However, buying the solution manual can be quite expensive, especially if you are on a tight budget. That's why we have compiled some tips on how to download Structural Steel Design 5th Edition Solution Manual Pdf for free. Here are some of the ways you can try:


- Search for the solution manual on online file-sharing platforms such as Scribd, SlideShare, or Z-Library. These platforms allow users to upload and download various types of documents, including solution manuals. You may need to create an account or sign up for a free trial to access some of the files.
- Look for the solution manual on online forums or communities related to structural engineering or civil engineering. You may find some generous members who are willing to share their copies of the solution manual or provide links to download it. You may need to join the forum or community and follow their rules and etiquette.
- Ask your classmates or friends who have taken the same course or used the same textbook if they have a copy of the solution manual or know where to get it. They may be able to lend you their copy or send you a digital version via email or cloud storage.
- Contact the authors or publishers of the textbook or solution manual and request a complimentary copy for academic purposes. You may need to provide proof of your enrollment or affiliation with an educational institution and explain why you need the solution manual.

          By following these tips, you may be able to download Structural Steel Design 5th Edition Solution Manual Pdf for free and enhance your learning experience. However, please note that downloading copyrighted materials without permission may violate intellectual property rights and ethical standards. Therefore, we recommend that you use the solution manual only as a reference and not as a substitute for doing your own work.


          If you are wondering why you should study structural steel design, here are some of the benefits of this field of engineering:

- Structural steel design is essential for building safe and efficient structures that can withstand various loads and environmental conditions. It can be applied to bridges, buildings, towers, stadiums, and other types of structures.
- Structural steel design is a challenging and rewarding discipline that requires creativity, problem-solving skills, and attention to detail. It can help you develop your analytical and critical thinking abilities, as well as your communication and teamwork skills.
- Structural steel design is a dynamic and evolving field that incorporates new technologies, standards, and research. It can help you keep up with the latest developments and innovations in the engineering profession.

          If you are interested in learning more about structural steel design, you may want to check out the textbook Structural Steel Design 5th Edition by Jack C. McCormac and Stephen F. Csernak[^1^] [^2^]. This textbook is ideal for undergraduate courses in steel design and is also useful as a reference for civil and environmental engineering professionals. The textbook has been fully updated to conform to the latest American Manual of Steel Construction and covers topics such as design of tension members, beams, columns, beam-columns, connections, plate girders, composite construction, and seismic design.


          To enhance your learning experience, you may also want to use the Structural Steel Design 5th Edition Solution Manual Pdf that provides detailed solutions to the end-of-chapter problems in the textbook. The solution manual can help you check your understanding of the concepts and principles of structural steel design and prepare for your exams. However, as we mentioned before, buying the solution manual can be quite expensive. That's why we have shared some tips on how to download Structural Steel Design 5th Edition Solution Manual Pdf for free in this article.


          \ No newline at end of file diff --git a/spaces/newsteam/stable-diffusion-img2img/app.py b/spaces/newsteam/stable-diffusion-img2img/app.py deleted file mode 100644 index 06f3655d15f611fe3751139bddac200f9c1622c4..0000000000000000000000000000000000000000 --- a/spaces/newsteam/stable-diffusion-img2img/app.py +++ /dev/null @@ -1,67 +0,0 @@ -import gradio as gr -import torch -#from torch import autocast // only for GPU - -from PIL import Image -import numpy as np -from io import BytesIO -import os -MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD') - -#from diffusers import StableDiffusionPipeline -from diffusers import StableDiffusionImg2ImgPipeline - -print("hello sylvain") - -YOUR_TOKEN=MY_SECRET_TOKEN - -device="cpu" - -#prompt_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=YOUR_TOKEN) -#prompt_pipe.to(device) - -img_pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_auth_token=YOUR_TOKEN) -img_pipe.to(device) - -source_img = gr.Image(source="upload", type="filepath", label="init_img | 512*512 px") -gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[1], height="auto") - -def resize(value,img): - #baseheight = value - img = Image.open(img) - #hpercent = (baseheight/float(img.size[1])) - #wsize = int((float(img.size[0])*float(hpercent))) - #img = img.resize((wsize,baseheight), Image.Resampling.LANCZOS) - img = img.resize((value,value), Image.Resampling.LANCZOS) - return img - - -def infer(source_img, prompt, guide, steps, seed, strength): - generator = torch.Generator('cpu').manual_seed(seed) - - source_image = resize(512, source_img) - source_image.save('source.png') - - images_list = img_pipe([prompt] * 1, init_image=source_image, strength=strength, guidance_scale=guide, num_inference_steps=steps) - images = [] - safe_image = Image.open(r"unsafe.png") - - for i, image in enumerate(images_list["images"]): - if(images_list["nsfw_content_detected"][i]): - images.append(safe_image) - else: - images.append(image) - return images - -print("Great sylvain ! Everything is working fine !") - -title="Img2Img Stable Diffusion CPU" -description="

          Img2Img Stable Diffusion example using CPU and HF token.
          Warning: Slow process... ~5/10 min inference time. NSFW filter enabled.
          visitor badge

          " - -gr.Interface(fn=infer, inputs=[source_img, - "text", - gr.Slider(2, 15, value = 7, label = 'Guidence Scale'), - gr.Slider(10, 50, value = 25, step = 1, label = 'Number of Iterations'), - gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True), - gr.Slider(label='Strength', minimum = 0, maximum = 1, step = .05, value = .75)], - outputs=gallery,title=title,description=description, allow_flagging="manual", flagging_dir="flagged").queue(max_size=100).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/ngaggion/Chest-x-ray-HybridGNet-Segmentation/app.py b/spaces/ngaggion/Chest-x-ray-HybridGNet-Segmentation/app.py deleted file mode 100644 index 05ea82950aab00cabcbf745c4a5b3cc42ba32aec..0000000000000000000000000000000000000000 --- a/spaces/ngaggion/Chest-x-ray-HybridGNet-Segmentation/app.py +++ /dev/null @@ -1,288 +0,0 @@ -import numpy as np -import gradio as gr -import cv2 - -from models.HybridGNet2IGSC import Hybrid -from utils.utils import scipy_to_torch_sparse, genMatrixesLungsHeart -import scipy.sparse as sp -import torch -import pandas as pd -from zipfile import ZipFile - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") -hybrid = None - -def getDenseMask(landmarks, h, w): - - RL = landmarks[0:44] - LL = landmarks[44:94] - H = landmarks[94:] - - img = np.zeros([h, w], dtype = 'uint8') - - RL = RL.reshape(-1, 1, 2).astype('int') - LL = LL.reshape(-1, 1, 2).astype('int') - H = H.reshape(-1, 1, 2).astype('int') - - img = cv2.drawContours(img, [RL], -1, 1, -1) - img = cv2.drawContours(img, [LL], -1, 1, -1) - img = cv2.drawContours(img, [H], -1, 2, -1) - - return img - -def getMasks(landmarks, h, w): - - RL = landmarks[0:44] - LL = landmarks[44:94] - H = landmarks[94:] - - RL = RL.reshape(-1, 1, 2).astype('int') - LL = LL.reshape(-1, 1, 2).astype('int') - H = H.reshape(-1, 1, 2).astype('int') - - RL_mask = np.zeros([h, w], dtype = 'uint8') - LL_mask = np.zeros([h, w], dtype = 'uint8') - H_mask = np.zeros([h, w], dtype = 'uint8') - - RL_mask = cv2.drawContours(RL_mask, [RL], -1, 255, -1) - LL_mask = cv2.drawContours(LL_mask, [LL], -1, 255, -1) - H_mask = cv2.drawContours(H_mask, [H], -1, 255, -1) - - return RL_mask, LL_mask, H_mask - -def drawOnTop(img, landmarks, original_shape): - h, w = original_shape - output = getDenseMask(landmarks, h, w) - - image = np.zeros([h, w, 3]) - image[:,:,0] = img + 0.3 * (output == 1).astype('float') - 0.1 * (output == 2).astype('float') - image[:,:,1] = img + 0.3 * (output == 2).astype('float') - 0.1 * (output == 1).astype('float') - image[:,:,2] = img - 0.1 * (output == 1).astype('float') - 0.2 * (output == 2).astype('float') - - image = np.clip(image, 0, 1) - - RL, LL, H = landmarks[0:44], landmarks[44:94], landmarks[94:] - - # Draw the landmarks as dots - - for l in RL: - image = cv2.circle(image, (int(l[0]), int(l[1])), 5, (1, 0, 1), -1) - for l in LL: - image = cv2.circle(image, (int(l[0]), int(l[1])), 5, (1, 0, 1), -1) - for l in H: - image = cv2.circle(image, (int(l[0]), int(l[1])), 5, (1, 1, 0), -1) - - return image - - -def loadModel(device): - A, AD, D, U = genMatrixesLungsHeart() - N1 = A.shape[0] - N2 = AD.shape[0] - - A = sp.csc_matrix(A).tocoo() - AD = sp.csc_matrix(AD).tocoo() - D = sp.csc_matrix(D).tocoo() - U = sp.csc_matrix(U).tocoo() - - D_ = [D.copy()] - U_ = [U.copy()] - - config = {} - - config['n_nodes'] = [N1, N1, N1, N2, N2, N2] - A_ = [A.copy(), A.copy(), A.copy(), AD.copy(), AD.copy(), AD.copy()] - - A_t, D_t, U_t = 
([scipy_to_torch_sparse(x).to(device) for x in X] for X in (A_, D_, U_)) - - config['latents'] = 64 - config['inputsize'] = 1024 - - f = 32 - config['filters'] = [2, f, f, f, f//2, f//2, f//2] - config['skip_features'] = f - - hybrid = Hybrid(config.copy(), D_t, U_t, A_t).to(device) - hybrid.load_state_dict(torch.load("weights/weights.pt", map_location=torch.device(device))) - hybrid.eval() - - return hybrid - - -def pad_to_square(img): - h, w = img.shape[:2] - - if h > w: - padw = (h - w) - auxw = padw % 2 - img = np.pad(img, ((0, 0), (padw//2, padw//2 + auxw)), 'constant') - - padh = 0 - auxh = 0 - - else: - padh = (w - h) - auxh = padh % 2 - img = np.pad(img, ((padh//2, padh//2 + auxh), (0, 0)), 'constant') - - padw = 0 - auxw = 0 - - return img, (padh, padw, auxh, auxw) - - -def preprocess(input_img): - img, padding = pad_to_square(input_img) - - h, w = img.shape[:2] - if h != 1024 or w != 1024: - img = cv2.resize(img, (1024, 1024), interpolation = cv2.INTER_CUBIC) - - return img, (h, w, padding) - - -def removePreprocess(output, info): - h, w, padding = info - - if h != 1024 or w != 1024: - output = output * h - else: - output = output * 1024 - - padh, padw, auxh, auxw = padding - - output[:, 0] = output[:, 0] - padw//2 - output[:, 1] = output[:, 1] - padh//2 - - return output - - -def zip_files(files): - with ZipFile("complete_results.zip", "w") as zipObj: - for idx, file in enumerate(files): - zipObj.write(file, arcname=file.split("/")[-1]) - return "complete_results.zip" - - -def segment(input_img): - global hybrid, device - - if hybrid is None: - hybrid = loadModel(device) - - input_img = cv2.imread(input_img, 0) / 255.0 - original_shape = input_img.shape[:2] - - img, (h, w, padding) = preprocess(input_img) - - data = torch.from_numpy(img).unsqueeze(0).unsqueeze(0).to(device).float() - - with torch.no_grad(): - output = hybrid(data)[0].cpu().numpy().reshape(-1, 2) - - output = removePreprocess(output, (h, w, padding)) - - output = output.astype('int') - - outseg = drawOnTop(input_img, output, original_shape) - - seg_to_save = (outseg.copy() * 255).astype('uint8') - cv2.imwrite("tmp/overlap_segmentation.png" , cv2.cvtColor(seg_to_save, cv2.COLOR_RGB2BGR)) - - RL = output[0:44] - LL = output[44:94] - H = output[94:] - - np.savetxt("tmp/RL_landmarks.txt", RL, delimiter=" ", fmt="%d") - np.savetxt("tmp/LL_landmarks.txt", LL, delimiter=" ", fmt="%d") - np.savetxt("tmp/H_landmarks.txt", H, delimiter=" ", fmt="%d") - - RL_mask, LL_mask, H_mask = getMasks(output, original_shape[0], original_shape[1]) - - cv2.imwrite("tmp/RL_mask.png", RL_mask) - cv2.imwrite("tmp/LL_mask.png", LL_mask) - cv2.imwrite("tmp/H_mask.png", H_mask) - - zip = zip_files(["tmp/RL_landmarks.txt", "tmp/LL_landmarks.txt", "tmp/H_landmarks.txt", "tmp/RL_mask.png", "tmp/LL_mask.png", "tmp/H_mask.png", "tmp/overlap_segmentation.png"]) - - return outseg, ["tmp/RL_landmarks.txt", "tmp/LL_landmarks.txt", "tmp/H_landmarks.txt", "tmp/RL_mask.png", "tmp/LL_mask.png", "tmp/H_mask.png", "tmp/overlap_segmentation.png", zip] - -if __name__ == "__main__": - - with gr.Blocks() as demo: - - gr.Markdown(""" - # Chest X-ray HybridGNet Segmentation. - - Demo of the HybridGNet model introduced in "Improving anatomical plausibility in medical image segmentation via hybrid graph neural networks: applications to chest x-ray analysis." - - Instructions: - 1. Upload a chest X-ray image (PA or AP) in PNG or JPEG format. - 2. Click on "Segment Image". 
- - Note: Pre-processing is not needed, it will be done automatically and removed after the segmentation. - - Please check citations below. - """) - - with gr.Tab("Segment Image"): - with gr.Row(): - with gr.Column(): - image_input = gr.Image(type="filepath", height=750) - - with gr.Row(): - clear_button = gr.Button("Clear") - image_button = gr.Button("Segment Image") - - gr.Examples(inputs=image_input, examples=['utils/example1.jpg','utils/example2.jpg','utils/example3.png','utils/example4.jpg']) - - with gr.Column(): - image_output = gr.Image(type="filepath", height=750) - results = gr.File() - - gr.Markdown(""" - If you use this code, please cite: - - ``` - @article{gaggion2022TMI, - doi = {10.1109/tmi.2022.3224660}, - url = {https://doi.org/10.1109%2Ftmi.2022.3224660}, - year = 2022, - publisher = {Institute of Electrical and Electronics Engineers ({IEEE})}, - author = {Nicolas Gaggion and Lucas Mansilla and Candelaria Mosquera and Diego H. Milone and Enzo Ferrante}, - title = {Improving anatomical plausibility in medical image segmentation via hybrid graph neural networks: applications to chest x-ray analysis}, - journal = {{IEEE} Transactions on Medical Imaging} - } - ``` - - This model was trained following the procedure explained on: - - ``` - @misc{gaggion2022ISBI, - title={Multi-center anatomical segmentation with heterogeneous labels via landmark-based models}, - author={Nicolás Gaggion and Maria Vakalopoulou and Diego H. Milone and Enzo Ferrante}, - year={2022}, - eprint={2211.07395}, - archivePrefix={arXiv}, - primaryClass={eess.IV} - } - ``` - - Example images extracted from Wikipedia, released under: - 1. CC0 Universial Public Domain. Source: https://commons.wikimedia.org/wiki/File:Normal_posteroanterior_(PA)_chest_radiograph_(X-ray).jpg - 2. Creative Commons Attribution-Share Alike 4.0 International. Source: https://commons.wikimedia.org/wiki/File:Chest_X-ray.jpg - 3. Creative Commons Attribution 3.0 Unported. Source https://commons.wikimedia.org/wiki/File:Implantable_cardioverter_defibrillator_chest_X-ray.jpg - 4. Creative Commons Attribution-Share Alike 3.0 Unported. Source: https://commons.wikimedia.org/wiki/File:Medical_X-Ray_imaging_PRD06_nevit.jpg - - Author: Nicolás Gaggion - Website: [ngaggion.github.io](https://ngaggion.github.io/) - - """) - - - clear_button.click(lambda: None, None, image_input, queue=False) - clear_button.click(lambda: None, None, image_output, queue=False) - - image_button.click(segment, inputs=image_input, outputs=[image_output, results], queue=False) - - demo.launch() diff --git a/spaces/nielsr/vilt-nlvr/README.md b/spaces/nielsr/vilt-nlvr/README.md deleted file mode 100644 index 84beebaaebc13a908f011ad30e2bacfbb92899a4..0000000000000000000000000000000000000000 --- a/spaces/nielsr/vilt-nlvr/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Vilt Nlvr -emoji: 🚀 -colorFrom: yellow -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. 
- -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/nkigumnov/banks-ethics-sentiment/app.py b/spaces/nkigumnov/banks-ethics-sentiment/app.py deleted file mode 100644 index fd9c2310f6803c760501f1b0a7b998115ade09d2..0000000000000000000000000000000000000000 --- a/spaces/nkigumnov/banks-ethics-sentiment/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr - -from model import inference - - -def predict(sentence: str): - model_response = inference({"sentence": sentence}) - prob = model_response["answer"] - df = { - "1": float(prob[1][0][2]), - "0": float(prob[1][0][1]), - "-1": float(prob[1][0][0]), - "Communication": float(prob[0][0][1]), - "Quality": float(prob[0][0][2]), - "Price": float(prob[0][0][3]), - "Safety": float(prob[0][0][4]), - } - return ( - df["1"], - df["0"], - df["-1"], - df["Communication"], - df["Quality"], - df["Price"], - df["Safety"], - ) - - -if __name__ == "__main__": - print("App started") - - gr.Interface( - fn=predict, - title="Try it yourself!", - inputs=gr.Textbox(lines=3, placeholder="Sentence here..."), - outputs=[ - gr.Number(0.0, label="1"), - gr.Number(0.0, label="0"), - gr.Number(0.0, label="-1"), - gr.Number(0.0, label="Communication"), - gr.Number(0.0, label="Quality"), - gr.Number(0.0, label="Price"), - gr.Number(0.0, label="Safety"), - ], - ).launch() diff --git a/spaces/oliver2023/chatgpt-on-wechat/common/singleton.py b/spaces/oliver2023/chatgpt-on-wechat/common/singleton.py deleted file mode 100644 index b46095c59e929b40c91e1e0794abb28601600f08..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/common/singleton.py +++ /dev/null @@ -1,9 +0,0 @@ -def singleton(cls): - instances = {} - - def get_instance(*args, **kwargs): - if cls not in instances: - instances[cls] = cls(*args, **kwargs) - return instances[cls] - - return get_instance diff --git a/spaces/openskyml/fast-sdxl-stable-diffusion-xl/app.py b/spaces/openskyml/fast-sdxl-stable-diffusion-xl/app.py deleted file mode 100644 index 84e344696db6ab79c94ed35609111aa55e94d380..0000000000000000000000000000000000000000 --- a/spaces/openskyml/fast-sdxl-stable-diffusion-xl/app.py +++ /dev/null @@ -1,203 +0,0 @@ -# This space used model: stabilityai/stable-diffusion-xl-base-1.0 -# and model: stabilityai/stable-diffusion-xl-refiner-1.0 - - -import numpy as np -import gradio as gr -import requests -import time -import json -import base64 -import os -from PIL import Image -from io import BytesIO - -batch_size=1 -batch_count=1 - -class Prodia: - def __init__(self, api_key, base=None): - self.base = base or "https://api.prodia.com/v1" - self.headers = { - "X-Prodia-Key": api_key - } - - def generate(self, params): - response = self._post(f"{self.base}/sdxl/generate", params) - return response.json() - - def get_job(self, job_id): - response = self._get(f"{self.base}/job/{job_id}") - return response.json() - - def wait(self, job): - job_result = job - - while job_result['status'] not in ['succeeded', 'failed']: - time.sleep(0.25) - job_result = self.get_job(job['job']) - - return job_result - - def list_models(self): - response = self._get(f"{self.base}/sdxl/models") - return response.json() - - def list_samplers(self): - response = self._get(f"{self.base}/sdxl/samplers") - return response.json() - - def _post(self, url, params): - headers = { - **self.headers, - 
"Content-Type": "application/json" - } - response = requests.post(url, headers=headers, data=json.dumps(params)) - - if response.status_code != 200: - raise Exception(f"Bad Prodia Response: {response.status_code}") - - return response - - def _get(self, url): - response = requests.get(url, headers=self.headers) - - if response.status_code != 200: - raise Exception(f"Bad Prodia Response: {response.status_code}") - - return response - - -def image_to_base64(image_path): - # Open the image with PIL - with Image.open(image_path) as image: - # Convert the image to bytes - buffered = BytesIO() - image.save(buffered, format="PNG") # You can change format to PNG if needed - - # Encode the bytes to base64 - img_str = base64.b64encode(buffered.getvalue()) - - return img_str.decode('utf-8') # Convert bytes to string - - - -prodia_client = Prodia(api_key=os.getenv("API_KEY")) - -def flip_text(prompt, negative_prompt, steps, cfg_scale, width, height, seed): - result = prodia_client.generate({ - "prompt": prompt, - "negative_prompt": negative_prompt, - "model": "sd_xl_base_1.0.safetensors [be9edd61]", - "steps": steps, - "sampler": "DPM++ 2M Karras", - "cfg_scale": cfg_scale, - "width": width, - "height": height, - "seed": seed - }) - - job = prodia_client.wait(result) - - return job["imageUrl"] - -css = """ - #prompt-container .form{ - border-top-right-radius: 0; - border-bottom-right-radius: 0; - } - #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem} - #component-16{border-top-width: 1px!important;margin-top: 1em} - .image_duplication{position: absolute; width: 100px; left: 50px} - .tabitem{border: 0 !important}.style(mobile_collapse=False, equal_height=True).style(mobile_collapse=False, equal_height=True).style(mobile_collapse=False, equal_height=True).style(mobile_collapse=False, equal_height=True - #gen-button{ - border-top-left-radius:0; - border-bottom-left-radius:0; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } -""" - -with gr.Blocks(css=css) as demo: - gr.HTML( - """ -
          -
          - - - - - - - - - - - - - - - - - - - - - - - - - -

          - Fast SDXL -

          -
          -
          - """) - with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True): - prompt = gr.Textbox(label="Prompt", placeholder="a cute cat, 8k", show_label=True, lines=1, elem_id="prompt-text-input") - text_button = gr.Button("Generate", variant='primary', elem_id="gen-button") - with gr.Row(): - with gr.Column(scale=1): - image_output = gr.Image(elem_id="gallery") - with gr.Row(): - with gr.Accordion("Additionals inputs", open=False): - with gr.Column(scale=1): - negative_prompt = gr.Textbox(label="Negative Prompt", value="text, blurry", placeholder="What you don't want to see in the image", show_label=True, lines=1, elem_id="negative-prompt-text-input") - with gr.Column(scale=1): - steps = gr.Slider(label="Sampling Steps", minimum=1, maximum=30, value=25, step=1) - cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, value=7, step=1) - seed = gr.Number(label="Seed", value=-1) - with gr.Column(scale=1): - width = gr.Slider(label="↔️ Width", minimum=1024, maximum=1024, value=1024, step=8) - height = gr.Slider(label="↕️ Height", minimum=1024, maximum=1024, value=1024, step=8) - - - - - - text_button.click(flip_text, inputs=[prompt, negative_prompt, steps, cfg_scale, width, height, seed], outputs=image_output) - -demo.queue(concurrency_count=16, max_size=20, api_open=False).launch(max_threads=64) diff --git a/spaces/osanseviero/draw123/app.py b/spaces/osanseviero/draw123/app.py deleted file mode 100644 index 3e37e1b8272672601eb57e8688b029316de55d3e..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/draw123/app.py +++ /dev/null @@ -1,43 +0,0 @@ -from pathlib import Path - -import torch -import gradio as gr -from torch import nn - - -LABELS = Path('class_names.txt').read_text().splitlines() - -model = nn.Sequential( - nn.Conv2d(1, 32, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Conv2d(32, 64, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Conv2d(64, 128, 3, padding='same'), - nn.ReLU(), - nn.MaxPool2d(2), - nn.Flatten(), - nn.Linear(1152, 256), - nn.ReLU(), - nn.Linear(256, len(LABELS)), -) -state_dict = torch.load('pytorch_model.bin', map_location='cpu') -model.load_state_dict(state_dict, strict=False) -model.eval() - -def predict(im): - x = torch.tensor(im, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255. - - with torch.no_grad(): - out = model(x) - - probabilities = torch.nn.functional.softmax(out[0], dim=0) - - values, indices = torch.topk(probabilities, 5) - - return {LABELS[i]: v.item() for i, v in zip(indices, values)} - - -interface = gr.Interface(predict, inputs='sketchpad', outputs='label', theme="default", description="Who wants to play Pictionary? Draw a common object like a shovel or a laptop, and the algorithm will guess in real time!", live=True) -interface.launch(debug=True) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/utils.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. 
- - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. - """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - - if width_orig > height_orig: - scale = width_orig / 384 - else: - scale = height_orig / 384 - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). - - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - - depth_resized = cv2.resize( - depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. 
- - Args: - path (str): filepath without extension - depth (array): depth - """ - write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = np.zeros(depth.shape, dtype=depth.type) - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return diff --git a/spaces/pngwn/nextjs/next.config.js b/spaces/pngwn/nextjs/next.config.js deleted file mode 100644 index 5e0c1759725335a64518602927e4d24ae1d17e57..0000000000000000000000000000000000000000 --- a/spaces/pngwn/nextjs/next.config.js +++ /dev/null @@ -1,3 +0,0 @@ -module.exports = { - basePath: '/staticspaceiframe/pngwn/nextjs/out', -} \ No newline at end of file diff --git a/spaces/priyank-m/vit-bert-ocr/README.md b/spaces/priyank-m/vit-bert-ocr/README.md deleted file mode 100644 index 8488aa39e6dbdc9519529c9f948de2079a9c4d74..0000000000000000000000000000000000000000 --- a/spaces/priyank-m/vit-bert-ocr/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Vit Bert Ocr -emoji: 😻 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/_deprecate.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/_deprecate.py deleted file mode 100644 index 2f2a3df13e312aed847e482a067c2c10e4fd5632..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/_deprecate.py +++ /dev/null @@ -1,69 +0,0 @@ -from __future__ import annotations - -import warnings - -from . import __version__ - - -def deprecate( - deprecated: str, - when: int | None, - replacement: str | None = None, - *, - action: str | None = None, - plural: bool = False, -) -> None: - """ - Deprecations helper. - - :param deprecated: Name of thing to be deprecated. - :param when: Pillow major version to be removed in. - :param replacement: Name of replacement. - :param action: Instead of "replacement", give a custom call to action - e.g. "Upgrade to new thing". - :param plural: if the deprecated thing is plural, needing "are" instead of "is". - - Usually of the form: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd). - Use [replacement] instead." - - You can leave out the replacement sentence: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd)" - - Or with another call to action: - - "[deprecated] is deprecated and will be removed in Pillow [when] (yyyy-mm-dd). - [action]." - """ - - is_ = "are" if plural else "is" - - if when is None: - removed = "a future version" - elif when <= int(__version__.split(".")[0]): - msg = f"{deprecated} {is_} deprecated and should be removed." - raise RuntimeError(msg) - elif when == 11: - removed = "Pillow 11 (2024-10-15)" - else: - msg = f"Unknown removal version: {when}. Update {__name__}?" - raise ValueError(msg) - - if replacement and action: - msg = "Use only one of 'replacement' and 'action'" - raise ValueError(msg) - - if replacement: - action = f". Use {replacement} instead." - elif action: - action = f". {action.rstrip('.')}." 
- else: - action = "" - - warnings.warn( - f"{deprecated} {is_} deprecated and will be removed in {removed}{action}", - DeprecationWarning, - stacklevel=3, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-2853eb31.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-2853eb31.css deleted file mode 100644 index 8657e4c7112cc9a8232f875b00f9cf9aaac5e9f6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-2853eb31.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-vt1mxs{display:flex;position:relative;flex-direction:column}div.svelte-vt1mxs>*,div.svelte-vt1mxs>.form>*{width:var(--size-full)}.gap.svelte-vt1mxs{gap:var(--layout-gap)}.hide.svelte-vt1mxs{display:none}.compact.svelte-vt1mxs>*,.compact.svelte-vt1mxs .box{border-radius:0}.compact.svelte-vt1mxs,.panel.svelte-vt1mxs{border:solid var(--panel-border-width) var(--panel-border-color);border-radius:var(--container-radius);background:var(--panel-background-fill);padding:var(--spacing-lg)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/zip.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/zip.py deleted file mode 100644 index 962195a9014a8641a46e6d0875b094ed994320e1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/zip.py +++ /dev/null @@ -1,32 +0,0 @@ -""" -Generate zip test data files. -""" - -import contextlib -import os -import pathlib -import zipfile - -import zipp - - -def make_zip_file(src, dst): - """ - Zip the files in src into a new zipfile at dst. 
- """ - with zipfile.ZipFile(dst, 'w') as zf: - for src_path, rel in walk(src): - dst_name = src.name / pathlib.PurePosixPath(rel.as_posix()) - zf.write(src_path, dst_name) - zipp.CompleteDirs.inject(zf) - return dst - - -def walk(datapath): - for dirpath, dirnames, filenames in os.walk(datapath): - with contextlib.suppress(ValueError): - dirnames.remove('__pycache__') - for filename in filenames: - res = pathlib.Path(dirpath) / filename - rel = res.relative_to(datapath) - yield res, rel diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/core.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/core.py deleted file mode 100644 index 1cdc739731bfb073a580202f0cd0a36c5d3cb5aa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/core.py +++ /dev/null @@ -1,216 +0,0 @@ -import sys -from distutils.core import Distribution - -if 'setuptools' in sys.modules: - have_setuptools = True - from setuptools import setup as old_setup - # easy_install imports math, it may be picked up from cwd - from setuptools.command import easy_install - try: - # very old versions of setuptools don't have this - from setuptools.command import bdist_egg - except ImportError: - have_setuptools = False -else: - from distutils.core import setup as old_setup - have_setuptools = False - -import warnings -import distutils.core -import distutils.dist - -from numpy.distutils.extension import Extension # noqa: F401 -from numpy.distutils.numpy_distribution import NumpyDistribution -from numpy.distutils.command import config, config_compiler, \ - build, build_py, build_ext, build_clib, build_src, build_scripts, \ - sdist, install_data, install_headers, install, bdist_rpm, \ - install_clib -from numpy.distutils.misc_util import is_sequence, is_string - -numpy_cmdclass = {'build': build.build, - 'build_src': build_src.build_src, - 'build_scripts': build_scripts.build_scripts, - 'config_cc': config_compiler.config_cc, - 'config_fc': config_compiler.config_fc, - 'config': config.config, - 'build_ext': build_ext.build_ext, - 'build_py': build_py.build_py, - 'build_clib': build_clib.build_clib, - 'sdist': sdist.sdist, - 'install_data': install_data.install_data, - 'install_headers': install_headers.install_headers, - 'install_clib': install_clib.install_clib, - 'install': install.install, - 'bdist_rpm': bdist_rpm.bdist_rpm, - } -if have_setuptools: - # Use our own versions of develop and egg_info to ensure that build_src is - # handled appropriately. - from numpy.distutils.command import develop, egg_info - numpy_cmdclass['bdist_egg'] = bdist_egg.bdist_egg - numpy_cmdclass['develop'] = develop.develop - numpy_cmdclass['easy_install'] = easy_install.easy_install - numpy_cmdclass['egg_info'] = egg_info.egg_info - -def _dict_append(d, **kws): - for k, v in kws.items(): - if k not in d: - d[k] = v - continue - dv = d[k] - if isinstance(dv, tuple): - d[k] = dv + tuple(v) - elif isinstance(dv, list): - d[k] = dv + list(v) - elif isinstance(dv, dict): - _dict_append(dv, **v) - elif is_string(dv): - assert is_string(v) - d[k] = v - else: - raise TypeError(repr(type(dv))) - -def _command_line_ok(_cache=None): - """ Return True if command line does not contain any - help or display requests. 
- """ - if _cache: - return _cache[0] - elif _cache is None: - _cache = [] - ok = True - display_opts = ['--'+n for n in Distribution.display_option_names] - for o in Distribution.display_options: - if o[1]: - display_opts.append('-'+o[1]) - for arg in sys.argv: - if arg.startswith('--help') or arg=='-h' or arg in display_opts: - ok = False - break - _cache.append(ok) - return ok - -def get_distribution(always=False): - dist = distutils.core._setup_distribution - # XXX Hack to get numpy installable with easy_install. - # The problem is easy_install runs it's own setup(), which - # sets up distutils.core._setup_distribution. However, - # when our setup() runs, that gets overwritten and lost. - # We can't use isinstance, as the DistributionWithoutHelpCommands - # class is local to a function in setuptools.command.easy_install - if dist is not None and \ - 'DistributionWithoutHelpCommands' in repr(dist): - dist = None - if always and dist is None: - dist = NumpyDistribution() - return dist - -def setup(**attr): - - cmdclass = numpy_cmdclass.copy() - - new_attr = attr.copy() - if 'cmdclass' in new_attr: - cmdclass.update(new_attr['cmdclass']) - new_attr['cmdclass'] = cmdclass - - if 'configuration' in new_attr: - # To avoid calling configuration if there are any errors - # or help request in command in the line. - configuration = new_attr.pop('configuration') - - old_dist = distutils.core._setup_distribution - old_stop = distutils.core._setup_stop_after - distutils.core._setup_distribution = None - distutils.core._setup_stop_after = "commandline" - try: - dist = setup(**new_attr) - finally: - distutils.core._setup_distribution = old_dist - distutils.core._setup_stop_after = old_stop - if dist.help or not _command_line_ok(): - # probably displayed help, skip running any commands - return dist - - # create setup dictionary and append to new_attr - config = configuration() - if hasattr(config, 'todict'): - config = config.todict() - _dict_append(new_attr, **config) - - # Move extension source libraries to libraries - libraries = [] - for ext in new_attr.get('ext_modules', []): - new_libraries = [] - for item in ext.libraries: - if is_sequence(item): - lib_name, build_info = item - _check_append_ext_library(libraries, lib_name, build_info) - new_libraries.append(lib_name) - elif is_string(item): - new_libraries.append(item) - else: - raise TypeError("invalid description of extension module " - "library %r" % (item,)) - ext.libraries = new_libraries - if libraries: - if 'libraries' not in new_attr: - new_attr['libraries'] = [] - for item in libraries: - _check_append_library(new_attr['libraries'], item) - - # sources in ext_modules or libraries may contain header files - if ('ext_modules' in new_attr or 'libraries' in new_attr) \ - and 'headers' not in new_attr: - new_attr['headers'] = [] - - # Use our custom NumpyDistribution class instead of distutils' one - new_attr['distclass'] = NumpyDistribution - - return old_setup(**new_attr) - -def _check_append_library(libraries, item): - for libitem in libraries: - if is_sequence(libitem): - if is_sequence(item): - if item[0]==libitem[0]: - if item[1] is libitem[1]: - return - warnings.warn("[0] libraries list contains %r with" - " different build_info" % (item[0],), - stacklevel=2) - break - else: - if item==libitem[0]: - warnings.warn("[1] libraries list contains %r with" - " no build_info" % (item[0],), - stacklevel=2) - break - else: - if is_sequence(item): - if item[0]==libitem: - warnings.warn("[2] libraries list contains %r with" - " no 
build_info" % (item[0],), - stacklevel=2) - break - else: - if item==libitem: - return - libraries.append(item) - -def _check_append_ext_library(libraries, lib_name, build_info): - for item in libraries: - if is_sequence(item): - if item[0]==lib_name: - if item[1] is build_info: - return - warnings.warn("[3] libraries list contains %r with" - " different build_info" % (lib_name,), - stacklevel=2) - break - elif item==lib_name: - warnings.warn("[4] libraries list contains %r with" - " no build_info" % (lib_name,), - stacklevel=2) - break - libraries.append((lib_name, build_info)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_period.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_period.py deleted file mode 100644 index 63297c20daa97f1122eb696f75e5e12027d8dcbe..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_period.py +++ /dev/null @@ -1,143 +0,0 @@ -""" -This file contains a minimal set of tests for compliance with the extension -array interface test suite, and should contain no other tests. -The test suite for the full functionality of the array is located in -`pandas/tests/arrays/`. - -The tests in this file are inherited from the BaseExtensionTests, and only -minimal tweaks should be applied to get the tests passing (by overwriting a -parent method). - -Additional tests should either be added to one of the BaseExtensionTests -classes (if they are relevant for the extension interface for all dtypes), or -be added to the array-specific tests in `pandas/tests/arrays/`. - -""" -import numpy as np -import pytest - -from pandas._libs import iNaT -from pandas.compat import is_platform_windows -from pandas.compat.numpy import np_version_gte1p24 - -from pandas.core.dtypes.dtypes import PeriodDtype - -import pandas._testing as tm -from pandas.core.arrays import PeriodArray -from pandas.tests.extension import base - - -@pytest.fixture(params=["D", "2D"]) -def dtype(request): - return PeriodDtype(freq=request.param) - - -@pytest.fixture -def data(dtype): - return PeriodArray(np.arange(1970, 2070), dtype=dtype) - - -@pytest.fixture -def data_for_sorting(dtype): - return PeriodArray([2018, 2019, 2017], dtype=dtype) - - -@pytest.fixture -def data_missing(dtype): - return PeriodArray([iNaT, 2017], dtype=dtype) - - -@pytest.fixture -def data_missing_for_sorting(dtype): - return PeriodArray([2018, iNaT, 2017], dtype=dtype) - - -@pytest.fixture -def data_for_grouping(dtype): - B = 2018 - NA = iNaT - A = 2017 - C = 2019 - return PeriodArray([B, B, NA, NA, A, A, B, C], dtype=dtype) - - -class BasePeriodTests: - pass - - -class TestPeriodDtype(BasePeriodTests, base.BaseDtypeTests): - pass - - -class TestConstructors(BasePeriodTests, base.BaseConstructorsTests): - pass - - -class TestGetitem(BasePeriodTests, base.BaseGetitemTests): - pass - - -class TestIndex(base.BaseIndexTests): - pass - - -class TestMethods(BasePeriodTests, base.BaseMethodsTests): - @pytest.mark.parametrize("periods", [1, -2]) - def test_diff(self, data, periods): - if is_platform_windows() and np_version_gte1p24: - with tm.assert_produces_warning(RuntimeWarning, check_stacklevel=False): - super().test_diff(data, periods) - else: - super().test_diff(data, periods) - - @pytest.mark.parametrize("na_action", [None, "ignore"]) - def test_map(self, data, na_action): - result = data.map(lambda x: x, na_action=na_action) - 
tm.assert_extension_array_equal(result, data) - - -class TestInterface(BasePeriodTests, base.BaseInterfaceTests): - pass - - -class TestArithmeticOps(BasePeriodTests, base.BaseArithmeticOpsTests): - def _get_expected_exception(self, op_name, obj, other): - if op_name in ("__sub__", "__rsub__"): - return None - return super()._get_expected_exception(op_name, obj, other) - - -class TestCasting(BasePeriodTests, base.BaseCastingTests): - pass - - -class TestComparisonOps(BasePeriodTests, base.BaseComparisonOpsTests): - pass - - -class TestMissing(BasePeriodTests, base.BaseMissingTests): - pass - - -class TestReshaping(BasePeriodTests, base.BaseReshapingTests): - pass - - -class TestSetitem(BasePeriodTests, base.BaseSetitemTests): - pass - - -class TestGroupby(BasePeriodTests, base.BaseGroupbyTests): - pass - - -class TestPrinting(BasePeriodTests, base.BasePrintingTests): - pass - - -class TestParsing(BasePeriodTests, base.BaseParsingTests): - pass - - -class Test2DCompat(BasePeriodTests, base.NDArrayBacked2DTests): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/timedelta/test_arithmetic.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/timedelta/test_arithmetic.py deleted file mode 100644 index e583de1f489dbcbc6e29a89b3f40148c391edf60..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/timedelta/test_arithmetic.py +++ /dev/null @@ -1,1181 +0,0 @@ -""" -Tests for scalar Timedelta arithmetic ops -""" -from datetime import ( - datetime, - timedelta, -) -import operator - -import numpy as np -import pytest - -from pandas.errors import OutOfBoundsTimedelta - -import pandas as pd -from pandas import ( - NaT, - Timedelta, - Timestamp, - offsets, -) -import pandas._testing as tm -from pandas.core import ops - - -class TestTimedeltaAdditionSubtraction: - """ - Tests for Timedelta methods: - - __add__, __radd__, - __sub__, __rsub__ - """ - - @pytest.mark.parametrize( - "ten_seconds", - [ - Timedelta(10, unit="s"), - timedelta(seconds=10), - np.timedelta64(10, "s"), - np.timedelta64(10000000000, "ns"), - offsets.Second(10), - ], - ) - def test_td_add_sub_ten_seconds(self, ten_seconds): - # GH#6808 - base = Timestamp("20130101 09:01:12.123456") - expected_add = Timestamp("20130101 09:01:22.123456") - expected_sub = Timestamp("20130101 09:01:02.123456") - - result = base + ten_seconds - assert result == expected_add - - result = base - ten_seconds - assert result == expected_sub - - @pytest.mark.parametrize( - "one_day_ten_secs", - [ - Timedelta("1 day, 00:00:10"), - Timedelta("1 days, 00:00:10"), - timedelta(days=1, seconds=10), - np.timedelta64(1, "D") + np.timedelta64(10, "s"), - offsets.Day() + offsets.Second(10), - ], - ) - def test_td_add_sub_one_day_ten_seconds(self, one_day_ten_secs): - # GH#6808 - base = Timestamp("20130102 09:01:12.123456") - expected_add = Timestamp("20130103 09:01:22.123456") - expected_sub = Timestamp("20130101 09:01:02.123456") - - result = base + one_day_ten_secs - assert result == expected_add - - result = base - one_day_ten_secs - assert result == expected_sub - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_datetimelike_scalar(self, op): - # GH#19738 - td = Timedelta(10, unit="d") - - result = op(td, datetime(2016, 1, 1)) - if op is operator.add: - # datetime + Timedelta does _not_ call Timedelta.__radd__, - # so we get a datetime back instead of a Timestamp - assert 
isinstance(result, Timestamp) - assert result == Timestamp(2016, 1, 11) - - result = op(td, Timestamp("2018-01-12 18:09")) - assert isinstance(result, Timestamp) - assert result == Timestamp("2018-01-22 18:09") - - result = op(td, np.datetime64("2018-01-12")) - assert isinstance(result, Timestamp) - assert result == Timestamp("2018-01-22") - - result = op(td, NaT) - assert result is NaT - - def test_td_add_timestamp_overflow(self): - ts = Timestamp("1700-01-01").as_unit("ns") - msg = "Cannot cast 259987 from D to 'ns' without overflow." - with pytest.raises(OutOfBoundsTimedelta, match=msg): - ts + Timedelta(13 * 19999, unit="D") - - msg = "Cannot cast 259987 days 00:00:00 to unit='ns' without overflow" - with pytest.raises(OutOfBoundsTimedelta, match=msg): - ts + timedelta(days=13 * 19999) - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_td(self, op): - td = Timedelta(10, unit="d") - - result = op(td, Timedelta(days=10)) - assert isinstance(result, Timedelta) - assert result == Timedelta(days=20) - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_pytimedelta(self, op): - td = Timedelta(10, unit="d") - result = op(td, timedelta(days=9)) - assert isinstance(result, Timedelta) - assert result == Timedelta(days=19) - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_timedelta64(self, op): - td = Timedelta(10, unit="d") - result = op(td, np.timedelta64(-4, "D")) - assert isinstance(result, Timedelta) - assert result == Timedelta(days=6) - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_offset(self, op): - td = Timedelta(10, unit="d") - - result = op(td, offsets.Hour(6)) - assert isinstance(result, Timedelta) - assert result == Timedelta(days=10, hours=6) - - def test_td_sub_td(self): - td = Timedelta(10, unit="d") - expected = Timedelta(0, unit="ns") - result = td - td - assert isinstance(result, Timedelta) - assert result == expected - - def test_td_sub_pytimedelta(self): - td = Timedelta(10, unit="d") - expected = Timedelta(0, unit="ns") - - result = td - td.to_pytimedelta() - assert isinstance(result, Timedelta) - assert result == expected - - result = td.to_pytimedelta() - td - assert isinstance(result, Timedelta) - assert result == expected - - def test_td_sub_timedelta64(self): - td = Timedelta(10, unit="d") - expected = Timedelta(0, unit="ns") - - result = td - td.to_timedelta64() - assert isinstance(result, Timedelta) - assert result == expected - - result = td.to_timedelta64() - td - assert isinstance(result, Timedelta) - assert result == expected - - def test_td_sub_nat(self): - # In this context pd.NaT is treated as timedelta-like - td = Timedelta(10, unit="d") - result = td - NaT - assert result is NaT - - def test_td_sub_td64_nat(self): - td = Timedelta(10, unit="d") - td_nat = np.timedelta64("NaT") - - result = td - td_nat - assert result is NaT - - result = td_nat - td - assert result is NaT - - def test_td_sub_offset(self): - td = Timedelta(10, unit="d") - result = td - offsets.Hour(1) - assert isinstance(result, Timedelta) - assert result == Timedelta(239, unit="h") - - def test_td_add_sub_numeric_raises(self): - td = Timedelta(10, unit="d") - msg = "unsupported operand type" - for other in [2, 2.0, np.int64(2), np.float64(2)]: - with pytest.raises(TypeError, match=msg): - td + other - with pytest.raises(TypeError, match=msg): - other + td - with pytest.raises(TypeError, match=msg): - td - other - with pytest.raises(TypeError, match=msg): - other - td - - def 
test_td_add_sub_int_ndarray(self): - td = Timedelta("1 day") - other = np.array([1]) - - msg = r"unsupported operand type\(s\) for \+: 'Timedelta' and 'int'" - with pytest.raises(TypeError, match=msg): - td + np.array([1]) - - msg = "|".join( - [ - ( - r"unsupported operand type\(s\) for \+: 'numpy.ndarray' " - "and 'Timedelta'" - ), - # This message goes on to say "Please do not rely on this error; - # it may not be given on all Python implementations" - "Concatenation operation is not implemented for NumPy arrays", - ] - ) - with pytest.raises(TypeError, match=msg): - other + td - msg = r"unsupported operand type\(s\) for -: 'Timedelta' and 'int'" - with pytest.raises(TypeError, match=msg): - td - other - msg = r"unsupported operand type\(s\) for -: 'numpy.ndarray' and 'Timedelta'" - with pytest.raises(TypeError, match=msg): - other - td - - def test_td_rsub_nat(self): - td = Timedelta(10, unit="d") - result = NaT - td - assert result is NaT - - result = np.datetime64("NaT") - td - assert result is NaT - - def test_td_rsub_offset(self): - result = offsets.Hour(1) - Timedelta(10, unit="d") - assert isinstance(result, Timedelta) - assert result == Timedelta(-239, unit="h") - - def test_td_sub_timedeltalike_object_dtype_array(self): - # GH#21980 - arr = np.array([Timestamp("20130101 9:01"), Timestamp("20121230 9:02")]) - exp = np.array([Timestamp("20121231 9:01"), Timestamp("20121229 9:02")]) - res = arr - Timedelta("1D") - tm.assert_numpy_array_equal(res, exp) - - def test_td_sub_mixed_most_timedeltalike_object_dtype_array(self): - # GH#21980 - now = Timestamp("2021-11-09 09:54:00") - arr = np.array([now, Timedelta("1D"), np.timedelta64(2, "h")]) - exp = np.array( - [ - now - Timedelta("1D"), - Timedelta("0D"), - np.timedelta64(2, "h") - Timedelta("1D"), - ] - ) - res = arr - Timedelta("1D") - tm.assert_numpy_array_equal(res, exp) - - def test_td_rsub_mixed_most_timedeltalike_object_dtype_array(self): - # GH#21980 - now = Timestamp("2021-11-09 09:54:00") - arr = np.array([now, Timedelta("1D"), np.timedelta64(2, "h")]) - msg = r"unsupported operand type\(s\) for \-: 'Timedelta' and 'Timestamp'" - with pytest.raises(TypeError, match=msg): - Timedelta("1D") - arr - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_timedeltalike_object_dtype_array(self, op): - # GH#21980 - arr = np.array([Timestamp("20130101 9:01"), Timestamp("20121230 9:02")]) - exp = np.array([Timestamp("20130102 9:01"), Timestamp("20121231 9:02")]) - res = op(arr, Timedelta("1D")) - tm.assert_numpy_array_equal(res, exp) - - @pytest.mark.parametrize("op", [operator.add, ops.radd]) - def test_td_add_mixed_timedeltalike_object_dtype_array(self, op): - # GH#21980 - now = Timestamp("2021-11-09 09:54:00") - arr = np.array([now, Timedelta("1D")]) - exp = np.array([now + Timedelta("1D"), Timedelta("2D")]) - res = op(arr, Timedelta("1D")) - tm.assert_numpy_array_equal(res, exp) - - def test_td_add_sub_td64_ndarray(self): - td = Timedelta("1 day") - - other = np.array([td.to_timedelta64()]) - expected = np.array([Timedelta("2 Days").to_timedelta64()]) - - result = td + other - tm.assert_numpy_array_equal(result, expected) - result = other + td - tm.assert_numpy_array_equal(result, expected) - - result = td - other - tm.assert_numpy_array_equal(result, expected * 0) - result = other - td - tm.assert_numpy_array_equal(result, expected * 0) - - def test_td_add_sub_dt64_ndarray(self): - td = Timedelta("1 day") - other = pd.to_datetime(["2000-01-01"]).values - - expected = pd.to_datetime(["2000-01-02"]).values - 
tm.assert_numpy_array_equal(td + other, expected) - tm.assert_numpy_array_equal(other + td, expected) - - expected = pd.to_datetime(["1999-12-31"]).values - tm.assert_numpy_array_equal(-td + other, expected) - tm.assert_numpy_array_equal(other - td, expected) - - def test_td_add_sub_ndarray_0d(self): - td = Timedelta("1 day") - other = np.array(td.asm8) - - result = td + other - assert isinstance(result, Timedelta) - assert result == 2 * td - - result = other + td - assert isinstance(result, Timedelta) - assert result == 2 * td - - result = other - td - assert isinstance(result, Timedelta) - assert result == 0 * td - - result = td - other - assert isinstance(result, Timedelta) - assert result == 0 * td - - -class TestTimedeltaMultiplicationDivision: - """ - Tests for Timedelta methods: - - __mul__, __rmul__, - __div__, __rdiv__, - __truediv__, __rtruediv__, - __floordiv__, __rfloordiv__, - __mod__, __rmod__, - __divmod__, __rdivmod__ - """ - - # --------------------------------------------------------------- - # Timedelta.__mul__, __rmul__ - - @pytest.mark.parametrize( - "td_nat", [NaT, np.timedelta64("NaT", "ns"), np.timedelta64("NaT")] - ) - @pytest.mark.parametrize("op", [operator.mul, ops.rmul]) - def test_td_mul_nat(self, op, td_nat): - # GH#19819 - td = Timedelta(10, unit="d") - typs = "|".join(["numpy.timedelta64", "NaTType", "Timedelta"]) - msg = "|".join( - [ - rf"unsupported operand type\(s\) for \*: '{typs}' and '{typs}'", - r"ufunc '?multiply'? cannot use operands with types", - ] - ) - with pytest.raises(TypeError, match=msg): - op(td, td_nat) - - @pytest.mark.parametrize("nan", [np.nan, np.float64("NaN"), float("nan")]) - @pytest.mark.parametrize("op", [operator.mul, ops.rmul]) - def test_td_mul_nan(self, op, nan): - # np.float64('NaN') has a 'dtype' attr, avoid treating as array - td = Timedelta(10, unit="d") - result = op(td, nan) - assert result is NaT - - @pytest.mark.parametrize("op", [operator.mul, ops.rmul]) - def test_td_mul_scalar(self, op): - # GH#19738 - td = Timedelta(minutes=3) - - result = op(td, 2) - assert result == Timedelta(minutes=6) - - result = op(td, 1.5) - assert result == Timedelta(minutes=4, seconds=30) - - assert op(td, np.nan) is NaT - - assert op(-1, td)._value == -1 * td._value - assert op(-1.0, td)._value == -1.0 * td._value - - msg = "unsupported operand type" - with pytest.raises(TypeError, match=msg): - # timedelta * datetime is gibberish - op(td, Timestamp(2016, 1, 2)) - - with pytest.raises(TypeError, match=msg): - # invalid multiply with another timedelta - op(td, td) - - def test_td_mul_numeric_ndarray(self): - td = Timedelta("1 day") - other = np.array([2]) - expected = np.array([Timedelta("2 Days").to_timedelta64()]) - - result = td * other - tm.assert_numpy_array_equal(result, expected) - - result = other * td - tm.assert_numpy_array_equal(result, expected) - - def test_td_mul_numeric_ndarray_0d(self): - td = Timedelta("1 day") - other = np.array(2) - assert other.ndim == 0 - expected = Timedelta("2 days") - - res = td * other - assert type(res) is Timedelta - assert res == expected - - res = other * td - assert type(res) is Timedelta - assert res == expected - - def test_td_mul_td64_ndarray_invalid(self): - td = Timedelta("1 day") - other = np.array([Timedelta("2 Days").to_timedelta64()]) - - msg = ( - "ufunc '?multiply'? 
cannot use operands with types " - rf"dtype\('{tm.ENDIAN}m8\[ns\]'\) and dtype\('{tm.ENDIAN}m8\[ns\]'\)" - ) - with pytest.raises(TypeError, match=msg): - td * other - with pytest.raises(TypeError, match=msg): - other * td - - # --------------------------------------------------------------- - # Timedelta.__div__, __truediv__ - - def test_td_div_timedeltalike_scalar(self): - # GH#19738 - td = Timedelta(10, unit="d") - - result = td / offsets.Hour(1) - assert result == 240 - - assert td / td == 1 - assert td / np.timedelta64(60, "h") == 4 - - assert np.isnan(td / NaT) - - def test_td_div_td64_non_nano(self): - # truediv - td = Timedelta("1 days 2 hours 3 ns") - result = td / np.timedelta64(1, "D") - assert result == td._value / (86400 * 10**9) - result = td / np.timedelta64(1, "s") - assert result == td._value / 10**9 - result = td / np.timedelta64(1, "ns") - assert result == td._value - - # floordiv - td = Timedelta("1 days 2 hours 3 ns") - result = td // np.timedelta64(1, "D") - assert result == 1 - result = td // np.timedelta64(1, "s") - assert result == 93600 - result = td // np.timedelta64(1, "ns") - assert result == td._value - - def test_td_div_numeric_scalar(self): - # GH#19738 - td = Timedelta(10, unit="d") - - result = td / 2 - assert isinstance(result, Timedelta) - assert result == Timedelta(days=5) - - result = td / 5 - assert isinstance(result, Timedelta) - assert result == Timedelta(days=2) - - @pytest.mark.parametrize( - "nan", - [ - np.nan, - np.float64("NaN"), - float("nan"), - ], - ) - def test_td_div_nan(self, nan): - # np.float64('NaN') has a 'dtype' attr, avoid treating as array - td = Timedelta(10, unit="d") - result = td / nan - assert result is NaT - - result = td // nan - assert result is NaT - - def test_td_div_td64_ndarray(self): - td = Timedelta("1 day") - - other = np.array([Timedelta("2 Days").to_timedelta64()]) - expected = np.array([0.5]) - - result = td / other - tm.assert_numpy_array_equal(result, expected) - - result = other / td - tm.assert_numpy_array_equal(result, expected * 4) - - def test_td_div_ndarray_0d(self): - td = Timedelta("1 day") - - other = np.array(1) - res = td / other - assert isinstance(res, Timedelta) - assert res == td - - # --------------------------------------------------------------- - # Timedelta.__rdiv__ - - def test_td_rdiv_timedeltalike_scalar(self): - # GH#19738 - td = Timedelta(10, unit="d") - result = offsets.Hour(1) / td - assert result == 1 / 240.0 - - assert np.timedelta64(60, "h") / td == 0.25 - - def test_td_rdiv_na_scalar(self): - # GH#31869 None gets cast to NaT - td = Timedelta(10, unit="d") - - result = NaT / td - assert np.isnan(result) - - result = None / td - assert np.isnan(result) - - result = np.timedelta64("NaT") / td - assert np.isnan(result) - - msg = r"unsupported operand type\(s\) for /: 'numpy.datetime64' and 'Timedelta'" - with pytest.raises(TypeError, match=msg): - np.datetime64("NaT") / td - - msg = r"unsupported operand type\(s\) for /: 'float' and 'Timedelta'" - with pytest.raises(TypeError, match=msg): - np.nan / td - - def test_td_rdiv_ndarray(self): - td = Timedelta(10, unit="d") - - arr = np.array([td], dtype=object) - result = arr / td - expected = np.array([1], dtype=np.float64) - tm.assert_numpy_array_equal(result, expected) - - arr = np.array([None]) - result = arr / td - expected = np.array([np.nan]) - tm.assert_numpy_array_equal(result, expected) - - arr = np.array([np.nan], dtype=object) - msg = r"unsupported operand type\(s\) for /: 'float' and 'Timedelta'" - with 
pytest.raises(TypeError, match=msg): - arr / td - - arr = np.array([np.nan], dtype=np.float64) - msg = "cannot use operands with types dtype" - with pytest.raises(TypeError, match=msg): - arr / td - - def test_td_rdiv_ndarray_0d(self): - td = Timedelta(10, unit="d") - - arr = np.array(td.asm8) - - assert arr / td == 1 - - # --------------------------------------------------------------- - # Timedelta.__floordiv__ - - def test_td_floordiv_timedeltalike_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=4) - scalar = Timedelta(hours=3, minutes=3) - - assert td // scalar == 1 - assert -td // scalar.to_pytimedelta() == -2 - assert (2 * td) // scalar.to_timedelta64() == 2 - - def test_td_floordiv_null_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=4) - - assert td // np.nan is NaT - assert np.isnan(td // NaT) - assert np.isnan(td // np.timedelta64("NaT")) - - def test_td_floordiv_offsets(self): - # GH#19738 - td = Timedelta(hours=3, minutes=4) - assert td // offsets.Hour(1) == 3 - assert td // offsets.Minute(2) == 92 - - def test_td_floordiv_invalid_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=4) - - msg = "|".join( - [ - r"Invalid dtype datetime64\[D\] for __floordiv__", - "'dtype' is an invalid keyword argument for this function", - r"ufunc '?floor_divide'? cannot use operands with types", - ] - ) - with pytest.raises(TypeError, match=msg): - td // np.datetime64("2016-01-01", dtype="datetime64[us]") - - def test_td_floordiv_numeric_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=4) - - expected = Timedelta(hours=1, minutes=32) - assert td // 2 == expected - assert td // 2.0 == expected - assert td // np.float64(2.0) == expected - assert td // np.int32(2.0) == expected - assert td // np.uint8(2.0) == expected - - def test_td_floordiv_timedeltalike_array(self): - # GH#18846 - td = Timedelta(hours=3, minutes=4) - scalar = Timedelta(hours=3, minutes=3) - - # Array-like others - assert td // np.array(scalar.to_timedelta64()) == 1 - - res = (3 * td) // np.array([scalar.to_timedelta64()]) - expected = np.array([3], dtype=np.int64) - tm.assert_numpy_array_equal(res, expected) - - res = (10 * td) // np.array([scalar.to_timedelta64(), np.timedelta64("NaT")]) - expected = np.array([10, np.nan]) - tm.assert_numpy_array_equal(res, expected) - - def test_td_floordiv_numeric_series(self): - # GH#18846 - td = Timedelta(hours=3, minutes=4) - ser = pd.Series([1], dtype=np.int64) - res = td // ser - assert res.dtype.kind == "m" - - # --------------------------------------------------------------- - # Timedelta.__rfloordiv__ - - def test_td_rfloordiv_timedeltalike_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=3) - scalar = Timedelta(hours=3, minutes=4) - - # scalar others - # x // Timedelta is defined only for timedelta-like x. int-like, - # float-like, and date-like, in particular, should all either - # a) raise TypeError directly or - # b) return NotImplemented, following which the reversed - # operation will raise TypeError. 
- assert td.__rfloordiv__(scalar) == 1 - assert (-td).__rfloordiv__(scalar.to_pytimedelta()) == -2 - assert (2 * td).__rfloordiv__(scalar.to_timedelta64()) == 0 - - def test_td_rfloordiv_null_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=3) - - assert np.isnan(td.__rfloordiv__(NaT)) - assert np.isnan(td.__rfloordiv__(np.timedelta64("NaT"))) - - def test_td_rfloordiv_offsets(self): - # GH#19738 - assert offsets.Hour(1) // Timedelta(minutes=25) == 2 - - def test_td_rfloordiv_invalid_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=3) - - dt64 = np.datetime64("2016-01-01", "us") - - assert td.__rfloordiv__(dt64) is NotImplemented - - msg = ( - r"unsupported operand type\(s\) for //: 'numpy.datetime64' and 'Timedelta'" - ) - with pytest.raises(TypeError, match=msg): - dt64 // td - - def test_td_rfloordiv_numeric_scalar(self): - # GH#18846 - td = Timedelta(hours=3, minutes=3) - - assert td.__rfloordiv__(np.nan) is NotImplemented - assert td.__rfloordiv__(3.5) is NotImplemented - assert td.__rfloordiv__(2) is NotImplemented - assert td.__rfloordiv__(np.float64(2.0)) is NotImplemented - assert td.__rfloordiv__(np.uint8(9)) is NotImplemented - assert td.__rfloordiv__(np.int32(2.0)) is NotImplemented - - msg = r"unsupported operand type\(s\) for //: '.*' and 'Timedelta" - with pytest.raises(TypeError, match=msg): - np.float64(2.0) // td - with pytest.raises(TypeError, match=msg): - np.uint8(9) // td - with pytest.raises(TypeError, match=msg): - # deprecated GH#19761, enforced GH#29797 - np.int32(2.0) // td - - def test_td_rfloordiv_timedeltalike_array(self): - # GH#18846 - td = Timedelta(hours=3, minutes=3) - scalar = Timedelta(hours=3, minutes=4) - - # Array-like others - assert td.__rfloordiv__(np.array(scalar.to_timedelta64())) == 1 - - res = td.__rfloordiv__(np.array([(3 * scalar).to_timedelta64()])) - expected = np.array([3], dtype=np.int64) - tm.assert_numpy_array_equal(res, expected) - - arr = np.array([(10 * scalar).to_timedelta64(), np.timedelta64("NaT")]) - res = td.__rfloordiv__(arr) - expected = np.array([10, np.nan]) - tm.assert_numpy_array_equal(res, expected) - - def test_td_rfloordiv_intarray(self): - # deprecated GH#19761, enforced GH#29797 - ints = np.array([1349654400, 1349740800, 1349827200, 1349913600]) * 10**9 - - msg = "Invalid dtype" - with pytest.raises(TypeError, match=msg): - ints // Timedelta(1, unit="s") - - def test_td_rfloordiv_numeric_series(self): - # GH#18846 - td = Timedelta(hours=3, minutes=3) - ser = pd.Series([1], dtype=np.int64) - res = td.__rfloordiv__(ser) - assert res is NotImplemented - - msg = "Invalid dtype" - with pytest.raises(TypeError, match=msg): - # Deprecated GH#19761, enforced GH#29797 - ser // td - - # ---------------------------------------------------------------- - # Timedelta.__mod__, __rmod__ - - def test_mod_timedeltalike(self): - # GH#19365 - td = Timedelta(hours=37) - - # Timedelta-like others - result = td % Timedelta(hours=6) - assert isinstance(result, Timedelta) - assert result == Timedelta(hours=1) - - result = td % timedelta(minutes=60) - assert isinstance(result, Timedelta) - assert result == Timedelta(0) - - result = td % NaT - assert result is NaT - - def test_mod_timedelta64_nat(self): - # GH#19365 - td = Timedelta(hours=37) - - result = td % np.timedelta64("NaT", "ns") - assert result is NaT - - def test_mod_timedelta64(self): - # GH#19365 - td = Timedelta(hours=37) - - result = td % np.timedelta64(2, "h") - assert isinstance(result, Timedelta) - assert result == Timedelta(hours=1) - - def 
test_mod_offset(self): - # GH#19365 - td = Timedelta(hours=37) - - result = td % offsets.Hour(5) - assert isinstance(result, Timedelta) - assert result == Timedelta(hours=2) - - def test_mod_numeric(self): - # GH#19365 - td = Timedelta(hours=37) - - # Numeric Others - result = td % 2 - assert isinstance(result, Timedelta) - assert result == Timedelta(0) - - result = td % 1e12 - assert isinstance(result, Timedelta) - assert result == Timedelta(minutes=3, seconds=20) - - result = td % int(1e12) - assert isinstance(result, Timedelta) - assert result == Timedelta(minutes=3, seconds=20) - - def test_mod_invalid(self): - # GH#19365 - td = Timedelta(hours=37) - msg = "unsupported operand type" - with pytest.raises(TypeError, match=msg): - td % Timestamp("2018-01-22") - - with pytest.raises(TypeError, match=msg): - td % [] - - def test_rmod_pytimedelta(self): - # GH#19365 - td = Timedelta(minutes=3) - - result = timedelta(minutes=4) % td - assert isinstance(result, Timedelta) - assert result == Timedelta(minutes=1) - - def test_rmod_timedelta64(self): - # GH#19365 - td = Timedelta(minutes=3) - result = np.timedelta64(5, "m") % td - assert isinstance(result, Timedelta) - assert result == Timedelta(minutes=2) - - def test_rmod_invalid(self): - # GH#19365 - td = Timedelta(minutes=3) - - msg = "unsupported operand" - with pytest.raises(TypeError, match=msg): - Timestamp("2018-01-22") % td - - with pytest.raises(TypeError, match=msg): - 15 % td - - with pytest.raises(TypeError, match=msg): - 16.0 % td - - msg = "Invalid dtype int" - with pytest.raises(TypeError, match=msg): - np.array([22, 24]) % td - - # ---------------------------------------------------------------- - # Timedelta.__divmod__, __rdivmod__ - - def test_divmod_numeric(self): - # GH#19365 - td = Timedelta(days=2, hours=6) - - result = divmod(td, 53 * 3600 * 1e9) - assert result[0] == Timedelta(1, unit="ns") - assert isinstance(result[1], Timedelta) - assert result[1] == Timedelta(hours=1) - - assert result - result = divmod(td, np.nan) - assert result[0] is NaT - assert result[1] is NaT - - def test_divmod(self): - # GH#19365 - td = Timedelta(days=2, hours=6) - - result = divmod(td, timedelta(days=1)) - assert result[0] == 2 - assert isinstance(result[1], Timedelta) - assert result[1] == Timedelta(hours=6) - - result = divmod(td, 54) - assert result[0] == Timedelta(hours=1) - assert isinstance(result[1], Timedelta) - assert result[1] == Timedelta(0) - - result = divmod(td, NaT) - assert np.isnan(result[0]) - assert result[1] is NaT - - def test_divmod_offset(self): - # GH#19365 - td = Timedelta(days=2, hours=6) - - result = divmod(td, offsets.Hour(-4)) - assert result[0] == -14 - assert isinstance(result[1], Timedelta) - assert result[1] == Timedelta(hours=-2) - - def test_divmod_invalid(self): - # GH#19365 - td = Timedelta(days=2, hours=6) - - msg = r"unsupported operand type\(s\) for //: 'Timedelta' and 'Timestamp'" - with pytest.raises(TypeError, match=msg): - divmod(td, Timestamp("2018-01-22")) - - def test_rdivmod_pytimedelta(self): - # GH#19365 - result = divmod(timedelta(days=2, hours=6), Timedelta(days=1)) - assert result[0] == 2 - assert isinstance(result[1], Timedelta) - assert result[1] == Timedelta(hours=6) - - def test_rdivmod_offset(self): - result = divmod(offsets.Hour(54), Timedelta(hours=-4)) - assert result[0] == -14 - assert isinstance(result[1], Timedelta) - assert result[1] == Timedelta(hours=-2) - - def test_rdivmod_invalid(self): - # GH#19365 - td = Timedelta(minutes=3) - msg = "unsupported operand type" - - with 
pytest.raises(TypeError, match=msg): - divmod(Timestamp("2018-01-22"), td) - - with pytest.raises(TypeError, match=msg): - divmod(15, td) - - with pytest.raises(TypeError, match=msg): - divmod(16.0, td) - - msg = "Invalid dtype int" - with pytest.raises(TypeError, match=msg): - divmod(np.array([22, 24]), td) - - # ---------------------------------------------------------------- - - @pytest.mark.parametrize( - "op", [operator.mul, ops.rmul, operator.truediv, ops.rdiv, ops.rsub] - ) - @pytest.mark.parametrize( - "arr", - [ - np.array([Timestamp("20130101 9:01"), Timestamp("20121230 9:02")]), - np.array([Timestamp("2021-11-09 09:54:00"), Timedelta("1D")]), - ], - ) - def test_td_op_timedelta_timedeltalike_array(self, op, arr): - msg = "unsupported operand type|cannot use operands with types" - with pytest.raises(TypeError, match=msg): - op(arr, Timedelta("1D")) - - -class TestTimedeltaComparison: - def test_compare_pytimedelta_bounds(self): - # GH#49021 don't overflow on comparison with very large pytimedeltas - - for unit in ["ns", "us"]: - tdmax = Timedelta.max.as_unit(unit).max - tdmin = Timedelta.min.as_unit(unit).min - - assert tdmax < timedelta.max - assert tdmax <= timedelta.max - assert not tdmax > timedelta.max - assert not tdmax >= timedelta.max - assert tdmax != timedelta.max - assert not tdmax == timedelta.max - - assert tdmin > timedelta.min - assert tdmin >= timedelta.min - assert not tdmin < timedelta.min - assert not tdmin <= timedelta.min - assert tdmin != timedelta.min - assert not tdmin == timedelta.min - - # But the "ms" and "s"-reso bounds extend pass pytimedelta - for unit in ["ms", "s"]: - tdmax = Timedelta.max.as_unit(unit).max - tdmin = Timedelta.min.as_unit(unit).min - - assert tdmax > timedelta.max - assert tdmax >= timedelta.max - assert not tdmax < timedelta.max - assert not tdmax <= timedelta.max - assert tdmax != timedelta.max - assert not tdmax == timedelta.max - - assert tdmin < timedelta.min - assert tdmin <= timedelta.min - assert not tdmin > timedelta.min - assert not tdmin >= timedelta.min - assert tdmin != timedelta.min - assert not tdmin == timedelta.min - - def test_compare_pytimedelta_bounds2(self): - # a pytimedelta outside the microsecond bounds - pytd = timedelta(days=999999999, seconds=86399) - # NB: np.timedelta64(td, "s"") incorrectly overflows - td64 = np.timedelta64(pytd.days, "D") + np.timedelta64(pytd.seconds, "s") - td = Timedelta(td64) - assert td.days == pytd.days - assert td.seconds == pytd.seconds - - assert td == pytd - assert not td != pytd - assert not td < pytd - assert not td > pytd - assert td <= pytd - assert td >= pytd - - td2 = td - Timedelta(seconds=1).as_unit("s") - assert td2 != pytd - assert not td2 == pytd - assert td2 < pytd - assert td2 <= pytd - assert not td2 > pytd - assert not td2 >= pytd - - def test_compare_tick(self, tick_classes): - cls = tick_classes - - off = cls(4) - td = off.delta - assert isinstance(td, Timedelta) - - assert td == off - assert not td != off - assert td <= off - assert td >= off - assert not td < off - assert not td > off - - assert not td == 2 * off - assert td != 2 * off - assert td <= 2 * off - assert td < 2 * off - assert not td >= 2 * off - assert not td > 2 * off - - def test_comparison_object_array(self): - # analogous to GH#15183 - td = Timedelta("2 days") - other = Timedelta("3 hours") - - arr = np.array([other, td], dtype=object) - res = arr == td - expected = np.array([False, True], dtype=bool) - assert (res == expected).all() - - # 2D case - arr = np.array([[other, td], [td, 
other]], dtype=object) - res = arr != td - expected = np.array([[True, False], [False, True]], dtype=bool) - assert res.shape == expected.shape - assert (res == expected).all() - - def test_compare_timedelta_ndarray(self): - # GH#11835 - periods = [Timedelta("0 days 01:00:00"), Timedelta("0 days 01:00:00")] - arr = np.array(periods) - result = arr[0] > arr - expected = np.array([False, False]) - tm.assert_numpy_array_equal(result, expected) - - def test_compare_td64_ndarray(self): - # GG#33441 - arr = np.arange(5).astype("timedelta64[ns]") - td = Timedelta(arr[1]) - - expected = np.array([False, True, False, False, False], dtype=bool) - - result = td == arr - tm.assert_numpy_array_equal(result, expected) - - result = arr == td - tm.assert_numpy_array_equal(result, expected) - - result = td != arr - tm.assert_numpy_array_equal(result, ~expected) - - result = arr != td - tm.assert_numpy_array_equal(result, ~expected) - - def test_compare_custom_object(self): - """ - Make sure non supported operations on Timedelta returns NonImplemented - and yields to other operand (GH#20829). - """ - - class CustomClass: - def __init__(self, cmp_result=None) -> None: - self.cmp_result = cmp_result - - def generic_result(self): - if self.cmp_result is None: - return NotImplemented - else: - return self.cmp_result - - def __eq__(self, other): - return self.generic_result() - - def __gt__(self, other): - return self.generic_result() - - t = Timedelta("1s") - - assert t != "string" - assert t != 1 - assert t != CustomClass() - assert t != CustomClass(cmp_result=False) - - assert t < CustomClass(cmp_result=True) - assert not t < CustomClass(cmp_result=False) - - assert t == CustomClass(cmp_result=True) - - @pytest.mark.parametrize("val", ["string", 1]) - def test_compare_unknown_type(self, val): - # GH#20829 - t = Timedelta("1s") - msg = "not supported between instances of 'Timedelta' and '(int|str)'" - with pytest.raises(TypeError, match=msg): - t >= val - with pytest.raises(TypeError, match=msg): - t > val - with pytest.raises(TypeError, match=msg): - t <= val - with pytest.raises(TypeError, match=msg): - t < val - - -def test_ops_notimplemented(): - class Other: - pass - - other = Other() - - td = Timedelta("1 day") - assert td.__add__(other) is NotImplemented - assert td.__sub__(other) is NotImplemented - assert td.__truediv__(other) is NotImplemented - assert td.__mul__(other) is NotImplemented - assert td.__floordiv__(other) is NotImplemented - - -def test_ops_error_str(): - # GH#13624 - td = Timedelta("1 day") - - for left, right in [(td, "a"), ("a", td)]: - msg = "|".join( - [ - "unsupported operand type", - r'can only concatenate str \(not "Timedelta"\) to str', - "must be str, not Timedelta", - ] - ) - with pytest.raises(TypeError, match=msg): - left + right - - msg = "not supported between instances of" - with pytest.raises(TypeError, match=msg): - left > right - - assert not left == right # pylint: disable=unneeded-not - assert left != right diff --git a/spaces/pyodide-demo/self-hosted/cytoolz.js b/spaces/pyodide-demo/self-hosted/cytoolz.js deleted file mode 100644 index e9545319630a2342afc6d496215bdea6a0727bf0..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/cytoolz.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof 
window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="cytoolz.data";var REMOTE_PACKAGE_BASE="cytoolz.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","cytoolz",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/cytoolz","curried",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","cytoolz-0.11.2-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:458562,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,881,1659,2848,4248,5460,6496,7295,8472,9759,10967,12182,13142,14474,15506,16561,17554,18875,20010,20978,22e3,23255,24528,25558,26887,28134,29033,30034,31318,32319,33316,34582,35753,36928,38193,39357,40415,41490,42712,43858,45041,45510,46449,47268,48026,48847,50129,51215,52439,53789,55121,55886,56638,57354,58531,59782,61192,62244,63273,64598,65524,66311,67650,68576,69126,69670,70846,72164,73522,74703,76111,77314,78710,80090,81571,83008,84421,85747,87127,88303,89669,91096,92607,94115,95609,96869,97606,98648,100147,101705,102701,103972,105143,105983,106937,108122,109165,110175,111369,112477,113388,114702,116131,117363,118461,119469,120480,121499,122679,123877,125075,125925,126661,127415,128456,129414,130310,131120,132059,132697,133325,133897,134815,136154,137475,138852,140292,141788,143176,144294,145361,146794,148302,149708,151085,152575,153733,154867,156392,157634,159150,160697,161937,163310,164725,166153,167623,169005,170523,171846,173136,174657,176132,177401,178632,180051,181299,182802,184238,185503,187008,188191,189663,190863,192098,193533,194859,196121,197556,198768,200048,200918,202136,203615,205069,206408,207823,209328,210709,212046,213221,214473,215948,217159,218194,219235,220297,221476,222600,223629,224270,225166,226214,227402,228810,229010,230186,231450,232936,234120,235270,236416,237436,238449,239466,240473,241485,242635,243784,244921,246017,247140,248383,249320,250162,250836,251510,252170,252870,254005,254828,255635,256252,256845,257416,257942,258517,259058,259558,260103,260615,261150,261934,262506,263014,263523,264038,264553,265255,265977,266498,267489,268233,268942,269629,270334,271087,271771,272486,273074,273755,275001,276503,277836,279150,280514,281856,283225,284631,286056,287552,288855,290353,291750,293057,294221,295277,296298,297591,299090,300536,302028,303530,304992,306367,307528,308618,309945,310992,312538,313992,315390,316702,318059
,319493,320874,322236,323554,324904,325979,327353,328365,329483,330816,332177,333501,334890,336153,337636,338960,340468,341888,343484,344798,346226,347691,349175,350513,352042,353444,354353,355753,356918,358150,359447,360712,361938,362490,363039,364244,365591,367032,368287,369762,371194,372543,373344,374830,376028,377339,378875,380271,381805,383195,384524,385793,387181,388287,389472,390312,391340,392310,393131,394309,395103,395980,396556,396950,397895,399182,400355,401453,402594,403653,404303,404869,405196,405704,406422,407137,407776,409057,410092,411450,412698,414060,415460,415901,415926,417078,418424,419655,420945,422120,422521,423567,425047,426398,427570,428932,430540,432124,433523,434718,435774,436827,438180,439444,440782,441948,443464,444864,445983,447409,448670,450036,451498,452681,453833,454616,455737,457239,458410],sizes:[881,778,1189,1400,1212,1036,799,1177,1287,1208,1215,960,1332,1032,1055,993,1321,1135,968,1022,1255,1273,1030,1329,1247,899,1001,1284,1001,997,1266,1171,1175,1265,1164,1058,1075,1222,1146,1183,469,939,819,758,821,1282,1086,1224,1350,1332,765,752,716,1177,1251,1410,1052,1029,1325,926,787,1339,926,550,544,1176,1318,1358,1181,1408,1203,1396,1380,1481,1437,1413,1326,1380,1176,1366,1427,1511,1508,1494,1260,737,1042,1499,1558,996,1271,1171,840,954,1185,1043,1010,1194,1108,911,1314,1429,1232,1098,1008,1011,1019,1180,1198,1198,850,736,754,1041,958,896,810,939,638,628,572,918,1339,1321,1377,1440,1496,1388,1118,1067,1433,1508,1406,1377,1490,1158,1134,1525,1242,1516,1547,1240,1373,1415,1428,1470,1382,1518,1323,1290,1521,1475,1269,1231,1419,1248,1503,1436,1265,1505,1183,1472,1200,1235,1435,1326,1262,1435,1212,1280,870,1218,1479,1454,1339,1415,1505,1381,1337,1175,1252,1475,1211,1035,1041,1062,1179,1124,1029,641,896,1048,1188,1408,200,1176,1264,1486,1184,1150,1146,1020,1013,1017,1007,1012,1150,1149,1137,1096,1123,1243,937,842,674,674,660,700,1135,823,807,617,593,571,526,575,541,500,545,512,535,784,572,508,509,515,515,702,722,521,991,744,709,687,705,753,684,715,588,681,1246,1502,1333,1314,1364,1342,1369,1406,1425,1496,1303,1498,1397,1307,1164,1056,1021,1293,1499,1446,1492,1502,1462,1375,1161,1090,1327,1047,1546,1454,1398,1312,1357,1434,1381,1362,1318,1350,1075,1374,1012,1118,1333,1361,1324,1389,1263,1483,1324,1508,1420,1596,1314,1428,1465,1484,1338,1529,1402,909,1400,1165,1232,1297,1265,1226,552,549,1205,1347,1441,1255,1475,1432,1349,801,1486,1198,1311,1536,1396,1534,1390,1329,1269,1388,1106,1185,840,1028,970,821,1178,794,877,576,394,945,1287,1173,1098,1141,1059,650,566,327,508,718,715,639,1281,1035,1358,1248,1362,1400,441,25,1152,1346,1231,1290,1175,401,1046,1480,1351,1172,1362,1608,1584,1399,1195,1056,1053,1353,1264,1338,1166,1516,1400,1119,1426,1261,1366,1462,1183,1152,783,1121,1502,1171,152],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_cytoolz.data")}Module["addRunDependency"]("datafile_cytoolz.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/cytoolz/__init__.py",start:0,end:471,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/_signatures.py",start:471,end:4827,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/_version.py",start:4827,end:4879,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/compatibility.py",start:4879,end:5876,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/dicttoolz.pyx",start:5876,end:21307,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/functoolz.pyx",start:21307,end:46342,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/itertoolz.pyx",start:46342,end:97689,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/recipes.pyx",start:97689,end:99289,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/utils.pyx",start:99289,end:100642,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/__init__.pxd",start:100642,end:101392,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/cpython.pxd",start:101392,end:101889,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/dicttoolz.pxd",start:101889,end:103257,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/functoolz.pxd",start:103257,end:104509,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/itertoolz.pxd",start:104509,end:109204,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/recipes.pxd",start:109204,end:109304,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/utils.pxd",start:109304,end:109337,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/dicttoolz.so",start:109337,end:203679,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/functoolz.so",start:203679,end:399528,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/itertoolz.so",start:399528,end:753876,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/recipes.so",start:753876,end:787172,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/utils.so",start:787172,end:813966,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/curried/__init__.py",start:813966,end:816850,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/curried/exceptions.py",start:816850,end:817200,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/curried/operator.py",start:817200,end:817702,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz-0.11.2-py3.9.egg-info/PKG-INFO",start:817702,end:822188,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz-0.11.2-py3.9.egg-info/SOURCES.txt",start:822188,end:823542,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz-0.11.2-py3.9.egg-info/dependency_links.txt",start:823542,end:823543,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz-0.11.2-py3.9.egg-info/not-zip-safe",start:823543,end:823544,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz-0.11.2-py3.9.egg-info/requires.txt",start:823544,end:823574,audio:0},{filename:"/lib/python3.9/site-pac
kages/cytoolz-0.11.2-py3.9.egg-info/top_level.txt",start:823574,end:823582,audio:0}],remote_package_size:462658,package_uuid:"3c74a85b-3f05-4696-801b-2f4bdb55967f"})})(); \ No newline at end of file diff --git a/spaces/pyodide-demo/self-hosted/optlang.js b/spaces/pyodide-demo/self-hosted/optlang.js deleted file mode 100644 index 9e0fa91bed7f4009341434a4e9279538aa74c980..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/optlang.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="optlang.data";var REMOTE_PACKAGE_BASE="optlang.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","optlang",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","optlang-1.5.2-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:169200,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1125,2498,3795,4788,5800,6804,7927,8874,10232,11321,11992,13067,14100,15005,16001,17153,18613,19774,20686,21749,22946,23551,24751,25744,26779,27669,28762,29801,30876,31563,32585,33436,34412,35328,36436,37490,38362,39435,40398,41578,43068,44256,45147,46429,47775,48796,49922,51346,52449,53765,55238,56434,57545,58478,59507,60606,61562,62675,63449,64556,65534,66761,67716,68641,69645,70464,71344,72817,73900,74765,75771,76781,77674,78709,79659,80647,81460,82338,83207,84163,85141,86167,87575,88546,89249,90042,91145,92192,93599,95037,96133,97304,98140,99148,100298,101470,102543,103428,104482,105647,106758,107677,108749,109948,111130,112378,113581,114797,115886,116866,117935,118544,119630,120542,121518,122544,123482,124536,126052,127171,128120,129212,129976,130998,131893,132858,133749,134846,135758,136746,137637,138460,139428,140447,141737,143026,144002,144794,145851,146988,147930,148759,149684,150694,151682,152577,153822,155218,156290,157719,158737,159825,160892,162031,163635,165121,166535,167911,168892],sizes:[1125,1373,1297,993,1012,1004,1123,947,1358,1089,671,1075,1033,905,996,1152,1460,1161,912,1063,1197,605,1200,993,1035,890,1093,1039,1075,687,1022,851,976,916,1108,1054,872,1073,963,1180,1490,1188,891,1282,1346,1021,1126,1424,1103,1316,1473,1196,1111,933,1029,1099,956,1113,774,1107,978,1227,955,925,1004,819,880,1473,1083,865,1006,1010,893,1035,950,988,813,878,869,956,978,1026,1408,971,703,793,1103,1047,1407,1438,1096,1171,836,1008,1150,1172,1073,885,1054,1165,1111,919,1072,1199,1182,1248,1203,1216,1089,980,1069,609,1086,912,976,1026,938,1054,1516,1119,949,1092,764,1022,895,965,891,1097,912,988,891,823,968,1019,1290,1289,976,792,1057,1137,942,829,925,1010,988,895,1245,1396,1072,1429,1018,1088,1067,1139,1604,1486,1414,1376,981,308],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_optlang.data")}Module["addRunDependency"]("datafile_optlang.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/optlang/__init__.py",start:0,end:3226,audio:0},{filename:"/lib/python3.9/site-packages/optlang/coinor_cbc_interface.py",start:3226,end:33810,audio:0},{filename:"/lib/python3.9/site-packages/optlang/container.py",start:33810,end:40766,audio:0},{filename:"/lib/python3.9/site-packages/optlang/cplex_interface.py",start:40766,end:81714,audio:0},{filename:"/lib/python3.9/site-packages/optlang/duality.py",start:81714,end:89381,audio:0},{filename:"/lib/python3.9/site-packages/optlang/exceptions.py",start:89381,end:90849,audio:0},{filename:"/lib/python3.9/site-packages/optlang/expression_parsing.py",start:90849,end:97021,audio:0},{filename:"/lib/python3.9/site-packages/optlang/glpk_exact_interface.py",start:97021,end:102228,audio:0},{filename:"/lib/python3.9/site-packages/optlang/glpk_interface.py",start:102228,end:138045,audio:0},{filename:"/lib/python3.9/site-packages/optlang/gurobi_interface.py",start:138045,end:168215,audio:0},{filename:"/lib/python3.9/site-packages/optlang/inspyred_interface.py",start:168215,end:181192,audio:0},{filename:"/lib/python3.9/site-packages/optlang/interface.py",start:181192,end:241984,audio:0},{filename:"/lib/python3.9/site-packages/optlang/osqp_interface.py",start:241984,end:275873,audio:0},{filename:"/lib/python3.9/site-packages/optlang/scipy_interface.py",start:275873,end:300705,audio:0},{filename:"/lib/python3.9/site-packages/optlang/symbolics.py",start:300705,end:304993,audio:0},{filename:"/lib/python3.9/site-packages/optlang/util.py",start:304993,end:315999,audio:0},{filename:"/lib/python3.9/site-packages/optlang/_version.py",start:315999,end:316496,audio:0},{filename:"/lib/python3.9/site-packages/optlang-1.5.2-py3.9.egg-info/PKG-INFO",start:316496,end:324604,audio:0},{filename:"/lib/python3.9/site-packages/optlang-1.5.2-py3.9.egg-info/SOURCES.txt",start:324604,end:326391,audio:0},{filename:"/lib/python3.9/site-packages/optlang-1.5.2-py3.9.egg-info/dependency_links.txt",start:326391,end:326392,audio:0},{filename:"/lib/python3.9/site-packages/optlang-1.5.2-py3.9.egg-info/requires.txt",start:326392,end:326458,audio:0},{filename:"/lib/python3.9/site-packages/optlang-1.5.2-py3.9.egg-info/top_level.txt",start:326458,end:326466,audio:0},{filename:"/lib/python3.9/site-packages/optlang-1.5.2-py3.9.egg-info/zip-safe",start:326466,end:326467,audio:0}],remote_package_size:173296,package_uuid:"51cb1348-fd60-4a23-a854-e6fa34b81bf3"})})(); \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/themes/default.py b/spaces/qingxu98/gpt-academic/themes/default.py deleted file mode 100644 index da1f187490eaec18e838f64619739a2c32126d0b..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/themes/default.py +++ /dev/null @@ -1,85 +0,0 @@ -import 
gradio as gr -from toolbox import get_conf -CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT') - -def adjust_theme(): - - try: - color_er = gr.themes.utils.colors.fuchsia - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"], - font_mono=["ui-monospace", "Consolas", "monospace"]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - - with open('themes/common.js', 'r', encoding='utf8') as f: - js = f"" - - # 添加一个萌萌的看板娘 - if ADD_WAIFU: - js += """ - - - - """ - gradio_original_template_fn = gr.routes.templates.TemplateResponse - def gradio_new_template_fn(*args, **kwargs): - res = 
gradio_original_template_fn(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template - except: - set_theme = None - print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - -with open("themes/default.css", "r", encoding="utf-8") as f: - advanced_css = f.read() -with open("themes/common.css", "r", encoding="utf-8") as f: - advanced_css += f.read() diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HACK Autodesk AutoCAD 2017 (x64) Keygen [SadeemPC].zip ((HOT)).md b/spaces/quidiaMuxgu/Expedit-SAM/HACK Autodesk AutoCAD 2017 (x64) Keygen [SadeemPC].zip ((HOT)).md deleted file mode 100644 index bd4d3a8af0405a1904f5bfca419fa9a32cfea2bb..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HACK Autodesk AutoCAD 2017 (x64) Keygen [SadeemPC].zip ((HOT)).md +++ /dev/null @@ -1,7 +0,0 @@ -
          -

          insert presets crack. serial number full version for free. you can easily create drawings without any tools. mar 2, 2022 autodesk autocad crack keygen allows students and teachers to explore the world of design. it makes the job of autodesk autocad 2020 crack and its functions that are easier to use. autodesk autocad crack full is a symbol of great success and the famous world known software from autodesk. since every day of autodesk autocad crack 2020 becomes highly demanded and its use is increasing day by day. apr 25, 2020 it simplifies complex data processing and modeling in the shortest possible time.

          -

          HACK Autodesk AutoCAD 2017 (x64) Keygen [SadeemPC].zip


          Download Zip > https://geags.com/2uCqsr



          -

          a friend of mine was looking for autodesk autocad 2020 crack full version. autocad keygen full is a symbol of great success and the famous world known software from autodesk. you should download autodesk autocad crack. autocad crack activation is way that is easy, once the free autocad crack has been downloaded then open the autocad installation file and press the crack button. its very fast and simple to download and install this software. it makes the job of autodesk autocad 2022 nasl and its functions that are easier to use. the program has also made autodesk autocad 2020 crack the means of doing great 3d work.

          -

          autocad crack gives you a complete drawing environment that is handy. we give out the best activator of autodesk autocad crack. with this, you can activate unlimited license with this crack. autocad full serial key gives you complete solution to turn drawing. mar 20, 2020 autocad crack 2021 serial key full version has been completed, so it gives you the complete solution to turn drawing.

          -
          -
          \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Libro Esencial Primaria Santillana Pdf 68.md b/spaces/quidiaMuxgu/Expedit-SAM/Libro Esencial Primaria Santillana Pdf 68.md deleted file mode 100644 index ed2f2da16e591a6506096c31738cca57bbed6454..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Libro Esencial Primaria Santillana Pdf 68.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Libro Esencial Primaria Santillana Pdf 68


          Download File ✸✸✸ https://geags.com/2uCqxa



          -
-The Social Studies and Civics textbook that you hold in your hands has ... Primary production in these ecosystems usually ... essential part of the acid emissions ... Write a coexistence handbook that makes it possible to conserve resources ... 68. Economic and cultural causes of independence. In most ...
          -
          -
          -

          diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/criteria/lpips/__init__.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/criteria/lpips/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AKB48 River PV HD Choose Between 720p or 108016 Resolution.md b/spaces/raedeXanto/academic-chatgpt-beta/AKB48 River PV HD Choose Between 720p or 108016 Resolution.md deleted file mode 100644 index 69e09b50a3de9635e8722a707a892dc464b0ed1d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/AKB48 River PV HD Choose Between 720p or 108016 Resolution.md +++ /dev/null @@ -1,66 +0,0 @@ -
          -

          AKB48 River: The Song That Made History

          -

          If you are a fan of Japanese pop music, you have probably heard of AKB48, one of the most popular and successful idol groups in Japan. But do you know which song was their breakthrough hit that catapulted them to fame and glory? It was none other than AKB48 River, their 14th major single released on October 21, 2009. In this article, we will explore everything you need to know about this iconic song, from its meaning and message, to its chart performance and impact, to how you can watch and enjoy it.

          -

          akb48 river pv 720p or 108016


          DOWNLOAD ☆☆☆☆☆ https://tinourl.com/2uL2E9



          -

          What is AKB48 River?

          -

AKB48 River is a song by AKB48, a Japanese idol group consisting of dozens of girls divided into several teams. The group is named after Akihabara, a district in Tokyo known for its otaku culture and electronics shops. The group performs regularly at their own theater in Akihabara, as well as releasing singles and albums, appearing on TV shows, movies, commercials, and participating in various events. AKB48 is known for its concept of "idols you can meet", which allows fans to interact with the members through handshake events, voting contests, online platforms, and more.

          -

The song River is a powerful anthem that encourages people to pursue their dreams and overcome any obstacles that stand in their way. The lyrics compare life to a river that flows endlessly, sometimes calm and sometimes rough, but always moving forward. The chorus says "There at the opposite bank waits a dream", implying that one has to cross the river to reach their goal, no matter how difficult or dangerous it may be. The song also uses military metaphors, such as "soldiers", "battlefield", "flag", and "victory", to convey a sense of determination and courage.

          -

          The song River features 16 members of AKB48, selected by the producer Yasushi Akimoto based on their popularity and performance. The center position, which is the most prominent and coveted spot in an idol song, was given to Atsuko Maeda, who was then the most popular member of the group. The other members who participated in the song were Haruna Kojima, Mariko Shinoda, Minami Takahashi, Sayaka Akimoto, Yuko Oshima, Erena Ono, Sae Miyazawa, Tomomi Itano, Minami Minegishi, Yuki Kashiwagi, Mayu Watanabe, Rie Kitahara, Miho Miyazaki, and Tomomi Kasai. The song was composed by Yoshimasa Inoue and arranged by Yasushi Akimoto.

          -

          How did AKB48 River perform on the charts?

          -

          AKB48 River was a huge commercial success for the group, achieving several milestones and records. It was the first AKB48 single to top the Oricon weekly singles chart, which is the most authoritative music chart in Japan. It sold 179,000 copies in its first week, surpassing their previous best-selling single Namida Surprise!, which had sold 144,000 copies in 18 weeks. It also became their first single to sell over 200,000 copies in total.

          -

          The song also received various certifications and awards for its sales and popularity. It was certified Platinum by the Recording Industry Association of Japan (RIAJ) for selling over 250,000 copies. It won the Grand Prix at the 51st Japan Record Awards, the most prestigious music award in Japan. It also ranked 2nd on the Billboard Japan Hot 100 of the Year 2009, and 10th on the NHK Kouhaku Uta Gassen of the same year.

          -

          What was the impact and legacy of AKB48 River?

          -

AKB48 River was not only a commercial success, but also a cultural phenomenon that changed the landscape of the Japanese idol industry. It marked the rise of AKB48 as a national sensation, and paved the way for their subsequent domination of the music scene. It also boosted the popularity and recognition of the individual members, especially Maeda Atsuko, who became known as the "face of AKB48" and one of the most influential idols in Japan.

          -


          -

          The song also had a lasting influence and inspiration on many other idol groups and artists, both within and outside of Japan. It established a new standard for idol songs, which are usually cheerful and cute, by showing that they can also be powerful and passionate. It also inspired many people to pursue their dreams and overcome their challenges, as the song's message resonated with many listeners. The song has been covered and performed by various groups and singers, such as JKT48, SNH48, Nogizaka46, Keyakizaka46, Sakurazaka46, Hinatazaka46, SKE48, NMB48, HKT48, NGT48, STU48, BNK48, MNL48, AKB48 Team SH, AKB48 Team TP, Cherry Bullet, and more.

          -

          How to watch and enjoy AKB48 River?

          -

          If you want to watch and enjoy AKB48 River, there are several ways to do so. One of them is to watch the music video, which was filmed at Iruma Air Base in Saitama Prefecture. The music video features the members wearing military-style outfits and marching on a runway, while performing impressive dance moves and formations. The music video also shows scenes of the members training and preparing for their mission, as well as expressing their emotions and determination. The music video has over 35 million views on YouTube as of December 2021.

          -

          Another way to watch and enjoy AKB48 River is to watch their live performances and events. The song has been performed at many concerts and stages by AKB48 and their sister groups, such as AKB104 Senbatsu Members Sokaku Matsuri (2009), AKB48 Request Hour Set List Best 100 2010 (ranked 1st), AKB48 Manseki Matsuri Kibou Sanpi Ryouron (2013), AKB48 Group Tokyo Dome Concert (2014), AKB48 Tandoku Request Hour Set List Best 100 2016 (ranked 3rd), AKB48 Group Request Hour Set List Best 100 2018 (ranked 11th), and more. The song is also often performed at graduation concerts of prominent members, such as Maeda Atsuko (2012), Oshima Yuko (2014), Miyazawa Sae (2016), Kojima Haruna (2017), Watanabe Mayu (2017), Takahashi Minami (2018), Kashiwagi Yuki (2021), and more.

          -

          One of the most memorable live performances of AKB48 River was at the 60th NHK Kouhaku Uta Gassen in 2009, which was the first time that AKB48 appeared on the prestigious year-end music show. The performance was a surprise for both the audience and the members, as they did not know that they were going to perform until the last minute. The performance was also a challenge for them, as they had to perform on a small stage with limited space and time. However, they managed to deliver a stunning performance that impressed everyone with their energy and enthusiasm. The performance was later voted as the best performance of the night by viewers.

          -

          Conclusion

          -

          AKB48 River is a song that made history for AKB48 and the Japanese idol industry. It was their first number one single on the Oricon chart, their first Grand Prix winner at the Japan Record Awards, and their first appearance on the NHK Kouhaku Uta Gassen, the most prestigious music show in Japan. It also established a new standard for idol songs, which are usually cheerful and cute, by showing that they can also be powerful and passionate. It also inspired many people to pursue their dreams and overcome their challenges, as the song's message resonated with many listeners. The song has been covered and performed by various groups and singers, both within and outside of Japan.

          -

          If you want to watch and enjoy AKB48 River, you can watch the music video, which was filmed at Iruma Air Base in Saitama Prefecture. You can also watch their live performances and events, which showcase their energy and enthusiasm. You can also buy their merchandise, such as CDs, DVDs, T-shirts, posters, stickers, and more. You can find them online or at their official shops in Akihabara and other locations.

          -

          FAQs

          -

          Here are some frequently asked questions about AKB48 River:

- - Q: When was AKB48 River released?
- - A: AKB48 River was released on October 21, 2009.
- - Q: Who is the center of AKB48 River?
- - A: The center of AKB48 River is Atsuko Maeda, who was then the most popular member of AKB48.
- - Q: How many copies did AKB48 River sell?
- - A: AKB48 River sold over 250,000 copies, making it their first platinum single.
- - Q: What is the meaning of AKB48 River?
- - A: AKB48 River is a song that encourages people to pursue their dreams and overcome any obstacles that stand in their way.
- - Q: Where can I watch AKB48 River?
- - A: You can watch AKB48 River on YouTube, or on their official website or app.

          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Photoshop Lightroom Classic CC 2020 Crack With Registration Code Free Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Photoshop Lightroom Classic CC 2020 Crack With Registration Code Free Download.md deleted file mode 100644 index dd59b08c60394837b55e3f974025bf5395d3cde3..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Photoshop Lightroom Classic CC 2020 Crack With Registration Code Free Download.md +++ /dev/null @@ -1,36 +0,0 @@ - -

          How to Download and Install Adobe Photoshop Lightroom Classic CC 2020 for Free

          -

          Adobe Photoshop Lightroom Classic CC 2020 is a powerful and versatile photo editing software that lets you organize, edit, and share your photos with ease. Whether you want to enhance the colors, remove distracting objects, create stunning panoramas, or apply various effects, Lightroom Classic CC 2020 has everything you need to bring out the best in your images.

          -

          Adobe Photoshop Lightroom Classic CC 2020 Crack With Registration Code Free Download


          Download Ziphttps://tinourl.com/2uL5b2



          -

          In this article, we will show you how to download and install Adobe Photoshop Lightroom Classic CC 2020 for free on your Windows PC. You don't need to pay for a subscription or enter a registration code to use this software. All you need is a crack file that will activate the full version of Lightroom Classic CC 2020.

          -

          Step 1: Download the Setup File and the Crack File

          -

          The first step is to download the setup file and the crack file for Adobe Photoshop Lightroom Classic CC 2020 from a reliable source. You can use the links below to download them:

          - -

          Make sure you have a good internet connection and enough disk space before downloading these files. Also, disable your antivirus software temporarily to avoid any interference.

          -

          Step 2: Install Adobe Photoshop Lightroom Classic CC 2020

          -

          The next step is to install Adobe Photoshop Lightroom Classic CC 2020 on your PC. Follow these steps to do so:

          -

          -
            -
          1. Extract the setup file using WinRAR or any other extraction tool.
          2. -
          3. Run the setup file as administrator and follow the instructions on the screen.
          4. -
          5. Select your preferred language and destination folder.
          6. -
          7. Wait for the installation to complete.
          8. -
          9. Do not launch the program after installation.
          10. -
          -

          Step 3: Apply the Crack File

          -

          The final step is to apply the crack file to activate Adobe Photoshop Lightroom Classic CC 2020. Follow these steps to do so:

          -
            -
          1. Extract the crack file using WinRAR or any other extraction tool.
          2. -
          3. Copy the crack file and paste it into the installation folder of Lightroom Classic CC 2020. The default location is C:\Program Files\Adobe\Adobe Lightroom Classic CC.
          4. -
          5. Replace the original file if prompted.
          6. -
          7. Run the program as administrator and enjoy using it for free.
          8. -
          -

          Conclusion

          -

          Adobe Photoshop Lightroom Classic CC 2020 is a great software for photographers and photo enthusiasts who want to edit and manage their photos in a professional way. With this software, you can easily adjust the exposure, contrast, color, tone, sharpness, noise, and other aspects of your images. You can also create amazing HDR images, panoramas, slideshows, albums, and more.

          -

          If you want to use Adobe Photoshop Lightroom Classic CC 2020 for free, you can follow the steps above to download and install it on your PC. You don't need to pay for a subscription or enter a registration code to use this software. All you need is a crack file that will activate the full version of Lightroom Classic CC 2020.

          -

          We hope this article was helpful for you. If you have any questions or suggestions, feel free to leave a comment below.

          81aa517590
          -
          -
          \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Sanford-Meisner-Acting-Master-Cl.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Sanford-Meisner-Acting-Master-Cl.md deleted file mode 100644 index de020150e4fcc158d76d9d3bcffabf78fbddd820..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Sanford-Meisner-Acting-Master-Cl.md +++ /dev/null @@ -1,63 +0,0 @@ -## Sanford Meisner Acting Master Cl - - - -**Click Here ☆☆☆ [https://www.google.com/url?q=https%3A%2F%2Fshurll.com%2F2twEpD&sa=D&sntz=1&usg=AOvVaw33CjftTPEvkFcOiWO3--9X](https://www.google.com/url?q=https%3A%2F%2Fshurll.com%2F2twEpD&sa=D&sntz=1&usg=AOvVaw33CjftTPEvkFcOiWO3--9X)** - - - -# Learn the Meisner Technique from the Sanford Meisner Master Class - - - -If you are an aspiring actor who wants to learn the Meisner Technique, one of the most influential and effective approaches to acting, you should check out the Sanford Meisner Master Class. This is a rare opportunity to watch and learn from the legendary acting teacher himself, as he guides a group of students through his rigorous and transformative training. - - - -The Meisner Technique is based on the principle that acting is living truthfully under imaginary circumstances. It teaches actors how to be fully present and responsive to their scene partners, and how to use their imagination and emotional preparation to create authentic characters. The Meisner Technique helps actors develop a strong sense of truth, spontaneity, and emotional depth in their work. - - - -The Sanford Meisner Master Class is available in two formats: a DVD version that contains 8 hours of unedited footage from a class that Meisner taught in 1980, and a Vimeo Instant Streaming version that contains 4 hours of edited footage that focuses on the core techniques of the Meisner approach. Both versions offer invaluable insights into Meisner's teaching style, philosophy, and exercises, such as repetition, independent activity, emotional truth, and improvisation. - - - -Whether you are a beginner or an experienced actor, you can benefit from watching and studying the Sanford Meisner Master Class. You will learn from one of the greatest acting teachers of all time, who trained some of the most acclaimed actors of our era, such as Robert Duvall, Diane Keaton, Gregory Peck, and Joanne Woodward. You will also witness the transformation of his students as they progress through his intensive training. - - - -The Sanford Meisner Master Class is more than just an acting course. It is a legacy of one of the most principled and passionate teachers of acting in history. As playwright Arthur Miller said, "I can pretty well tell which actors have studied with Meisner. They are honest and simple and don't lay on complications that aren't necessary." If you want to become an honest and simple actor who can bring any character to life, you should enroll in the Sanford Meisner Master Class today. - - - -## What are the benefits of the Meisner Technique? - - - -The Meisner Technique offers many benefits for actors who want to improve their craft and their confidence. Here are some of the advantages of practicing this technique: - - - -- **It helps actors manage anxiety.** By focusing on something outside of themselves, such as their partner or their surroundings, actors can reduce their self-consciousness and calm their nervous systems. This can help them overcome stage fright and perform more freely. 
- -- **It aids in listening and communication.** By repeating what they hear and observing how it changes, actors can develop their listening skills and their ability to respond truthfully. This can help them create more realistic and engaging dialogue and interactions. - -- **It empowers actors to take up space.** By expressing their feelings and opinions without judgment or censorship, actors can learn to assert themselves and their characters. This can help them become more confident and assertive in their work and in their lives. - -- **It relieves pressure off of actors.** By letting go of expectations and preconceived notions, actors can allow themselves to be surprised and challenged by their partners and the circumstances. This can help them avoid boredom and stagnation and discover new possibilities and choices. - -- **It forces actors to be in the present moment.** By reacting to what is happening in the here and now, actors can avoid being distracted by the past or the future. This can help them create more authentic and spontaneous performances that capture the audience's attention. - - - -## How can you learn the Meisner Technique? - - - -If you are interested in learning the Meisner Technique, you have several options. You can enroll in an acting school or a workshop that offers Meisner classes, such as the Sanford Meisner Center in Los Angeles or Seattle, or the Neighborhood Playhouse in New York City. You can also find online courses or videos that teach the Meisner Technique, such as the Sanford Meisner Master Class on Vimeo or DVD. You can also read books or articles that explain the Meisner Technique, such as Sanford Meisner on Acting by Sanford Meisner and Dennis Longwell, or The Definitive Guide to the Meisner Technique by Alex Ates on Backstage. - - - -However you choose to learn the Meisner Technique, remember that it is a practice that requires dedication, patience, and openness. As Meisner himself said, "It takes twenty years to become an actor." The Meisner Technique is not a quick fix or a magic formula. It is a journey of self-discovery and artistic growth that can enrich your life as an actor and as a human being. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Driver Blue Link Bl-u83g.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Driver Blue Link Bl-u83g.md deleted file mode 100644 index c823364c47b934f2e5d4d7ebb5825901d9f4b2c1..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Driver Blue Link Bl-u83g.md +++ /dev/null @@ -1,14 +0,0 @@ -

          Driver Blue Link Bl-u83g


          Download ★★★★★ https://urlgoal.com/2uCJgZ



          - -Bauer The design of this ultra-compact DVR is similar to other high-end 1080p DVRs, but the real standout feature is the quality of its display. The front of the receiver has a 5.0” wide screen display with blue LED backlighting that is easily viewable from any angle. The display is 1280 by 720 pixels in size and can accept HDMI inputs from one or two sources. The receiver is equipped with an internal mini-HDMI input, and there are two external mini-HDMI inputs that will accept standard HDMI video sources. In addition, there are two analog audio inputs and a composite video input. - -Bauer The DVR’s basic controls are located at the top of the receiver, and they include a power button, two pairs of forward/back buttons, and one pair of up/down buttons. The Power button allows the user to turn the unit on and off, the forward/back buttons bring up the menu system, and the up/down buttons change the channel. One nice feature of the menu system is that each button can be assigned to a different channel. So, for example, one could assign the up/down buttons to control the volume while the other pair can be set to switch between the input sources. In addition to the main controls, the receiver also has a four-way navigation pad on the front. - -Bauer The included Remote Control is an Android device that can be used as a remote control. It features an 800 mAh battery that can be charged by plugging it into the receiver’s USB input. The remote uses 2.4GHz Wi-Fi technology and can be placed anywhere within a 20-foot range. - -The receiver is equipped with an advanced Wi-Fi system that includes a pair of 2.4 GHz antennas. There is also a built-in digital processor that handles the streaming of the incoming videos and live TV. The digital processor is the heart of this receiver, and it allows for two channels of high-quality DVR service. The two channels are both HD and can be set to record any type of programming. - -Bauer The DVR features a 400 GB hard drive that stores the recorded video, and it supports network recording from virtually any IP-connected device. Although it does not store programs as part of its programming, the DVR does give the user access to video and TV content on an internal server. The DVR also features a pair of HDMI and a 4fefd39f24
          -
          -
          -

          diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py deleted file mode 100644 index a4375b659a91267d3db9278f72bd1f0b030a4655..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/framework.py +++ /dev/null @@ -1,90 +0,0 @@ -# Mario Rosasco, 2016 -# adapted from framework.cpp, Copyright (C) 2010-2012 by Jason L. McKesson -# This file is licensed under the MIT License. -# -# NB: Unlike in the framework.cpp organization, the main loop is contained -# in the tutorial files, not in this framework file. Additionally, a copy of -# this module file must exist in the same directory as the tutorial files -# to be imported properly. - -import os -from OpenGL.GL import * - -# Function that creates and compiles shaders according to the given type (a GL enum value) and -# shader program (a file containing a GLSL program). -def loadShader(shaderType, shaderFile): - # check if file exists, get full path name - strFilename = findFileOrThrow(shaderFile) - shaderData = None - with open(strFilename, 'r') as f: - shaderData = f.read() - - shader = glCreateShader(shaderType) - glShaderSource(shader, shaderData) # note that this is a simpler function call than in C - - # This shader compilation is more explicit than the one used in - # framework.cpp, which relies on a glutil wrapper function. - # This is made explicit here mainly to decrease dependence on pyOpenGL - # utilities and wrappers, which docs caution may change in future versions. - glCompileShader(shader) - - status = glGetShaderiv(shader, GL_COMPILE_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetShaderInfoLog(shader) - strShaderType = "" - if shaderType is GL_VERTEX_SHADER: - strShaderType = "vertex" - elif shaderType is GL_GEOMETRY_SHADER: - strShaderType = "geometry" - elif shaderType is GL_FRAGMENT_SHADER: - strShaderType = "fragment" - - print("Compilation failure for " + strShaderType + " shader:\n" + str(strInfoLog)) - - return shader - - -# Function that accepts a list of shaders, compiles them, and returns a handle to the compiled program -def createProgram(shaderList): - program = glCreateProgram() - - for shader in shaderList: - glAttachShader(program, shader) - - glLinkProgram(program) - - status = glGetProgramiv(program, GL_LINK_STATUS) - if status == GL_FALSE: - # Note that getting the error log is much simpler in Python than in C/C++ - # and does not require explicit handling of the string buffer - strInfoLog = glGetProgramInfoLog(program) - print("Linker failure: \n" + str(strInfoLog)) - - for shader in shaderList: - glDetachShader(program, shader) - - return program - - -# Helper function to locate and open the target file (passed in as a string). -# Returns the full path to the file as a string. -def findFileOrThrow(strBasename): - # Keep constant names in C-style convention, for readability - # when comparing to C(/C++) code. 
- if os.path.isfile(strBasename): - return strBasename - - LOCAL_FILE_DIR = "data" + os.sep - GLOBAL_FILE_DIR = os.path.dirname(os.path.abspath(__file__)) + os.sep + "data" + os.sep - - strFilename = LOCAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - strFilename = GLOBAL_FILE_DIR + strBasename - if os.path.isfile(strFilename): - return strFilename - - raise IOError('Could not find target file ' + strBasename) \ No newline at end of file diff --git a/spaces/robyramos/teste_memoria-chat/README.md b/spaces/robyramos/teste_memoria-chat/README.md deleted file mode 100644 index a28cc4b3b82a1450c5e2464c667ad5453abb352f..0000000000000000000000000000000000000000 --- a/spaces/robyramos/teste_memoria-chat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Teste Memoria-chat -emoji: 🌖 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rolisz/sentence_transformers_canonical/README.md b/spaces/rolisz/sentence_transformers_canonical/README.md deleted file mode 100644 index 1bd4d3de4c303fccd3f2ee8bbda48fbc3ba12f70..0000000000000000000000000000000000000000 --- a/spaces/rolisz/sentence_transformers_canonical/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentence Transformers Canonical -emoji: 📊 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rorallitri/biomedical-language-models/logs/Deutschland Spielt Universal Unwrapper Crackl PATCHED.md b/spaces/rorallitri/biomedical-language-models/logs/Deutschland Spielt Universal Unwrapper Crackl PATCHED.md deleted file mode 100644 index 28ecd0d34826005c17641b260eea33095ff7e231..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Deutschland Spielt Universal Unwrapper Crackl PATCHED.md +++ /dev/null @@ -1,28 +0,0 @@ -

          Deutschland Spielt Universal Unwrapper Crackl


          DOWNLOAD » https://tinurll.com/2uzmxd



          - -änder casino online king om - -Aguilar had the other 9. The best value deals can be found at the daily-deal sites. German Deutsche Telekom's share price has rocketed higher today after the company beat quarterly profit targets. Disclaimer of Warranties and Limitation of Liability. With recent websites offering games in Spanish, French and Italian it seems a trend worth exploring further. French giants Paris St-Germain have agreed a world-record fee with Barcelona for 19-year-old Brazilian star Neymar. Privacy Policy Ad Choice Terms of Service. - -Krammer Wall company is a leading global producer of wall panels, metal framing, façade and interior cladding systems, including cladding systems, as well as other products. In, the company acquired Nordic Structures and added their high performance building envelope systems to its portfolio. - -Their use for retrofit and new construction solutions have helped them become a leader in the building envelope industry. - -Miele today launched the introduction of the unique Vornado and Heimat cylinders, the first in the home appliance industry to combine the benefits of easy installation and unparalleled cleaning performance. - -The new Vornado and Heimat cylinders offer the industry's most powerful and efficient steam cleaning action. Consistent, powerful steam action of the Vornado and Heimat steam cleaning cylinders provide superior cleaning power while eliminating the need to filter vapors. - -The new cylinders also provide easier and more convenient installation and a faster return to use after cleaning. Both cylinders combine the industry's strongest steam action with a lighter, more compact size. The Vornado and Heimat cylinders offer the industry's most powerful steam cleaning action. - -Vornado - -Consistent, powerful steam action of the Vornado steam cleaning cylinders provides superior cleaning power while eliminating the need to filter vapors. The Vornado and Heimat cylinders combine the industry's strongest steam action with a lighter, more compact size. - -The Vornado and Heimat steam cleaning cylinders offer the industry's most powerful steam action. Consistent, powerful steam action of the Vornado steam cleaning cylinders provides superior cleaning power while eliminating the need to filter vapors. - -The Vornado and Heimat cylinders combine the industry's strongest steam action with a lighter, more compact size. Vornado Steam cleaning cylinders are also available in a variety of sizes and are ideal for variety of applications. - -The cylind 4fefd39f24
          -
          -
          -

          diff --git a/spaces/rorallitri/biomedical-language-models/logs/Dogar Surgery Book Free 236 A Comprehensive Approach to the Principles of General Surgery.md b/spaces/rorallitri/biomedical-language-models/logs/Dogar Surgery Book Free 236 A Comprehensive Approach to the Principles of General Surgery.md deleted file mode 100644 index d0a1c2fd99cdd8d73e6833ae310d8edd08b0afc7..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Dogar Surgery Book Free 236 A Comprehensive Approach to the Principles of General Surgery.md +++ /dev/null @@ -1,6 +0,0 @@ -

          dogar surgery book free 236


          Download Zip ✓✓✓ https://tinurll.com/2uzo6e



          - - aaccfb2cb3
          -
          -
          -

          diff --git a/spaces/rr1/gpb/README.md b/spaces/rr1/gpb/README.md deleted file mode 100644 index 19fd298f9d50cffc09dbb67ed47544913ae18a99..0000000000000000000000000000000000000000 --- a/spaces/rr1/gpb/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpb -emoji: 🦀 -colorFrom: pink -colorTo: green -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rsh123/newbing/README.md b/spaces/rsh123/newbing/README.md deleted file mode 100644 index e0cf7c3f51eb01d564f4c610d60d69f40a126426..0000000000000000000000000000000000000000 --- a/spaces/rsh123/newbing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Newbing -emoji: 😻 -colorFrom: blue -colorTo: pink -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/safi842/FashionGen/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/convert_tf_to_pytorch.py b/spaces/safi842/FashionGen/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/convert_tf_to_pytorch.py deleted file mode 100644 index 7ccb787dec188e9dbd9ea31288c049c1bdb30f95..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/convert_tf_to_pytorch.py +++ /dev/null @@ -1,312 +0,0 @@ -# coding: utf-8 -""" -Convert a TF Hub model for BigGAN in a PT one. -""" -from __future__ import (absolute_import, division, print_function, unicode_literals) - -from itertools import chain - -import os -import argparse -import logging -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.functional import normalize - -from .model import BigGAN, WEIGHTS_NAME, CONFIG_NAME -from .config import BigGANConfig - -logger = logging.getLogger(__name__) - - -def extract_batch_norm_stats(tf_model_path, batch_norm_stats_path=None): - try: - import numpy as np - import tensorflow as tf - import tensorflow_hub as hub - except ImportError: - raise ImportError("Loading a TensorFlow models in PyTorch, requires TensorFlow and TF Hub to be installed. " - "Please see https://www.tensorflow.org/install/ for installation instructions for TensorFlow. " - "And see https://github.com/tensorflow/hub for installing Hub. " - "Probably pip install tensorflow tensorflow-hub") - tf.reset_default_graph() - logger.info('Loading BigGAN module from: {}'.format(tf_model_path)) - module = hub.Module(tf_model_path) - inputs = {k: tf.placeholder(v.dtype, v.get_shape().as_list(), k) - for k, v in module.get_input_info_dict().items()} - output = module(inputs) - - initializer = tf.global_variables_initializer() - sess = tf.Session() - stacks = sum(((i*10 + 1, i*10 + 3, i*10 + 6, i*10 + 8) for i in range(50)), ()) - numpy_stacks = [] - for i in stacks: - logger.info("Retrieving module_apply_default/stack_{}".format(i)) - try: - stack_var = tf.get_default_graph().get_tensor_by_name("module_apply_default/stack_%d:0" % i) - except KeyError: - break # We have all the stats - numpy_stacks.append(sess.run(stack_var)) - - if batch_norm_stats_path is not None: - torch.save(numpy_stacks, batch_norm_stats_path) - else: - return numpy_stacks - - -def build_tf_to_pytorch_map(model, config): - """ Build a map from TF variables to PyTorch modules. 
""" - tf_to_pt_map = {} - - # Embeddings and GenZ - tf_to_pt_map.update({'linear/w/ema_0.9999': model.embeddings.weight, - 'Generator/GenZ/G_linear/b/ema_0.9999': model.generator.gen_z.bias, - 'Generator/GenZ/G_linear/w/ema_0.9999': model.generator.gen_z.weight_orig, - 'Generator/GenZ/G_linear/u0': model.generator.gen_z.weight_u}) - - # GBlock blocks - model_layer_idx = 0 - for i, (up, in_channels, out_channels) in enumerate(config.layers): - if i == config.attention_layer_position: - model_layer_idx += 1 - layer_str = "Generator/GBlock_%d/" % i if i > 0 else "Generator/GBlock/" - layer_pnt = model.generator.layers[model_layer_idx] - for i in range(4): # Batchnorms - batch_str = layer_str + ("BatchNorm_%d/" % i if i > 0 else "BatchNorm/") - batch_pnt = getattr(layer_pnt, 'bn_%d' % i) - for name in ('offset', 'scale'): - sub_module_str = batch_str + name + "/" - sub_module_pnt = getattr(batch_pnt, name) - tf_to_pt_map.update({sub_module_str + "w/ema_0.9999": sub_module_pnt.weight_orig, - sub_module_str + "u0": sub_module_pnt.weight_u}) - for i in range(4): # Convolutions - conv_str = layer_str + "conv%d/" % i - conv_pnt = getattr(layer_pnt, 'conv_%d' % i) - tf_to_pt_map.update({conv_str + "b/ema_0.9999": conv_pnt.bias, - conv_str + "w/ema_0.9999": conv_pnt.weight_orig, - conv_str + "u0": conv_pnt.weight_u}) - model_layer_idx += 1 - - # Attention block - layer_str = "Generator/attention/" - layer_pnt = model.generator.layers[config.attention_layer_position] - tf_to_pt_map.update({layer_str + "gamma/ema_0.9999": layer_pnt.gamma}) - for pt_name, tf_name in zip(['snconv1x1_g', 'snconv1x1_o_conv', 'snconv1x1_phi', 'snconv1x1_theta'], - ['g/', 'o_conv/', 'phi/', 'theta/']): - sub_module_str = layer_str + tf_name - sub_module_pnt = getattr(layer_pnt, pt_name) - tf_to_pt_map.update({sub_module_str + "w/ema_0.9999": sub_module_pnt.weight_orig, - sub_module_str + "u0": sub_module_pnt.weight_u}) - - # final batch norm and conv to rgb - layer_str = "Generator/BatchNorm/" - layer_pnt = model.generator.bn - tf_to_pt_map.update({layer_str + "offset/ema_0.9999": layer_pnt.bias, - layer_str + "scale/ema_0.9999": layer_pnt.weight}) - layer_str = "Generator/conv_to_rgb/" - layer_pnt = model.generator.conv_to_rgb - tf_to_pt_map.update({layer_str + "b/ema_0.9999": layer_pnt.bias, - layer_str + "w/ema_0.9999": layer_pnt.weight_orig, - layer_str + "u0": layer_pnt.weight_u}) - return tf_to_pt_map - - -def load_tf_weights_in_biggan(model, config, tf_model_path, batch_norm_stats_path=None): - """ Load tf checkpoints and standing statistics in a pytorch model - """ - try: - import numpy as np - import tensorflow as tf - except ImportError: - raise ImportError("Loading a TensorFlow models in PyTorch, requires TensorFlow to be installed. 
Please see " - "https://www.tensorflow.org/install/ for installation instructions.") - # Load weights from TF model - checkpoint_path = tf_model_path + "/variables/variables" - init_vars = tf.train.list_variables(checkpoint_path) - from pprint import pprint - pprint(init_vars) - - # Extract batch norm statistics from model if needed - if batch_norm_stats_path: - stats = torch.load(batch_norm_stats_path) - else: - logger.info("Extracting batch norm stats") - stats = extract_batch_norm_stats(tf_model_path) - - # Build TF to PyTorch weights loading map - tf_to_pt_map = build_tf_to_pytorch_map(model, config) - - tf_weights = {} - for name in tf_to_pt_map.keys(): - array = tf.train.load_variable(checkpoint_path, name) - tf_weights[name] = array - # logger.info("Loading TF weight {} with shape {}".format(name, array.shape)) - - # Load parameters - with torch.no_grad(): - pt_params_pnt = set() - for name, pointer in tf_to_pt_map.items(): - array = tf_weights[name] - if pointer.dim() == 1: - if pointer.dim() < array.ndim: - array = np.squeeze(array) - elif pointer.dim() == 2: # Weights - array = np.transpose(array) - elif pointer.dim() == 4: # Convolutions - array = np.transpose(array, (3, 2, 0, 1)) - else: - raise "Wrong dimensions to adjust: " + str((pointer.shape, array.shape)) - if pointer.shape != array.shape: - raise ValueError("Wrong dimensions: " + str((pointer.shape, array.shape))) - logger.info("Initialize PyTorch weight {} with shape {}".format(name, pointer.shape)) - pointer.data = torch.from_numpy(array) if isinstance(array, np.ndarray) else torch.tensor(array) - tf_weights.pop(name, None) - pt_params_pnt.add(pointer.data_ptr()) - - # Prepare SpectralNorm buffers by running one step of Spectral Norm (no need to train the model): - for module in model.modules(): - for n, buffer in module.named_buffers(): - if n == 'weight_v': - weight_mat = module.weight_orig - weight_mat = weight_mat.reshape(weight_mat.size(0), -1) - u = module.weight_u - - v = normalize(torch.mv(weight_mat.t(), u), dim=0, eps=config.eps) - buffer.data = v - pt_params_pnt.add(buffer.data_ptr()) - - u = normalize(torch.mv(weight_mat, v), dim=0, eps=config.eps) - module.weight_u.data = u - pt_params_pnt.add(module.weight_u.data_ptr()) - - # Load batch norm statistics - index = 0 - for layer in model.generator.layers: - if not hasattr(layer, 'bn_0'): - continue - for i in range(4): # Batchnorms - bn_pointer = getattr(layer, 'bn_%d' % i) - pointer = bn_pointer.running_means - if pointer.shape != stats[index].shape: - raise "Wrong dimensions: " + str((pointer.shape, stats[index].shape)) - pointer.data = torch.from_numpy(stats[index]) - pt_params_pnt.add(pointer.data_ptr()) - - pointer = bn_pointer.running_vars - if pointer.shape != stats[index+1].shape: - raise "Wrong dimensions: " + str((pointer.shape, stats[index].shape)) - pointer.data = torch.from_numpy(stats[index+1]) - pt_params_pnt.add(pointer.data_ptr()) - - index += 2 - - bn_pointer = model.generator.bn - pointer = bn_pointer.running_means - if pointer.shape != stats[index].shape: - raise "Wrong dimensions: " + str((pointer.shape, stats[index].shape)) - pointer.data = torch.from_numpy(stats[index]) - pt_params_pnt.add(pointer.data_ptr()) - - pointer = bn_pointer.running_vars - if pointer.shape != stats[index+1].shape: - raise "Wrong dimensions: " + str((pointer.shape, stats[index].shape)) - pointer.data = torch.from_numpy(stats[index+1]) - pt_params_pnt.add(pointer.data_ptr()) - - remaining_params = list(n for n, t in chain(model.named_parameters(), 
model.named_buffers()) \ - if t.data_ptr() not in pt_params_pnt) - - logger.info("TF Weights not copied to PyTorch model: {} -".format(', '.join(tf_weights.keys()))) - logger.info("Remanining parameters/buffers from PyTorch model: {} -".format(', '.join(remaining_params))) - - return model - - -BigGAN128 = BigGANConfig(output_dim=128, z_dim=128, class_embed_dim=128, channel_width=128, num_classes=1000, - layers=[(False, 16, 16), - (True, 16, 16), - (False, 16, 16), - (True, 16, 8), - (False, 8, 8), - (True, 8, 4), - (False, 4, 4), - (True, 4, 2), - (False, 2, 2), - (True, 2, 1)], - attention_layer_position=8, eps=1e-4, n_stats=51) - -BigGAN256 = BigGANConfig(output_dim=256, z_dim=128, class_embed_dim=128, channel_width=128, num_classes=1000, - layers=[(False, 16, 16), - (True, 16, 16), - (False, 16, 16), - (True, 16, 8), - (False, 8, 8), - (True, 8, 8), - (False, 8, 8), - (True, 8, 4), - (False, 4, 4), - (True, 4, 2), - (False, 2, 2), - (True, 2, 1)], - attention_layer_position=8, eps=1e-4, n_stats=51) - -BigGAN512 = BigGANConfig(output_dim=512, z_dim=128, class_embed_dim=128, channel_width=128, num_classes=1000, - layers=[(False, 16, 16), - (True, 16, 16), - (False, 16, 16), - (True, 16, 8), - (False, 8, 8), - (True, 8, 8), - (False, 8, 8), - (True, 8, 4), - (False, 4, 4), - (True, 4, 2), - (False, 2, 2), - (True, 2, 1), - (False, 1, 1), - (True, 1, 1)], - attention_layer_position=8, eps=1e-4, n_stats=51) - - -def main(): - parser = argparse.ArgumentParser(description="Convert a BigGAN TF Hub model in a PyTorch model") - parser.add_argument("--model_type", type=str, default="", required=True, - help="BigGAN model type (128, 256, 512)") - parser.add_argument("--tf_model_path", type=str, default="", required=True, - help="Path of the downloaded TF Hub model") - parser.add_argument("--pt_save_path", type=str, default="", - help="Folder to save the PyTorch model (default: Folder of the TF Hub model)") - parser.add_argument("--batch_norm_stats_path", type=str, default="", - help="Path of previously extracted batch norm statistics") - args = parser.parse_args() - - logging.basicConfig(level=logging.INFO) - - if not args.pt_save_path: - args.pt_save_path = args.tf_model_path - - if args.model_type == "128": - config = BigGAN128 - elif args.model_type == "256": - config = BigGAN256 - elif args.model_type == "512": - config = BigGAN512 - else: - raise ValueError("model_type should be one of 128, 256 or 512") - - model = BigGAN(config) - model = load_tf_weights_in_biggan(model, config, args.tf_model_path, args.batch_norm_stats_path) - - model_save_path = os.path.join(args.pt_save_path, WEIGHTS_NAME) - config_save_path = os.path.join(args.pt_save_path, CONFIG_NAME) - - logger.info("Save model dump to {}".format(model_save_path)) - torch.save(model.state_dict(), model_save_path) - logger.info("Save configuration file to {}".format(config_save_path)) - with open(config_save_path, "w", encoding="utf-8") as f: - f.write(config.to_json_string()) - -if __name__ == "__main__": - main() diff --git a/spaces/salashvijay/audiototxttosentiment/app.py b/spaces/salashvijay/audiototxttosentiment/app.py deleted file mode 100644 index f94b8f29207160b11c5b78af19b6583550a472b8..0000000000000000000000000000000000000000 --- a/spaces/salashvijay/audiototxttosentiment/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import streamlit as st -import firebase_admin -from firebase_admin import credentials -from firebase_admin import firestore -import datetime -from transformers import pipeline -import gradio as gr -from wordcloud 
import WordCloud, STOPWORDS -import matplotlib.pyplot as plt -import pandas as pd - -stopwords = set(STOPWORDS) - -# load cloud firestore client which establishes a connection to dataset where we persist data -@st.experimental_singleton -def get_db_firestore(): - cred = credentials.Certificate('test.json') - firebase_admin.initialize_app(cred, {'projectId': u'audio-txt-sentiment-analysis',}) - db = firestore.client() - return db - -#start it up -db = get_db_firestore() -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") - -def transcribe(audio): - text = asr(audio)["text"] - return text - -classifier = pipeline("text-classification") - -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -def text_to_sentiment(text): - sentiment = classifier(text)[0]["label"] - return sentiment - -def upsert(text): - date_time =str(datetime.datetime.today()) - doc_ref = db.collection('Text2SpeechSentimentSave').document(date_time) - doc_ref.set({u'firefield': 'Recognize Speech', u'first': 'https://huggingface.co/spaces/awacke1/Text2SpeechSentimentSave', u'last': text, u'born': date_time,}) - saved = select('Text2SpeechSentimentSave', date_time) - # check it here: https://console.firebase.google.com/u/0/project/clinical-nlp-b9117/firestore/data/~2FStreamlitSpaces - return saved - -def select(collection, document): - doc_ref = db.collection(collection).document(document) - doc = doc_ref.get() - docid = ("The id is: ", doc.id) - contents = ("The contents are: ", doc.to_dict()) - return contents - -def selectall(text): - docs = db.collection('Text2SpeechSentimentSave').stream() - doclist='' - for doc in docs: - #docid=doc.id - #dict=doc.to_dict() - #doclist+=doc.to_dict() - r=(f'{doc.id} => {doc.to_dict()}') - doclist += r - return doclist - -def wordcloud(text): - comment_words = '' - stopwords = set(STOPWORDS) - for val in text: - - # typecaste each val to string - val = str(val) - - # split the value - tokens = val.split() - - # Converts each token into lowercase - for i in range(len(tokens)): - tokens[i] = tokens[i].lower() - - comment_words += " ".join(tokens)+" " - - wordcloud = WordCloud(width = 800, height = 800, - background_color ='white', - stopwords = stopwords, - min_font_size = 10).generate(comment_words) - - # plot the WordCloud image - plt.figure(figsize = (8, 8), facecolor = None) - plt.imshow(wordcloud) - plt.axis("off") - plt.tight_layout(pad = 0) - - plt.show() - -demo = gr.Blocks() - -with demo: - #audio_file = gr.Audio(type="filepath") - audio_file = gr.inputs.Audio(source="upload", type="filepath") - text = gr.Textbox() - label = gr.Label() - #saved = gr.Textbox() - savedAll = gr.Image() - - b1 = gr.Button("Recognize Speech") - b2 = gr.Button("Classify Sentiment") - # b3 = gr.Button("Save Speech to Text") - b4 = gr.Button("Word cloud") - - b1.click(speech_to_text, inputs=audio_file, outputs=text) - b2.click(text_to_sentiment, inputs=text, outputs=label) - # b3.click(upsert, inputs=text, outputs=saved) - b4.click(wordcloud, inputs=text, outputs=savedAll) - -demo.launch() \ No newline at end of file diff --git a/spaces/sander-wood/tunesformer/README.md b/spaces/sander-wood/tunesformer/README.md deleted file mode 100644 index 030331c08313c4c5e26e534c5f3534d4541589f4..0000000000000000000000000000000000000000 --- a/spaces/sander-wood/tunesformer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TunesFormer - Forming Irish Tunes with Control Codes by Bar Patching -emoji: 🎼 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 
-app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sanzgiri/cartoonify/app.py b/spaces/sanzgiri/cartoonify/app.py deleted file mode 100644 index 8a944c5a3fe41d320a437b0c48cb2c0f1d1c47c9..0000000000000000000000000000000000000000 --- a/spaces/sanzgiri/cartoonify/app.py +++ /dev/null @@ -1,185 +0,0 @@ -import os -import cv2 -import numpy as np -#import tensorflow as tf -import tensorflow.compat.v1 as tf -import tf_slim as slim -import streamlit as st -from PIL import Image - -tf.disable_v2_behavior() - -def tf_box_filter(x, r): - k_size = int(2 * r + 1) - ch = x.get_shape().as_list()[-1] - weight = 1 / (k_size ** 2) - box_kernel = weight * np.ones((k_size, k_size, ch, 1)) - box_kernel = np.array(box_kernel).astype(np.float32) - output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME') - return output - - -def guided_filter(x, y, r, eps=1e-2): - x_shape = tf.shape(x) - # y_shape = tf.shape(y) - - N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r) - - mean_x = tf_box_filter(x, r) / N - mean_y = tf_box_filter(y, r) / N - cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y - var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf_box_filter(A, r) / N - mean_b = tf_box_filter(b, r) / N - - output = mean_A * x + mean_b - - return output - - -def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8): - # assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4 - - lr_x_shape = tf.shape(lr_x) - # lr_y_shape = tf.shape(lr_y) - hr_x_shape = tf.shape(hr_x) - - N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r) - - mean_x = tf_box_filter(lr_x, r) / N - mean_y = tf_box_filter(lr_y, r) / N - cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y - var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf.image.resize_images(A, hr_x_shape[1: 3]) - mean_b = tf.image.resize_images(b, hr_x_shape[1: 3]) - - output = mean_A * hr_x + mean_b - - return output - - -def resblock(inputs, out_channel=32, name='resblock'): - with tf.variable_scope(name): - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel * 2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel * 2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel * 4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel * 4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel * 2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1 * 2, w1 * 2)) - x3 = slim.convolution2d(x3 + x1, 
channel * 2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2 * 2, w2 * 2)) - x4 = slim.convolution2d(x4 + x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - - -def resize_crop(image): - h, w, c = np.shape(image) - #st.write(h, w, c) - if min(h, w) > 720: - if h > w: - h, w = int(720 * h / w), 720 - else: - h, w = 720, int(720 * w / h) - w = int(w / 2) - h = int(h / 2) - st.image(image, caption=f'Your image', width=w) - image = cv2.resize(np.float32(image), (w, h), - interpolation=cv2.INTER_AREA) - h, w = (h // 8) * 8, (w // 8) * 8 - #st.write(h,w) - image = image[:h, :w, :] - return image - - -def cartoonize(infile, outfile, model_path): - - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = unet_generator(input_photo) - final_out = guided_filter(input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - #config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - - sess.run(tf.global_variables_initializer()) - saver.restore(sess, tf.train.latest_checkpoint(model_path)) - - #image = cv2.imread(infile) - image = infile - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = sess.run(final_out, feed_dict={input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(outfile, output) - - -def main(): - - model_path = 'saved_model' - outfile = "result.jpg" - if os.path.exists(outfile): - os.system(f"rm -f {outfile}") - - st.title('Cartoonify!') - infile = st.file_uploader("Choose an image file to cartoonify", type=["jpg", "jpeg"]) - - if infile is not None: - image = Image.open(infile) - #st.image(image, caption=f'Your image', use_column_width=True) - cartoonize(image, outfile, model_path) - - omage = Image.open(outfile) - st.image(omage, caption=f'Cartoonized version: {outfile}') - - - -if __name__ == "__main__": - main() diff --git a/spaces/scedlatioru/img-to-music/example/Elite Hackers Wifi Password [VERIFIED] Cracker V4.6.2 Setup Free Download.md b/spaces/scedlatioru/img-to-music/example/Elite Hackers Wifi Password [VERIFIED] Cracker V4.6.2 Setup Free Download.md deleted file mode 100644 index 8c857c91d2b3097456e81f460dde78014c601421..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Elite Hackers Wifi Password [VERIFIED] Cracker V4.6.2 Setup Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Elite Hackers wifi password cracker v4.6.2 setup free download


          Download Zip ··· https://gohhs.com/2uEzEq



          -
          -Feb 2, 2019 - Wifi Password Cracker Software Free Download For PC: Wifi Password Hacker is the app you can use for hacking any WiFi network. 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Font Psl Kanda Modern 15.md b/spaces/scedlatioru/img-to-music/example/Font Psl Kanda Modern 15.md deleted file mode 100644 index 47d629665a122c763872c223217ec10831543b90..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Font Psl Kanda Modern 15.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Font Psl Kanda Modern 15


          Download File https://gohhs.com/2uEztE



          - -Font Psl Kanda Modern 15 ->>> http://cinurl.com/15gf6y font kanda modern font kanda modern extra font psl kanda modern free .... Aug 4, 2013 Psl kanda ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Jmp 11 Serial Number Crack Software LINK.md b/spaces/scedlatioru/img-to-music/example/Jmp 11 Serial Number Crack Software LINK.md deleted file mode 100644 index ea73180feb364f12a8b0a0d53af772d9a7e6e558..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Jmp 11 Serial Number Crack Software LINK.md +++ /dev/null @@ -1,106 +0,0 @@ - -

          Jmp 11 Serial Number Crack Software: What You Need to Know

          - -

          Jmp 11 is powerful statistical discovery software that helps you analyze data and make informed decisions. It is widely used by researchers, engineers, scientists, and business professionals in various fields. However, Jmp 11 is not free software, and you need a valid serial number to activate it.

          -

          Jmp 11 Serial Number Crack Software


          Download File https://gohhs.com/2uEzZs



          - -

          If you are looking for a way to get Jmp 11 serial number crack software, you might be tempted to search online for websites that offer free serial keys or crack files. But is this a good idea? In this article, we will explain why cracking Jmp 11 software with a serial number is not recommended, and what the risks and consequences of doing so are.

          - -

          Why Cracking Jmp 11 Software with a Serial Number is Not Recommended

          - -

          Cracking Jmp 11 software with a serial number might seem like an easy and cheap way to get the full version of the software, but it comes with many drawbacks and dangers. Here are some of them:

          - -
            -
          • It is illegal. Cracking Jmp 11 software with a serial number violates the terms and conditions of the software license agreement, and infringes the intellectual property rights of the software developer. You could face legal actions or penalties if you are caught using cracked software.
          • -
          • It is unsafe. Cracking Jmp 11 software with a serial number exposes your computer or mobile phone to potential risks of malware, viruses, spyware, or ransomware. These malicious programs could damage your device, steal your personal information, or encrypt your files and demand a ransom to unlock them.
          • -
          • It is unreliable. Cracking Jmp 11 software with a serial number compromises the quality and performance of the software. You could experience errors, crashes, bugs, or compatibility issues that could affect your data analysis and results. You could also miss out on important updates, patches, or features that are only available for the official version of the software.
          • -
          • It is unethical. Cracking Jmp 11 software with a serial number deprives the software developer of their rightful income and recognition for their hard work and innovation. You could also lose your credibility and reputation as a professional or a student if you use cracked software for your projects or assignments.
          • -
          - -

          What Are the Alternatives to Cracking Jmp 11 Software with a Serial Number

          - -

          If you want to use Jmp 11 software without paying for a license, there are some alternatives that are legal, safe, reliable, and ethical. Here are some of them:

          - -
            -
          • Try the free trial version. You can download the free trial version of Jmp 11 software from the official website and use it for 30 days without any limitations. This will give you enough time to explore the features and functions of the software and decide if you want to buy it or not.
          • -
          • Look for discounts or promotions. You can look for discounts or promotions that are offered by the software developer or authorized resellers from time to time. You can also check if you are eligible for academic pricing if you are a student or an educator.
          • -
          • Use alternative software. You can look for alternative software that offers features and functions similar or comparable to Jmp 11 software, but at a lower price or even for free. Some examples are RStudio, SPSS, Excel, Tableau, etc. (A minimal illustrative sketch of this kind of basic analysis follows this list.)
          • -
          - -
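As a rough, non-authoritative illustration of the basic summary analysis that free alternatives (here, Python with pandas) can cover, the sketch below computes descriptive statistics and a simple group comparison. The file name measurements.csv and the column names group and value are hypothetical placeholders, not anything prescribed by this article or by Jmp 11.

```python
# Minimal sketch: basic summary statistics with pandas (a free alternative).
# "measurements.csv", "group", and "value" are hypothetical example names.
import pandas as pd

# Load a small dataset from a CSV file (hypothetical path).
df = pd.read_csv("measurements.csv")

# Summary statistics (count, mean, std, quartiles) for all numeric columns.
print(df.describe())

# A simple group comparison: mean of one numeric column per category.
print(df.groupby("group")["value"].mean())
```

This is only meant to show that routine exploratory statistics do not require a paid license; it does not reproduce any Jmp-specific feature.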

          Conclusion

          - -

          Jmp 11 serial number crack software might seem like a tempting option to get the full version of the statistical discovery software without paying for it, but it is not worth it. Cracking Jmp 11 software with a serial number is illegal, unsafe, unreliable, and unethical. You should avoid it at all costs and look for other alternatives that are legal, safe, reliable, and ethical.

          - -

          We hope this article has helped you understand why cracking Jmp 11 software with a serial number is not recommended, and what the risks and consequences of doing so are. If you have any questions or comments, please feel free to leave them below.

          -

          How to Get Jmp 11 Serial Number Legally

          - -

          If you have decided to buy Jmp 11 software, you will need a valid serial number to activate it. There are two ways to get a Jmp 11 serial number legally:

          -

          - -
            -
          • Buy it from the official website. You can visit the official website of Jmp 11 software and choose the edition and license type that suits your needs. You can pay by credit card, PayPal, or wire transfer. After completing the payment, you will receive an email with your serial number and download link.
          • -
          • Buy it from an authorized reseller. You can also buy Jmp 11 software from an authorized reseller in your region or country. You can find a list of authorized resellers on the official website of Jmp 11 software. You can contact them and ask for a quote and payment options. After confirming the order, you will receive your serial number and download link.
          • -
          - -

          Once you have your serial number, you can download and install Jmp 11 software on your computer or mobile phone. You can follow the instructions on the screen to enter your serial number and activate your software.

          - -

          How to Use Jmp 11 Software Effectively

          - -

          After activating Jmp 11 software with your serial number, you can start using it for your data analysis and discovery. Jmp 11 software has many features and functions that can help you explore, visualize, model, and communicate your data. Here are some tips on how to use Jmp 11 software effectively:

          - -
            -
          • Learn the basics. You can watch some tutorials or read some guides on the official website of Jmp 11 software to learn the basics of the software interface, data import, data manipulation, data analysis, data visualization, and data reporting.
          • -
          • Use the help system. You can access the help system by clicking on the Help menu or pressing F1 on your keyboard. You can find detailed information on every feature and function of Jmp 11 software, as well as examples and tips.
          • -
          • Join the community. You can join the online community of Jmp 11 software users on the official website of Jmp 11 software. You can ask questions, share ideas, exchange tips, and learn from other users.
          • -
          - -

          Jmp 11 software is a powerful tool that can help you discover new insights from your data. By using it legally and effectively, you can enhance your productivity and creativity in your work or study.

          -

          How to Crack JMP 11 Software with a Serial Number

          - -

          If you are determined to crack JMP 11 software with a serial number, you will need to follow some steps to do so. However, we do not recommend this method and we are not responsible for any consequences that may arise from it. Here are the steps to crack JMP 11 software with a serial number:

          - -
            -
          1. Find a serial number or a crack file online. You can search online for some websites that offer free serial numbers or crack files for JMP 11 software. However, you should be careful and avoid any suspicious or malicious links that could harm your device or data.
          2. -
          3. Download and install JMP 11 software. You can download the trial version of JMP 11 software from the official website and install it on your computer or mobile phone. You can use it for 30 days without any limitations.
          4. -
          5. Enter the serial number or apply the crack file. You can enter the serial number that you found online when prompted by the software, or copy and paste the crack file into the installation folder of JMP 11 software. This should bypass the activation process and unlock the full version of JMP 11 software.
          6. -
          - -

          However, you should be aware that cracking JMP 11 software with a serial number is illegal, unsafe, unreliable, and unethical. You could face legal actions or penalties, malware or virus attacks, software errors or crashes, or loss of credibility or reputation. You should avoid cracking JMP 11 software with a serial number and look for other alternatives that are legal, safe, reliable, and ethical.

          - -

          Conclusion

          - -

          JMP 11 serial number crack software is a tempting option to get the full version of the statistical discovery software without paying for it, but it is not worth it. Cracking JMP 11 software with a serial number is illegal, unsafe, unreliable, and unethical. You should avoid it at all costs and look for other alternatives that are legal, safe, reliable, and ethical.

          - -

          We hope this article has helped you understand why cracking JMP 11 software with a serial number is not recommended, and what the risks and consequences of doing so are. If you have any questions or comments, please feel free to leave them below.

          -

          How to Avoid Cracking JMP 11 Software with a Serial Number

          - -

          Cracking JMP 11 software with a serial number is not only illegal, unsafe, unreliable, and unethical, but also unnecessary. There are many ways to avoid cracking JMP 11 software with a serial number and still enjoy the benefits of the statistical discovery software. Here are some of them:

          - -
            -
          • Use the free online version. You can use the free online version of JMP 11 software on the official website of JMP 11 software. You can access it from any browser and device, and you can upload your data and perform basic data analysis and visualization. You can also share your results online or download them as PDF or HTML files.
          • -
          • Use the free academic version. If you are a student or an instructor at an accredited academic institution, you can use the free academic version of JMP 11 software on your personal computer or mobile phone. You can download it from the official website of JMP 11 software and register with your academic email address. You can use it for teaching or learning purposes only.
          • -
          • Use the free trial extension. If you have already used the free trial version of JMP 11 software for 30 days and you still need more time to evaluate it, you can request a free trial extension from the official website of JMP 11 software. You can extend your trial period for another 30 days or more, depending on your situation.
          • -
          - -

          By using these alternatives, you can avoid cracking JMP 11 software with a serial number and still use the statistical discovery software legally, safely, reliably, and ethically.

          - -

          How to Buy JMP 11 Software with a Discount

          - -

          If you have decided to buy JMP 11 software after using the free or trial versions, you might want to save some money and get a discount. There are some ways to buy JMP 11 software with a discount. Here are some of them:

          - -
            -
          • Buy it during a promotion. You can buy JMP 11 software during a promotion that is offered by the software developer or authorized resellers from time to time. You can check the official website of JMP 11 software or follow their social media accounts to get notified of any promotions.
          • -
          • Buy it with a coupon code. You can buy JMP 11 software with a coupon code that is provided by some websites or platforms that partner with the software developer or authorized resellers. You can search online for some coupon codes or sign up for some newsletters or rewards programs that offer coupon codes.
          • -
          • Buy it with a group purchase. You can buy JMP 11 software with a group purchase that is organized by some websites or platforms that negotiate with the software developer or authorized resellers for a lower price per unit. You can join a group purchase or create your own group purchase and invite other people who want to buy JMP 11 software.
          • -
          - -

          By using these methods, you can buy JMP 11 software with a discount and save some money while getting the full version of the statistical discovery software.

          -

          Summary

          - -

          JMP 11 serial number crack software is a bad idea that you should avoid at all costs. Cracking JMP 11 software with a serial number is illegal, unsafe, unreliable, and unethical. It could bring you legal trouble, malware infections, software problems, or reputational damage. You should not crack JMP 11 software with a serial number; instead, look for other alternatives that are legal, safe, reliable, and ethical.

          - -

          In this article, we have explained why cracking JMP 11 software with a serial number is not recommended, and what are the risks and consequences of doing so. We have also provided some alternatives to cracking JMP 11 software with a serial number, such as using the free online version, the free academic version, the free trial extension, or buying JMP 11 software with a discount. By using these alternatives, you can use JMP 11 software legally, safely, reliably, and ethically.

          - -

          We hope this article has helped you understand why cracking JMP 11 software with a serial number is not recommended, and what the alternatives to doing so are. If you have any questions or comments, please feel free to leave them below.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Tees Maar Khan ((INSTALL)) Full Movie 720p Download Movies.md b/spaces/scedlatioru/img-to-music/example/Tees Maar Khan ((INSTALL)) Full Movie 720p Download Movies.md deleted file mode 100644 index b9bce90252550663db391c98309cc145ab777ab2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Tees Maar Khan ((INSTALL)) Full Movie 720p Download Movies.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Tees Maar Khan Full Movie 720p Download Movies


          Download File https://gohhs.com/2uEzmR



          -
          -With its soundtrack, and with the expressions and jokes of TEES MAAR KHAN, Aatish Kapor, and the gorgeous Anya (TMK's wife), you are sure to watch a very entertaining film ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/segestic/HealthBlock/api/main.py b/spaces/segestic/HealthBlock/api/main.py deleted file mode 100644 index 9c1160e473a19d8edd1f9aa29c386bc4798ddced..0000000000000000000000000000000000000000 --- a/spaces/segestic/HealthBlock/api/main.py +++ /dev/null @@ -1,54 +0,0 @@ -from fastapi import FastAPI -from pydantic import BaseModel, ValidationError, validator - -from pytezos import pytezos - - -app = FastAPI() - - - -pytezos = pytezos.using(shell = 'https://rpc.tzkt.io/ghostnet', key='edsk3MrRkoidY2SjEgufvi44orvyjxgZoy4LhaJNTNcddWykW6SssL') -contract = pytezos.contract('KT1KvCVKiZhkPG8s9CCoxW3r135phk2HhZUV') - - -# Define the request body model -class Patient(BaseModel): - name: str - email: str - contact: int - age: int - gender: str - number: int - #is_offer: bool = None - - - @validator('name') - def name_must_not_be_empty(cls, v): - if v == ' ': - raise ValueError('must contain a space') - return v - - @validator('email') - def name_must_contain_at(cls, v): - if '@' not in v: - raise ValueError('must contain @ ') - return v - -@app.get("/") -async def read_root(): - return {"Hello": "World"} - - - - -# Define the POST route handler -@app.post("/patient/") -async def create_patient(patient: Patient): - # patient object is validated automatically by FastAPI - patient_dict = patient.dict() - contract.addUser(email = patient.email, name = patient.name, age = patient.age, gender = patient.gender, number = patient.number).with_amount(0).as_transaction().fill().sign().inject() - return {"message": f"User {patient.name} with email {patient.email} created successfully"} - - - diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py deleted file mode 100644 index c8340c723fad8e07e2fc62daaa3912487498814b..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py +++ /dev/null @@ -1,221 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Backbone modules. -""" - -from typing import Dict, List - -import torch -import torch.nn.functional as F -import torchvision -from torch import nn -from torchvision.models._utils import IntermediateLayerGetter - -from groundingdino.util.misc import NestedTensor, clean_state_dict, is_main_process - -from .position_encoding import build_position_encoding -from .swin_transformer import build_swin_transformer - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed. 
- - Copy-paste from torchvision.misc.ops with added eps before rqsrt, - without which any other models than torchvision.models.resnet[18,34,50,101] - produce nans. - """ - - def __init__(self, n): - super(FrozenBatchNorm2d, self).__init__() - self.register_buffer("weight", torch.ones(n)) - self.register_buffer("bias", torch.zeros(n)) - self.register_buffer("running_mean", torch.zeros(n)) - self.register_buffer("running_var", torch.ones(n)) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super(FrozenBatchNorm2d, self)._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x): - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - eps = 1e-5 - scale = w * (rv + eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - -class BackboneBase(nn.Module): - def __init__( - self, - backbone: nn.Module, - train_backbone: bool, - num_channels: int, - return_interm_indices: list, - ): - super().__init__() - for name, parameter in backbone.named_parameters(): - if ( - not train_backbone - or "layer2" not in name - and "layer3" not in name - and "layer4" not in name - ): - parameter.requires_grad_(False) - - return_layers = {} - for idx, layer_index in enumerate(return_interm_indices): - return_layers.update( - {"layer{}".format(5 - len(return_interm_indices) + idx): "{}".format(layer_index)} - ) - - # if len: - # if use_stage1_feature: - # return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"} - # else: - # return_layers = {"layer2": "0", "layer3": "1", "layer4": "2"} - # else: - # return_layers = {'layer4': "0"} - self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) - self.num_channels = num_channels - - def forward(self, tensor_list: NestedTensor): - xs = self.body(tensor_list.tensors) - out: Dict[str, NestedTensor] = {} - for name, x in xs.items(): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - out[name] = NestedTensor(x, mask) - # import ipdb; ipdb.set_trace() - return out - - -class Backbone(BackboneBase): - """ResNet backbone with frozen BatchNorm.""" - - def __init__( - self, - name: str, - train_backbone: bool, - dilation: bool, - return_interm_indices: list, - batch_norm=FrozenBatchNorm2d, - ): - if name in ["resnet18", "resnet34", "resnet50", "resnet101"]: - backbone = getattr(torchvision.models, name)( - replace_stride_with_dilation=[False, False, dilation], - pretrained=is_main_process(), - norm_layer=batch_norm, - ) - else: - raise NotImplementedError("Why you can get here with name {}".format(name)) - # num_channels = 512 if name in ('resnet18', 'resnet34') else 2048 - assert name not in ("resnet18", "resnet34"), "Only resnet50 and resnet101 are available." 
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - num_channels_all = [256, 512, 1024, 2048] - num_channels = num_channels_all[4 - len(return_interm_indices) :] - super().__init__(backbone, train_backbone, num_channels, return_interm_indices) - - -class Joiner(nn.Sequential): - def __init__(self, backbone, position_embedding): - super().__init__(backbone, position_embedding) - - def forward(self, tensor_list: NestedTensor): - xs = self[0](tensor_list) - out: List[NestedTensor] = [] - pos = [] - for name, x in xs.items(): - out.append(x) - # position encoding - pos.append(self[1](x).to(x.tensors.dtype)) - - return out, pos - - -def build_backbone(args): - """ - Useful args: - - backbone: backbone name - - lr_backbone: - - dilation - - return_interm_indices: available: [0,1,2,3], [1,2,3], [3] - - backbone_freeze_keywords: - - use_checkpoint: for swin only for now - - """ - position_embedding = build_position_encoding(args) - train_backbone = True - if not train_backbone: - raise ValueError("Please set lr_backbone > 0") - return_interm_indices = args.return_interm_indices - assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - args.backbone_freeze_keywords - use_checkpoint = getattr(args, "use_checkpoint", False) - - if args.backbone in ["resnet50", "resnet101"]: - backbone = Backbone( - args.backbone, - train_backbone, - args.dilation, - return_interm_indices, - batch_norm=FrozenBatchNorm2d, - ) - bb_num_channels = backbone.num_channels - elif args.backbone in [ - "swin_T_224_1k", - "swin_B_224_22k", - "swin_B_384_22k", - "swin_L_224_22k", - "swin_L_384_22k", - ]: - pretrain_img_size = int(args.backbone.split("_")[-2]) - backbone = build_swin_transformer( - args.backbone, - pretrain_img_size=pretrain_img_size, - out_indices=tuple(return_interm_indices), - dilation=False, - use_checkpoint=use_checkpoint, - ) - - bb_num_channels = backbone.num_features[4 - len(return_interm_indices) :] - else: - raise NotImplementedError("Unknown backbone {}".format(args.backbone)) - - assert len(bb_num_channels) == len( - return_interm_indices - ), f"len(bb_num_channels) {len(bb_num_channels)} != len(return_interm_indices) {len(return_interm_indices)}" - - model = Joiner(backbone, position_embedding) - model.num_channels = bb_num_channels - assert isinstance( - bb_num_channels, List - ), "bb_num_channels is expected to be a List but {}".format(type(bb_num_channels)) - # import ipdb; ipdb.set_trace() - return model diff --git a/spaces/sgxz/bingo/src/components/ui/select.tsx b/spaces/sgxz/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - 
{children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/shabnam91/Sanskrit-TTS/commons.py b/spaces/shabnam91/Sanskrit-TTS/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 
0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/sharmaanupam/eigenvectors/utils.py b/spaces/sharmaanupam/eigenvectors/utils.py deleted file mode 100644 index 96d5b6d4e534fc0c78eb9b392088e052d6bd11f1..0000000000000000000000000000000000000000 --- a/spaces/sharmaanupam/eigenvectors/utils.py +++ /dev/null @@ -1,134 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt - -def getSquareY(x): - if x==-1 or x == 1: - return 0 - else: - return 1 - -getSquareYVectorised = np.vectorize(getSquareY) - -def getCircle(x): - return np.sqrt(1 - np.square(x)) - -def transform(x,y,t): - points = np.array([x, y]) - result = t @ points - return result[0,:], result[1,:] - -def plotGridLines(xlim,ylim,t,color,label,linewidth): - for i in range(xlim[0]-20,xlim[1]+21): - x = [i,i] - y = [ylim[0]-20,ylim[1]+20] - x,y = transform(x,y,t) - if i == xlim[0]-20: - plt.plot(x,y, color=color,linestyle='dashed',linewidth=linewidth,label=label) - else: - plt.plot(x,y, color=color,linestyle='dashed',linewidth=linewidth) - for i in range(ylim[0]-20,ylim[1]+21): - y = [i,i] - x = [xlim[0]-20,xlim[1]+20] - x,y = transform(x,y,t) - plt.plot(x,y, color=color,linestyle='dashed',linewidth=linewidth) - -def discriminant(t): - return t[0,0]**2 - 2*t[1,1]*t[0,0] + t[1,1]**2 + 4*t[0,1]*t[1,0] - -def getBatman(s=2): - X = [] - Y = [] - - # lower - x = np.linspace(-4, 4, 1600) - y = np.zeros((0)) - for px in x: - y = np.append(y,abs(px/2)- 0.09137*px**2 + np.sqrt(1-(abs(abs(px)-2)-1)**2) -3) - X.append(x/s) - Y.append(y/s) - - # lower left - x = np.linspace(-7., -4, 300) - y = np.zeros((0)) - for px in x: - y = np.append(y, -3*np.sqrt(-(px/7)**2+1)) - X.append(x/s) - Y.append(y/s) - - # lower right - x = np.linspace(4, 7, 300) - y = np.zeros((0)) - for px in x: - y = np.append(y, -3*np.sqrt(-(px/7)**2+1)) - X.append(x/s) - Y.append(y/s) - - # top left - x = np.linspace(-7, -2.95, 300) - y = np.zeros((0)) - for px in x: - y = np.append(y, 3*np.sqrt(-(px/7)**2+1)) - X.append(x/s) - Y.append(y/s) - - # top right - x = np.linspace(2.95, 7, 300) - y = np.zeros((0)) - for px in x: - y = np.append(y, 3*np.sqrt(-(px/7)**2+1)) - X.append(x/s) - Y.append(y/s) - - # left ear left - x = np.linspace(-1, -.77, 2) - y = np.zeros((0)) - for px in x: - y = np.append(y, 9-8*abs(px)) - X.append(x/s) - Y.append(y/s) - - # right ear right - x = np.linspace(.77, 1, 2) - y = np.zeros((0)) - for px in x: - y = np.append(y, 9-8*abs(px)) - X.append(x/s) - Y.append(y/s) - - # mid - x = np.linspace(-.43, .43, 100) - y = np.zeros((0)) - for px in x: - y = np.append(y,2) - X.append(x/s) - Y.append(y/s) - - x = np.linspace(-2.91, -1, 100) - y = np.zeros((0)) - for px in x: - y = np.append(y, 1.5 - .5*abs(px) - 1.89736*(np.sqrt(3-px**2+2*abs(px))-2) ) - X.append(x/s) - Y.append(y/s) - - x = np.linspace(1, 2.91, 100) - y = np.zeros((0)) - for px in x: - y = np.append(y, 1.5 - .5*abs(px) - 1.89736*(np.sqrt(3-px**2+2*abs(px))-2) ) - X.append(x/s) - Y.append(y/s) - - x = np.linspace(-.7,-.43, 10) - y = np.zeros((0)) - for px in x: - y = np.append(y, 3*abs(px)+.75) - X.append(x/s) - Y.append(y/s) - - x = np.linspace(.43, .7, 10) - y = np.zeros((0)) - for px in x: - y = np.append(y, 3*abs(px)+.75) - X.append(x/s) - Y.append(y/s) - - return X, Y \ No newline at end of file diff --git a/spaces/shencc/gpt/request_llm/bridge_jittorllms.py b/spaces/shencc/gpt/request_llm/bridge_jittorllms.py deleted file mode 100644 index 28d0a7aab745cca4a1cdaded3c4803319000b5f0..0000000000000000000000000000000000000000 --- 
a/spaces/shencc/gpt/request_llm/bridge_jittorllms.py +++ /dev/null @@ -1,153 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "jittorllms尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.jittorllms_model = None - self.info = "" - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import jittor - from .jittorllms.models import get_model - self.info = "依赖检测通过" - self.success = True - except: - self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt`"+\ - r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" - self.success = False - - def ready(self): - return self.jittorllms_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - def load_model(): - import types - try: - if self.jittorllms_model is None: - device, = get_conf('LOCAL_MODEL_DEVICE') - from .jittorllms.models import get_model - # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"] - args_dict = {'model': 'chatglm', 'RUN_DEVICE':'cpu'} - self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict)) - except: - self.child.send('[Local Message] Call jittorllms fail 不能正常加载jittorllms的参数。') - raise RuntimeError("不能正常加载jittorllms的参数!") - - load_model() - - # 进入任务等待状态 - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - # 收到消息,开始请求 - try: - for response, history in self.jittorllms_model.run_web_demo(kwargs['query'], kwargs['history']): - self.child.send(response) - except: - self.child.send('[Local Message] Call jittorllms fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global glm_handle -glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info - if not glm_handle.success: - error = glm_handle.info - glm_handle = None - raise RuntimeError(error) - - # jittorllms 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - history_feedin.append(["What can I do?", sys_prompt]) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - 
raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not glm_handle.success: - glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - history_feedin.append(["What can I do?", system_prompt] ) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收jittorllms的回复 - response = "[Local Message]: 等待jittorllms响应中 ..." - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待jittorllms响应中 ...": - response = "[Local Message]: jittorllms响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/extract_feature_print.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/extract_feature_print.py deleted file mode 100644 index 987daabb9cf8a3259f673dc9bd7d24a15dadfde6..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/extract_feature_print.py +++ /dev/null @@ -1,104 +0,0 @@ -import os, sys, traceback - -# device=sys.argv[1] -n_part = int(sys.argv[2]) -i_part = int(sys.argv[3]) -if len(sys.argv) == 5: - exp_dir = sys.argv[4] -else: - i_gpu = sys.argv[4] - exp_dir = sys.argv[5] - os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu) - -import torch -import torch.nn.functional as F -import soundfile as sf -import numpy as np -from fairseq import checkpoint_utils - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -printt(sys.argv) -model_path = "hubert_base.pt" - -printt(exp_dir) -wavPath = "%s/1_16k_wavs" % exp_dir -outPath = "%s/3_feature256" % exp_dir -os.makedirs(outPath, exist_ok=True) - - -# wave must be 16k, hop_size=320 -def readwave(wav_path, normalize=False): - wav, sr = sf.read(wav_path) - assert sr == 16000 - feats = torch.from_numpy(wav).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - if normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - feats = feats.view(1, -1) - return feats - - -# HuBERT model -printt("load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) 
-model = models[0] -model = model.to(device) -printt("move model to %s" % device) -if device != "cpu": - model = model.half() -model.eval() - -todo = sorted(list(os.listdir(wavPath)))[i_part::n_part] -n = max(1, len(todo) // 10) # 最多打印十条 -if len(todo) == 0: - printt("no-feature-todo") -else: - printt("all-feature-%s" % len(todo)) - for idx, file in enumerate(todo): - try: - if file.endswith(".wav"): - wav_path = "%s/%s" % (wavPath, file) - out_path = "%s/%s" % (outPath, file.replace("wav", "npy")) - - if os.path.exists(out_path): - continue - - feats = readwave(wav_path, normalize=saved_cfg.task.normalize) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device) - if device != "cpu" - else feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - feats = feats.squeeze(0).float().cpu().numpy() - if np.isnan(feats).sum() == 0: - np.save(out_path, feats, allow_pickle=False) - else: - printt("%s-contains nan" % file) - if idx % n == 0: - printt("now-%s,all-%s,%s,%s" % (len(todo), idx, file, feats.shape)) - except: - printt(traceback.format_exc()) - printt("all-feature-done") diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/modeling/pixel_decoder/ops/src/vision.cpp b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/modeling/pixel_decoder/ops/src/vision.cpp deleted file mode 100644 index 4a08821e0121a77556aa7a263ec8ebfa928b13b6..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/modeling/pixel_decoder/ops/src/vision.cpp +++ /dev/null @@ -1,21 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -/*! -* Copyright (c) Facebook, Inc. and its affiliates. -* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR -*/ - -#include "ms_deform_attn.h" - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward"); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward"); -} diff --git a/spaces/shriarul5273/Yolov7/LICENSE.md b/spaces/shriarul5273/Yolov7/LICENSE.md deleted file mode 100644 index f288702d2fa16d3cdf0035b15a9fcbc552cd88e7..0000000000000000000000000000000000000000 --- a/spaces/shriarul5273/Yolov7/LICENSE.md +++ /dev/null @@ -1,674 +0,0 @@ - GNU GENERAL PUBLIC LICENSE - Version 3, 29 June 2007 - - Copyright (C) 2007 Free Software Foundation, Inc. - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - Preamble - - The GNU General Public License is a free, copyleft license for -software and other kinds of works. - - The licenses for most software and other practical works are designed -to take away your freedom to share and change the works. 
By contrast, -the GNU General Public License is intended to guarantee your freedom to -share and change all versions of a program--to make sure it remains free -software for all its users. We, the Free Software Foundation, use the -GNU General Public License for most of our software; it applies also to -any other work released this way by its authors. You can apply it to -your programs, too. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -them if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs, and that you know you can do these things. - - To protect your rights, we need to prevent others from denying you -these rights or asking you to surrender the rights. Therefore, you have -certain responsibilities if you distribute copies of the software, or if -you modify it: responsibilities to respect the freedom of others. - - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must pass on to the recipients the same -freedoms that you received. You must make sure that they, too, receive -or can get the source code. And you must show them these terms so they -know their rights. - - Developers that use the GNU GPL protect your rights with two steps: -(1) assert copyright on the software, and (2) offer you this License -giving you legal permission to copy, distribute and/or modify it. - - For the developers' and authors' protection, the GPL clearly explains -that there is no warranty for this free software. For both users' and -authors' sake, the GPL requires that modified versions be marked as -changed, so that their problems will not be attributed erroneously to -authors of previous versions. - - Some devices are designed to deny users access to install or run -modified versions of the software inside them, although the manufacturer -can do so. This is fundamentally incompatible with the aim of -protecting users' freedom to change the software. The systematic -pattern of such abuse occurs in the area of products for individuals to -use, which is precisely where it is most unacceptable. Therefore, we -have designed this version of the GPL to prohibit the practice for those -products. If such problems arise substantially in other domains, we -stand ready to extend this provision to those domains in future versions -of the GPL, as needed to protect the freedom of users. - - Finally, every program is threatened constantly by software patents. -States should not allow patents to restrict development and use of -software on general-purpose computers, but in those that do, we wish to -avoid the special danger that patents applied to a free program could -make it effectively proprietary. To prevent this, the GPL assures that -patents cannot be used to render the program non-free. - - The precise terms and conditions for copying, distribution and -modification follow. - - TERMS AND CONDITIONS - - 0. Definitions. - - "This License" refers to version 3 of the GNU General Public License. - - "Copyright" also means copyright-like laws that apply to other kinds of -works, such as semiconductor masks. - - "The Program" refers to any copyrightable work licensed under this -License. Each licensee is addressed as "you". "Licensees" and -"recipients" may be individuals or organizations. 
- - To "modify" a work means to copy from or adapt all or part of the work -in a fashion requiring copyright permission, other than the making of an -exact copy. The resulting work is called a "modified version" of the -earlier work or a work "based on" the earlier work. - - A "covered work" means either the unmodified Program or a work based -on the Program. - - To "propagate" a work means to do anything with it that, without -permission, would make you directly or secondarily liable for -infringement under applicable copyright law, except executing it on a -computer or modifying a private copy. Propagation includes copying, -distribution (with or without modification), making available to the -public, and in some countries other activities as well. - - To "convey" a work means any kind of propagation that enables other -parties to make or receive copies. Mere interaction with a user through -a computer network, with no transfer of a copy, is not conveying. - - An interactive user interface displays "Appropriate Legal Notices" -to the extent that it includes a convenient and prominently visible -feature that (1) displays an appropriate copyright notice, and (2) -tells the user that there is no warranty for the work (except to the -extent that warranties are provided), that licensees may convey the -work under this License, and how to view a copy of this License. If -the interface presents a list of user commands or options, such as a -menu, a prominent item in the list meets this criterion. - - 1. Source Code. - - The "source code" for a work means the preferred form of the work -for making modifications to it. "Object code" means any non-source -form of a work. - - A "Standard Interface" means an interface that either is an official -standard defined by a recognized standards body, or, in the case of -interfaces specified for a particular programming language, one that -is widely used among developers working in that language. - - The "System Libraries" of an executable work include anything, other -than the work as a whole, that (a) is included in the normal form of -packaging a Major Component, but which is not part of that Major -Component, and (b) serves only to enable use of the work with that -Major Component, or to implement a Standard Interface for which an -implementation is available to the public in source code form. A -"Major Component", in this context, means a major essential component -(kernel, window system, and so on) of the specific operating system -(if any) on which the executable work runs, or a compiler used to -produce the work, or an object code interpreter used to run it. - - The "Corresponding Source" for a work in object code form means all -the source code needed to generate, install, and (for an executable -work) run the object code and to modify the work, including scripts to -control those activities. However, it does not include the work's -System Libraries, or general-purpose tools or generally available free -programs which are used unmodified in performing those activities but -which are not part of the work. For example, Corresponding Source -includes interface definition files associated with source files for -the work, and the source code for shared libraries and dynamically -linked subprograms that the work is specifically designed to require, -such as by intimate data communication or control flow between those -subprograms and other parts of the work. 
- - The Corresponding Source need not include anything that users -can regenerate automatically from other parts of the Corresponding -Source. - - The Corresponding Source for a work in source code form is that -same work. - - 2. Basic Permissions. - - All rights granted under this License are granted for the term of -copyright on the Program, and are irrevocable provided the stated -conditions are met. This License explicitly affirms your unlimited -permission to run the unmodified Program. The output from running a -covered work is covered by this License only if the output, given its -content, constitutes a covered work. This License acknowledges your -rights of fair use or other equivalent, as provided by copyright law. - - You may make, run and propagate covered works that you do not -convey, without conditions so long as your license otherwise remains -in force. You may convey covered works to others for the sole purpose -of having them make modifications exclusively for you, or provide you -with facilities for running those works, provided that you comply with -the terms of this License in conveying all material for which you do -not control copyright. Those thus making or running the covered works -for you must do so exclusively on your behalf, under your direction -and control, on terms that prohibit them from making any copies of -your copyrighted material outside their relationship with you. - - Conveying under any other circumstances is permitted solely under -the conditions stated below. Sublicensing is not allowed; section 10 -makes it unnecessary. - - 3. Protecting Users' Legal Rights From Anti-Circumvention Law. - - No covered work shall be deemed part of an effective technological -measure under any applicable law fulfilling obligations under article -11 of the WIPO copyright treaty adopted on 20 December 1996, or -similar laws prohibiting or restricting circumvention of such -measures. - - When you convey a covered work, you waive any legal power to forbid -circumvention of technological measures to the extent such circumvention -is effected by exercising rights under this License with respect to -the covered work, and you disclaim any intention to limit operation or -modification of the work as a means of enforcing, against the work's -users, your or third parties' legal rights to forbid circumvention of -technological measures. - - 4. Conveying Verbatim Copies. - - You may convey verbatim copies of the Program's source code as you -receive it, in any medium, provided that you conspicuously and -appropriately publish on each copy an appropriate copyright notice; -keep intact all notices stating that this License and any -non-permissive terms added in accord with section 7 apply to the code; -keep intact all notices of the absence of any warranty; and give all -recipients a copy of this License along with the Program. - - You may charge any price or no price for each copy that you convey, -and you may offer support or warranty protection for a fee. - - 5. Conveying Modified Source Versions. - - You may convey a work based on the Program, or the modifications to -produce it from the Program, in the form of source code under the -terms of section 4, provided that you also meet all of these conditions: - - a) The work must carry prominent notices stating that you modified - it, and giving a relevant date. - - b) The work must carry prominent notices stating that it is - released under this License and any conditions added under section - 7. 
This requirement modifies the requirement in section 4 to - "keep intact all notices". - - c) You must license the entire work, as a whole, under this - License to anyone who comes into possession of a copy. This - License will therefore apply, along with any applicable section 7 - additional terms, to the whole of the work, and all its parts, - regardless of how they are packaged. This License gives no - permission to license the work in any other way, but it does not - invalidate such permission if you have separately received it. - - d) If the work has interactive user interfaces, each must display - Appropriate Legal Notices; however, if the Program has interactive - interfaces that do not display Appropriate Legal Notices, your - work need not make them do so. - - A compilation of a covered work with other separate and independent -works, which are not by their nature extensions of the covered work, -and which are not combined with it such as to form a larger program, -in or on a volume of a storage or distribution medium, is called an -"aggregate" if the compilation and its resulting copyright are not -used to limit the access or legal rights of the compilation's users -beyond what the individual works permit. Inclusion of a covered work -in an aggregate does not cause this License to apply to the other -parts of the aggregate. - - 6. Conveying Non-Source Forms. - - You may convey a covered work in object code form under the terms -of sections 4 and 5, provided that you also convey the -machine-readable Corresponding Source under the terms of this License, -in one of these ways: - - a) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by the - Corresponding Source fixed on a durable physical medium - customarily used for software interchange. - - b) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by a - written offer, valid for at least three years and valid for as - long as you offer spare parts or customer support for that product - model, to give anyone who possesses the object code either (1) a - copy of the Corresponding Source for all the software in the - product that is covered by this License, on a durable physical - medium customarily used for software interchange, for a price no - more than your reasonable cost of physically performing this - conveying of source, or (2) access to copy the - Corresponding Source from a network server at no charge. - - c) Convey individual copies of the object code with a copy of the - written offer to provide the Corresponding Source. This - alternative is allowed only occasionally and noncommercially, and - only if you received the object code with such an offer, in accord - with subsection 6b. - - d) Convey the object code by offering access from a designated - place (gratis or for a charge), and offer equivalent access to the - Corresponding Source in the same way through the same place at no - further charge. You need not require recipients to copy the - Corresponding Source along with the object code. If the place to - copy the object code is a network server, the Corresponding Source - may be on a different server (operated by you or a third party) - that supports equivalent copying facilities, provided you maintain - clear directions next to the object code saying where to find the - Corresponding Source. 
Regardless of what server hosts the - Corresponding Source, you remain obligated to ensure that it is - available for as long as needed to satisfy these requirements. - - e) Convey the object code using peer-to-peer transmission, provided - you inform other peers where the object code and Corresponding - Source of the work are being offered to the general public at no - charge under subsection 6d. - - A separable portion of the object code, whose source code is excluded -from the Corresponding Source as a System Library, need not be -included in conveying the object code work. - - A "User Product" is either (1) a "consumer product", which means any -tangible personal property which is normally used for personal, family, -or household purposes, or (2) anything designed or sold for incorporation -into a dwelling. In determining whether a product is a consumer product, -doubtful cases shall be resolved in favor of coverage. For a particular -product received by a particular user, "normally used" refers to a -typical or common use of that class of product, regardless of the status -of the particular user or of the way in which the particular user -actually uses, or expects or is expected to use, the product. A product -is a consumer product regardless of whether the product has substantial -commercial, industrial or non-consumer uses, unless such uses represent -the only significant mode of use of the product. - - "Installation Information" for a User Product means any methods, -procedures, authorization keys, or other information required to install -and execute modified versions of a covered work in that User Product from -a modified version of its Corresponding Source. The information must -suffice to ensure that the continued functioning of the modified object -code is in no case prevented or interfered with solely because -modification has been made. - - If you convey an object code work under this section in, or with, or -specifically for use in, a User Product, and the conveying occurs as -part of a transaction in which the right of possession and use of the -User Product is transferred to the recipient in perpetuity or for a -fixed term (regardless of how the transaction is characterized), the -Corresponding Source conveyed under this section must be accompanied -by the Installation Information. But this requirement does not apply -if neither you nor any third party retains the ability to install -modified object code on the User Product (for example, the work has -been installed in ROM). - - The requirement to provide Installation Information does not include a -requirement to continue to provide support service, warranty, or updates -for a work that has been modified or installed by the recipient, or for -the User Product in which it has been modified or installed. Access to a -network may be denied when the modification itself materially and -adversely affects the operation of the network or violates the rules and -protocols for communication across the network. - - Corresponding Source conveyed, and Installation Information provided, -in accord with this section must be in a format that is publicly -documented (and with an implementation available to the public in -source code form), and must require no special password or key for -unpacking, reading or copying. - - 7. Additional Terms. - - "Additional permissions" are terms that supplement the terms of this -License by making exceptions from one or more of its conditions. 
-Additional permissions that are applicable to the entire Program shall -be treated as though they were included in this License, to the extent -that they are valid under applicable law. If additional permissions -apply only to part of the Program, that part may be used separately -under those permissions, but the entire Program remains governed by -this License without regard to the additional permissions. - - When you convey a copy of a covered work, you may at your option -remove any additional permissions from that copy, or from any part of -it. (Additional permissions may be written to require their own -removal in certain cases when you modify the work.) You may place -additional permissions on material, added by you to a covered work, -for which you have or can give appropriate copyright permission. - - Notwithstanding any other provision of this License, for material you -add to a covered work, you may (if authorized by the copyright holders of -that material) supplement the terms of this License with terms: - - a) Disclaiming warranty or limiting liability differently from the - terms of sections 15 and 16 of this License; or - - b) Requiring preservation of specified reasonable legal notices or - author attributions in that material or in the Appropriate Legal - Notices displayed by works containing it; or - - c) Prohibiting misrepresentation of the origin of that material, or - requiring that modified versions of such material be marked in - reasonable ways as different from the original version; or - - d) Limiting the use for publicity purposes of names of licensors or - authors of the material; or - - e) Declining to grant rights under trademark law for use of some - trade names, trademarks, or service marks; or - - f) Requiring indemnification of licensors and authors of that - material by anyone who conveys the material (or modified versions of - it) with contractual assumptions of liability to the recipient, for - any liability that these contractual assumptions directly impose on - those licensors and authors. - - All other non-permissive additional terms are considered "further -restrictions" within the meaning of section 10. If the Program as you -received it, or any part of it, contains a notice stating that it is -governed by this License along with a term that is a further -restriction, you may remove that term. If a license document contains -a further restriction but permits relicensing or conveying under this -License, you may add to a covered work material governed by the terms -of that license document, provided that the further restriction does -not survive such relicensing or conveying. - - If you add terms to a covered work in accord with this section, you -must place, in the relevant source files, a statement of the -additional terms that apply to those files, or a notice indicating -where to find the applicable terms. - - Additional terms, permissive or non-permissive, may be stated in the -form of a separately written license, or stated as exceptions; -the above requirements apply either way. - - 8. Termination. - - You may not propagate or modify a covered work except as expressly -provided under this License. Any attempt otherwise to propagate or -modify it is void, and will automatically terminate your rights under -this License (including any patent licenses granted under the third -paragraph of section 11). 
- - However, if you cease all violation of this License, then your -license from a particular copyright holder is reinstated (a) -provisionally, unless and until the copyright holder explicitly and -finally terminates your license, and (b) permanently, if the copyright -holder fails to notify you of the violation by some reasonable means -prior to 60 days after the cessation. - - Moreover, your license from a particular copyright holder is -reinstated permanently if the copyright holder notifies you of the -violation by some reasonable means, this is the first time you have -received notice of violation of this License (for any work) from that -copyright holder, and you cure the violation prior to 30 days after -your receipt of the notice. - - Termination of your rights under this section does not terminate the -licenses of parties who have received copies or rights from you under -this License. If your rights have been terminated and not permanently -reinstated, you do not qualify to receive new licenses for the same -material under section 10. - - 9. Acceptance Not Required for Having Copies. - - You are not required to accept this License in order to receive or -run a copy of the Program. Ancillary propagation of a covered work -occurring solely as a consequence of using peer-to-peer transmission -to receive a copy likewise does not require acceptance. However, -nothing other than this License grants you permission to propagate or -modify any covered work. These actions infringe copyright if you do -not accept this License. Therefore, by modifying or propagating a -covered work, you indicate your acceptance of this License to do so. - - 10. Automatic Licensing of Downstream Recipients. - - Each time you convey a covered work, the recipient automatically -receives a license from the original licensors, to run, modify and -propagate that work, subject to this License. You are not responsible -for enforcing compliance by third parties with this License. - - An "entity transaction" is a transaction transferring control of an -organization, or substantially all assets of one, or subdividing an -organization, or merging organizations. If propagation of a covered -work results from an entity transaction, each party to that -transaction who receives a copy of the work also receives whatever -licenses to the work the party's predecessor in interest had or could -give under the previous paragraph, plus a right to possession of the -Corresponding Source of the work from the predecessor in interest, if -the predecessor has it or can get it with reasonable efforts. - - You may not impose any further restrictions on the exercise of the -rights granted or affirmed under this License. For example, you may -not impose a license fee, royalty, or other charge for exercise of -rights granted under this License, and you may not initiate litigation -(including a cross-claim or counterclaim in a lawsuit) alleging that -any patent claim is infringed by making, using, selling, offering for -sale, or importing the Program or any portion of it. - - 11. Patents. - - A "contributor" is a copyright holder who authorizes use under this -License of the Program or a work on which the Program is based. The -work thus licensed is called the contributor's "contributor version". 
- - A contributor's "essential patent claims" are all patent claims -owned or controlled by the contributor, whether already acquired or -hereafter acquired, that would be infringed by some manner, permitted -by this License, of making, using, or selling its contributor version, -but do not include claims that would be infringed only as a -consequence of further modification of the contributor version. For -purposes of this definition, "control" includes the right to grant -patent sublicenses in a manner consistent with the requirements of -this License. - - Each contributor grants you a non-exclusive, worldwide, royalty-free -patent license under the contributor's essential patent claims, to -make, use, sell, offer for sale, import and otherwise run, modify and -propagate the contents of its contributor version. - - In the following three paragraphs, a "patent license" is any express -agreement or commitment, however denominated, not to enforce a patent -(such as an express permission to practice a patent or covenant not to -sue for patent infringement). To "grant" such a patent license to a -party means to make such an agreement or commitment not to enforce a -patent against the party. - - If you convey a covered work, knowingly relying on a patent license, -and the Corresponding Source of the work is not available for anyone -to copy, free of charge and under the terms of this License, through a -publicly available network server or other readily accessible means, -then you must either (1) cause the Corresponding Source to be so -available, or (2) arrange to deprive yourself of the benefit of the -patent license for this particular work, or (3) arrange, in a manner -consistent with the requirements of this License, to extend the patent -license to downstream recipients. "Knowingly relying" means you have -actual knowledge that, but for the patent license, your conveying the -covered work in a country, or your recipient's use of the covered work -in a country, would infringe one or more identifiable patents in that -country that you have reason to believe are valid. - - If, pursuant to or in connection with a single transaction or -arrangement, you convey, or propagate by procuring conveyance of, a -covered work, and grant a patent license to some of the parties -receiving the covered work authorizing them to use, propagate, modify -or convey a specific copy of the covered work, then the patent license -you grant is automatically extended to all recipients of the covered -work and works based on it. - - A patent license is "discriminatory" if it does not include within -the scope of its coverage, prohibits the exercise of, or is -conditioned on the non-exercise of one or more of the rights that are -specifically granted under this License. You may not convey a covered -work if you are a party to an arrangement with a third party that is -in the business of distributing software, under which you make payment -to the third party based on the extent of your activity of conveying -the work, and under which the third party grants, to any of the -parties who would receive the covered work from you, a discriminatory -patent license (a) in connection with copies of the covered work -conveyed by you (or copies made from those copies), or (b) primarily -for and in connection with specific products or compilations that -contain the covered work, unless you entered into that arrangement, -or that patent license was granted, prior to 28 March 2007. 
- - Nothing in this License shall be construed as excluding or limiting -any implied license or other defenses to infringement that may -otherwise be available to you under applicable patent law. - - 12. No Surrender of Others' Freedom. - - If conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot convey a -covered work so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you may -not convey it at all. For example, if you agree to terms that obligate you -to collect a royalty for further conveying from those to whom you convey -the Program, the only way you could satisfy both those terms and this -License would be to refrain entirely from conveying the Program. - - 13. Use with the GNU Affero General Public License. - - Notwithstanding any other provision of this License, you have -permission to link or combine any covered work with a work licensed -under version 3 of the GNU Affero General Public License into a single -combined work, and to convey the resulting work. The terms of this -License will continue to apply to the part which is the covered work, -but the special requirements of the GNU Affero General Public License, -section 13, concerning interaction through a network will apply to the -combination as such. - - 14. Revised Versions of this License. - - The Free Software Foundation may publish revised and/or new versions of -the GNU General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - - Each version is given a distinguishing version number. If the -Program specifies that a certain numbered version of the GNU General -Public License "or any later version" applies to it, you have the -option of following the terms and conditions either of that numbered -version or of any later version published by the Free Software -Foundation. If the Program does not specify a version number of the -GNU General Public License, you may choose any version ever published -by the Free Software Foundation. - - If the Program specifies that a proxy can decide which future -versions of the GNU General Public License can be used, that proxy's -public statement of acceptance of a version permanently authorizes you -to choose that version for the Program. - - Later license versions may give you additional or different -permissions. However, no additional obligations are imposed on any -author or copyright holder as a result of your choosing to follow a -later version. - - 15. Disclaimer of Warranty. - - THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY -APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT -HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY -OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, -THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM -IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF -ALL NECESSARY SERVICING, REPAIR OR CORRECTION. - - 16. Limitation of Liability. 
- - IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS -THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY -GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE -USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF -DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD -PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), -EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF -SUCH DAMAGES. - - 17. Interpretation of Sections 15 and 16. - - If the disclaimer of warranty and limitation of liability provided -above cannot be given local legal effect according to their terms, -reviewing courts shall apply local law that most closely approximates -an absolute waiver of all civil liability in connection with the -Program, unless a warranty or assumption of liability accompanies a -copy of the Program in return for a fee. - - END OF TERMS AND CONDITIONS - - How to Apply These Terms to Your New Programs - - If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these terms. - - To do so, attach the following notices to the program. It is safest -to attach them to the start of each source file to most effectively -state the exclusion of warranty; and each file should have at least -the "copyright" line and a pointer to where the full notice is found. - - - Copyright (C) - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 3 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see . - -Also add information on how to contact you by electronic and paper mail. - - If the program does terminal interaction, make it output a short -notice like this when it starts in an interactive mode: - - Copyright (C) - This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. - This is free software, and you are welcome to redistribute it - under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the appropriate -parts of the General Public License. Of course, your program's commands -might be different; for a GUI interface, you would use an "about box". - - You should also get your employer (if you work as a programmer) or school, -if any, to sign a "copyright disclaimer" for the program, if necessary. -For more information on this, and how to apply and follow the GNU GPL, see -. - - The GNU General Public License does not permit incorporating your program -into proprietary programs. If your program is a subroutine library, you -may consider it more useful to permit linking proprietary applications with -the library. If this is what you want to do, use the GNU Lesser General -Public License instead of this License. But first, please read -. 
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Instagram Pink APK for Free and Enjoy the New Look of Your Favorite Social Media.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Instagram Pink APK for Free and Enjoy the New Look of Your Favorite Social Media.md deleted file mode 100644 index de10a0b19fe2d844716693ee73aade14e04ec000..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Instagram Pink APK for Free and Enjoy the New Look of Your Favorite Social Media.md +++ /dev/null @@ -1,117 +0,0 @@ - -

          Instagram Pink APK Download: What Is It and How to Get It

          -

          Introduction

          -

          Instagram is one of the most popular social media platforms in the world, with over 1 billion users who love to create and share photos, stories, reels, and videos with their friends and followers. But what if you want to spice up your Instagram experience with a different color scheme and some extra features? That's where Instagram Pink APK comes in.

          -

          instagram pink apk download


          Download · https://ssurll.com/2uNXBg



          -

          What is Instagram Pink APK?

          -

          Instagram Pink APK is a modified version of the original Instagram app that has a pink theme and icons, as well as some additional filters, stickers, privacy, and security options. It is not an official app from Meta (the company that owns Instagram), but rather a third-party app that is developed by independent developers who want to offer a customized and enhanced version of Instagram for their users.

          -

          Why do people want to download Instagram Pink APK?

          -

          There are many reasons why people might want to download Instagram Pink APK. Some of them are:

          -
            -
          • They like the color pink and want to have a more feminine and cute look for their Instagram app.
          • They want to have more creative tools and options for their photos, stories, reels, and videos, such as additional filters, stickers, emojis, and fonts.
          • They want to have more privacy and security features for their Instagram account, such as hiding their online status, disabling read receipts, blocking unwanted messages, and deleting sent messages.
          • They want to have a faster and smoother performance for their Instagram app, as well as compatibility with different devices and Android versions.
          -

          How to download Instagram Pink APK safely and securely?

          -

          Since Instagram Pink APK is not an official app from Meta, it is not available on the Google Play Store or the App Store. Therefore, you need to download it from a reliable and trustworthy source that offers a safe and secure download link. Here are some steps you need to follow to download Instagram Pink APK safely and securely:

          -
            -
          1. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
          2. Find a reputable website that offers the latest version of Instagram Pink APK.
          3. Download the Instagram Pink APK file from the website, and scan it with antivirus software before opening it. If the site publishes a checksum, you can also verify it, as in the sketch below.
          4. Install the Instagram Pink APK file on your device by tapping on it and following the instructions. Once the installation is complete, you can launch the app and enjoy its features.
          -
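          Scanning the file is a good first step, and when the download page also lists a checksum you can confirm the file arrived intact before installing it. Below is a minimal Python sketch of that check; the file name and expected hash are placeholders, not values from any real release.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name and expected hash -- substitute the values
# published by the site you actually downloaded from.
apk_path = Path("instagram-pink.apk")
expected = "0" * 64

if sha256_of(apk_path) == expected:
    print("Checksum matches; the download is intact.")
else:
    print("Checksum mismatch; do not install this file.")
```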

          Features of Instagram Pink APK

          -

          Instagram Pink APK has many features that make it different from the original Instagram app. Some of them are:

          -

          -

          Customized pink theme and icons

          -

          The most obvious feature of Instagram Pink APK is its pink theme and icons. The app has a beautiful and elegant design that gives it a feminine and cute vibe. The app also has different shades of pink for different sections, such as light pink for the feed, dark pink for the stories, and hot pink for the reels. The icons are also changed to match the pink theme, such as a heart for the like button, a flower for the comment button, and a star for the bookmark button.

          -

          Additional filters and stickers

          -

          Another feature of Instagram Pink APK is its additional filters and stickers. The app has more than 100 filters and stickers that you can use to enhance your photos, stories, reels, and videos. You can find filters and stickers for different occasions, moods, themes, and styles, such as birthday, love, glam, retro, and more. You can also adjust the intensity and opacity of the filters and stickers to suit your preference.

          -

          Privacy and security options

          -

          One more feature of Instagram Pink APK is its privacy and security options. The app has some options that are not available on the original Instagram app, such as hiding your online status, disabling read receipts, blocking unwanted messages, and deleting sent messages. These options can help you protect your privacy and security on Instagram, as well as avoid unwanted interactions and situations.

          -

          Compatibility and performance

          -

          The last feature of Instagram Pink APK is its compatibility and performance. The app is compatible with different devices and Android versions, from Android 4.0 to Android 11. The app also has a faster and smoother performance than the original Instagram app, as it has less ads and bugs. The app also consumes less battery and data than the original Instagram app.

          -

          Pros and cons of Instagram Pink APK

          -

          Instagram Pink APK has its pros and cons that you need to consider before downloading it. Here are some of them:

          -

          Pros

          -

          Aesthetic and fun design

          -

          The first pro of Instagram Pink APK is its aesthetic and fun design. The app has a pink theme and icons that give it a feminine and cute look. The app also has different shades of pink for different sections, which make it more attractive and appealing. The app also has more than 100 filters and stickers that you can use to make your photos, stories, reels, and videos more creative and fun.

          -

          More creative tools and options

          -

          The second pro of Instagram Pink APK is its more creative tools and options. The app has additional filters, stickers, emojis, and fonts that you can use to enhance your photos, stories, reels, and videos. You can also adjust the intensity and opacity of the filters and stickers to suit your preference. You can also use different layouts, collages, frames, backgrounds, and effects to make your photos, stories, reels, and videos more unique and interesting.

          -

          Enhanced privacy and security features

          -

          The third pro of Instagram Pink APK is its enhanced privacy and security features. The app has some options that are not available on the original Instagram app, such as hiding your online status, disabling read receipts, blocking unwanted messages, and deleting sent messages. These options can help you protect your privacy and security on Instagram, as well as avoid unwanted interactions and situations.

          -

          Cons

          -

          Not an official app from Meta

          -

          The first con of Instagram Pink APK is that it is not an official app from Meta (the company that owns Instagram). This means that the app is not authorized or endorsed by Meta, and it may not follow the terms of service or policies of Meta. This also means that the app may not receive regular updates or support from Meta, and it may not have all the features or functions of the original Instagram app.

          -

          Risk of malware and viruses

          -

          The second con of Instagram Pink APK is that it carries a risk of malware and viruses. Since the app is not available on the Google Play Store or the App Store, you need to download it from a third-party source that may not be reliable or trustworthy. This means that the app may contain malware or viruses that can harm your device or compromise your data. Therefore, you need to be careful when downloading the app from an unknown source, and scan it with antivirus software before installing it.

          -

          Potential legal issues and account bans

          -

          The third con of Instagram Pink APK is that it may have potential legal issues or account bans. Since the app is a modified version of the original Instagram app that violates the terms of service or policies of Meta (the company that owns Instagram), you may face legal issues or account bans if you use it. Meta may detect that you are using an unauthorized or modified version of their app, and they may take action against you or your account. Therefore, you need to be aware of the risks and the consequences of using Instagram Pink APK.

          -

          Conclusion

          -

          Instagram Pink APK is a modified version of the original Instagram app that has a pink theme and icons, as well as some additional filters, stickers, privacy, and security options. It is not an official app from Meta (the company that owns Instagram), but rather a third-party app that is developed by independent developers who want to offer a customized and enhanced version of Instagram for their users.

          -

          Instagram Pink APK has its pros and cons that you need to consider before downloading it. Some of the pros are its aesthetic and fun design, its more creative tools and options, and its enhanced privacy and security features. Some of the cons are that it is not an official app from Meta, it may have a risk of malware and viruses, and it may have potential legal issues or account bans.

          -

          If you want to download Instagram Pink APK, you need to follow some steps to do it safely and securely. You need to enable the installation of apps from unknown sources on your device, find a reputable website that offers the latest version of Instagram Pink APK, download the file from the website, scan it with antivirus software, and install it on your device.

          -

          However, we do not recommend or endorse downloading or using Instagram Pink APK, as it may violate the terms of service or policies of Meta (the company that owns Instagram), and it may harm your device or compromise your data. We suggest that you use the original Instagram app from the Google Play Store or the App Store, as it is more reliable, secure, and updated.

          -

          FAQs

          -

          What is Instagram Pink APK?

          -

          Instagram Pink APK is a modified version of the original Instagram app that has a pink theme and icons, as well as some additional filters, stickers, privacy, and security options.

          -

          Is Instagram Pink APK safe?

          -

          Instagram Pink APK may not be safe, as it is not an official app from Meta (the company that owns Instagram), and it may contain malware or viruses that can harm your device or compromise your data.

          -

          How can I download Instagram Pink APK?

          -

          You can download Instagram Pink APK from a third-party source that offers a safe and secure download link. You need to enable the installation of apps from unknown sources on your device, find a reputable website that offers the latest version of Instagram Pink APK, download the file from the website, scan it with antivirus software, and install it on your device.

          -

          What are the benefits of using Instagram Pink APK?

          -

          Some of the benefits of using Instagram Pink APK are its aesthetic and fun design, its more creative tools and options, and its enhanced privacy and security features.

          -

          What are the drawbacks of using Instagram Pink APK?

          -

          Some of the drawbacks of using Instagram Pink APK are that it is not an official app from Meta (the company that owns Instagram), it may have a risk of malware and viruses, and it may have potential legal issues or account bans.

          -
          -
          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Started with FINAL FANTASY XI A Guide for New Players.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Started with FINAL FANTASY XI A Guide for New Players.md deleted file mode 100644 index e16d704997b5328fd44525d66eef5c87dcf29d1f..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Started with FINAL FANTASY XI A Guide for New Players.md +++ /dev/null @@ -1,153 +0,0 @@ - -

          How to Download and Play Final Fantasy XI in 2021

          -

          Final Fantasy XI is one of the longest-running and most beloved MMORPGs of all time. Released in 2002, it has been continuously updated with new content and features for nearly two decades, attracting millions of players from around the world. If you are looking for a rich and immersive online RPG experience that combines the classic Final Fantasy elements with a vast and diverse world, then Final Fantasy XI is the game for you.

          -

          final fantasy xi download


          Download ····· https://ssurll.com/2uNYrr



          -

          But how can you download and play Final Fantasy XI in 2021? Is it still compatible with modern PCs? Is it still worth paying a monthly subscription fee? And how can you get started as a new player? In this article, we will answer all these questions and more, so that you can enjoy this legendary game without any hassle.

          -

          First of all, let's take a look at the system requirements and the free trial option for Final Fantasy XI. You will need a Windows PC with at least an Intel Core i3 processor, 2 GB of RAM, and an NVIDIA GeForce GT 740 graphics card. You will also need a broadband internet connection and about 15 GB of free disk space. You can check your PC specifications using the PlayOnline System Information tool.
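          If you want a quick sanity check on the disk-space requirement before you start downloading, a few lines of Python using only the standard library will do; this is just a sketch that checks whatever drive the script runs from against the roughly 15 GB figure above.

```python
import shutil

REQUIRED_GIB = 15  # approximate install size quoted above

free_gib = shutil.disk_usage(".").free / 2**30
if free_gib >= REQUIRED_GIB:
    print(f"{free_gib:.1f} GiB free -- enough room for the client.")
else:
    print(f"Only {free_gib:.1f} GiB free -- clear at least {REQUIRED_GIB} GiB first.")
```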

          -

          If you are not sure whether you want to commit to a monthly subscription fee, you can try Final Fantasy XI for free for 30 days by selecting the free trial option from the Square Enix store. You will need to register a Square Enix account and a PlayOnline ID, which we will explain later. The free trial will give you access to all the content up to level 50, including the original game and the first three expansions.

          -

          How to Download Final Fantasy XI for Windows

          -

          Once you have decided to play Final Fantasy XI, you will need to download the client software from the official website. The client software includes the PlayOnline Viewer, which is an application that allows you to manage your account, launch the game, chat with other players, and access various services. It also includes Final Fantasy XI itself, along with all its expansions and updates.

          -

          The download process is fairly simple, but it may take some time depending on your internet speed. Here are the steps you need to follow:

          -
            -
          1. Visit the official website and click on "Download The Ultimate Collection Seekers Edition".
          2. You will see five links for five different files. Download all of them to the same folder; the files are compressed and have a total size of about 10 GB. (A quick way to confirm every part arrived is sketched after this list.)
          3. Once all the files have downloaded, unzip them using a program like WinRAR or 7-Zip. You will get a folder called "FINAL FANTASY XI Ultimate Collection Seekers Edition (EU)" or something similar.
          4. Open the folder and double-click the file called "setup.exe". This launches the installer for the client software.
          5. Follow the instructions on the screen and accept the terms and conditions. You will need to choose a destination folder for the installation; the default location is "C:\Program Files (x86)\PlayOnline".
          6. The installer will install DirectX, the PlayOnline Viewer, and Final Fantasy XI on your PC. This may take several minutes, so please be patient.
          -
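          As mentioned in step 2, all five parts must be in the same folder before you unzip anything. The sketch below only checks that each expected file exists and reports the combined size; the download folder and part names are hypothetical and should be replaced with whatever the official site actually serves.

```python
from pathlib import Path

# Hypothetical folder and part names -- substitute the files you actually downloaded.
download_dir = Path.home() / "Downloads" / "ffxi"
expected_parts = [f"FFXI_Seekers_Part{i}.zip" for i in range(1, 6)]

missing = [name for name in expected_parts if not (download_dir / name).exists()]
if missing:
    print("Still missing:", ", ".join(missing))
else:
    total_bytes = sum((download_dir / name).stat().st_size for name in expected_parts)
    print(f"All {len(expected_parts)} parts present, {total_bytes / 2**30:.1f} GiB in total.")
```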

          Congratulations, you have successfully downloaded and installed Final Fantasy XI on your PC. You are now ready to create an account and start playing.

          -

          How to Create an Account and Start Playing Final Fantasy XI

          -

          Before you can play Final Fantasy XI, you need to create an account and register your game. This will allow you to access the game servers, create your character, and enjoy the game content. You will also need to pay a monthly subscription fee of $12.95 USD, plus $1 USD for each additional character slot. You can cancel your subscription at any time from the Square Enix account management system.
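          As a quick worked example of how that fee structure adds up, here is a one-function sketch based only on the figures quoted above ($12.95 base plus $1 per additional character slot); actual billing, taxes, and regional pricing may differ.

```python
def monthly_fee(extra_character_slots: int = 0) -> float:
    """Monthly cost in USD: $12.95 base plus $1 per additional character slot."""
    return 12.95 + 1.00 * extra_character_slots

# Example: one main character plus two extra character slots.
print(f"${monthly_fee(extra_character_slots=2):.2f} per month")  # $14.95 per month
```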

          -


          -

          The account creation and registration process is a bit complicated, but we will guide you through it step by step. Here are the steps you need to follow:

          -
            -
          1. Register a Square Enix account if you don't have one already. This is a universal account that you can use for other Square Enix games and services. You will need to provide your email address, password, date of birth, country, and security question.
          2. After you have registered your Square Enix account, you will receive an email with a verification link. Click on the link to verify your email address and activate your account.
          3. Log in to your Square Enix account and click on "Select Service" from the menu. Then, click on "PlayOnline / FINAL FANTASY XI".
          4. You will be asked to create a PlayOnline ID, which is a unique identifier for your Final Fantasy XI account, along with a PlayOnline password and a PlayOnline handle name. The PlayOnline ID and password are case-sensitive and must be 4-8 characters long (see the sketch after this list for a quick pre-check). The PlayOnline handle name is a nickname that other players can see in the PlayOnline Viewer.
          5. After you have created your PlayOnline ID, you will receive an email with a registration code for Final Fantasy XI. You will need this code to register your game and access the game servers.
          6. Go back to your Square Enix account and click on "Add New Service Account". Enter the registration code that you received in the email and click on "Next".
          7. You will see a list of service accounts that you can choose from. Each service account corresponds to one character slot in Final Fantasy XI. You can have up to 16 service accounts per Square Enix account, but you will need to pay an extra fee for each one. For now, just select the first service account and click on "Next".
          8. You will see a confirmation screen with your service account details. Click on "Confirm" to complete the registration process.
          -
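          Since the sign-up form will reject an ID or password that breaks the length rule in step 4, a tiny pre-check can save a round trip. The 4-8 character limit comes from the steps above; restricting the characters to letters and digits is an assumption of this sketch, not an official rule.

```python
import re

# 4-8 characters per the steps above; the letters-and-digits restriction is an assumption.
CANDIDATE_PATTERN = re.compile(r"[A-Za-z0-9]{4,8}")

def looks_valid(candidate: str) -> bool:
    """Rough pre-check for a PlayOnline ID or password before submitting the form."""
    return CANDIDATE_PATTERN.fullmatch(candidate) is not None

for sample in ["Vana", "adventurer2021", "Tarutaru"]:
    verdict = "ok" if looks_valid(sample) else "rejected (must be 4-8 characters)"
    print(f"{sample}: {verdict}")
```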

          You have now created an account and registered your game for Final Fantasy XI. You are almost ready to play, but first you need to launch the game and log in with your PlayOnline ID.

          -
            -
          1. Go to the folder where you installed Final Fantasy XI and double-click on the file called "pol.exe". This will launch the PlayOnline Viewer, which is the launcher for Final Fantasy XI.
          2. The first time you launch the PlayOnline Viewer, it will check for updates and download them automatically. This may take some time depending on your internet speed and the size of the updates. Please wait until the update process is complete.
          3. After the updates are done, you will see the main menu of the PlayOnline Viewer. Click on "Play" and then select "FINAL FANTASY XI" from the list of games.
          4. You will be prompted to enter your PlayOnline ID and password that you created earlier. Enter them correctly and click on "Log In".
          5. You will see a screen with your service account information. Click on "Play" to proceed.
          6. You will see a screen with your character slot information. If this is your first time playing, you will have one empty slot available. Click on "Create Character" to create your character.
          -

          You have now logged in to Final Fantasy XI and are ready to create your character and choose a world server.

          -

          How to Create Your Character and Choose a World Server

          -

          Final Fantasy XI allows you to customize your character's appearance, race, gender, and name. You can also choose a starting job, which determines your basic abilities and equipment. You can change your job later in the game, so don't worry too much about your initial choice.

          -

          Here are the steps you need to follow to create your character:

          -
            -
          1. You will see a screen with five options for your character's race: Hume, Elvaan, Tarutaru, Mithra, and Galka. Each race has its own strengths and weaknesses, as well as unique looks and personalities. You can click on each option to see a preview of how your character will look. You can also click on the "Details" button to see more information about each race's traits and history.
          2. After you have selected your race, you will see a screen with two options for your character's gender: male or female. Some races have gender restrictions, such as Mithra being female-only and Galka being male-only. You can click on each option to see a preview of how your character will look.
          3. After you have selected your gender, you will see a screen with four options for your character's face type: A, B, C, and D. Each face type has different features, such as eye color, hair style, and facial expression. You can click on each option to see a preview of how your character will look.
          4. After you have selected your face type, you will see a screen with four options for your character's hair color: 1, 2, 3, and 4. Each hair color has different shades and highlights. You can click on each option to see a preview of how your character will look.
          5. After you have selected your hair color, you will see a screen with four options for your character's name: random, keyboard, list, and history. You can choose a random name generated by the game, type in a name using the keyboard, select a name from a list of pre-made names, or use a name that you have used before in Final Fantasy XI. Your name must be between 3 and 15 characters long and must not contain any offensive or inappropriate words.
          6. After you have entered or selected your name, you will see a confirmation screen with your character's details. You can click on the "Back" button to go back and change any of the options. You can also click on the "Voice" button to hear how your character will sound in the game. If you are satisfied with your character, click on the "OK" button to proceed.
          -

          You have now created your character and are ready to choose a world server.

          -
            -
          1. You will see a screen with a list of world servers that you can join. Each world server has its own population, economy, community, and events. You can click on each server name to see more information about it, such as the number of players online, the average level of players, the current weather, and the current conquest status.
          2. You can also click on the "Recommend" button to see which world server is recommended for you based on your location and language preference. The recommended server will have a green icon next to it.
          3. After you have selected a world server, you will see a confirmation screen with your character's details and the server name. You can click on the "Back" button to go back and change your server choice. If you are satisfied with your selection, click on the "OK" button to proceed.
          -

          You have now chosen a world server and are ready to enter the game world.

          -

          How to Enjoy Final Fantasy XI as a New Player

          -

          Final Fantasy XI is a vast and complex game that offers countless possibilities for adventure and exploration. As a new player, you may feel overwhelmed by the amount of information and options available to you. Don't worry, we are here to help you get started and enjoy the game at your own pace.

          -

          Here are some tips on how to play Final Fantasy XI as a new player:

          -

          Tips on Choosing a Job, Leveling Up, and Joining a Party

          -
            -
          • Your job determines your role and abilities in combat. There are six basic jobs that you can choose from at the beginning of the game: Warrior, Monk, White Mage, Black Mage, Red Mage, and Thief. Each job has its own strengths, weaknesses, and specialties. For example, Warriors are good at dealing physical damage and taking hits, White Mages are good at healing and supporting allies, and Thieves are good at stealing items and evading attacks. You can learn more about each job from the in-game help menu or from the official website.
          • You can level up your job by gaining experience points (EXP) from defeating enemies, completing quests, and joining events. You will need a certain amount of EXP to reach the next level, which will increase your stats and unlock new abilities. You can check your current level, EXP, and progress from the main menu or from the status screen.
          • You can also change your job or learn a new job at any time in the game. To do so, you will need to visit a Mog House, which is a personal room that you can access from any residential area. You will also need to complete a quest for each new job that you want to learn. You can have up to 22 jobs in total, including the six basic jobs and 16 advanced jobs that you can unlock later in the game.
          • You can also combine two jobs to create a customized class. You can choose one job as your main job and another job as your subjob. Your subjob will be half the level of your main job and will grant you some of its abilities and traits (see the short sketch after this list). For example, you can combine Warrior as your main job and White Mage as your subjob to create a Paladin-like class that can heal and tank. You can change your subjob at any Mog House.
          • One of the best ways to level up and enjoy Final Fantasy XI is to join a party with other players. A party is a group of up to six players that can cooperate and communicate with each other in combat. A party can have different roles and strategies depending on the composition of jobs and the type of enemies. For example, a typical party may consist of a tank, a healer, a damage dealer, a support, a puller, and a backup.
          • To join a party, you can either use the Seek Party feature from the main menu or the Party Recruitment feature from the PlayOnline Viewer. The Seek Party feature will allow you to search for existing parties that match your criteria, such as level range, job preference, and location. The Party Recruitment feature will allow you to create or join a party advertisement that other players can see and respond to.
          • You can also invite other players to join your party by targeting them and selecting Invite to Party from the menu. You can only invite players who are not already in a party and who are within your level range. You can also accept or decline party invitations from other players by selecting Reply to Party Invite from the menu.
          -

          Tips on Exploring the World, Completing Quests, and Joining Events

          -
            -
          • Final Fantasy XI has a huge and diverse world that you can explore and discover. There are over 100 different areas that you can visit, each with its own landscape, climate, wildlife, culture, and history. You can travel between areas by walking, riding a chocobo (a giant bird), using an airship (a flying vehicle), or using a teleport (a magic spell).
          • -
          • Some areas are safe and peaceful, while others are dangerous and hostile. You will encounter various enemies that will attack you on sight or when provoked. You will also encounter various NPCs (non-player characters) that will offer you quests, information, services, or items.
          • -
          • Quests are optional tasks that you can complete for rewards, such as EXP, gil (the currency of Final Fantasy XI), items, fame (your reputation in each city), or access to new areas or features. Quests usually involve talking to NPCs, delivering items, defeating enemies, or solving puzzles. You can accept quests from NPCs that have an exclamation mark above their heads.
          • -
          • There are hundreds of quests that you can do in Final Fantasy XI, each with its own story and difficulty. Some quests are standalone, while others are part of a series or a storyline. Some quests are repeatable, while others are one-time only. Some quests are exclusive to certain jobs or races, while others are available to everyone.
          • -
          • You can check your current quests from the Quests menu or from the Quest Log screen. You can also cancel quests that you don't want to do anymore by selecting Cancel Quest from the menu.
          • -
          • Events are special activities that you can join for fun and challenge. They usually involve fighting powerful enemies or competing against other players in various modes, and they may have specific rules, objectives, and rewards. Events may be seasonal, such as the Starlight Celebration or the Egg Hunt, or permanent, such as Dynamis or Ambuscade.
          • -
          • You can join events by talking to NPCs that have an event icon above their heads, or by using the Event menu or the Event Log screen. You can also check the official website or the PlayOnline Viewer for the latest news and announcements about events.
          • -
          -

          Tips on Communicating with Other Players, Using Chat Commands, and Joining a Linkshell

          -
            -
          • One of the best aspects of Final Fantasy XI is the social interaction with other players. You can communicate with other players using various chat modes, such as say, shout, party, linkshell, tell, or emote. You can also use chat commands to perform various actions, such as checking your status, changing your equipment, or casting a spell.
          • -
          • To chat with other players, you can use the keyboard or the software keyboard. You can also use a microphone and a headset to use the voice chat feature. To use the keyboard, you need to press the Enter key to activate the chat window, type your message, and press Enter again to send it. To use the software keyboard, you need to press the Insert key to activate it, select the letters or symbols with the arrow keys, and press Enter to send your message.
          • -
          • To change your chat mode, you need to use a prefix before your message. For example, if you want to shout your message to everyone in your area, type "/shout" followed by your message; if you want to send a private message to another player, type "/tell" followed by their name and your message. You can also use shortcuts such as "/s" for say, "/p" for party, "/l" for linkshell, and "/e" for emote. A short sample chat session using these prefixes is shown after this list.
          • -
          • To use chat commands, you need to type a slash followed by the command name and any parameters. For example, if you want to check your HP and MP, you need to type "/check". If you want to change your weapon, you need to type "/equip" followed by the slot name and the item name. You can also use shortcuts such as "/hp" for check HP, "/mp" for check MP, and "/ws" for weapon skill.
          • -
          • You can also join a linkshell, which is a group of players that share a common chat channel and a common interest. A linkshell can be used for socializing, organizing parties or events, or providing support and advice. To join a linkshell, you need to receive an invitation from another player who owns or belongs to that linkshell. You will also need a linkpearl, which is an item that allows you to access the linkshell chat.
          • -
          • To use a linkpearl, you need to equip it in your inventory and then select Linkshell from the menu. You will see a list of linkshells that you belong to and their members. You can also create your own linkshell by buying a linkshell from a Linkshell Distributor NPC and inviting other players to join it.
          • -
          -
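
          For example, a quick chat session using the prefixes described above might look like the lines below. The player name and the messages are made-up placeholders for illustration only, not actual game data:

          /shout Looking for two more members for a level 20 party!
          /tell Ayame Hello, would you like to join our party?
          /p Pulling the next monster now, get ready.
          /l Thanks for the linkpearl, everyone!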

          Conclusion

          -

          Final Fantasy XI is an amazing game that offers endless hours of fun and adventure. Whether you want to explore a vast and beautiful world, fight against epic enemies and bosses, complete challenging quests and events, or make friends and socialize with other players, Final Fantasy XI has something for everyone.

          -

          We hope that this article has helped you download and play Final Fantasy XI in 2021. If you have any questions or problems, please feel free to contact us or visit the official website or the PlayOnline Viewer for more information and support.

          -

          Thank you for reading and happy adventuring!

          -

          FAQs

          -

          Q1: How much does Final Fantasy XI cost per month?

          -

          A1: Final Fantasy XI costs $12.95 USD per month for one service account (one character slot). You can add up to 15 additional service accounts (character slots) for $1 USD each per month. You can also buy optional items and services from the Mog Station using Crysta (the virtual currency of Square Enix) or real money.

          -

          Q2: Can I play Final Fantasy XI on other platforms besides Windows?

          -

          A2: Yes, you can play Final Fantasy XI on PlayStation 2 and Xbox 360 as well. However, these platforms are no longer supported by Square Enix and may have compatibility issues with newer updates and features. You will also need a hard disk drive (HDD) attachment for the PlayStation 2, or an Xbox 360 hard drive, to play Final Fantasy XI on those consoles. You can use either a controller or a keyboard and mouse to play the game on any platform.

          -

          Q3: What is the difference between Final Fantasy XI and Final Fantasy XIV?

          -

          A3: Final Fantasy XI and Final Fantasy XIV are both MMORPGs set in the Final Fantasy universe, but they have different stories, settings, gameplay, and features. Final Fantasy XI is the older game, released in 2002, and has a more traditional and complex gameplay style. Final Fantasy XIV is the newer game, released in 2010, and has a more modern and streamlined gameplay style. Both games have their own merits and appeal to different types of players.

          -

          Q4: How can I change my job or learn new skills in Final Fantasy XI?

          -

          A4: You can change your job or learn a new job at any time in the game by visiting a Mog House and selecting Job Change from the menu. You will need to complete a quest for each new job that you want to learn, which you can find from various NPCs in the game world. You can also combine two jobs to create a customized class by selecting Subjob from the menu. You can learn new skills by leveling up your job, using certain items, or completing certain quests.

          -

          Q5: How can I get help or report a problem in Final Fantasy XI?

          -

          A5: You can get help or report a problem in Final Fantasy XI by using the Help Desk feature from the PlayOnline Viewer or the main menu. You can also contact the customer support team by phone, email, or chat from the official website or the PlayOnline Viewer. You can also visit the official forums or the community websites for more tips and advice from other players.

          -
          -
          \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/classification/finetune_classification_bart-base_ocnli.sh b/spaces/skf15963/summary/fengshen/examples/classification/finetune_classification_bart-base_ocnli.sh deleted file mode 100644 index 6ef4886993eb2c1c8938180c940ece9bb156b73f..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/classification/finetune_classification_bart-base_ocnli.sh +++ /dev/null @@ -1,143 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=ocnli-bart-base # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=2 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:2 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id) - - -export TORCH_EXTENSIONS_DIR=/cognitive_comp/gaoxinyu/cache/torch_extendsions - -MODEL_NAME=bart-base - -TASK=ocnli -TEXTA_NAME=sentence1 -TEXTB_NAME=sentence2 -LABEL_NAME=label -ID_NAME=id - - -BATCH_SIZE=8 -VAL_BATCH_SIZE=32 -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -DATA_DIR=/cognitive_comp/yangping/data/ChineseCLUE_DATA/${TASK}_public/ -PRETRAINED_MODEL_PATH=/cognitive_comp/gaoxinyu/pretrained_model/$MODEL_NAME/ - - -CHECKPOINT_PATH=/cognitive_comp/gaoxinyu/ln_model/finetune/ckpt/$TASK/ -DEFAULT_ROOT_DIR=/cognitive_comp/gaoxinyu/ln_model/finetune/${MODEL_NAME}-${TASK} -OUTPUT_PATH=/cognitive_comp/gaoxinyu/ln_model/finetune/${MODEL_NAME}-${TASK}/predict.json - - -config_json="./ds_config.${MODEL_NAME}.json" -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -# reduce_bucket_size: hidden_size*hidden_size -# stage3_prefetch_bucket_size: 0.9 * hidden_size * hidden_size -# stage3_param_persistence_threshold: 10 * hidden_size - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $BATCH_SIZE, - "steps_per_print": 100, - "gradient_clipping": 0.1, - "zero_optimization": { - "stage": ${ZERO_STAGE} - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-7, - "eps": 1e-12, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 1e-8, - "warmup_max_lr": 1e-6, - "warmup_num_steps": 400, - "warmup_type": "linear" - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": false, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json - - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.json \ - --valid_data dev.json \ - --test_data test.json \ - --train_batchsize $BATCH_SIZE \ - --valid_batchsize $VAL_BATCH_SIZE \ - --max_length 128 \ - --texta_name $TEXTA_NAME \ - --textb_name $TEXTB_NAME \ - --label_name $LABEL_NAME \ - --id_name $ID_NAME \ - " - -MODEL_ARGS="\ - --learning_rate 1e-6 \ - --weight_decay 1e-2 \ - --warmup 0.01 \ - --num_labels 3 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 200 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " - - -TRAINER_ARGS="\ - --max_epochs 67 \ - --gpus 2 \ - --num_nodes 1 \ - --strategy $STRATEGY \ - 
--gradient_clip_val 1.0 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 1.0 \ - --default_root_dir $DEFAULT_ROOT_DIR \ - " - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -DOCKER_PATH=/cognitive_comp/gaoxinyu/docker/pytorch21_06_py3_docker_image_v2.sif -SCRIPT_PATH=/cognitive_comp/gaoxinyu/github/Fengshenbang-LM/fengshen/examples/classification/finetune_classification.py - -# python3 $SCRIPT_PATH $options -srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $DOCKER_PATH python3 $SCRIPT_PATH $options - diff --git a/spaces/skf15963/summary/fengshen/examples/stable_diffusion_chinese/img/test.md b/spaces/skf15963/summary/fengshen/examples/stable_diffusion_chinese/img/test.md deleted file mode 100644 index c8b1b42336f7e2c26898b5b99441d324f2de5412..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/stable_diffusion_chinese/img/test.md +++ /dev/null @@ -1 +0,0 @@ -delete diff --git a/spaces/sklearn-docs/Plotting-Cross-Validated-Predictions/README.md b/spaces/sklearn-docs/Plotting-Cross-Validated-Predictions/README.md deleted file mode 100644 index bad2f61c0bbe7c480070b00899df66833f68985d..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Plotting-Cross-Validated-Predictions/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Plotting Cross Validated Predictions -emoji: 👁 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sklearn-docs/plot-k-means-digits/app.py b/spaces/sklearn-docs/plot-k-means-digits/app.py deleted file mode 100644 index 146fca61d04acedfa8a34e61581977525081e666..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/plot-k-means-digits/app.py +++ /dev/null @@ -1,185 +0,0 @@ -import gradio as gr -import pandas as pd -import numpy as np -from time import time -from sklearn import metrics -from sklearn.pipeline import make_pipeline -from sklearn.preprocessing import StandardScaler -from sklearn.cluster import KMeans -from sklearn.decomposition import PCA -from huggingface_hub import login -from datasets import load_dataset -import matplotlib.pyplot as plt - - -# https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py - -def display_plot(data, n_digits): - reduced_data = PCA(n_components=2).fit_transform(data) - kmeans = KMeans(init="k-means++", n_clusters=n_digits, n_init=4) - kmeans.fit(reduced_data) - - # Step size of the mesh. Decrease to increase the quality of the VQ. - h = 0.02 # point in the mesh [x_min, x_max]x[y_min, y_max]. - - # Plot the decision boundary. For that, we will assign a color to each - x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 - y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 - xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) - - # Obtain labels for each point in mesh. Use last trained model. 
- Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) - - # Put the result into a color plot - Z = Z.reshape(xx.shape) - - fig = plt.figure() - - plt.clf() - plt.imshow( - Z, - interpolation="nearest", - extent=(xx.min(), xx.max(), yy.min(), yy.max()), - cmap=plt.cm.Paired, - aspect="auto", - origin="lower", - ) - - plt.plot(reduced_data[:, 0], reduced_data[:, 1], "k.", markersize=2) - # Plot the centroids as a white X - centroids = kmeans.cluster_centers_ - plt.scatter( - centroids[:, 0], - centroids[:, 1], - marker="x", - s=169, - linewidths=3, - color="w", - zorder=10, - ) - plt.title( - "K-means clustering on the digits dataset (PCA-reduced data)\n" - "Centroids are marked with white cross" - ) - plt.xlim(x_min, x_max) - plt.ylim(y_min, y_max) - plt.xticks(()) - plt.yticks(()) - return fig - -def bench_k_means(kmeans, name, data, labels): - """Benchmark to evaluate the KMeans initialization methods. - - Parameters - ---------- - kmeans : KMeans instance - A :class:`~sklearn.cluster.KMeans` instance with the initialization - already set. - name : str - Name given to the strategy. It will be used to show the results in a - table. - data : ndarray of shape (n_samples, n_features) - The data to cluster. - labels : ndarray of shape (n_samples,) - The labels used to compute the clustering metrics which requires some - supervision. - """ - t0 = time() - estimator = make_pipeline(StandardScaler(), kmeans).fit(data) - fit_time = time() - t0 - results = [name, fit_time, estimator[-1].inertia_] - - # Define the metrics which require only the true labels and estimator - # labels - clustering_metrics = [ - metrics.homogeneity_score, - metrics.completeness_score, - metrics.v_measure_score, - metrics.adjusted_rand_score, - metrics.adjusted_mutual_info_score, - ] - results += [m(labels, estimator[-1].labels_) for m in clustering_metrics] - - # The silhouette score requires the full dataset - results += [ - metrics.silhouette_score( - data, - estimator[-1].labels_, - metric="euclidean", - sample_size=300, - ) - ] - - return results - -title = "A demo of K-Means clustering on the handwritten digits data" -def do_submit(kmeans_n_digit,random_n_digit, pca_n_digit): - # Load the dataset - dataset = load_dataset("sklearn-docs/digits", header=None) - # convert dataset to pandas - df = dataset['train'].to_pandas() - data = df.iloc[:, :64] - labels = df.iloc[:, 64] - - kmeans = KMeans(init="k-means++", n_clusters=int(kmeans_n_digit), n_init=4, random_state=0) - results = bench_k_means(kmeans=kmeans, name="k-means++", data=data, labels=labels) - - df = pd.DataFrame(results).T - numeric_cols = ['time','inertia','homo','compl','v-meas','ARI','AMI','silhouette'] - df.columns = ['init'] + numeric_cols - - kmeans = KMeans(init="random", n_clusters=int(random_n_digit), n_init=4, random_state=0) - results = bench_k_means(kmeans=kmeans, name="random", data=data, labels=labels) - df.loc[len(df.index)] = results - - pca = PCA(n_components=int(pca_n_digit)).fit(data) - kmeans = KMeans(init=pca.components_, n_clusters=int(pca_n_digit), n_init=1) - results = bench_k_means(kmeans=kmeans, name="PCA-based", data=data, labels=labels) - df.loc[len(df.index)] = results - df[df.columns[1:]] = df.iloc[:,1:].astype(float).round(3) - - df = df.T #Transpose for display - df.columns = df.iloc[0,:].tolist() - df = df.iloc[1:,:].reset_index() - df.columns = ['metrics', 'k-means++', 'random', 'PCA-based'] - return display_plot(data, kmeans_n_digit), df - -#Theme from - https://huggingface.co/spaces/trl-lib/stack-llama/blob/main/app.py 
-theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", - radius_size=gr.themes.sizes.radius_sm, - font=[gr.themes.GoogleFont("Open Sans"), "ui-sans-serif", "system-ui", "sans-serif"], -) - -with gr.Blocks(title=title, theme=theme) as demo: - gr.Markdown(f"## {title}") - gr.Markdown("This demo is based on this [scikit-learn example](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)") - gr.Markdown("In this example we compare the various initialization strategies for K-means in terms of runtime and quality of the results.") - gr.Markdown("As the ground truth is known here, we also apply different cluster quality metrics to judge the goodness of fit of the cluster labels to the ground truth.") - gr.Markdown("Cluster quality metrics evaluated (see [Clustering performance evaluation](https://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation) \ - for definitions and discussions of the metrics):") - gr.Markdown("---") - gr.Markdown(" We will be utilizing [digits](https://huggingface.co/datasets/sklearn-docs/digits) dataset. This dataset contains handwritten digits from 0 to 9. \ - In the context of clustering, one would like to group images such that the handwritten digits on the image are the same.") - - - with gr.Row(): - with gr.Column(scale=0.5): - kmeans_n_digit = gr.Slider(minimum=2, maximum=10, label="KMeans n_digits", info="n_digits is number of handwritten digits" , step=1, value=10) - random_n_digit = gr.Slider(minimum=2, maximum=10, label="Random n_digits", step=1, value=10) - pca_n_digit = gr.Slider(minimum=2, maximum=10, label="PCA n_digits",step=1, value=10) - - plt_out = gr.Plot() - - with gr.Column(scale=0.5): - sample_df = pd.DataFrame(np.zeros((9,4)),columns=['metrics', 'k-means++', 'random', 'PCA-based']) - - output = gr.Dataframe(sample_df, label="Clustering Metrics") - - with gr.Row(): - sub_btn = gr.Button("Submit") - sub_btn.click(fn=do_submit, inputs=[kmeans_n_digit,random_n_digit, pca_n_digit], outputs=[plt_out, output]) - -demo.launch() \ No newline at end of file diff --git a/spaces/sklearn-docs/text-feature-extraction-evaluation/README.md b/spaces/sklearn-docs/text-feature-extraction-evaluation/README.md deleted file mode 100644 index 5a954c0a59cc1b84486998262c9484602530b672..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/text-feature-extraction-evaluation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sample Pipeline for Text Feature Extraction and Evaluation -emoji: 🗞️ -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sneedium/captcha_pixelplanet/modules/attention.py b/spaces/sneedium/captcha_pixelplanet/modules/attention.py deleted file mode 100644 index 6b70138d1bfc3205461df4a10d377a89e4f9ceea..0000000000000000000000000000000000000000 --- a/spaces/sneedium/captcha_pixelplanet/modules/attention.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -import torch.nn as nn -from .transformer import PositionalEncoding - -class Attention(nn.Module): - def __init__(self, in_channels=512, max_length=25, n_feature=256): - super().__init__() - self.max_length = max_length - - self.f0_embedding = nn.Embedding(max_length, in_channels) - self.w0 = nn.Linear(max_length, n_feature) - self.wv = nn.Linear(in_channels, in_channels) - self.we = 
nn.Linear(in_channels, max_length) - - self.active = nn.Tanh() - self.softmax = nn.Softmax(dim=2) - - def forward(self, enc_output): - enc_output = enc_output.permute(0, 2, 3, 1).flatten(1, 2) - reading_order = torch.arange(self.max_length, dtype=torch.long, device=enc_output.device) - reading_order = reading_order.unsqueeze(0).expand(enc_output.size(0), -1) # (S,) -> (B, S) - reading_order_embed = self.f0_embedding(reading_order) # b,25,512 - - t = self.w0(reading_order_embed.permute(0, 2, 1)) # b,512,256 - t = self.active(t.permute(0, 2, 1) + self.wv(enc_output)) # b,256,512 - - attn = self.we(t) # b,256,25 - attn = self.softmax(attn.permute(0, 2, 1)) # b,25,256 - g_output = torch.bmm(attn, enc_output) # b,25,512 - return g_output, attn.view(*attn.shape[:2], 8, 32) - - -def encoder_layer(in_c, out_c, k=3, s=2, p=1): - return nn.Sequential(nn.Conv2d(in_c, out_c, k, s, p), - nn.BatchNorm2d(out_c), - nn.ReLU(True)) - -def decoder_layer(in_c, out_c, k=3, s=1, p=1, mode='nearest', scale_factor=None, size=None): - align_corners = None if mode=='nearest' else True - return nn.Sequential(nn.Upsample(size=size, scale_factor=scale_factor, - mode=mode, align_corners=align_corners), - nn.Conv2d(in_c, out_c, k, s, p), - nn.BatchNorm2d(out_c), - nn.ReLU(True)) - - -class PositionAttention(nn.Module): - def __init__(self, max_length, in_channels=512, num_channels=64, - h=8, w=32, mode='nearest', **kwargs): - super().__init__() - self.max_length = max_length - self.k_encoder = nn.Sequential( - encoder_layer(in_channels, num_channels, s=(1, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)) - ) - self.k_decoder = nn.Sequential( - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, in_channels, size=(h, w), mode=mode) - ) - - self.pos_encoder = PositionalEncoding(in_channels, dropout=0, max_len=max_length) - self.project = nn.Linear(in_channels, in_channels) - - def forward(self, x): - N, E, H, W = x.size() - k, v = x, x # (N, E, H, W) - - # calculate key vector - features = [] - for i in range(0, len(self.k_encoder)): - k = self.k_encoder[i](k) - features.append(k) - for i in range(0, len(self.k_decoder) - 1): - k = self.k_decoder[i](k) - k = k + features[len(self.k_decoder) - 2 - i] - k = self.k_decoder[-1](k) - - # calculate query vector - # TODO q=f(q,k) - zeros = x.new_zeros((self.max_length, N, E)) # (T, N, E) - q = self.pos_encoder(zeros) # (T, N, E) - q = q.permute(1, 0, 2) # (N, T, E) - q = self.project(q) # (N, T, E) - - # calculate attention - attn_scores = torch.bmm(q, k.flatten(2, 3)) # (N, T, (H*W)) - attn_scores = attn_scores / (E ** 0.5) - attn_scores = torch.softmax(attn_scores, dim=-1) - - v = v.permute(0, 2, 3, 1).view(N, -1, E) # (N, (H*W), E) - attn_vecs = torch.bmm(attn_scores, v) # (N, T, E) - - return attn_vecs, attn_scores.view(N, -1, H, W) \ No newline at end of file diff --git a/spaces/sohomghosh/FinLanSer_Financial_Language_Simplifier/app.py b/spaces/sohomghosh/FinLanSer_Financial_Language_Simplifier/app.py deleted file mode 100644 index 36c79fcc8810b0eafd9f8d1a8872524d55c3971d..0000000000000000000000000000000000000000 --- a/spaces/sohomghosh/FinLanSer_Financial_Language_Simplifier/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import pickle -import gradio as gr -import torch -from 
transformers import AutoTokenizer, AutoModelForSeq2SeqLM -from transformers import BertTokenizer, BertForSequenceClassification, pipeline, AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline, AutoModelForSeq2SeqLM, AutoModel, RobertaModel, RobertaTokenizer -from sentence_transformers import SentenceTransformer -from fin_readability_sustainability import BERTClass, do_predict -import pandas as pd - -#import lightgbm -#lr_clf_finbert = pickle.load(open("lr_clf_finread_new.pkl",'rb')) -tokenizer_read = BertTokenizer.from_pretrained('ProsusAI/finbert') - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_read = BERTClass(2, "readability") -model_read.to(device) -model_read.load_state_dict(torch.load('readability_model.bin', map_location=device)['model_state_dict']) - - -def get_readability(text): - df = pd.DataFrame({'sentence':[text]}) - actual_predictions_read = do_predict(model_read, tokenizer_read, df) - score = round(actual_predictions_read[1][0], 4) - return score - -# Reference : https://huggingface.co/humarin/chatgpt_paraphraser_on_T5_base -tokenizer = AutoTokenizer.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base") -model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base") - -def paraphrase( - question, - num_beams=5, - num_beam_groups=5, - num_return_sequences=5, - repetition_penalty=10.0, - diversity_penalty=3.0, - no_repeat_ngram_size=2, - temperature=0.7, - max_length=128 -): - input_ids = tokenizer( - f'paraphrase: {question}', - return_tensors="pt", padding="longest", - max_length=max_length, - truncation=True, - ).input_ids - - outputs = model.generate( - input_ids, temperature=temperature, repetition_penalty=repetition_penalty, - num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size, - num_beams=num_beams, num_beam_groups=num_beam_groups, - max_length=max_length, diversity_penalty=diversity_penalty - ) - - res = tokenizer.batch_decode(outputs, skip_special_tokens=True) - - return res - -def get_most_readable_paraphrse(text): - li_paraphrases = paraphrase(text) - li_paraphrases.append(text) - best = li_paraphrases[0] - score_max = get_readability(best) - for i in range(1,len(li_paraphrases)): - curr = li_paraphrases[i] - score = get_readability(curr) - if score > score_max: - best = curr - score_max = score - if best!=text and score_max>.6: - ans = "The most redable version of text that I can think of is:\n" + best - else: - "Sorry! I am not confident. As per my best knowledge, you already have the most readable version of the text!" 
- return ans - -def set_example_text(example_text): - return gr.Textbox.update(value=example_text[0]) - -with gr.Blocks() as demo: - gr.Markdown( - """ - # FinLanSer - Financial Language Simplifier - """) - text = gr.Textbox(label="Enter text you want to simply (make more readable)") - greet_btn = gr.Button("Simplify/Make Readable") - output = gr.Textbox(label="Output Box") - greet_btn.click(fn=get_most_readable_paraphrse, inputs=text, outputs=output, api_name="get_most_raedable_paraphrse") - example_text = gr.Dataset(components=[text], samples=[['Legally assured line of credit with a bank'], ['A mutual fund is a type of financial vehicle made up of a pool of money collected from many investors to invest in securities like stocks, bonds, money market instruments']]) - example_text.click(fn=set_example_text, inputs=example_text,outputs=example_text.components) - -demo.launch() \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/subword_nmt_bpe.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/subword_nmt_bpe.py deleted file mode 100644 index 5d724d2730a5895ca55af2998c2ced471625b516..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/subword_nmt_bpe.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SubwordNMTBPEConfig(FairseqDataclass): - bpe_codes: str = field(default="???", metadata={"help": "path to subword NMT BPE"}) - bpe_separator: str = field(default="@@", metadata={"help": "BPE separator"}) - - -@register_bpe("subword_nmt", dataclass=SubwordNMTBPEConfig) -class SubwordNMTBPE(object): - def __init__(self, cfg): - if cfg.bpe_codes is None: - raise ValueError("--bpe-codes is required for --bpe=subword_nmt") - codes = file_utils.cached_path(cfg.bpe_codes) - try: - from subword_nmt import apply_bpe - - bpe_parser = apply_bpe.create_parser() - bpe_args = bpe_parser.parse_args( - [ - "--codes", - codes, - "--separator", - cfg.bpe_separator, - ] - ) - self.bpe = apply_bpe.BPE( - bpe_args.codes, - bpe_args.merges, - bpe_args.separator, - None, - bpe_args.glossaries, - ) - self.bpe_symbol = bpe_args.separator + " " - except ImportError: - raise ImportError( - "Please install subword_nmt with: pip install subword-nmt" - ) - - def encode(self, x: str) -> str: - return self.bpe.process_line(x) - - def decode(self, x: str) -> str: - return (x + " ").replace(self.bpe_symbol, "").rstrip() diff --git a/spaces/stomexserde/gpt4-ui/Examples/Apne Apne Phanday 1 Telugu Dubbed Movie Free Download.md b/spaces/stomexserde/gpt4-ui/Examples/Apne Apne Phanday 1 Telugu Dubbed Movie Free Download.md deleted file mode 100644 index e56c67513e4b64267b9902bca562aa2f9de4c368..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Apne Apne Phanday 1 Telugu Dubbed Movie Free Download.md +++ /dev/null @@ -1,12 +0,0 @@ - -

          Apne Apne Phanday: A Hindi Comedy Movie Dubbed in Telugu

          -

          Apne Apne Phanday is a 2014 Hindi comedy movie directed by Rana Rizwan and produced by Xavier Anthony Saldanha. The movie features Swapnil Joshi, Nitin Chaudhary, Subhankar Paul, Nitin Bhatia and Nilofer in the lead roles. The movie revolves around four friends who get into trouble with a gangster and try to escape from his clutches. The movie is full of hilarious situations and dialogues that will make you laugh out loud.

          -

          The movie was released in Mumbai and Gujarat in 2014 and received positive reviews from the critics and the audience. The movie was also dubbed in Telugu and released online for free on YouTube[^1^]. You can watch the full movie video or the official trailer of Apne Apne Phanday on YouTube[^1^] [^2^]. The movie is a perfect entertainer for those who love comedy movies and want to have some fun.

          -




          Apne Apne Phanday is a movie that showcases the friendship and the misadventures of four friends who belong to different backgrounds and professions. Swapnil Joshi plays the role of a struggling actor who wants to make it big in Bollywood. Nitin Chaudhary plays the role of a software engineer who is unhappy with his job and his boss. Subhankar Paul plays the role of a journalist who is always looking for a scoop. Nitin Bhatia plays the role of a doctor who is in love with a nurse named Nilofer.

          -

          The movie takes a turn when the four friends accidentally get involved with a gangster named Rana (played by Sayyed Rizwan) who is after a diamond necklace worth crores. The gangster kidnaps Nilofer and threatens to kill her if the friends do not return the necklace. The friends have no clue about the necklace and try to find a way out of this mess. They also encounter a corrupt cop, a don, a film producer and a politician in their quest to save Nilofer and themselves.

          -

          Apne Apne Phanday will keep you entertained with its witty dialogues, funny situations and hilarious performances. It offers a good dose of comedy, romance, action and drama, along with some catchy songs composed by R.P. Chaudhary and sung by various singers. It is a must-watch for comedy lovers looking for a light-hearted, fun-filled film.

          Apne Apne Phanday has received positive feedback from critics and audiences alike. It has been praised for its comedy, direction, screenplay and acting, and rated 3.5 out of 5 stars by various websites and magazines. Reviewers have also appreciated its message of friendship and loyalty, as the film shows how the four friends stick together and support each other in times of trouble.

          -

          Apne Apne Phanday has also been dubbed in Telugu and released online for free on YouTube. It has been watched by millions of viewers across the world, has received many likes and comments, and has been widely shared by fans on social media platforms like Facebook and Twitter. The Telugu dub has gained popularity among Telugu-speaking audiences, who have enjoyed the film's comedy and story.

          -

          -

          Apne Apne Phanday is a film you should not miss if you are looking for fun entertainment to watch with your friends and family. Its scenes and dialogues will make you laugh, cry, cheer and clap, and its positive vibe and message will leave you feeling good. It is a fine example of how a comedy can succeed with a good script, direction and acting.

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Aqw Class Hack Download.md b/spaces/stomexserde/gpt4-ui/Examples/Aqw Class Hack Download.md deleted file mode 100644 index 1193082fec6acfe75c25cd2c386a3046788c0c98..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Aqw Class Hack Download.md +++ /dev/null @@ -1,31 +0,0 @@ -
          -

          How to Hack Your Way to Level 100 in Adventure Quest Worlds (AQW)

          -

          Adventure Quest Worlds (AQW) is a massively multiplayer online role-playing game (MMORPG) that lets you create your own character and explore a fantasy world full of quests, monsters, and friends. But what if you want to level up faster and unlock more classes, items, and skills? Well, you can use hacks, cheats, and trainers to boost your progress and get an edge over other players.

          -

          In this article, we will show you how to hack your way to level 100 in AQW using a tool called Grimoire 3.8+, which is a bot that automates your gameplay and performs various tasks for you. We will also provide you with some tips and warnings before you start hacking, so you can avoid getting banned or scammed.

          -




          -

          What is Grimoire 3.8+ and How Does It Work?

          -

          Grimoire 3.8+ is a bot that can run scripts for AQW, which are sets of commands that tell the bot what to do in the game. For example, you can use a script to make the bot join a certain map, kill a certain monster, accept a certain quest, use a certain skill, etc. By using scripts, you can automate your gameplay and level up faster without having to do anything yourself.

          -

          Grimoire 3.8+ is one of the most popular and updated bots for AQW, and it has many features and options that make it easy to use and customize. You can download Grimoire 3.8+ for free from various websites[^1^] [^2^], but be careful not to download any fake or infected files that might harm your computer or steal your account information.

          -

          How to Use Grimoire 3.8+ to Hack Your Way to Level 100 in AQW?

          -

          To use Grimoire 3.8+ to hack your way to level 100 in AQW, you need to follow these steps:

          -
            -
          1. Download Grimoire 3.8+ from a trusted source and extract it to a folder on your computer.
          2. -
          3. Open Grimoire 3.8+.exe and log in with your AQW account details.
          4. -
          5. Click on the Bot Manager tab and load a script that suits your needs. You can find many scripts online[^1^] [^2^] or create your own using the Script Editor.
          6. -
          7. Click on the Start Bot button and watch the bot do its magic.
          8. -
          9. Enjoy your fast leveling and hacking!
          10. -
          -

          Tips and Warnings Before You Start Hacking

          -

          Before you start hacking your way to level 100 in AQW using Grimoire 3.8+, here are some tips and warnings that you should keep in mind:

          -

          -
            -
          • Do not use the bot for too long or too often, as this might raise suspicion and get you banned by the game moderators. Use the bot sparingly and switch between different scripts and maps to avoid detection.
          • -
          • Do not use the bot in public or crowded maps, as this might annoy other players and get you reported. Use the bot in private or empty maps where no one can see you.
          • -
          • Do not use the bot to hack classes that require real money or special requirements, as this might get you banned or scammed. Use the bot only for classes that are available for free or easy to obtain.
          • -
          • Do not share your account details or bot files with anyone, as this might get you hacked or scammed. Keep your account and bot safe and secure.
          • -
          • Do not trust any websites or links that claim to offer free hacks, cheats, trainers, or downloads for AQW, as they might be fake or infected with viruses or malware. Only download from trusted sources and scan your files with antivirus software before opening them.
          • -
          -

          Conclusion

          -

          Hacking your way to level 100 in AQW using Grimoire 3.8+ is possible and easy, but it also comes with risks and responsibilities. You should be careful not to abuse the bot or violate the game rules, as this might ruin your fun and get you banned or scammed. You should also respect other players and enjoy the game responsibly.

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/DWG TrueConvert 2009 Scaricare Attivatore 64 Bits.md b/spaces/stomexserde/gpt4-ui/Examples/DWG TrueConvert 2009 Scaricare Attivatore 64 Bits.md deleted file mode 100644 index 1525ed23c02b35e679f735103d4f7f8cd405c20c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/DWG TrueConvert 2009 Scaricare Attivatore 64 Bits.md +++ /dev/null @@ -1,35 +0,0 @@ -
          -

          How to Download and Use DWG TrueConvert 2009 to Convert DWG Files

          -

          If you are looking for a way to convert DWG files to different versions or formats, you may have heard of DWG TrueConvert 2009. This is a free tool that is part of DWG TrueView, which is a viewer and converter for DWG files. In this article, we will show you how to download and use DWG TrueConvert 2009 to convert DWG files easily and quickly.

          -

          What is DWG TrueConvert 2009?

          -

          DWG TrueConvert 2009 is a tool that allows you to convert DWG files from one version to another, or to other formats such as DXF or DWF. You can also use it to batch convert multiple files at once, or to apply passwords or digital signatures to your files. DWG TrueConvert 2009 supports DWG files from AutoCAD 2000 to AutoCAD 2010[^2^].

          -




          -

          How to Download DWG TrueConvert 2009?

          -

          DWG TrueConvert 2009 is not available as a separate installer or program. It is part of DWG TrueView, which can be downloaded from Autodesk's "DWG TrueView and other CAD file viewers" page [^2^]. You need to select the version that matches your operating system (32-bit or 64-bit) and language. The download size is about 180 MB.

          -

          How to Use DWG TrueConvert 2009?

          -

          After you install DWG TrueView, you can launch DWG TrueConvert 2009 from the Start menu or the desktop shortcut. The interface is simple and intuitive. You can follow these steps to convert your DWG files:

          -
            -
          1. Click on the Add Files button to browse and select the files you want to convert. You can also drag and drop files from Windows Explorer.
          2. -
          3. Click on the Convert button to open the Convert Drawing dialog box.
          4. -
          5. Select the destination folder where you want to save the converted files.
          6. -
          7. Select the output format (DWG, DXF or DWF) and the version (from AutoCAD R14 to AutoCAD 2010).
          8. -
          9. If you want to apply passwords or digital signatures to your files, click on the Security Options button and enter the required information.
          10. -
          11. Click on OK to start the conversion process. You can see the progress and status of each file in the main window.
          12. -
          13. When the conversion is done, you can open the destination folder and check the converted files.
          14. -
          -

          DWG TrueConvert 2009 is a handy tool that can help you convert your DWG files easily and quickly. It is free and easy to use. However, if you need more advanced features such as editing, printing or measuring your DWG files, you may want to try DWG TrueView [^2^] or AutoCAD, which are also available from Autodesk.

          - -

          What are the Benefits of DWG TrueConvert 2009?

          -

          DWG TrueConvert 2009 has several benefits for users who need to work with DWG files from different sources or versions. Some of the benefits are:

          -
            -
          • It is free and easy to download and install.
          • -
          • It can convert DWG files from AutoCAD 2000 to AutoCAD 2010, which covers most of the versions used in the industry.
          • -
          • It can also convert DWG files to DXF or DWF formats, which are compatible with other CAD software or viewers.
          • -
          • It can batch convert multiple files at once, saving time and effort.
          • -
          • It can apply passwords or digital signatures to protect the integrity and security of the files.
          • -
          • It can help users better communicate and manage CAD data, with more options for sharing the file version and format best suited to the situation[^1^].
          • -
          • It can let users accurately view, plot, print and publish to DWG files using DWG TrueView[^2^].
          • -
          -


          81aa517590
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Latest Software Update For Blackberry Curve 8520 Software BEST.md b/spaces/stomexserde/gpt4-ui/Examples/Download Latest Software Update For Blackberry Curve 8520 Software BEST.md deleted file mode 100644 index a51095c57510f47bf8e8fb86f21c990361ef83a0..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Latest Software Update For Blackberry Curve 8520 Software BEST.md +++ /dev/null @@ -1,29 +0,0 @@ -
          -

          How to Download and Install the Latest Software Update for Blackberry Curve 8520

          -

          The Blackberry Curve 8520 is a smartphone that was released in 2009 and runs Blackberry OS 5.0. You can update your device to the latest official firmware version, 5.0.0.822, for improved performance and stability. This article will guide you through downloading and installing the latest software update for your Blackberry Curve 8520.

          -

          Step 1: Check your current software version

          -

          Before you proceed with the update, you should check your current software version to see if you need to update or not. To do this, go to Options > About on your device and look for the third line that starts with v. This is your software version. If it is lower than 5.0.0.822, then you need to update.

          -




          -

          Step 2: Backup your data

          -

          Updating your software may erase some of your data, such as contacts, messages, and photos. Therefore, it is recommended that you back up your data before you proceed with the update. You can use the Blackberry Desktop Software on your computer to back up and restore your data. You can download it from here[^1^].

          -

          Step 3: Download the latest software update

          -

          You can download the latest software update for your Blackberry Curve 8520 from various sources, such as the official Blackberry website, your carrier's website, or third-party websites. However, make sure that you download the correct version for your device model and region. For example, if you are using a Blackberry Curve 8520 from TIM Italy, you can download the software update from here[^4^]. The file name should be something like 8520wifiM_PBr5.0.0_rel1385_PL5.2.0.76_A5.0.0.822_TIM_Italy.exe. Save the file to your computer and run it to install it.

          -

          Step 4: Install the latest software update

          -

          After installing the software update file on your computer, you need to connect your Blackberry Curve 8520 to your computer using a USB cable. Launch the Blackberry Desktop Software and click on Device > Update. The software will automatically detect the latest software update for your device and prompt you to install it. Follow the on-screen instructions to complete the installation process.

          -

          Step 5: Enjoy the latest software update

          -

          Once the installation is done, your device will reboot and you will see a message that says The loading operation was successful. You can then disconnect your device from your computer and enjoy the latest software update for your Blackberry Curve 8520.

          -

          Congratulations! You have successfully updated your Blackberry Curve 8520 to the latest software version.

          - -

          What's new in the software update 5.0.0.822 for Blackberry Curve 8520?

          -

          The software update 5.0.0.822 for Blackberry Curve 8520 brings some improvements and fixes to your device. Some of the notable changes are:

          -

          -
            -
          • Improved browser performance and stability.
          • -
          • Enhanced email setup and configuration.
          • -
          • Better memory management and battery life.
          • -
          • Fixed issues with Bluetooth connectivity and media playback.
          • -
          • Added support for more languages and regions.
          • -
          -

          For a full list of changes and enhancements, you can visit the official Blackberry website or your carrier's website.

          -
          -
          \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/metagpt/utils/singleton.py b/spaces/sub314xxl/MetaGPT/metagpt/utils/singleton.py deleted file mode 100644 index a9e0862c050777981a753fa3f6449578f07e737c..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/utils/singleton.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 16:15 -@Author : alexanderwu -@File : singleton.py -""" -import abc - - -class Singleton(abc.ABCMeta, type): - """ - Singleton metaclass for ensuring only one instance of a class. - """ - - _instances = {} - - def __call__(cls, *args, **kwargs): - """Call method for the singleton metaclass.""" - if cls not in cls._instances: - cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) - return cls._instances[cls] diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch.py b/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch.py deleted file mode 100644 index 863f42db6f50e5eac70931b8c0e6443f831a6018..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch.py +++ /dev/null @@ -1,867 +0,0 @@ -# ----------------------------------------------------------------------------------- -# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257 -# Originally Written by Ze Liu, Modified by Jingyun Liang. -# ----------------------------------------------------------------------------------- - -import math -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - r""" Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. 
- num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = 
(attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - - -class SwinTransformerBlock(nn.Module): - r""" Swin Transformer Block. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - if min(self.input_resolution) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - self.shift_size = 0 - self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if self.shift_size > 0: - attn_mask = self.calculate_mask(self.input_resolution) - else: - attn_mask = None - - self.register_buffer("attn_mask", attn_mask) - - def calculate_mask(self, x_size): - # calculate attention mask for SW-MSA - H, W = x_size - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x, x_size): - H, W = x_size - B, L, C = x.shape - # assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of window size - if self.input_resolution == x_size: - attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C - else: - attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class PatchMerging(nn.Module): - r""" Patch Merging Layer. - - Args: - input_resolution (tuple[int]): Resolution of input feature. - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. 
Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x): - """ - x: B, H*W, C - """ - H, W = self.input_resolution - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." - - x = x.view(B, H, W, C) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - def extra_repr(self) -> str: - return f"input_resolution={self.input_resolution}, dim={self.dim}" - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.dim - flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim - return flops - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
- """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock(dim=dim, input_resolution=input_resolution, - num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, x_size): - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, x_size) - else: - x = blk(x, x_size) - if self.downsample is not None: - x = self.downsample(x) - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - - -class RSTB(nn.Module): - """Residual Swin Transformer Block (RSTB). - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - img_size: Input image size. - patch_size: Patch size. - resi_connection: The convolutional block before residual connection. 
- """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, - img_size=224, patch_size=4, resi_connection='1conv'): - super(RSTB, self).__init__() - - self.dim = dim - self.input_resolution = input_resolution - - self.residual_group = BasicLayer(dim=dim, - input_resolution=input_resolution, - depth=depth, - num_heads=num_heads, - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path, - norm_layer=norm_layer, - downsample=downsample, - use_checkpoint=use_checkpoint) - - if resi_connection == '1conv': - self.conv = nn.Conv2d(dim, dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim // 4, 1, 1, 0), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim, 3, 1, 1)) - - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, - norm_layer=None) - - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, - norm_layer=None) - - def forward(self, x, x_size): - return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x - - def flops(self): - flops = 0 - flops += self.residual_group.flops() - H, W = self.input_resolution - flops += H * W * self.dim * self.dim * 9 - flops += self.patch_embed.flops() - flops += self.patch_unembed.flops() - - return flops - - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - x = x.flatten(2).transpose(1, 2) # B Ph*Pw C - if self.norm is not None: - x = self.norm(x) - return x - - def flops(self): - flops = 0 - H, W = self.img_size - if self.norm is not None: - flops += H * W * self.embed_dim - return flops - - -class PatchUnEmbed(nn.Module): - r""" Image to Patch Unembedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - def forward(self, x, x_size): - B, HW, C = x.shape - x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C - return x - - def flops(self): - flops = 0 - return flops - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -class UpsampleOneStep(nn.Sequential): - """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle) - Used in lightweight SR to save parameters. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - - """ - - def __init__(self, scale, num_feat, num_out_ch, input_resolution=None): - self.num_feat = num_feat - self.input_resolution = input_resolution - m = [] - m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1)) - m.append(nn.PixelShuffle(scale)) - super(UpsampleOneStep, self).__init__(*m) - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.num_feat * 3 * 9 - return flops - - -class SwinIR(nn.Module): - r""" SwinIR - A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer. - - Args: - img_size (int | tuple(int)): Input image size. Default 64 - patch_size (int | tuple(int)): Patch size. Default: 1 - in_chans (int): Number of input image channels. Default: 3 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each Swin Transformer layer. - num_heads (tuple(int)): Number of attention heads in different layers. - window_size (int): Window size. Default: 7 - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None - drop_rate (float): Dropout rate. Default: 0 - attn_drop_rate (float): Attention dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False - upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction - img_range: Image range. 1. 
or 255. - upsampler: The reconstruction reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None - resi_connection: The convolutional block before residual connection. '1conv'/'3conv' - """ - - def __init__(self, img_size=64, patch_size=1, in_chans=3, - embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6], - window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, - norm_layer=nn.LayerNorm, ape=False, patch_norm=True, - use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv', - **kwargs): - super(SwinIR, self).__init__() - num_in_ch = in_chans - num_out_ch = in_chans - num_feat = 64 - self.img_range = img_range - if in_chans == 3: - rgb_mean = (0.4488, 0.4371, 0.4040) - self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1) - else: - self.mean = torch.zeros(1, 1, 1, 1) - self.upscale = upscale - self.upsampler = upsampler - self.window_size = window_size - - ##################################################################################################### - ################################### 1, shallow feature extraction ################################### - self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1) - - ##################################################################################################### - ################################### 2, deep feature extraction ###################################### - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.num_features = embed_dim - self.mlp_ratio = mlp_ratio - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.patches_resolution - self.patches_resolution = patches_resolution - - # merge non-overlapping patches into image - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build Residual Swin Transformer blocks (RSTB) - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = RSTB(dim=embed_dim, - input_resolution=(patches_resolution[0], - patches_resolution[1]), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results - norm_layer=norm_layer, - downsample=None, - use_checkpoint=use_checkpoint, - img_size=img_size, - patch_size=patch_size, - resi_connection=resi_connection - - ) - self.layers.append(layer) - self.norm = norm_layer(self.num_features) - - # build the last conv layer in deep feature extraction - if resi_connection == '1conv': - self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - 
self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1)) - - ##################################################################################################### - ################################ 3, high quality image reconstruction ################################ - if self.upsampler == 'pixelshuffle': - # for classical SR - self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.upsample = Upsample(upscale, num_feat) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - elif self.upsampler == 'pixelshuffledirect': - # for lightweight SR (to save parameters) - self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch, - (patches_resolution[0], patches_resolution[1])) - elif self.upsampler == 'nearest+conv': - # for real-world SR (less artifacts) - self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - if self.upscale == 4: - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - else: - # for image denoising and JPEG compression artifact reduction - self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1) - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'absolute_pos_embed'} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {'relative_position_bias_table'} - - def check_image_size(self, x): - _, _, h, w = x.size() - mod_pad_h = (self.window_size - h % self.window_size) % self.window_size - mod_pad_w = (self.window_size - w % self.window_size) % self.window_size - x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect') - return x - - def forward_features(self, x): - x_size = (x.shape[2], x.shape[3]) - x = self.patch_embed(x) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - - for layer in self.layers: - x = layer(x, x_size) - - x = self.norm(x) # B L C - x = self.patch_unembed(x, x_size) - - return x - - def forward(self, x): - H, W = x.shape[2:] - x = self.check_image_size(x) - - self.mean = self.mean.type_as(x) - x = (x - self.mean) * self.img_range - - if self.upsampler == 'pixelshuffle': - # for classical SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - x = self.conv_last(self.upsample(x)) - elif self.upsampler == 'pixelshuffledirect': - # for lightweight SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.upsample(x) - elif self.upsampler == 'nearest+conv': - # for real-world SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) - if 
self.upscale == 4: - x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) - x = self.conv_last(self.lrelu(self.conv_hr(x))) - else: - # for image denoising and JPEG compression artifact reduction - x_first = self.conv_first(x) - res = self.conv_after_body(self.forward_features(x_first)) + x_first - x = x + self.conv_last(res) - - x = x / self.img_range + self.mean - - return x[:, :, :H*self.upscale, :W*self.upscale] - - def flops(self): - flops = 0 - H, W = self.patches_resolution - flops += H * W * 3 * self.embed_dim * 9 - flops += self.patch_embed.flops() - for i, layer in enumerate(self.layers): - flops += layer.flops() - flops += H * W * 3 * self.embed_dim * self.embed_dim - flops += self.upsample.flops() - return flops - - -if __name__ == '__main__': - upscale = 4 - window_size = 8 - height = (1024 // upscale // window_size + 1) * window_size - width = (720 // upscale // window_size + 1) * window_size - model = SwinIR(upscale=2, img_size=(height, width), - window_size=window_size, img_range=1., depths=[6, 6, 6, 6], - embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect') - print(model) - print(height, width, model.flops() / 1e9) - - x = torch.randn((1, 3, height, width)) - x = model(x) - print(x.shape) diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/deepbooru.py b/spaces/supertori/files/stable-diffusion-webui/modules/deepbooru.py deleted file mode 100644 index 122fce7f569dbd28f9c6d83af874bb3efed34a5e..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/deepbooru.py +++ /dev/null @@ -1,99 +0,0 @@ -import os -import re - -import torch -from PIL import Image -import numpy as np - -from modules import modelloader, paths, deepbooru_model, devices, images, shared - -re_special = re.compile(r'([\\()])') - - -class DeepDanbooru: - def __init__(self): - self.model = None - - def load(self): - if self.model is not None: - return - - files = modelloader.load_models( - model_path=os.path.join(paths.models_path, "torch_deepdanbooru"), - model_url='https://github.com/AUTOMATIC1111/TorchDeepDanbooru/releases/download/v1/model-resnet_custom_v3.pt', - ext_filter=[".pt"], - download_name='model-resnet_custom_v3.pt', - ) - - self.model = deepbooru_model.DeepDanbooruModel() - self.model.load_state_dict(torch.load(files[0], map_location="cpu")) - - self.model.eval() - self.model.to(devices.cpu, devices.dtype) - - def start(self): - self.load() - self.model.to(devices.device) - - def stop(self): - if not shared.opts.interrogate_keep_models_in_memory: - self.model.to(devices.cpu) - devices.torch_gc() - - def tag(self, pil_image): - self.start() - res = self.tag_multi(pil_image) - self.stop() - - return res - - def tag_multi(self, pil_image, force_disable_ranks=False): - threshold = shared.opts.interrogate_deepbooru_score_threshold - use_spaces = shared.opts.deepbooru_use_spaces - use_escape = shared.opts.deepbooru_escape - alpha_sort = shared.opts.deepbooru_sort_alpha - include_ranks = shared.opts.interrogate_return_ranks and not force_disable_ranks - - pic = images.resize_image(2, pil_image.convert("RGB"), 512, 512) - a = np.expand_dims(np.array(pic, dtype=np.float32), 0) / 255 - - with torch.no_grad(), devices.autocast(): - x = torch.from_numpy(a).to(devices.device) - y = self.model(x)[0].detach().cpu().numpy() - - probability_dict = {} - - for tag, probability in zip(self.model.tags, y): - if probability < threshold: - continue - - if tag.startswith("rating:"): - 
continue - - probability_dict[tag] = probability - - if alpha_sort: - tags = sorted(probability_dict) - else: - tags = [tag for tag, _ in sorted(probability_dict.items(), key=lambda x: -x[1])] - - res = [] - - filtertags = set([x.strip().replace(' ', '_') for x in shared.opts.deepbooru_filter_tags.split(",")]) - - for tag in [x for x in tags if x not in filtertags]: - probability = probability_dict[tag] - tag_outformat = tag - if use_spaces: - tag_outformat = tag_outformat.replace('_', ' ') - if use_escape: - tag_outformat = re.sub(re_special, r'\\\1', tag_outformat) - if include_ranks: - tag_outformat = f"({tag_outformat}:{probability:.3f})" - - res.append(tag_outformat) - - return ", ".join(res) - - -model = DeepDanbooru() diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/errors.py b/spaces/supertori/files/stable-diffusion-webui/modules/errors.py deleted file mode 100644 index 72c9c44497221eb814b402aa5859a3e6aaeaac00..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/errors.py +++ /dev/null @@ -1,43 +0,0 @@ -import sys -import traceback - - -def print_error_explanation(message): - lines = message.strip().split("\n") - max_len = max([len(x) for x in lines]) - - print('=' * max_len, file=sys.stderr) - for line in lines: - print(line, file=sys.stderr) - print('=' * max_len, file=sys.stderr) - - -def display(e: Exception, task): - print(f"{task or 'error'}: {type(e).__name__}", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - message = str(e) - if "copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768])" in message: - print_error_explanation(""" -The most likely cause of this is you are trying to load Stable Diffusion 2.0 model without specifying its config file. -See https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20 for how to solve this. - """) - - -already_displayed = {} - - -def display_once(e: Exception, task): - if task in already_displayed: - return - - display(e, task) - - already_displayed[task] = 1 - - -def run(code, task): - try: - code() - except Exception as e: - display(task, e) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Farming Simulator 2008 Download Fixed.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Farming Simulator 2008 Download Fixed.md deleted file mode 100644 index bef9a644d696ffd1e91e8c766f3194aaa51e74d1..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Farming Simulator 2008 Download Fixed.md +++ /dev/null @@ -1,27 +0,0 @@ - -

          How to Download and Play Farming Simulator 2008 on Your PC

          -

          Farming Simulator 2008 is the first game in the popular Farming Simulator series, developed by GIANTS Software and published by astragon Entertainment. In this game, you can experience the life of a farmer, from plowing fields and sowing seeds, to harvesting crops and selling them at the market. You can also use various machines and tools to help you with your work, such as tractors, harvesters, plows, seeders, balers, and more.

          -

          Farming Simulator 2008 Download


          Download Filehttps://cinurl.com/2uEXad



          -

          If you are a fan of farming games and want to try out Farming Simulator 2008 on your PC, you might be wondering how to download and play it. Unfortunately, the game is not available on any official digital platforms, such as Steam or Origin. However, there are some unofficial websites that offer free downloads of the game or its demo version. Here are some steps you can follow to download and play Farming Simulator 2008 on your PC:

          -
            -
          1. Visit one of the unofficial websites that offer Farming Simulator 2008 downloads, such as https://themorc.github.io/LS08-things/ or https://archive.org/details/ls-2008-wmp. Be careful when downloading files from unknown sources, as they might contain viruses or malware. Scan the files with an antivirus software before opening them.
          2. -
          3. Choose the version of the game that you want to download. There are different versions of Farming Simulator 2008 available, such as the German original version, the Polish version, the rare version with LS2009 music, and the official addon for Farming Simulator 2008. You can also download modpacks that add new features or content to the game, such as ModAgri v2 or ŻNIWA W RSP modpack.
          4. -
          5. Download the game or the modpack file to your PC. The file size might vary depending on the version or the modpack you choose. The game file is usually in a ZIP or RAR format, which you need to extract using a software like WinRAR or 7-Zip.
          6. -
          7. Open the extracted folder and look for the setup.exe file. Double-click on it to start the installation process. Follow the instructions on the screen to install the game or the modpack on your PC.
          8. -
          9. Launch the game from your desktop shortcut or from the Start menu. You can also launch it from the folder where you installed it. Enjoy playing Farming Simulator 2008 on your PC!
          10. -
          -

          Farming Simulator 2008 is a fun and relaxing game that lets you experience the joys and challenges of farming. You can also play it with other players online using an experimental project called LS2008MP, which adds multiplayer support to the game. You can learn more about this project at https://themorc.github.io/LS08-things/LS2008MP.html. Have fun with Farming Simulator 2008!

          - -

          Tips and Tricks for Playing Farming Simulator 2008

          -

          Farming Simulator 2008 is a simple but enjoyable game that lets you experience the basics of farming. However, it can also be challenging and rewarding if you know some tips and tricks to make your farming life easier and more profitable. Here are some of them:

          -

          -
            -
          • Plan your fields wisely. You can only grow wheat in Farming Simulator 2008, but you can still optimize your yield and income by choosing the right fields to buy and cultivate. Look for fields that are large, flat, and close to your farm or the market. Avoid fields that are small, hilly, or far away from your destination.
          • -
          • Use your machines efficiently. You have a limited number of machines and tools in Farming Simulator 2008, so you need to use them wisely. Don't leave your machines idle or parked in the middle of the road. Always attach them to a tractor or a trailer when not in use. Also, try to use the right machine for the right job. For example, use the Fendt 8350 combine harvester for harvesting large fields, and use the Fendt 716 Vario tractor with the Pottinger Euroboss trailer for transporting small loads.
          • -
          • Take care of your crops. You need to plow, sow, and harvest your crops in Farming Simulator 2008, but you also need to take care of them in between. Make sure you water your crops regularly to prevent them from wilting. You can use the Fendt 716 Vario tractor with the Amazone ZA-M fertilizer spreader to water your crops. Also, watch out for the weather and the seasons. Rain and hail can damage your crops, so try to harvest them before they get spoiled. You can also use the Fendt 936 Vario tractor with the Lemken Juwel plow to turn over your soil after harvesting to prepare it for the next season.
          • -
          • Explore the map and find hidden secrets. Farming Simulator 2008 has a large map that you can explore freely. You can find different places and objects that can help you with your farming or just have some fun. For example, you can find a gas station where you can refuel your machines, a train station where you can sell your crops at a higher price, a windmill where you can get some extra money, and a lake where you can go fishing.
          • -
          • Try out mods and multiplayer. Farming Simulator 2008 has a modding community that offers new features and content for the game. You can download and install mods that add new machines, tools, maps, crops, animals, and more. You can also play with other players online using an experimental project called LS2008MP, which adds multiplayer support to the game. You can join or host a server and cooperate or compete with other farmers.
          • -
          -

          Farming Simulator 2008 is a game that can keep you entertained for hours with its realistic and relaxing gameplay. You can also learn more about farming and improve your skills by following these tips and tricks. Have fun with Farming Simulator 2008!

    
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mirillis Action! 3.7.1 Crack Full Latest [Version] Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mirillis Action! 3.7.1 Crack Full Latest [Version] Download.md deleted file mode 100644 index d131ff675e89f75a23e8b322c76fa76bd1098505..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mirillis Action! 3.7.1 Crack Full Latest [Version] Download.md +++ /dev/null @@ -1,33 +0,0 @@ - -

          Mirillis Action! 3.7.1 Crack Full Latest [Version] Download

          -

          Mirillis Action! is a powerful screen recorder and gameplay capture software that allows you to record and stream your desktop activity, web videos, music, screenshots and more. With Mirillis Action! you can create high-quality video tutorials, live gameplay, online courses, webinars and more.

          -

          Mirillis Action! 3.7.1 is the latest version of this software that comes with many new features and improvements. Some of the highlights are:

          -

          Mirillis Action! 3.7.1 Crack Full Latest [Version] Download


          Download Ziphttps://cinurl.com/2uEYxs



          -
            -
          • Improved performance and stability of video recording and streaming.
          • -
          • Added support for HDR10+ and Dolby Vision formats.
          • -
          • Added webcam overlay editor and chroma key option.
          • -
          • Added new live streaming platforms: Twitch Studio, Trovo and Nimo TV.
          • -
          • Added new video editing tools: trim, crop, rotate, merge and split.
          • -
          • Added new audio options: microphone noise reduction, audio mixer and audio synchronization.
          • -
          • Added new user interface themes and languages.
          • -
          -

          If you want to enjoy all these features and more, you need to download Mirillis Action! 3.7.1 Crack Full Latest [Version] from our website. This crack will activate the full version of Mirillis Action! without any limitations or watermarks. You can use it for personal or professional purposes without any risk of viruses or malware.

          -

          To download Mirillis Action! 3.7.1 Crack Full Latest [Version], follow these simple steps:

          -
            -
          1. Click on the download button below and save the file to your computer.
          2. -
          3. Extract the file using WinRAR or any other extraction tool.
          4. -
          5. Run the setup file and follow the installation instructions.
          6. -
          7. Copy the crack file from the crack folder and paste it into the installation directory.
          8. -
          9. Launch Mirillis Action! and enjoy the full version.
          10. -
          -

          That's it! You have successfully downloaded and installed Mirillis Action! 3.7.1 Crack Full Latest [Version]. Now you can record and stream your screen activity with ease and quality. Don't forget to share this article with your friends and leave a comment below if you have any questions or feedback.

          - -

          Mirillis Action! is not only a screen recorder and gameplay capture software, but also a powerful video editor and converter. You can edit your recorded videos with various tools and effects, such as slow motion, time lapse, zoom, annotations, transitions and more. You can also convert your videos to different formats and resolutions, such as MP4, AVI, MOV, MKV, 4K, HD and more. You can also export your videos to popular devices and platforms, such as YouTube, Facebook, Instagram, Twitter and more.

          -

          Mirillis Action! also has a user-friendly and customizable interface that allows you to access all the features and settings easily. You can choose from different themes and languages to suit your preferences. You can also create hotkeys and shortcuts to control your recording and streaming with ease. You can also monitor your recording and streaming status with the built-in HUD and widgets.

          -

          Mirillis Action! is the ultimate screen recorder and gameplay capture software that you need to create amazing videos and live streams. It is compatible with Windows 7, 8, 8.1 and 10. It requires a minimum of 4 GB of RAM and 100 MB of free disk space. It also supports DirectX 9.0c and later graphics cards.

          -

          Don't miss this opportunity to download Mirillis Action! 3.7.1 Crack Full Latest [Version] from our website. This crack will give you access to all the features and functions of Mirillis Action! without any restrictions or costs. You can use it for personal or professional purposes without any worries or hassles.

          -

          -

          Download Mirillis Action! 3.7.1 Crack Full Latest [Version] now and start creating stunning videos and live streams with Mirillis Action!

    
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Powerlogic Ion Enterprise Software Download Fix.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Powerlogic Ion Enterprise Software Download Fix.md deleted file mode 100644 index f834dc771018ae741cc3d5eeef24f3353976cdde..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Powerlogic Ion Enterprise Software Download Fix.md +++ /dev/null @@ -1,9 +0,0 @@ -
          -

    PowerLogic ION EEM is a comprehensive suite of tools for monitoring, validating, predicting, and controlling all energy-related costs and risks. The software helps a company perform enhanced energy analysis, energy modeling, trend analysis, and benchmarking.
    

          -

    PowerLogic ION EEM software, from Schneider Electric, enables a company to visualize, analyze, and act upon energy data. The suite helps companies perform enhanced energy analysis, energy modeling, trend analysis, and benchmarking, and ultimately control energy-related costs and risks.
    

          -

          Powerlogic Ion Enterprise Software Download


          Download Filehttps://cinurl.com/2uEXv8



          -

    If the software is used to gather data for a building, an industrial facility, or a facility with multiple zones, different tools can be used to collect data in different parts of the facility. For example, if a building has several zones, a thermostat can be installed in each zone to measure power use. The software can then use the building's data to display an energy dashboard for each zone.
    
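    To make the zone-by-zone idea concrete, here is a minimal, hypothetical Python sketch (not the PowerLogic ION EEM API; the zone names and readings are invented) showing how per-zone power readings could be rolled up into a simple dashboard summary:

    ```python
    # Hypothetical illustration only -- not the PowerLogic ION EEM API.
    # Rolls up per-zone power readings (kW) into a simple dashboard summary.
    from collections import defaultdict

    # Invented sample readings: (zone name, instantaneous demand in kW)
    readings = [
        ("Zone A", 12.4), ("Zone B", 8.1), ("Zone A", 13.0),
        ("Zone C", 21.7), ("Zone B", 7.9),
    ]

    totals = defaultdict(float)
    counts = defaultdict(int)
    for zone, kw in readings:
        totals[zone] += kw
        counts[zone] += 1

    print("Zone dashboard (average demand, kW)")
    for zone in sorted(totals):
        print(f"  {zone}: {totals[zone] / counts[zone]:.1f}")
    ```

    In a real deployment the readings would come from the installed zone meters rather than a hard-coded list.
    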

          -

          powerlogic ion eem, from schneider electric, is an enterprise-class energy management software suite. the suite helps companies visualize, analyze, and act upon energy data. the software can help a company perform enhanced energy analysis, energy modeling, trend analysis, and benchmarking, and ultimately control energy-related costs and risks.

          -

    

    
          -
          -
          \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/__init__.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/__init__.py deleted file mode 100644 index 965605587211b7bf0bd6bc3acdbb33dd49cab023..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .evaluation import * # noqa: F401, F403 -from .seg import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/spaces/tabeina/bingo1/src/components/ui/sheet.tsx b/spaces/tabeina/bingo1/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/tang155/bingo/src/components/ui/dropdown-menu.tsx b/spaces/tang155/bingo/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/tang155/bingo/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/tanishqvashisht/horseToZebra/train.py b/spaces/tanishqvashisht/horseToZebra/train.py deleted file mode 100644 index 732d640c8ccbf6f5bff7ea05934eea4e1be48c81..0000000000000000000000000000000000000000 --- a/spaces/tanishqvashisht/horseToZebra/train.py +++ /dev/null @@ -1,209 +0,0 @@ -""" -Training for CycleGAN - -Programmed by Aladdin Persson -* 2020-11-05: Initial coding -* 2022-12-21: Small revision of code, checked that it works 
with latest PyTorch version -""" - -import torch -from dataset import HorseZebraDataset -import sys -from utils import save_checkpoint, load_checkpoint -from torch.utils.data import DataLoader -import torch.nn as nn -import torch.optim as optim -import config -from tqdm import tqdm -from torchvision.utils import save_image -from discriminator_model import Discriminator -from generator_model import Generator - - -def train_fn( - disc_H, disc_Z, gen_Z, gen_H, loader, opt_disc, opt_gen, l1, mse, d_scaler, g_scaler -): - H_reals = 0 - H_fakes = 0 - loop = tqdm(loader, leave=True) - - for idx, (zebra, horse) in enumerate(loop): - zebra = zebra.to(config.DEVICE) - horse = horse.to(config.DEVICE) - - # Train Discriminators H and Z - with torch.cuda.amp.autocast(): - fake_horse = gen_H(zebra) - D_H_real = disc_H(horse) - D_H_fake = disc_H(fake_horse.detach()) - H_reals += D_H_real.mean().item() - H_fakes += D_H_fake.mean().item() - D_H_real_loss = mse(D_H_real, torch.ones_like(D_H_real)) - D_H_fake_loss = mse(D_H_fake, torch.zeros_like(D_H_fake)) - D_H_loss = D_H_real_loss + D_H_fake_loss - - fake_zebra = gen_Z(horse) - D_Z_real = disc_Z(zebra) - D_Z_fake = disc_Z(fake_zebra.detach()) - D_Z_real_loss = mse(D_Z_real, torch.ones_like(D_Z_real)) - D_Z_fake_loss = mse(D_Z_fake, torch.zeros_like(D_Z_fake)) - D_Z_loss = D_Z_real_loss + D_Z_fake_loss - - # put it togethor - D_loss = (D_H_loss + D_Z_loss) / 2 - - opt_disc.zero_grad() - d_scaler.scale(D_loss).backward() - d_scaler.step(opt_disc) - d_scaler.update() - - # Train Generators H and Z - with torch.cuda.amp.autocast(): - # adversarial loss for both generators - D_H_fake = disc_H(fake_horse) - D_Z_fake = disc_Z(fake_zebra) - loss_G_H = mse(D_H_fake, torch.ones_like(D_H_fake)) - loss_G_Z = mse(D_Z_fake, torch.ones_like(D_Z_fake)) - - # cycle loss - cycle_zebra = gen_Z(fake_horse) - cycle_horse = gen_H(fake_zebra) - cycle_zebra_loss = l1(zebra, cycle_zebra) - cycle_horse_loss = l1(horse, cycle_horse) - - # identity loss (remove these for efficiency if you set lambda_identity=0) - identity_zebra = gen_Z(zebra) - identity_horse = gen_H(horse) - identity_zebra_loss = l1(zebra, identity_zebra) - identity_horse_loss = l1(horse, identity_horse) - - # add all togethor - G_loss = ( - loss_G_Z - + loss_G_H - + cycle_zebra_loss * config.LAMBDA_CYCLE - + cycle_horse_loss * config.LAMBDA_CYCLE - + identity_horse_loss * config.LAMBDA_IDENTITY - + identity_zebra_loss * config.LAMBDA_IDENTITY - ) - - opt_gen.zero_grad() - g_scaler.scale(G_loss).backward() - g_scaler.step(opt_gen) - g_scaler.update() - - if idx % 200 == 0: - save_image(fake_horse * 0.5 + 0.5, f"saved_images/horse_{idx}.png") - save_image(fake_zebra * 0.5 + 0.5, f"saved_images/zebra_{idx}.png") - - loop.set_postfix(H_real=H_reals / (idx + 1), H_fake=H_fakes / (idx + 1)) - - -def main(): - disc_H = Discriminator(in_channels=3).to(config.DEVICE) - disc_Z = Discriminator(in_channels=3).to(config.DEVICE) - gen_Z = Generator(img_channels=3, num_residuals=9).to(config.DEVICE) - gen_H = Generator(img_channels=3, num_residuals=9).to(config.DEVICE) - opt_disc = optim.Adam( - list(disc_H.parameters()) + list(disc_Z.parameters()), - lr=config.LEARNING_RATE, - betas=(0.5, 0.999), - ) - - opt_gen = optim.Adam( - list(gen_Z.parameters()) + list(gen_H.parameters()), - lr=config.LEARNING_RATE, - betas=(0.5, 0.999), - ) - - L1 = nn.L1Loss() - mse = nn.MSELoss() - - if config.LOAD_MODEL: - load_checkpoint( - config.CHECKPOINT_GEN_H, - gen_H, - opt_gen, - config.LEARNING_RATE, - ) - load_checkpoint( - 
config.CHECKPOINT_GEN_Z, - gen_Z, - opt_gen, - config.LEARNING_RATE, - ) - load_checkpoint( - config.CHECKPOINT_CRITIC_H, - disc_H, - opt_disc, - config.LEARNING_RATE, - ) - load_checkpoint( - config.CHECKPOINT_CRITIC_Z, - disc_Z, - opt_disc, - config.LEARNING_RATE, - ) - - dataset = HorseZebraDataset( - root_horse=config.TRAIN_DIR + "/horses", - root_zebra=config.TRAIN_DIR + "/zebras", - transform=config.transforms, - ) - val_dataset = HorseZebraDataset( - root_horse=config.VAL_DIR + "/horses", - root_zebra=config.VAL_DIR + "/zebras", - transform=config.transforms, - ) - val_loader = DataLoader( - val_dataset, - batch_size=1, - shuffle=False, - pin_memory=True, - ) - loader = DataLoader( - dataset, - batch_size=config.BATCH_SIZE, - shuffle=True, - num_workers=config.NUM_WORKERS, - pin_memory=True, - ) - input_dataset = HorseZebraDataset( - root_horse="input", - root_zebra="input", - transform=config.transforms, - ) - input_loader = DataLoader( - input_dataset, - batch_size=config.BATCH_SIZE, - shuffle=True, - num_workers=config.NUM_WORKERS, - pin_memory=True, - ) - g_scaler = torch.cuda.amp.GradScaler() - d_scaler = torch.cuda.amp.GradScaler() - - for epoch in range(config.NUM_EPOCHS): - train_fn( - disc_H, - disc_Z, - gen_Z, - gen_H, - input_loader, - opt_disc, - opt_gen, - L1, - mse, - d_scaler, - g_scaler, - ) - - if config.SAVE_MODEL: - save_checkpoint(gen_H, opt_gen, filename=config.CHECKPOINT_GEN_H) - save_checkpoint(gen_Z, opt_gen, filename=config.CHECKPOINT_GEN_Z) - save_checkpoint(disc_H, opt_disc, filename=config.CHECKPOINT_CRITIC_H) - save_checkpoint(disc_Z, opt_disc, filename=config.CHECKPOINT_CRITIC_Z) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/tensorflow/esrgan-tf2/app.py b/spaces/tensorflow/esrgan-tf2/app.py deleted file mode 100644 index c74a2dc00ce3f042a882d515335a9bf78e53ec4d..0000000000000000000000000000000000000000 --- a/spaces/tensorflow/esrgan-tf2/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import os -import time -from PIL import Image -import numpy as np -import tensorflow as tf -import tensorflow_hub as hub -import matplotlib.pyplot as plt -import gradio as gr - -# Declaring Constants -SAVED_MODEL_PATH = "https://tfhub.dev/captain-pool/esrgan-tf2/1" - -def resize(width,img): - basewidth = width - img = Image.open(img) - wpercent = (basewidth/float(img.size[0])) - hsize = int((float(img.size[1])*float(wpercent))) - img = img.resize((basewidth,hsize), Image.ANTIALIAS) - img.save('somepic.jpg') - return 'somepic.jpg' - -def preprocess_image(image_path): - """ Loads image from path and preprocesses to make it model ready - Args: - image_path: Path to the image file - """ - hr_image = tf.image.decode_image(tf.io.read_file(image_path)) - # If PNG, remove the alpha channel. The model only supports - # images with 3 color channels. - if hr_image.shape[-1] == 4: - hr_image = hr_image[...,:-1] - hr_size = (tf.convert_to_tensor(hr_image.shape[:-1]) // 4) * 4 - hr_image = tf.image.crop_to_bounding_box(hr_image, 0, 0, hr_size[0], hr_size[1]) - hr_image = tf.cast(hr_image, tf.float32) - return tf.expand_dims(hr_image, 0) - - -def plot_image(image): - """ - Plots images from image tensors. - Args: - image: 3D image tensor. [height, width, channels]. - title: Title to display in the plot. 
- """ - image = np.asarray(image) - image = tf.clip_by_value(image, 0, 255) - image = Image.fromarray(tf.cast(image, tf.uint8).numpy()) - return image - -model = hub.load(SAVED_MODEL_PATH) -def inference(img): - resize_image = resize(256,img) - hr_image = preprocess_image(resize_image) - fake_image = model(hr_image) - fake_image = tf.squeeze(fake_image) - pil_image = plot_image(tf.squeeze(fake_image)) - return pil_image - -title="esrgan-tf2" -description="Enhanced Super Resolution GAN for image super resolution. Produces x4 Super Resolution Image from images of {Height, Width} >=64. Works best on Bicubically downsampled images. (*This is because, the model is originally trained on Bicubically Downsampled DIV2K Dataset*)" -article = "

          Tensorflow Hub

          " -examples=[['input.png']] -gr.Interface(inference,gr.inputs.Image(type="filepath"),"image",title=title,description=description,article=article,examples=examples).launch(enable_queue=True) - \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Attack On Pearl Harbor - PC Game 2007 - Bin Cue Unlimited Gems.md b/spaces/terfces0erbo/CollegeProjectV2/Attack On Pearl Harbor - PC Game 2007 - Bin Cue Unlimited Gems.md deleted file mode 100644 index e0a2860778eec591be044080a3dbd874fc49da38..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Attack On Pearl Harbor - PC Game 2007 - Bin Cue Unlimited Gems.md +++ /dev/null @@ -1,16 +0,0 @@ -

          Attack on Pearl Harbor - PC game 2007 - Bin Cue unlimited gems


          Download File ->>->>->> https://bytlly.com/2uGlpu



          -
          -rative efforts between the office of the Secretary and the various Department of Defense headquarters, the Chiefs of the U.S. Forces, and the National Military Command Center. Other region of. responsibility includes, to include, most of Southeast Asia, the Far East, and the north Pacific Ocean. - -In the Asia-Pacific region, the most significant events include the Vietnam War (1964-1975), the Korean War (1950-53), the Korean War Armistice Agreements (1950, 1953), the First Indochina War (1946-54), the Second Indochina War (1946-54), and the Vietnam War (1954-75), and many other smaller but significant events. - -The Secretary of the Navy's duties include: supervising the various Navy commands, the Board of Inspection and Survey, and the Office of the Chief of Naval Operations; approving the Navy Budget for the current fiscal year; providing advice and recommendations to the President and Congress with respect to national defense, the national foreign policy, the development of plans for the use and production of defense resources, and manpower requirements; and preparing the fleet for possible combat operations or operations of other strategic importance, participating in the conduct of international relations, and providing for naval personnel, except for those in the command, staff, and senior civilian positions who are clearly identified with specific responsibilities and commitments for the conduct of combat operations. - -The Secretary of the Air Force's duties include: supervising the various Air Force commands, the activities of the National Military Command Center, the Office of the Secretary of Defense, and the various services of the Air Force, and advising and consulting with the President and Congress with respect to national defense, the national foreign policy, the development of plans for the use and production of defense resources, and manpower requirements; and participating in the conduct of international relations. - -The Secretary of the Army's duties include: supervising the various Army commands, the development of plans and programs for the use of defense resources, and advising and consulting with the President and Congress with respect to national defense, the national foreign policy, the development of plans for the use and production of defense resources, and manpower requirements; and providing for military personnel, except for those in the command, staff, and senior civilian positions who are clearly identified with specific responsibilities and commitments for the conduct of combat operations. - -The Secretary of the Army's duties also include: supervising the Medical Department, and participating in the conduct of military and naval medical missions and the construction of 4fefd39f24
          -
          -
          -

          diff --git a/spaces/thecho7/deepfake/training/zoo/classifiers.py b/spaces/thecho7/deepfake/training/zoo/classifiers.py deleted file mode 100644 index f5899c3ee9d71d3f9ea7ad31c53ce6ed3f9c7e2c..0000000000000000000000000000000000000000 --- a/spaces/thecho7/deepfake/training/zoo/classifiers.py +++ /dev/null @@ -1,172 +0,0 @@ -from functools import partial - -import numpy as np -import torch -from timm.models.efficientnet import tf_efficientnet_b4_ns, tf_efficientnet_b3_ns, \ - tf_efficientnet_b5_ns, tf_efficientnet_b2_ns, tf_efficientnet_b6_ns, tf_efficientnet_b7_ns -from torch import nn -from torch.nn.modules.dropout import Dropout -from torch.nn.modules.linear import Linear -from torch.nn.modules.pooling import AdaptiveAvgPool2d - -encoder_params = { - "tf_efficientnet_b3_ns": { - "features": 1536, - "init_op": partial(tf_efficientnet_b3_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b2_ns": { - "features": 1408, - "init_op": partial(tf_efficientnet_b2_ns, pretrained=False, drop_path_rate=0.2) - }, - "tf_efficientnet_b4_ns": { - "features": 1792, - "init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.5) - }, - "tf_efficientnet_b5_ns": { - "features": 2048, - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b4_ns_03d": { - "features": 1792, - "init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.3) - }, - "tf_efficientnet_b5_ns_03d": { - "features": 2048, - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.3) - }, - "tf_efficientnet_b5_ns_04d": { - "features": 2048, - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.4) - }, - "tf_efficientnet_b6_ns": { - "features": 2304, - "init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b7_ns": { - "features": 2560, - "init_op": partial(tf_efficientnet_b7_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b6_ns_04d": { - "features": 2304, - "init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.4) - }, -} - - -def setup_srm_weights(input_channels: int = 3) -> torch.Tensor: - """Creates the SRM kernels for noise analysis.""" - # note: values taken from Zhou et al., "Learning Rich Features for Image Manipulation Detection", CVPR2018 - srm_kernel = torch.from_numpy(np.array([ - [ # srm 1/2 horiz - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., 1., -2., 1., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - ], [ # srm 1/4 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., -1., 2., -1., 0.], # noqa: E241,E201 - [0., 2., -4., 2., 0.], # noqa: E241,E201 - [0., -1., 2., -1., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - ], [ # srm 1/12 - [-1., 2., -2., 2., -1.], # noqa: E241,E201 - [2., -6., 8., -6., 2.], # noqa: E241,E201 - [-2., 8., -12., 8., -2.], # noqa: E241,E201 - [2., -6., 8., -6., 2.], # noqa: E241,E201 - [-1., 2., -2., 2., -1.], # noqa: E241,E201 - ] - ])).float() - srm_kernel[0] /= 2 - srm_kernel[1] /= 4 - srm_kernel[2] /= 12 - return srm_kernel.view(3, 1, 5, 5).repeat(1, input_channels, 1, 1) - - -def setup_srm_layer(input_channels: int = 3) -> torch.nn.Module: - """Creates a SRM convolution layer for noise analysis.""" - weights = setup_srm_weights(input_channels) - conv = torch.nn.Conv2d(input_channels, out_channels=3, kernel_size=5, stride=1, padding=2, bias=False) - 
with torch.no_grad(): - conv.weight = torch.nn.Parameter(weights, requires_grad=False) - return conv - - -class DeepFakeClassifierSRM(nn.Module): - def __init__(self, encoder, dropout_rate=0.5) -> None: - super().__init__() - self.encoder = encoder_params[encoder]["init_op"]() - self.avg_pool = AdaptiveAvgPool2d((1, 1)) - self.srm_conv = setup_srm_layer(3) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - - def forward(self, x): - noise = self.srm_conv(x) - x = self.encoder.forward_features(noise) - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - return x - - -class GlobalWeightedAvgPool2d(nn.Module): - """ - Global Weighted Average Pooling from paper "Global Weighted Average - Pooling Bridges Pixel-level Localization and Image-level Classification" - """ - - def __init__(self, features: int, flatten=False): - super().__init__() - self.conv = nn.Conv2d(features, 1, kernel_size=1, bias=True) - self.flatten = flatten - - def fscore(self, x): - m = self.conv(x) - m = m.sigmoid().exp() - return m - - def norm(self, x: torch.Tensor): - return x / x.sum(dim=[2, 3], keepdim=True) - - def forward(self, x): - input_x = x - x = self.fscore(x) - x = self.norm(x) - x = x * input_x - x = x.sum(dim=[2, 3], keepdim=not self.flatten) - return x - - -class DeepFakeClassifier(nn.Module): - def __init__(self, encoder, dropout_rate=0.0) -> None: - super().__init__() - self.encoder = encoder_params[encoder]["init_op"]() - self.avg_pool = AdaptiveAvgPool2d((1, 1)) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - - def forward(self, x): - x = self.encoder.forward_features(x) - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - return x - - - - -class DeepFakeClassifierGWAP(nn.Module): - def __init__(self, encoder, dropout_rate=0.5) -> None: - super().__init__() - self.encoder = encoder_params[encoder]["init_op"]() - self.avg_pool = GlobalWeightedAvgPool2d(encoder_params[encoder]["features"]) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - - def forward(self, x): - x = self.encoder.forward_features(x) - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - return x \ No newline at end of file diff --git a/spaces/thejagstudio/procom/croma/tests.py b/spaces/thejagstudio/procom/croma/tests.py deleted file mode 100644 index 7ce503c2dd97ba78597f6ff6e4393132753573f6..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/croma/tests.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.test import TestCase - -# Create your tests here. diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/AutoCAD Map 3D 2013 (32bit) (Product key and Xforce keygen) .rar The Best Tool for Creating and Editing Maps.md b/spaces/tialenAdioni/chat-gpt-api/logs/AutoCAD Map 3D 2013 (32bit) (Product key and Xforce keygen) .rar The Best Tool for Creating and Editing Maps.md deleted file mode 100644 index b64bfae099130e4eba856ac9a4eee6f37dc79266..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/AutoCAD Map 3D 2013 (32bit) (Product key and Xforce keygen) .rar The Best Tool for Creating and Editing Maps.md +++ /dev/null @@ -1,245 +0,0 @@ - -
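A quick editorial note on the deleted classifiers.py shown above: the snippet below is a hedged sketch of how one of those classes could be smoke-tested, assuming the file is importable as training.zoo.classifiers and a timm version matching its imports is installed; the encoder name, input size, and module path are assumptions, not taken from the original training scripts.

```python
# Sketch only: instantiate DeepFakeClassifier from the classifiers.py above and
# run a dummy forward pass. "tf_efficientnet_b2_ns" is used here because its
# init_op in encoder_params sets pretrained=False, so no weights are downloaded.
import torch

from training.zoo.classifiers import DeepFakeClassifier  # assumed import path

model = DeepFakeClassifier(encoder="tf_efficientnet_b2_ns")
model.eval()

dummy = torch.randn(1, 3, 380, 380)   # one RGB crop; the size is illustrative
with torch.no_grad():
    logit = model(dummy)              # shape (1, 1): raw output of the final Linear
    prob = torch.sigmoid(logit)       # interpreted as the probability of "fake"

print(logit.shape, float(prob))
```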

          AutoCAD Map 3D 2013: A Powerful Tool for GIS Mapping

          -

If you are looking for software that can help you create, edit, and manage geospatial data, then AutoCAD Map 3D 2013 is the right choice for you. AutoCAD Map 3D is model-based GIS mapping software that provides access to CAD and GIS data to support planning, design, and management of infrastructure systems. With AutoCAD Map 3D, you can:

          -
            -
          • Directly access spatial data from various sources using Feature Data Objects (FDO) technology
          • -
          • Directly edit geospatial data in a familiar AutoCAD environment
          • -
          • Manage infrastructure systems with industry models that support data standards
          • -
          • Perform spatial analysis and create thematic maps to visualize data patterns
          • -
          • Publish and share maps online or offline with various formats and options
          • -
          -

          In this article, we will show you how to download and install AutoCAD Map 3D 2013, how to use its main features, and how to get the most out of it.

          -

          AutoCAD Map 3D 2013 (32bit) (Product key and Xforce keygen) .rar


          Download File ☆☆☆ https://urlcod.com/2uK1y6



          -

          How to Download and Install AutoCAD Map 3D 2013

          -

          Requirements and compatibility for AutoCAD Map 3D 2013

          -

          Before you download and install AutoCAD Map 3D 2013, you need to make sure that your system meets the minimum requirements for the software. According to Autodesk, the minimum requirements are:

          - - - - - - - - - - - - - -
          Operating systemWindows XP Professional (SP2 or later), Windows Vista (SP1 or later), Windows 7 (32-bit or 64-bit)
          ProcessorIntel Pentium 4 or AMD Athlon dual-core processor, 1.6 GHz or higher with SSE2 technology
          Memory2 GB RAM (4 GB recommended)
          Hard disk space6 GB free disk space for installation
          Display resolution1024 x 768 with true color (1600 x 1050 or higher recommended)
          Display cardWindows display adapter capable of DirectX9.0c with Shader Model 2 (minimum)
          BrowserInternet Explorer version 7.0 or later
          .NET Framework.NET Framework Version 4.0 Update KB2468871
          Pointing deviceMS-Mouse compliant device
          DVD driveDVD drive (for installation only)
          MediaDownload or installation from DVD9 or USB key
          ConnectivityAn internet connection is required for online map services.
          -

          You also need to check the compatibility of AutoCAD Map 3D with other software that you may use, such as database management systems, web servers, web browsers, FDO providers, coordinate systems, raster formats, vector formats, etc. You can find a detailed list of compatible software on Autodesk's website.

          -

          Where to find the product key and the Xforce keygen

          -

To download and install AutoCAD Map 3D 2013, you need two things: a product key and an Xforce keygen. A product key is a unique code that identifies your software license. An Xforce keygen is a tool that generates activation codes for Autodesk products.

          -

The product key for AutoCAD Map 3D 2013 is 129E1. You can find this product key in your Autodesk account, on your installation media, or in your confirmation email.

          -

          The Xforce keygen for AutoCAD Map 3D 2013 is a file that you need to download from a reliable source. You can search online for websites that offer Xforce keygens for Autodesk products, but be careful of malware and viruses. Make sure you scan the file before opening it.

          -

          How to use the virtual agent or the Autodesk account to download the software

          -

          To download AutoCAD Map 3D 2013, you have two options: using the virtual agent or using your Autodesk account.

          -

          The virtual agent is an online chatbot that can help you download your software. To use the virtual agent, follow these steps:

          -


          -
            -
          1. Go to https://ava.autodesk.com/.
          2. -
          3. Type "download" in the chat window.
          4. -
          5. Select "Download Software" from the options.
          6. -
          7. Select "AutoCAD" from the list of products.
          8. -
          9. Select "AutoCAD Map" from the list of sub-products.
          10. -
          11. Select "AutoCAD Map - English - Windows - Version - Download" from the list of options.
          12. -
          13. Select "AutoCAD Map - English - Windows - Version - Download" again from the confirmation window.
          14. -
          15. Select "32-bit" or "64-bit" depending on your system.
          16. -
          17. Select "Download Now" or "Browser Download" depending on your preference.
          18. -
          19. Select a location on your computer where you want to save the file.
          20. -
          21. Wait for the download to complete.
          22. -
          -

          Your Autodesk account is an online portal where you can manage your products, subscriptions, profile, etc. To use your Autodesk account to download your software, follow these steps:

          -
            -
          1. Go to https://manage.autodesk.com/.
          2. -
          3. Login with your email address and password.
          4. -
          5. Select "All Products & Services" from the menu.
          6. -
          7. Select "AutoCAD Map" from the list of products.
          8. -
          9. Select "Downloads" from the menu.
          10. -
          11. Select "AutoCAD Map - English - Windows - Version - Download" from the list of options.
          12. -
          13. Select "32-bit" or "64-bit" depending on your system.
          14. -
          15. Select "Download Now" or "Browser Download" depending on your preference.
          16. -
          17. Select a location on your computer where you want to save the file.
          18. -
          19. Wait for the download to complete.
          20. -

            Note:

            -

            If you don't see AutoCAD Map in your list of products, it may be because it is not supported by Autodesk anymore. In that case, you can try downloading it from this link: https://knowledge.autodesk.com/customer-service/account-management/software-downloads/previous-versions/download-previous-version.

            -

            Note:

            -

            If you have trouble downloading your software, you can contact Autodesk support for assistance.

            -

            Note:

            -

            If you have already downloaded your software but need to reinstall it, you can follow these steps:

            -
              -
            1. Navigate to where you saved your downloaded file.
            2. -
3. Double-click on it to start the installation process.
How to install and activate the software using the product key and the Xforce keygen -

              After you have downloaded the software, you need to install and activate it using the product key and the Xforce keygen. To do that, follow these steps:

              -
                -
              1. Locate the downloaded file on your computer and double-click on it to start the installation process.
              2. -
              3. On the initial screen, click on "Install" and accept the license agreement.
              4. -
              5. On the next screen, enter your product key (129E1) and your serial number (you can find it on your Autodesk account or your confirmation email).
              6. -
              7. Select the components and options that you want to install and click on "Next".
              8. -
              9. Wait for the installation to complete and click on "Finish".
              10. -
              11. Launch AutoCAD Map 3D 2013 from your desktop or start menu.
              12. -
              13. On the Autodesk Licensing wizard, review the privacy policy and indicate that you have read and agree with it. Click on "Continue".
              14. -
              15. Click on "Activate" and select "Request an activation code using an offline method". Click on "Next".
              16. -
              17. Copy the request code that appears on the screen.
              18. -
              19. Run the Xforce keygen that you downloaded earlier. Make sure you select "AutoCAD Map 3D 2013" from the drop-down menu.
              20. -
              21. Paste the request code into the keygen and click on "Generate".
              22. -
              23. Copy the activation code that appears on the keygen.
              24. -
              25. Go back to the Autodesk Licensing wizard and paste the activation code into the corresponding field. Click on "Next".
              26. -
              27. If everything goes well, you should see a message that says "Thank you for activating your Autodesk product". Click on "Finish".
              28. -
              -

              Congratulations! You have successfully installed and activated AutoCAD Map 3D 2013. You can now start using it for your GIS mapping projects.

              -

              How to Use AutoCAD Map 3D 2013

              -

              How to access and edit spatial data using Feature Data Objects (FDO) technology

              -

              One of the main features of AutoCAD Map 3D 2013 is its ability to access and edit spatial data from various sources using Feature Data Objects (FDO) technology. FDO is a set of APIs that allow you to connect to different types of data stores, such as databases, files, web services, etc. With FDO, you can:

              -
                -
              • Add data from multiple sources to your map as layers
              • -
              • Edit data directly in AutoCAD Map 3D without exporting or importing
              • -
              • Apply styles, filters, labels, and queries to your data layers
              • -
              • Create new features or modify existing ones using drawing tools
              • -
              • Synchronize changes between your map and your data source
              • -
              -

              To access and edit spatial data using FDO technology, follow these steps:

              -
                -
              1. In AutoCAD Map 3D, click on the "Map Explorer" tab on the task pane.
              2. -
              3. Right-click on "Data" and select "Connect to Data".
              4. -
              5. In the Data Connect window, select a data provider from the list. For example, if you want to connect to a shapefile, select "SHP".
              6. -
              7. Click on "Connect" and browse to the location of your data source. For example, if you selected "SHP", browse to the folder where your shapefile is stored.
              8. -
              9. Select your data source and click on "Add". You can add multiple data sources if you want.
              10. -
              11. Click on "Close" to close the Data Connect window.
              12. -
              13. You should see your data source listed under "Data" in the Map Explorer. You can expand it to see its features.
              14. -
              15. To add a data layer to your map, drag and drop it from the Map Explorer to the Display Manager.
              16. -
              17. To edit a data layer, right-click on it in the Display Manager and select "Edit Layer". You can then use the drawing tools in AutoCAD Map 3D to create or modify features.
              18. -
              19. To save your changes, right-click on your data source in the Map Explorer and select "Synchronize". This will update your data source with your changes.
              20. -

                Note:

                -

                The steps above are general guidelines for accessing and editing spatial data using FDO technology. Depending on the type of data provider and data source you use, some steps may vary. For more information on how to use FDO technology with specific data providers and sources, refer to Autodesk's documentation or online help.
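If you are curious what this connect-read-edit-synchronize workflow looks like in a scripting context, here is a minimal sketch using the open-source GeoPandas library; GeoPandas is my own assumption for illustration and is not the FDO API, and the file name parcels.shp is hypothetical.

```python
# Sketch only: the FDO-style workflow (connect, inspect, edit, write back)
# expressed with GeoPandas instead of AutoCAD Map 3D.
import geopandas as gpd

gdf = gpd.read_file("parcels.shp")      # "connect" to a shapefile data source
print(gdf.head())                       # inspect attributes and geometries

gdf["area_m2"] = gdf.geometry.area      # add a computed attribute (assumes a projected CRS)
gdf.to_file("parcels_edited.shp")       # "synchronize" the edits back to disk
```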

                -

                Note:

                -

If you want to access online map services such as Bing Maps or Google Maps, you need an internet connection and a valid subscription or license for those services. You can add online map services as base maps in AutoCAD Map 3D by clicking on "Add Base Map".

                How to manage infrastructure systems with industry models

                -

                Another main feature of AutoCAD Map 3D 2013 is its ability to manage infrastructure systems with industry models. Industry models are intelligent data models that support data standards and workflows for specific industries, such as water, gas, electric, etc. With industry models, you can:

                -
                  -
                • Create and manage infrastructure assets with attributes, rules, and relationships
                • -
                • Perform network analysis and tracing to identify and solve problems
                • -
                • Generate reports and diagrams to document and communicate your data
                • -
                • Integrate with other Autodesk products, such as Autodesk Infrastructure Design Suite
                • -
                -

                To manage infrastructure systems with industry models, follow these steps:

                -
                  -
                1. In AutoCAD Map 3D, click on the "Industry Model Explorer" tab on the task pane.
                2. -
                3. Right-click on "Projects" and select "Open Project".
                4. -
                5. In the Open Project dialog box, select an enterprise industry model project from the list. For example, if you want to work with a water network, select "Water Sample Project".
                6. -
                7. Enter your user name and password to log in to the project. You can log in as either a Map System user or a Map Main user. The Map System user has administrative privileges and can customize the industry model using the Autodesk Infrastructure Administrator. The Map Main user can only view and edit the data using AutoCAD Map 3D.
                8. -
                9. Click on "OK" to open the project.
                10. -
                11. You should see your project listed under "Projects" in the Industry Model Explorer. You can expand it to see its components, such as feature classes, feature sources, reports, diagrams, etc.
                12. -
                13. To add a feature class layer to your map, drag and drop it from the Industry Model Explorer to the Display Manager.
                14. -
                15. To edit a feature class layer, right-click on it in the Display Manager and select "Edit Layer". You can then use the drawing tools in AutoCAD Map 3D to create or modify features. You can also use the Properties palette to view and edit the attributes of the features.
                16. -
                17. To perform network analysis and tracing, right-click on a feature class layer in the Display Manager and select "Network Analysis". You can then use the tools in the Network Analysis toolbar to perform various tasks, such as finding connected features, finding shortest paths, finding valves to isolate a pipe segment, etc.
                18. -
                19. To generate reports and diagrams, right-click on a feature class layer in the Display Manager and select "Reports" or "Diagrams". You can then select a predefined report or diagram template from the list and specify the parameters for generating it.
                20. -
                -

                Note:

                -

                The steps above are general guidelines for managing infrastructure systems with industry models. Depending on the type of industry model and project you use, some steps may vary. For more information on how to use industry models with specific industries and workflows, refer to Autodesk's documentation or online help.
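To make the network analysis and tracing idea from step 17 a little more concrete, here is a tiny sketch with the open-source NetworkX library; NetworkX and the node names are my own illustrative assumptions and have nothing to do with the industry model tools themselves.

```python
# Sketch only: connectivity tracing on a toy pipe network with NetworkX.
import networkx as nx

pipes = nx.Graph()
pipes.add_edges_from([
    ("reservoir", "valve_1"),
    ("valve_1", "junction_a"),
    ("junction_a", "hydrant_7"),
    ("junction_a", "junction_b"),
])

# "Find connected features": every node reachable from the reservoir.
print(nx.node_connected_component(pipes, "reservoir"))

# "Trace a path": the run of pipe between the reservoir and a hydrant.
print(nx.shortest_path(pipes, "reservoir", "hydrant_7"))
```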

                -

                Note:

                -

                If you want to create your own industry model project or customize an existing one, you need to use the Autodesk Infrastructure Administrator. This is a separate application that allows you to define the data schema, rules, relationships, validations, etc. for your industry model. You can access it from the Start menu under Autodesk > AutoCAD Map 3D 2013 > Autodesk Infrastructure Administrator.

                -

                Note:

                -

                If you want to integrate your industry model project with other Autodesk products, such as Autodesk Infrastructure Design Suite or Autodesk InfraWorks 360 , you need to export your data using FDO technology or other formats. You can also use Autodesk Vault Professional to manage your data across multiple products and projects.


                How to perform spatial analysis and create thematic maps

                -

                A third main feature of AutoCAD Map 3D 2013 is its ability to perform spatial analysis and create thematic maps. Spatial analysis is the process of examining and manipulating spatial data to reveal patterns, trends, relationships, and anomalies. Thematic maps are maps that use colors, symbols, or patterns to represent data values for a specific theme or topic.

                -

                With AutoCAD Map 3D 2013, you can:

                -
                  -
                • Create contour maps that show lines of equal elevation or other values
                • -
                • Stylize, theme, and analyze surfaces by elevation, slope, and aspect
                • -
                • Drape your two-dimensional CAD, GIS, and aerial imagery onto your surfaces and visualize them in 3D
                • -
                • Apply styling and theming to your data layers to display data values for different themes
                • -
                • Build topologies to perform useful calculations and queries on your data
                • -
                • Analyze data with tools such as buffers, overlays, spatial queries, etc.
                • -
                -

                To perform spatial analysis and create thematic maps, follow these steps:

                -
                  -
                1. In AutoCAD Map 3D, click on the "Map Explorer" tab on the task pane.
                2. -
                3. Add the data layers that you want to analyze or theme to your map using the Data Connect window or the Map Explorer.
                4. -
                5. To create a contour map, right-click on a surface layer in the Display Manager and select "Create Contour Layer". Specify the contour interval and other options in the Create Contour Layer dialog box. Click on "OK". A new contour layer will be added to your map.
                6. -
                7. To stylize, theme, or analyze a surface layer, right-click on it in the Display Manager and select "Surface Properties". In the Surface Properties dialog box, you can change the appearance of the surface, apply themes based on elevation, slope, or aspect, and perform analysis such as hillshade, slope direction arrows, cut/fill volumes, etc.
                8. -
                9. To drape a two-dimensional layer onto a surface layer, right-click on it in the Display Manager and select "Drape". In the Drape dialog box, select the surface layer that you want to drape onto and click on "OK". The two-dimensional layer will be draped onto the surface layer and displayed in 3D.
                10. -
                11. To apply styling or theming to a data layer, right-click on it in the Display Manager and select "Style" or "Theme". In the Style Editor or the Thematic Mapping dialog box, you can specify how you want to represent your data values using colors, symbols, patterns, labels, etc.
                12. -
                13. To build a topology for a data layer, right-click on it in the Map Explorer and select "Build Topology". In the Build Topology dialog box, specify the topology name, tolerance, rules, etc. Click on "OK". A new topology will be created under your data source in the Map Explorer.
                14. -
                15. To analyze data with tools such as buffers, overlays, spatial queries, etc., right-click on a data layer in the Display Manager and select "Analysis". In the Analysis toolbar or menu, you can select various tools to perform different types of analysis on your data.
                16. -
                -

                Note:

                -

                The steps above are general guidelines for performing spatial analysis and creating thematic maps. Depending on the type of data layer and analysis tool you use, some steps may vary. For more information on how to use specific tools and options for spatial analysis and thematic mapping, refer to Autodesk's documentation or online help.
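As a small illustration of the buffer and spatial-query concepts mentioned above, here is a sketch using the open-source Shapely library; Shapely and the coordinates are illustrative assumptions and are not part of AutoCAD Map 3D.

```python
# Sketch only: a buffer plus a point-in-polygon test with Shapely.
from shapely.geometry import Point

well = Point(0.0, 0.0)
protection_zone = well.buffer(100.0)        # polygonal buffer of radius 100 around the well

candidate = Point(30.0, 40.0)               # 50 units from the well
print(protection_zone.contains(candidate))  # True: the point falls inside the buffer
print(round(protection_zone.area))          # close to pi * 100**2 (the buffer is a polygonal approximation)
```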


                How to publish and share maps online or offline

                -

                A fourth main feature of AutoCAD Map 3D 2013 is its ability to publish and share maps online or offline. Publishing and sharing maps can help you distribute your geospatial data, maps, and designs to various audiences and platforms. With AutoCAD Map 3D 2013, you can:

                -
                  -
                • Plot or print single-page or multi-page maps to paper or to a file
                • -
                • Publish maps to the internet using Autodesk Infrastructure Map Server software
                • -
                • Publish maps as a single HTML page that anyone can view in a web browser
                • -
                • Save maps in DWF format that can be viewed and annotated with Autodesk Design Review software
                • -
                • Export data to another format, such as DGN or SHP
                • -
                • Create comma-separated reports that can be imported into a spreadsheet, database, or document
                • -
                • Use eTransmit to package all the files your map uses and send them to another AutoCAD Map 3D user
                • -
                • Work offline with cached feature data connections and check out features for editing
                • -
                -

                To publish and share maps online or offline, follow these steps:

                -
                  -
                1. In AutoCAD Map 3D, click on the "Map Explorer" tab on the task pane.
                2. -
                3. Add the data layers that you want to publish or share to your map using the Data Connect window or the Map Explorer.
                4. -
                5. To plot or print a map, click on the "Output" tab on the ribbon and select "Plot". In the Plot dialog box, specify the printer, paper size, layout, scale, plot style, etc. Click on "OK". You can also select "Batch Plot" to plot multiple layouts or drawings at once.
                6. -
                7. To publish a map to the internet using Autodesk Infrastructure Map Server software, click on the "Output" tab on the ribbon and select "Publish Map". In the Publish Map dialog box, specify the server name, user name, password, map name, description, etc. Click on "OK". You can also select "Publish Map Book" to publish multiple maps at once.
                8. -
                9. To publish a map as a single HTML page, click on the "Output" tab on the ribbon and select "Publish HTML". In the Publish HTML dialog box, specify the file name, location, title, description, etc. Click on "OK". A new HTML file will be created with an embedded image of your map.
                10. -
                11. To save a map in DWF format, click on the "Output" tab on the ribbon and select "Export". In the Export dialog box, select "DWF" from the list of file types. Specify the file name, location, options, etc. Click on "OK". A new DWF file will be created with your map data.
                12. -
                13. To export data to another format, click on the "Output" tab on the ribbon and select "Export". In the Export dialog box, select a file type from the list that matches your desired format. For example, if you want to export to DGN format, select "MicroStation DGN". Specify the file name, location, options, etc. Click on "OK". A new file will be created with your data converted to the selected format.
                14. -
15. To create a comma-separated report, right-click on a data layer in the Display Manager and select "Reports". In the Reports dialog box, select a report template from the list. Specify the parameters for generating the report. Click on "OK". A new CSV file will be created with your report data. A short Python sketch for reading such a report follows this list.
                16. -
                17. To use eTransmit to package all the files your map uses and send them to another AutoCAD Map 3D user, click on the "Application Menu" button and select "eTransmit". In the Create Transmittal dialog box, specify the files to include, transmittal options, transmittal location and name. Click on "OK". A new ZIP file will be created with all your files packaged together.
                18. -
19. To work offline with cached feature data connections and check out features for editing, click on the "Feature Edit" tab on the ribbon and select "Automatic Update". Check out the features you plan to use. For more information, see Checking Out Features. Then click on the Online/Offline toggle to switch to offline mode.
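Since step 15 produces a comma-separated report, here is a minimal sketch of reading such a file with Python's standard csv module; the file name report.csv and its columns are hypothetical.

```python
# Sketch only: load a generated CSV report with the standard library.
import csv

with open("report.csv", newline="") as fh:
    reader = csv.DictReader(fh)   # the first row is assumed to be a header
    for row in reader:
        print(row)                # each row is a dict mapping column name to value
```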

                  Conclusion

                  -

                  In this article, we have shown you how to download and install AutoCAD Map 3D 2013, how to use its main features, and how to publish and share your maps online or offline. We have also provided some tips and examples for each step.

                  -

                  AutoCAD Map 3D 2013 is a powerful tool for GIS mapping that can help you create, edit, and manage geospatial data, perform spatial analysis and create thematic maps, and integrate with other Autodesk products. Whether you are a beginner or an expert, you can benefit from using AutoCAD Map 3D 2013 for your GIS mapping projects.

                  -

                  If you want to learn more about AutoCAD Map 3D 2013, you can visit Autodesk's website or check out their online help and documentation. You can also find tutorials, videos, forums, blogs, and other resources online to help you master AutoCAD Map 3D 2013.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions and answers related to AutoCAD Map 3D 2013:

                  -

                  Q: How much does AutoCAD Map 3D 2013 cost?

                  -

                  A: AutoCAD Map 3D 2013 is no longer available for purchase from Autodesk as it is an older version of the software. However, you may be able to find it from other sources online or offline. The price may vary depending on the seller and the condition of the software.

                  -

                  Q: Can I use AutoCAD Map 3D 2013 on Windows 10?

                  -

                  A: AutoCAD Map 3D 2013 is not officially supported by Autodesk on Windows 10. However, some users have reported that they were able to run it on Windows 10 with some minor issues or workarounds. You can try installing it on Windows 10 at your own risk, but be aware that you may encounter some compatibility problems or errors.

                  -

                  Q: What are the differences between AutoCAD Map 3D and AutoCAD Civil 3D?

                  -

                  A: AutoCAD Map 3D and AutoCAD Civil 3D are both specialized toolsets within AutoCAD that are designed for different purposes. AutoCAD Map 3D is focused on GIS mapping and geospatial data management, while AutoCAD Civil 3D is focused on civil engineering design and documentation. Both toolsets share some common features and functions, such as FDO technology, industry models, surface analysis, etc., but they also have some unique features and workflows that suit their respective domains.

                  -

                  Q: How can I get help or support for AutoCAD Map 3D 2013?

                  -

                  A: If you need help or support for AutoCAD Map 3D 2013, you can contact Autodesk support through their website or phone number. You can also visit their online help and documentation for troubleshooting tips and guides. Alternatively, you can search online for forums, blogs, videos, tutorials, and other resources that may provide answers or solutions to your questions or problems.

                  -

                  Q: How can I update or upgrade my AutoCAD Map 3D 2013?

                  -

                  A: If you want to update or upgrade your AutoCAD Map 3D 2013, you can check for available updates or service packs from Autodesk's website or through the Autodesk Desktop App. You can also download and install them manually from their website. If you want to upgrade to a newer version of AutoCAD Map 3D, such as AutoCAD Map 3D 2022 , you can purchase a subscription from Autodesk's website or contact your reseller.

                  -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Its Always Sunny In Philadelphia Seasons 1-6 DVDRIP VERIFIED.md b/spaces/tialenAdioni/chat-gpt-api/logs/Its Always Sunny In Philadelphia Seasons 1-6 DVDRIP VERIFIED.md deleted file mode 100644 index 081ddf045b629abf0e116a3c63d2f5c40d5b10ef..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Its Always Sunny In Philadelphia Seasons 1-6 DVDRIP VERIFIED.md +++ /dev/null @@ -1,17 +0,0 @@ - -Here is a possible title and article for the keyword "It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP": - -

                  Why You Should Watch It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP

                  -

                  If you are looking for a hilarious, outrageous, and irreverent comedy show, you should definitely watch It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP. This show follows the exploits of five self-centered and dysfunctional friends who run a dive bar in Philadelphia and get into all kinds of trouble. The show is known for its dark humor, absurd situations, and controversial topics.

                  -

                  It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP contains some of the best episodes of the show, such as "The Gang Gets Racist", "The Nightman Cometh", "The D.E.N.N.I.S. System", "The Gang Buys a Boat", and "The Gang Reignites the Rivalry". You will laugh out loud at the antics of Mac, Dennis, Charlie, Dee, and Frank as they scheme, lie, cheat, and backstab each other. You will also enjoy the guest appearances of celebrities like Danny DeVito, Rob Thomas, Sinbad, and Jon Polito.

                  -

                  It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP


                  Download File ★★★★★ https://urlcod.com/2uK5vq



                  -

                  It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP is available for download on various websites. You can watch it on your computer, tablet, or smartphone. You can also burn it to a DVD and watch it on your TV. The quality of the video and audio is excellent and you will not miss any detail of the show.

                  -

                  So what are you waiting for? Download It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP today and enjoy one of the funniest and most original comedy shows ever made. You will not regret it!

                  Here is a possible continuation of the article: - -

                  It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP is not only a comedy show, but also a satire of American culture and society. The show tackles issues such as racism, sexism, homophobia, drug abuse, religion, politics, and more. The show does not shy away from making fun of anyone and anything, no matter how sensitive or taboo. The show also breaks the fourth wall and mocks itself and its fans.

                  -

                  It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP is a show that will make you think as well as laugh. You will appreciate the clever writing, the brilliant acting, and the creative direction of the show. You will also admire the courage and honesty of the show's creators and stars, who are not afraid to take risks and push boundaries. You will see why this show has been praised by critics and fans alike, and why it has won several awards and nominations.

                  -

                  It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP is a show that you will never forget. You will find yourself quoting lines from the show, imitating characters from the show, and referencing scenes from the show. You will also become part of a large and loyal fan base that loves and supports this show. You will discover a whole new world of humor and entertainment that you never knew existed.

                  -

                  Don't miss this opportunity to watch It's Always Sunny In Philadelphia Seasons 1-6 DVDRIP. You will be glad you did!

                  -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cera Una Volta Il West Ennio Morricone Pdf Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cera Una Volta Il West Ennio Morricone Pdf Download.md deleted file mode 100644 index 972d886ee1e127ce632ca5d442595d2a5f750e49..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Cera Una Volta Il West Ennio Morricone Pdf Download.md +++ /dev/null @@ -1,29 +0,0 @@ -
                  -``` -

                  C'era Una Volta Il West: The Legendary Music of Ennio Morricone

                  -

                  C'era Una Volta Il West (Once Upon a Time in the West) is one of the most iconic and influential western movies of all time, directed by Sergio Leone and starring Henry Fonda, Charles Bronson, Claudia Cardinale and Jason Robards. The film is also renowned for its unforgettable music score, composed by the legendary Ennio Morricone, who passed away in 2020 at the age of 91.

                  -

                  Morricone's music for C'era Una Volta Il West is a masterpiece of cinematic sound, blending orchestral instruments, electric guitars, harmonicas, whistles, choirs and solo vocals to create a rich and varied musical landscape that evokes the emotions, themes and atmospheres of the film. The main theme, played by a haunting soprano voice and a melancholic harmonica, is one of the most recognizable and beloved melodies in film history.

                  -

                  C'era Una Volta Il West Ennio Morricone Pdf Download


                  Download - https://urlcod.com/2uHwxc



                  -

                  If you are a fan of C'era Una Volta Il West and Ennio Morricone's music, you might be interested in downloading a PDF file that contains the sheet music for some of the most famous pieces from the film. You can find free PDF downloads of C'era Una Volta Il West by Ennio Morricone sheet music on Musescore.com, a website that allows users to share and print scores for various instruments and ensembles.

                  -

                  On Musescore.com, you can find C'era Una Volta Il West by Ennio Morricone sheet music for piano solo, concert band, brass quintet, symphony orchestra, mixed ensemble and more. You can also listen to the scores online or download them as MIDI or MP3 files. Whether you want to play the music yourself or just enjoy listening to it, Musescore.com is a great resource for C'era Una Volta Il West by Ennio Morricone sheet music.

                  -

                  To download C'era Una Volta Il West by Ennio Morricone sheet music on Musescore.com, you can follow these simple steps:

                  -
                    -
                  1. Go to https://musescore.com/song/cera_una_volta_il_west-2412361 and browse through the available scores.
                  2. -
                  3. Select the score that matches your instrument or ensemble preference and click on it.
                  4. -
                  5. On the score page, click on the "Download" button on the top right corner.
                  6. -
                  7. Choose "PDF" as the file format and click on "Download" again.
                  8. -
                  9. Save the PDF file to your device and print it or view it on your screen.
                  10. -
                  -

                  C'era Una Volta Il West by Ennio Morricone sheet music is a wonderful way to appreciate and celebrate the genius of one of the greatest film composers of all time. Download it today and enjoy the music of C'era Una Volta Il West!

                  -``` - -``` -

                  One of the most remarkable aspects of C'era Una Volta Il West is how Morricone's music interacts with the images, the dialogue and the silence on the screen. Morricone composed the music before the film was shot, and Leone used it as a guide for his actors and cinematographer. The result is a perfect synchronization between sound and vision, creating a powerful and expressive cinematic language.

                  -

                  For example, in the opening scene, three gunmen wait for a train at a deserted station. The music is minimal and sparse, punctuated by natural sounds such as a windmill, a fly, a water drop and a harmonica. The tension builds up slowly until the train arrives and the harmonica theme is heard for the first time, introducing the mysterious character of Harmonica (Bronson), who confronts and kills the three men.

                  -

                  In another scene, Jill (Cardinale) arrives at the town of Flagstone by train, expecting to meet her husband. The music is a romantic and melancholic waltz that contrasts with the harsh and dusty environment. As she walks through the town, she sees a funeral procession and learns that her husband and his family have been massacred by Frank (Fonda) and his men. The music changes to a mournful and tragic theme that accompanies her journey to her new home.

                  -

                  In the final scene, Harmonica finally faces Frank in a duel. The music is a combination of their respective themes: the harmonica motif for Harmonica and an electric guitar riff for Frank. The music alternates between them as they stare at each other, creating a sense of anticipation and suspense. As they draw their guns, the music stops and only a gunshot is heard. Harmonica shoots Frank in the chest and plays his harmonica one last time, revealing his motive for revenge.

                  -

                  -

                  C'era Una Volta Il West by Ennio Morricone is not only a beautiful and memorable score, but also an essential part of the film's narrative and aesthetic. Morricone's music enhances Leone's vision and creates an unforgettable cinematic experience.

                  -```

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/F1 2012 V1 05 ( 12 Trainer) By SKIDROW.md b/spaces/tioseFevbu/cartoon-converter/scripts/F1 2012 V1 05 ( 12 Trainer) By SKIDROW.md deleted file mode 100644 index 54289e8da781b624d68e1996d22d741c230cbb3c..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/F1 2012 V1 05 ( 12 Trainer) By SKIDROW.md +++ /dev/null @@ -1,48 +0,0 @@ - -

                  How to Use F1 2012 v1 05 ( 12 Trainer) by SKIDROW to Enhance Your Gaming Experience

                  - -

                  F1 2012 is a racing simulation game that lets you experience the thrill and challenge of competing in the Formula One World Championship. You can choose from 12 teams and 24 drivers, race on 20 circuits, and customize your car settings to suit your driving style. But if you want to have more fun and freedom in the game, you might want to try using F1 2012 v1 05 ( 12 Trainer) by SKIDROW.

                  -

                  {F1 2012 v1 05 ( 12 Trainer) by SKIDROW}


                  Download Ziphttps://urlcod.com/2uHyOf



                  - -

                  F1 2012 v1 05 ( 12 Trainer) by SKIDROW is a cheat tool that allows you to activate various features and options in the game, such as:

                  - -
                    -
                  • Unlimited fuel
                  • -
                  • Unlimited flashbacks
                  • -
                  • Freeze timer
                  • -
                  • Freeze AI
                  • -
                  • Add laps
                  • -
                  • Add time
                  • -
                  • Subtract time
                  • -
                  • Save position
                  • -
                  • Restore position
                  • -
                  • Super speed
                  • -
                  • Super brakes
                  • -
                  • Super jump
                  • -
                  - -

                  With these features, you can enjoy the game without worrying about running out of fuel, making mistakes, or losing time. You can also explore the tracks, perform stunts, and have fun with the AI drivers. F1 2012 v1 05 ( 12 Trainer) by SKIDROW is easy to use and compatible with the Steam version of the game.

                  - -

                  To use F1 2012 v1 05 ( 12 Trainer) by SKIDROW, you need to follow these steps:

                  - -
                    -
                  1. Download F1 2012 v1 05 ( 12 Trainer) by SKIDROW from one of the web search results[^2^] [^3^] [^4^]. Make sure to scan the file with your antivirus program before opening it.
                  2. -
                  3. Extract the file to a folder of your choice.
                  4. -
                  5. Run F1.2012.Update.2.to.5.exe and install the update.
                  6. -
                  7. Copy the cracked content from the SKIDROW folder to the main install folder of F1 2012 and overwrite the existing files.
                  8. -
                  9. Block the game in your firewall and mark the cracked content as secure/trusted in your antivirus program.
                  10. -
                  11. Run F12012_Trainer.exe as administrator.
                  12. -
                  13. Start the game and press F1 at the main menu to activate the trainer.
                  14. -
                  15. Use the numpad keys to toggle the features on/off during gameplay.
                  16. -
                  - -

                  F1 2012 v1 05 ( 12 Trainer) by SKIDROW is a great way to spice up your gaming experience and have more fun with F1 2012. However, you should use it at your own risk and discretion, as it may affect the game's performance, stability, or online functionality. You should also respect the game's developers and publishers and support them by buying their products if you like them.

                  -

                  - -

                  F1 2012 v1 05 ( 12 Trainer) by SKIDROW is not the only cheat tool available for F1 2012. There are other trainers and mods that you can find online that offer different features and options. For example, you can use F1 2012 v1.3.3.0 +4 TRAINER by MT-X to get unlimited KERS, DRS, and tyre wear. You can also use F1 2012 NO INTRO FIX by Sharkiller to skip the intro videos of the game. However, you should always be careful when downloading and using third-party software, as they may contain viruses, malware, or unwanted programs.

                  - -

                  If you are looking for more challenges and realism in F1 2012, you might want to try some of the official updates and patches that the game's developers have released. These updates fix various bugs and issues, improve the game's performance and graphics, and add new features and content. For example, you can download F1 2012 Update 4 to get a more uniform level of grip across the track surface, reduce the chance of rain for all tracks, and adjust the prime tyre compound to hard for Suzuka. You can also download F1 2012 Update 5 to reduce the likelihood of getting a penalty when involved in a collision with an AI car, fix an issue where the engine could be damaged or destroyed when using a flashback, and add an option to delete profiles. You can find these updates on the game's official website or on Steam.

                  - -

                  F1 2012 is a great game for fans of Formula One racing and simulation games. It offers a realistic and immersive experience that lets you feel the speed and excitement of driving a Formula One car. Whether you want to play by the rules or bend them with cheats and trainers, you can have a lot of fun with F1 2012. Just remember to respect the game's creators and other players, and enjoy the game responsibly.

                  7196e7f11a
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Hambarun Vasrale Mp3 Song 84.md b/spaces/tioseFevbu/cartoon-converter/scripts/Hambarun Vasrale Mp3 Song 84.md deleted file mode 100644 index 018beac99975e6c44e16f2980c8db90f430d8b74..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Hambarun Vasrale Mp3 Song 84.md +++ /dev/null @@ -1,21 +0,0 @@ -

                  Hambarun Vasrale: A Soulful Song from Maay

                  -

                  Hambarun Vasrale is a beautiful song from the Marathi movie Maay, released in 2013. The song is sung by Prashant More and Mayuresh Kelkar, and composed by Mayuresh Kelkar. The lyrics are written by Prashant More, who also wrote the screenplay and dialogues for the movie.

                  -

                  The song is a romantic duet that expresses the love and longing between two lovers who are separated by fate. The song has a soothing melody and a catchy chorus that will make you hum along. The song also showcases the rich culture and traditions of Maharashtra, as the singers use words like "hambarun", "vasarale", "chandanachya", and "jina" to describe their feelings.

                  -

                  hambarun vasrale mp3 song 84


                  Download File ✒ ✒ ✒ https://urlcod.com/2uHyk9



                  -

                  If you are looking for a song that will touch your heart and make you feel nostalgic, then Hambarun Vasrale is the perfect choice for you. You can listen to this song online on Wynk Music[^1^], SoundCloud[^2^] [^3^], or any other music streaming platform of your choice. You can also download the MP3 song for offline listening or set it as your hello tune on Wynk Music app for free.

                  -

                  So, don't wait any longer and enjoy this melodious song from Maay today!


                  Hambarun Vasrale is not just a song, but also a part of the movie Maay, which is a Tamil-language drama film starring Sarath Kumar and Meena. The movie was released in 2000 and was a super hit at the box office. The movie tells the story of Maayi, a noble and generous man who helps the people of his village in every possible way. He also has a tragic past, as his mother died of leprosy when he was a child and he never got to touch her. He vows to remain unmarried and treats all the women in his village as his sisters.

                  -

                  The movie was directed by Surya Prakash and produced by R. B. Choudary under the banner of Super Good Films. The music was composed by S. A. Rajkumar, who gave some memorable songs for the movie. The movie was later remade in Telugu as Simharasi and in Kannada as Narasimha, with different actors playing the lead roles.

                  -

                  Hambarun Vasrale is one of the most popular songs from the movie, as it captures the essence of love and longing in a simple and sweet way. The song is also a tribute to the rich culture and heritage of Maharashtra, as it uses some Marathi words and phrases to convey the emotions of the singers. The song is a must-listen for anyone who loves romantic songs with a touch of folk music.


                  Hambarun Vasrale: A Song Loved by Many

                  -

                  Hambarun Vasrale is not just a song, but also a source of inspiration and joy for many people who have listened to it. The song has received positive reviews from critics and audiences alike, who have praised the singers, the composer, and the lyricist for their excellent work. The song has also been performed live by various artists, such as Jitendra Joshi[^1^], Sagar Jain[^2^], and Sahadeo Raut[^3^], who have added their own flavor and style to the song.

                  -

The song has also been featured on various platforms, such as YouTube, Spotify, Wynk Music, and SoundCloud, where it has garnered millions of views and streams. It has been shared and liked by many people on social media, who have expressed their love and admiration for it, and it has been used as a ringtone, a hello tune, and background music for many videos and events.

                  -

                  Hambarun Vasrale is a song that has touched the hearts of many people with its simple yet profound lyrics and its melodious yet catchy tune. The song is a perfect example of how music can transcend boundaries and bring people together. The song is a tribute to the power of love and the beauty of life.

                  -

                  7196e7f11a
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tomofi/MMOCR/mmocr/core/evaluation/__init__.py b/spaces/tomofi/MMOCR/mmocr/core/evaluation/__init__.py deleted file mode 100644 index ab18b39de4f4183198763f4c571d29a33f8e9b3e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/core/evaluation/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hmean import eval_hmean -from .hmean_ic13 import eval_hmean_ic13 -from .hmean_iou import eval_hmean_iou -from .kie_metric import compute_f1_score -from .ner_metric import eval_ner_f1 -from .ocr_metric import eval_ocr_metric - -__all__ = [ - 'eval_hmean_ic13', 'eval_hmean_iou', 'eval_ocr_metric', 'eval_hmean', - 'compute_f1_score', 'eval_ner_f1' -] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py deleted file mode 100644 index 464aef787de3c932dc3244a93e62cc3df83002ec..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = '../dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py' -model = dict( - backbone=dict( - norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py deleted file mode 100644 index a2370e234dfec0099aaf74c46a3a85052d882385..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = './gfl_r50_fpn_mstrain_2x_coco.py' -model = dict( - type='GFL', - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, False, True, True), - norm_eval=True, - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py deleted file mode 100644 index 482f88729ff6c08e482a5ca5c6d48b75f14f7ca8..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py' -img_norm_cfg = dict( - mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', 
keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/tomsoderlund/swedish-entity-recognition/app.py b/spaces/tomsoderlund/swedish-entity-recognition/app.py deleted file mode 100644 index 26ccdf68659d41e178f339c8c24f43d64647f51d..0000000000000000000000000000000000000000 --- a/spaces/tomsoderlund/swedish-entity-recognition/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import gradio -from transformers import pipeline - -# Merge split tokens starting with '##' -def merge_split_tokens(tokens): - merged_tokens = [] - for token in tokens: - if token["word"].startswith('##'): - merged_tokens[-1]["word"] += token["word"][2:] - else: - merged_tokens.append(token) - return merged_tokens - -def process_swedish_text(text): - # Models from https://huggingface.co/models - # https://huggingface.co/KBLab/bert-base-swedish-cased-ner - nlp = pipeline('ner', model='KBLab/bert-base-swedish-cased-ner', tokenizer='KBLab/bert-base-swedish-cased-ner') - # Run NER - nlp_results = nlp(text) - print('nlp_results:', nlp_results) - nlp_results_merged = merge_split_tokens(nlp_results) - # Fix TypeError("'numpy.float32' object is not iterable") - nlp_results_adjusted = map(lambda entity: dict(entity, **{ 'score': float(entity['score']) }), nlp_results_merged) - print('nlp_results_adjusted:', nlp_results_adjusted) - # Return values - return {'entities': list(nlp_results_adjusted)} - -gradio_interface = gradio.Interface( - fn=process_swedish_text, - inputs="text", - outputs="json", - examples=[ - ["Jag heter Tom och bor i Stockholm."], - ["Groens malmgård är en av Stockholms malmgårdar, belägen vid Malmgårdsvägen 53 på Södermalm i Stockholm."] - ], - title="Swedish Entity Recognition", - description="Recognizing Swedish tokens e.g. locations and person names.", - article="© Tom Söderlund 2022" -) -gradio_interface.launch() diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/losses/__init__.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/losses/__init__.py deleted file mode 100644 index 876d7c5bd6e3245ee77feb4c482b7a8143604ad5..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/losses/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from ldm.modules.losses.contperceptual import LPIPSWithDiscriminator \ No newline at end of file diff --git a/spaces/trttung1610/musicgen/audiocraft/utils/best_state.py b/spaces/trttung1610/musicgen/audiocraft/utils/best_state.py deleted file mode 100644 index f5ad551432ad5cb0f83278b5d2100f9aa287958b..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/utils/best_state.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections import defaultdict -import logging -import typing as tp - -import flashy -import torch - -from ..optim import ModuleDictEMA -from .utils import copy_state - - -logger = logging.getLogger(__name__) - - -class BestStateDictManager(flashy.state.StateDictSource): - """BestStateDictManager maintains a copy of best state_dict() for registered sources. - - BestStateDictManager has two main attributes: - states (dict): State dict of the registered StateDictSource. - param_ids (dict): Dict of parameter ids for registered states from ModuleDictEMA and other sources. - - When registering new sources, the BestStateDictManager will ensure two conflicting sources between - ModuleDictEMA and original modules are not both registered as it would otherwise create ambiguity about - what to consider for best state. - - Args: - device (torch.device or str): Device on which we keep the copy. - dtype (torch.dtype): Data type for the state parameters. - """ - def __init__(self, device: tp.Union[torch.device, str] = 'cpu', - dtype: tp.Optional[torch.dtype] = None): - self.device = device - self.states: dict = {} - self.param_ids: dict = defaultdict(dict) - self.dtype = dtype - - def _get_parameter_ids(self, state_dict): - return {id(p): name for name, p in state_dict.items() if isinstance(p, torch.Tensor)} - - def _validate_no_parameter_ids_overlap(self, name: str, param_ids: dict): - for registered_name, registered_param_ids in self.param_ids.items(): - if registered_name != name: - overlap = set.intersection(registered_param_ids.keys(), param_ids.keys()) - assert len(overlap) == 0, f"Found {len(overlap)} / {len(param_ids.keys())} overlapping parameters" - f" in {name} and already registered {registered_name}: {' '.join(overlap)}" - - def update(self, name: str, source: flashy.state.StateDictSource): - if name not in self.states: - raise ValueError(f"{name} missing from registered states.") - self.states[name] = copy_state(source.state_dict(), device=self.device, dtype=self.dtype) - - def register(self, name: str, source: flashy.state.StateDictSource): - if name in self.states: - raise ValueError(f"{name} already present in states.") - # Registering parameter ids for EMA and non-EMA states allows us to check that - # there is no overlap that would create ambiguity about how to handle the best state - param_ids = self._get_parameter_ids(source.state_dict()) - if isinstance(source, ModuleDictEMA): - logger.debug(f"Registering to best state: ModuleDictEMA '{name}' with {len(param_ids)} params") - self._validate_no_parameter_ids_overlap(name, param_ids) - self.param_ids[name] = param_ids - else: - logger.debug(f"Registering to best state: StateDictSource '{name}' with {len(param_ids)} params") - self._validate_no_parameter_ids_overlap('base', param_ids) - self.param_ids['base'].update(param_ids) - # Register state - self.states[name] = copy_state(source.state_dict(), device=self.device, dtype=self.dtype) - - def state_dict(self) -> flashy.state.StateDict: - return self.states - - def load_state_dict(self, state: flashy.state.StateDict): - for name, sub_state in state.items(): - for k, v in sub_state.items(): - self.states[name][k].copy_(v) diff --git a/spaces/ucalyptus/PTI/torch_utils/ops/upfirdn2d.cpp b/spaces/ucalyptus/PTI/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. 
All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. 
- dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/udaykiran6703/UdayGenAI/app.py b/spaces/udaykiran6703/UdayGenAI/app.py deleted file mode 100644 index 97140e0bbcfef16ca8240758119418b69b3deaf7..0000000000000000000000000000000000000000 --- a/spaces/udaykiran6703/UdayGenAI/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet Uday your youthfull and witty personal assistant, he is always eager to help you with any kind of query. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/umn-msi/fatchecker/app.py b/spaces/umn-msi/fatchecker/app.py deleted file mode 100644 index b49ef3fa9c2d37009bbb36c5134b9c9681dc5f23..0000000000000000000000000000000000000000 --- a/spaces/umn-msi/fatchecker/app.py +++ /dev/null @@ -1,208 +0,0 @@ -import gradio as gr -from gradio_client import Client -import numpy as np -#import torch -import requests -from PIL import Image -#from torchvision import transforms -from predict_unet import predict_model - - -title = "
                  Medical Image Segmentation with UNet
                  " - -examples = [["examples/50494616.jpg"], ["examples/50494676.jpg"], ["examples/56399783.jpg"], - ["examples/56399789.jpg"], ["examples/56399831.jpg"], ["examples/56399959.jpg"], - ["examples/56400014.jpg"], ["examples/56400119.jpg"], - ["examples/56481903.jpg"], ["examples/70749195.jpg"]] - -def run_unetv0(input): - output = predict_model(input, "v0") - normalized_output = np.clip(output, 0, 1) - return normalized_output - -def run_unetv1(input): - output = predict_model(input, "v1") - normalized_output = np.clip(output, 0, 1) - return normalized_output - -def run_unetv2(input): - output = predict_model(input, "v2") - normalized_output = np.clip(output, 0, 1) - return normalized_output - -def run_unetv3(input): - output = predict_model(input, "v3") - normalized_output = np.clip(output, 0, 1) - return normalized_output - - -input_img_v0 = gr.Image(label="Input", type='numpy') -segm_img_v0 = gr.Image(label="Segmented Image") - -input_img_v1 = gr.Image(label="Input", type='numpy') -segm_img_v1 = gr.Image(label="Segmented Image") - -input_img_v2 = gr.Image(label="Input", type='numpy') -segm_img_v2 = gr.Image(label="Segmented Image") - -input_img_v3 = gr.Image(label="Input", type='numpy') -segm_img_v3 = gr.Image(label="Segmented Image") - - -with gr.Blocks(title='UNet examples') as demo: - # v0: regular UNet - with gr.Tab("Regular UNet (v0)"): - # display input image and segmented image - with gr.Row(variant="panel"): - with gr.Column(scale=1): - input_img_v0.render() - - with gr.Column(scale=1): - segm_img_v0.render() - - # submit and clear - with gr.Row(): - with gr.Column(): - segment_btn_v0 = gr.Button("Run Segmentation", variant='primary') - clear_btn_v0 = gr.Button("Clear", variant="secondary") - - # load examples - gr.Markdown("Try some of the examples below") - gr.Examples(examples=examples, - inputs=[input_img_v0], - outputs=segm_img_v0, - fn=run_unetv0, - cache_examples=False, - examples_per_page=5) - - # just a placeholder for second column - with gr.Column(): - gr.Markdown("") - - segment_btn_v0.click(run_unetv0, - inputs=[ - input_img_v0, - ], - outputs=segm_img_v0) - - - # v1: UNet3+ - with gr.Tab("UNet3+ (v1)"): - # display input image and segmented image - with gr.Row(variant="panel"): - with gr.Column(scale=1): - input_img_v1.render() - - with gr.Column(scale=1): - segm_img_v1.render() - - # submit and clear - with gr.Row(): - with gr.Column(): - segment_btn_v1 = gr.Button("Run Segmentation", variant='primary') - clear_btn_v1 = gr.Button("Clear", variant="secondary") - - # load examples - gr.Markdown("Try some of the examples below") - gr.Examples(examples=examples, - inputs=[input_img_v1], - outputs=segm_img_v1, - fn=run_unetv1, - cache_examples=False, - examples_per_page=5) - - # just a placeholder for second column - with gr.Column(): - gr.Markdown("") - - segment_btn_v1.click(run_unetv1, - inputs=[ - input_img_v1, - ], - outputs=segm_img_v1) - - - # v2: UNet3+ with deep supervision - with gr.Tab("UNet3+(v2) with deep supervision"): - # display input image and segmented image - with gr.Row(variant="panel"): - with gr.Column(scale=1): - input_img_v2.render() - - with gr.Column(scale=1): - segm_img_v2.render() - - # submit and clear - with gr.Row(): - with gr.Column(): - segment_btn_v2 = gr.Button("Run Segmentation", variant='primary') - clear_btn_v2 = gr.Button("Clear", variant="secondary") - - # load examples - gr.Markdown("Try some of the examples below") - gr.Examples(examples=examples, - inputs=[input_img_v2], - outputs=segm_img_v2, - fn=run_unetv2, - 
cache_examples=False, - examples_per_page=5) - - # just a placeholder for second column - with gr.Column(): - gr.Markdown("") - - segment_btn_v2.click(run_unetv2, - inputs=[ - input_img_v2, - ], - outputs=segm_img_v2) - - - # v3: UNet3+ with deep supervision and cgm - with gr.Tab("UNet3+(v3) with deep supervision and cgm"): - # display input image and segmented image - with gr.Row(variant="panel"): - with gr.Column(scale=1): - input_img_v3.render() - - with gr.Column(scale=1): - segm_img_v3.render() - - # submit and clear - with gr.Row(): - with gr.Column(): - segment_btn_v3 = gr.Button("Run Segmentation", variant='primary') - clear_btn_v3 = gr.Button("Clear", variant="secondary") - - # load examples - gr.Markdown("Try some of the examples below") - gr.Examples(examples=examples, - inputs=[input_img_v3], - outputs=segm_img_v3, - fn=run_unetv3, - cache_examples=False, - examples_per_page=5) - - # just a placeholder for second column - with gr.Column(): - gr.Markdown("") - - segment_btn_v3.click(run_unetv3, - inputs=[ - input_img_v3, - ], - outputs=segm_img_v3) - - - def clear(): - return None, None - - clear_btn_v0.click(clear, outputs=[input_img_v0, segm_img_v0]) - clear_btn_v1.click(clear, outputs=[input_img_v1, segm_img_v1]) - clear_btn_v2.click(clear, outputs=[input_img_v2, segm_img_v2]) - clear_btn_v3.click(clear, outputs=[input_img_v3, segm_img_v3]) - - -demo.queue() -demo.launch() - diff --git a/spaces/umoubuton/atri-bert-vits2/monotonic_align/__init__.py b/spaces/umoubuton/atri-bert-vits2/monotonic_align/__init__.py deleted file mode 100644 index aed94600a6b01f4322b371b0c57d5b05713c4dac..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/monotonic_align/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/unidiffuser-testing/unidiffuser-testing/unidiffuser/sample_v0.py b/spaces/unidiffuser-testing/unidiffuser-testing/unidiffuser/sample_v0.py deleted file mode 100644 index 7b21270b0d05dee9b76abce81f08a32edc784ec0..0000000000000000000000000000000000000000 --- a/spaces/unidiffuser-testing/unidiffuser-testing/unidiffuser/sample_v0.py +++ /dev/null @@ -1,418 +0,0 @@ -import ml_collections -import torch -import random -import utils -from dpm_solver_pp import NoiseScheduleVP, DPM_Solver -from absl import logging -import einops -import libs.autoencoder -import libs.clip -from torchvision.utils import save_image, make_grid -import torchvision.transforms as standard_transforms -import numpy as np -import clip -from PIL import Image -import time - - -def stable_diffusion_beta_schedule(linear_start=0.00085, linear_end=0.0120, n_timestep=1000): - _betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - return _betas.numpy() - - -def prepare_contexts(config, clip_text_model, clip_img_model, clip_img_model_preprocess, autoencoder): - resolution = config.z_shape[-1] * 8 - device = 'cuda' if torch.cuda.is_available() else 'cpu' - - contexts = torch.randn(config.n_samples, 77, 
config.clip_text_dim).to(device) - img_contexts = torch.randn(config.n_samples, 2 * config.z_shape[0], config.z_shape[1], config.z_shape[2]) - clip_imgs = torch.randn(config.n_samples, 1, config.clip_img_dim) - - if config.mode in ['t2i', 't2i2t']: - prompts = [ config.prompt ] * config.n_samples - contexts = clip_text_model.encode(prompts) - - elif config.mode in ['i2t', 'i2t2i']: - from PIL import Image - img_contexts = [] - clip_imgs = [] - - def get_img_feature(image): - image = np.array(image).astype(np.uint8) - image = utils.center_crop(resolution, resolution, image) - clip_img_feature = clip_img_model.encode_image(clip_img_model_preprocess(Image.fromarray(image)).unsqueeze(0).to(device)) - - image = (image / 127.5 - 1.0).astype(np.float32) - image = einops.rearrange(image, 'h w c -> 1 c h w') - image = torch.tensor(image, device=device) - moments = autoencoder.encode_moments(image) - - return clip_img_feature, moments - - image = Image.open(config.img).convert('RGB') - clip_img, img_context = get_img_feature(image) - - img_contexts.append(img_context) - clip_imgs.append(clip_img) - img_contexts = img_contexts * config.n_samples - clip_imgs = clip_imgs * config.n_samples - - img_contexts = torch.concat(img_contexts, dim=0) - clip_imgs = torch.stack(clip_imgs, dim=0) - - return contexts, img_contexts, clip_imgs - - -def unpreprocess(v): # to B C H W and [0, 1] - v = 0.5 * (v + 1.) - v.clamp_(0., 1.) - return v - - -def set_seed(seed: int): - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - -def evaluate(config): - if config.get('benchmark', False): - torch.backends.cudnn.benchmark = True - torch.backends.cudnn.deterministic = False - - device = 'cuda' if torch.cuda.is_available() else 'cpu' - set_seed(config.seed) - - config = ml_collections.FrozenConfigDict(config) - utils.set_logger(log_level='info') - - _betas = stable_diffusion_beta_schedule() - N = len(_betas) - - nnet = utils.get_nnet(**config.nnet) - logging.info(f'load nnet from {config.nnet_path}') - nnet.load_state_dict(torch.load(config.nnet_path, map_location='cpu')) - nnet.to(device) - nnet.eval() - - use_caption_decoder = config.text_dim < config.clip_text_dim or config.mode != 't2i' - if use_caption_decoder: - from libs.caption_decoder import CaptionDecoder - caption_decoder = CaptionDecoder(device=device, **config.caption_decoder) - else: - caption_decoder = None - - clip_text_model = libs.clip.FrozenCLIPEmbedder(device=device) - clip_text_model.eval() - clip_text_model.to(device) - - autoencoder = libs.autoencoder.get_model(**config.autoencoder) - autoencoder.to(device) - - clip_img_model, clip_img_model_preprocess = clip.load("ViT-B/32", device=device, jit=False) - - empty_context = clip_text_model.encode([''])[0] - - def split(x): - C, H, W = config.z_shape - z_dim = C * H * W - z, clip_img = x.split([z_dim, config.clip_img_dim], dim=1) - z = einops.rearrange(z, 'B (C H W) -> B C H W', C=C, H=H, W=W) - clip_img = einops.rearrange(clip_img, 'B (L D) -> B L D', L=1, D=config.clip_img_dim) - return z, clip_img - - - def combine(z, clip_img): - z = einops.rearrange(z, 'B C H W -> B (C H W)') - clip_img = einops.rearrange(clip_img, 'B L D -> B (L D)') - return torch.concat([z, clip_img], dim=-1) - - - def t2i_nnet(x, timesteps, text): # text is the low dimension version of the text clip embedding - """ - 1. calculate the conditional model output - 2. 
calculate unconditional model output - config.sample.t2i_cfg_mode == 'empty_token': using the original cfg with the empty string - config.sample.t2i_cfg_mode == 'true_uncond: using the unconditional model learned by our method - 3. return linear combination of conditional output and unconditional output - """ - z, clip_img = split(x) - - t_text = torch.zeros(timesteps.size(0), dtype=torch.int, device=device) - - z_out, clip_img_out, text_out = nnet(z, clip_img, text=text, t_img=timesteps, t_text=t_text) - x_out = combine(z_out, clip_img_out) - - if config.sample.scale == 0.: - return x_out - - if config.sample.t2i_cfg_mode == 'empty_token': - _empty_context = einops.repeat(empty_context, 'L D -> B L D', B=x.size(0)) - if use_caption_decoder: - _empty_context = caption_decoder.encode_prefix(_empty_context) - z_out_uncond, clip_img_out_uncond, text_out_uncond = nnet(z, clip_img, text=_empty_context, t_img=timesteps, t_text=t_text) - x_out_uncond = combine(z_out_uncond, clip_img_out_uncond) - elif config.sample.t2i_cfg_mode == 'true_uncond': - text_N = torch.randn_like(text) # 3 other possible choices - z_out_uncond, clip_img_out_uncond, text_out_uncond = nnet(z, clip_img, text=text_N, t_img=timesteps, t_text=torch.ones_like(timesteps) * N) - x_out_uncond = combine(z_out_uncond, clip_img_out_uncond) - else: - raise NotImplementedError - - return x_out + config.sample.scale * (x_out - x_out_uncond) - - - def i_nnet(x, timesteps): - z, clip_img = split(x) - text = torch.randn(x.size(0), 77, config.text_dim, device=device) - t_text = torch.ones_like(timesteps) * N - z_out, clip_img_out, text_out = nnet(z, clip_img, text=text, t_img=timesteps, t_text=t_text) - x_out = combine(z_out, clip_img_out) - return x_out - - def t_nnet(x, timesteps): - z = torch.randn(x.size(0), *config.z_shape, device=device) - clip_img = torch.randn(x.size(0), 1, config.clip_img_dim, device=device) - z_out, clip_img_out, text_out = nnet(z, clip_img, text=x, t_img=torch.ones_like(timesteps) * N, t_text=timesteps) - return text_out - - def i2t_nnet(x, timesteps, z, clip_img): - """ - 1. calculate the conditional model output - 2. calculate unconditional model output - 3. 
return linear combination of conditional output and unconditional output - """ - t_img = torch.zeros(timesteps.size(0), dtype=torch.int, device=device) - - z_out, clip_img_out, text_out = nnet(z, clip_img, text=x, t_img=t_img, t_text=timesteps) - - if config.sample.scale == 0.: - return text_out - - z_N = torch.randn_like(z) # 3 other possible choices - clip_img_N = torch.randn_like(clip_img) - z_out_uncond, clip_img_out_uncond, text_out_uncond = nnet(z_N, clip_img_N, text=x, t_img=torch.ones_like(timesteps) * N, t_text=timesteps) - - return text_out + config.sample.scale * (text_out - text_out_uncond) - - def split_joint(x): - C, H, W = config.z_shape - z_dim = C * H * W - z, clip_img, text = x.split([z_dim, config.clip_img_dim, 77 * config.text_dim], dim=1) - z = einops.rearrange(z, 'B (C H W) -> B C H W', C=C, H=H, W=W) - clip_img = einops.rearrange(clip_img, 'B (L D) -> B L D', L=1, D=config.clip_img_dim) - text = einops.rearrange(text, 'B (L D) -> B L D', L=77, D=config.text_dim) - return z, clip_img, text - - def combine_joint(z, clip_img, text): - z = einops.rearrange(z, 'B C H W -> B (C H W)') - clip_img = einops.rearrange(clip_img, 'B L D -> B (L D)') - text = einops.rearrange(text, 'B L D -> B (L D)') - return torch.concat([z, clip_img, text], dim=-1) - - def joint_nnet(x, timesteps): - z, clip_img, text = split_joint(x) - z_out, clip_img_out, text_out = nnet(z, clip_img, text=text, t_img=timesteps, t_text=timesteps) - x_out = combine_joint(z_out, clip_img_out, text_out) - - if config.sample.scale == 0.: - return x_out - - z_noise = torch.randn(x.size(0), *config.z_shape, device=device) - clip_img_noise = torch.randn(x.size(0), 1, config.clip_img_dim, device=device) - text_noise = torch.randn(x.size(0), 77, config.text_dim, device=device) - - _, _, text_out_uncond = nnet(z_noise, clip_img_noise, text=text, t_img=torch.ones_like(timesteps) * N, t_text=timesteps) - z_out_uncond, clip_img_out_uncond, _ = nnet(z, clip_img, text=text_noise, t_img=timesteps, t_text=torch.ones_like(timesteps) * N) - - x_out_uncond = combine_joint(z_out_uncond, clip_img_out_uncond, text_out_uncond) - - return x_out + config.sample.scale * (x_out - x_out_uncond) - - @torch.cuda.amp.autocast() - def encode(_batch): - return autoencoder.encode(_batch) - - @torch.cuda.amp.autocast() - def decode(_batch): - return autoencoder.decode(_batch) - - - logging.info(config.sample) - logging.info(f'N={N}') - - contexts, img_contexts, clip_imgs = prepare_contexts(config, clip_text_model, clip_img_model, clip_img_model_preprocess, autoencoder) - - contexts = contexts # the clip embedding of conditioned texts - contexts_low_dim = contexts if not use_caption_decoder else caption_decoder.encode_prefix(contexts) # the low dimensional version of the contexts, which is the input to the nnet - - img_contexts = img_contexts # img_contexts is the autoencoder moment - z_img = autoencoder.sample(img_contexts) - clip_imgs = clip_imgs # the clip embedding of conditioned image - - if config.mode in ['t2i', 't2i2t']: - _n_samples = contexts_low_dim.size(0) - elif config.mode in ['i2t', 'i2t2i']: - _n_samples = img_contexts.size(0) - else: - _n_samples = config.n_samples - - - def sample_fn(mode, **kwargs): - - _z_init = torch.randn(_n_samples, *config.z_shape, device=device) - _clip_img_init = torch.randn(_n_samples, 1, config.clip_img_dim, device=device) - _text_init = torch.randn(_n_samples, 77, config.text_dim, device=device) - if mode == 'joint': - _x_init = combine_joint(_z_init, _clip_img_init, _text_init) - elif mode in 
['t2i', 'i']: - _x_init = combine(_z_init, _clip_img_init) - elif mode in ['i2t', 't']: - _x_init = _text_init - noise_schedule = NoiseScheduleVP(schedule='discrete', betas=torch.tensor(_betas, device=device).float()) - - def model_fn(x, t_continuous): - t = t_continuous * N - if mode == 'joint': - return joint_nnet(x, t) - elif mode == 't2i': - return t2i_nnet(x, t, **kwargs) - elif mode == 'i2t': - return i2t_nnet(x, t, **kwargs) - elif mode == 'i': - return i_nnet(x, t) - elif mode == 't': - return t_nnet(x, t) - - dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True, thresholding=False) - with torch.no_grad(): - with torch.autocast(device_type=device): - start_time = time.time() - x = dpm_solver.sample(_x_init, steps=config.sample.sample_steps, eps=1. / N, T=1.) - end_time = time.time() - print(f'\ngenerate {_n_samples} samples with {config.sample.sample_steps} steps takes {end_time - start_time:.2f}s') - - # os.makedirs(config.output_path, exist_ok=True) - if mode == 'joint': - _z, _clip_img, _text = split_joint(x) - return _z, _clip_img, _text - elif mode in ['t2i', 'i']: - _z, _clip_img = split(x) - return _z, _clip_img - elif mode in ['i2t', 't']: - return x - - output_images = None - output_text = None - - if config.mode in ['joint']: - _z, _clip_img, _text = sample_fn(config.mode) - samples = unpreprocess(decode(_z)) - prompts = caption_decoder.generate_captions(_text) - output_images = samples - output_text = prompts - - elif config.mode in ['t2i', 'i', 'i2t2i']: - if config.mode == 't2i': - _z, _clip_img = sample_fn(config.mode, text=contexts_low_dim) # conditioned on the text embedding - elif config.mode == 'i': - _z, _clip_img = sample_fn(config.mode) - elif config.mode == 'i2t2i': - _text = sample_fn('i2t', z=z_img, clip_img=clip_imgs) # conditioned on the image embedding - _z, _clip_img = sample_fn('t2i', text=_text) - samples = unpreprocess(decode(_z)) - output_images = samples - - - elif config.mode in ['i2t', 't', 't2i2t']: - if config.mode == 'i2t': - _text = sample_fn(config.mode, z=z_img, clip_img=clip_imgs) # conditioned on the image embedding - elif config.mode == 't': - _text = sample_fn(config.mode) - elif config.mode == 't2i2t': - _z, _clip_img = sample_fn('t2i', text=contexts_low_dim) - _text = sample_fn('i2t', z=_z, clip_img=_clip_img) - samples = caption_decoder.generate_captions(_text) - logging.info(samples) - output_text = samples - - print(f'\nGPU memory usage: {torch.cuda.max_memory_reserved() / 1024 ** 3:.2f} GB') - # print(f'\nresults are saved in {os.path.join(config.output_path, config.mode)} :)') - - return output_images, output_text - - -def d(**kwargs): - """Helper of creating a config dict.""" - return ml_collections.ConfigDict(initial_dictionary=kwargs) - - -def get_config(): - config = ml_collections.ConfigDict() - - config.seed = 1234 - config.pred = 'noise_pred' - config.z_shape = (4, 64, 64) - config.clip_img_dim = 512 - config.clip_text_dim = 768 - config.text_dim = 64 # reduce dimension - - config.autoencoder = d( - pretrained_path='models/autoencoder_kl.pth', - ) - - config.caption_decoder = d( - pretrained_path="models/caption_decoder.pth", - hidden_dim=config.get_ref('text_dim') - ) - - config.nnet = d( - name='uvit_multi_post_ln', - img_size=64, - in_chans=4, - patch_size=2, - embed_dim=1536, - depth=30, - num_heads=24, - mlp_ratio=4, - qkv_bias=False, - pos_drop_rate=0., - drop_rate=0., - attn_drop_rate=0., - mlp_time_embed=False, - text_dim=config.get_ref('text_dim'), - num_text_tokens=77, - 
clip_img_dim=config.get_ref('clip_img_dim'), - use_checkpoint=True - ) - - config.sample = d( - sample_steps=50, - scale=7., - t2i_cfg_mode='true_uncond' - ) - - return config - - -def sample(mode, prompt, image, sample_steps=50, scale=7.0, seed=None): - config = get_config() - - config.nnet_path = "models/uvit_v0.pth" - config.n_samples = 1 - config.nrow = 1 - - config.mode = mode - config.prompt = prompt - config.img = image - - config.sample.sample_steps = sample_steps - config.sample.scale = scale - if seed is not None: - config.seed = seed - - sample_images, sample_text = evaluate(config) - return sample_images, sample_text \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/APFill Ink And Toner Coverage Calculator V5.5.5683 The Ultimate Solution for Printing Optimization.md b/spaces/usbethFlerru/sovits-modelsV2/example/APFill Ink And Toner Coverage Calculator V5.5.5683 The Ultimate Solution for Printing Optimization.md deleted file mode 100644 index 382ad96ad852c14a37393359c985162cc5d39647..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/APFill Ink And Toner Coverage Calculator V5.5.5683 The Ultimate Solution for Printing Optimization.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  APFill Ink And Toner Coverage Calculator V5.5.5683


Download Zip https://urlcod.com/2uyU9U



                  -
                  - aaccfb2cb3
                  -
                  -
                  -

                  diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cascade Pilot Personal Edition Cracked.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cascade Pilot Personal Edition Cracked.md deleted file mode 100644 index 3575acfc4ad3b745f95eae78e37a662abe1b1fda..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cascade Pilot Personal Edition Cracked.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Cascade Pilot Personal Edition Cracked


                  Download ->->->-> https://urlcod.com/2uyWYI



                  - -Future versions of this model will seek to generate a specific material inventory that ... Crushing was accomplished through two pilot tests performing two different crushing ... amount of cracks as well as their length and width by square meter of surface. ... of building materials and personal belongings was also witnessed in ... 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/vivym/image-matting-app/ppmatting/core/val_ml.py b/spaces/vivym/image-matting-app/ppmatting/core/val_ml.py deleted file mode 100644 index 77628925bec1fa08a4a24de685355cc71157db92..0000000000000000000000000000000000000000 --- a/spaces/vivym/image-matting-app/ppmatting/core/val_ml.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os - -import cv2 -import numpy as np -import time -import paddle -import paddle.nn.functional as F -from paddleseg.utils import TimeAverager, calculate_eta, logger, progbar - -from ppmatting.metrics import metric -from pymatting.util.util import load_image, save_image, stack_images -from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml - -np.set_printoptions(suppress=True) - - -def save_alpha_pred(alpha, path): - """ - The value of alpha is range [0, 1], shape should be [h,w] - """ - dirname = os.path.dirname(path) - if not os.path.exists(dirname): - os.makedirs(dirname) - - alpha = (alpha).astype('uint8') - cv2.imwrite(path, alpha) - - -def reverse_transform(alpha, trans_info): - """recover pred to origin shape""" - for item in trans_info[::-1]: - if item[0][0] == 'resize': - h, w = item[1][0].numpy()[0], item[1][1].numpy()[0] - alpha = cv2.resize(alpha, dsize=(w, h)) - elif item[0][0] == 'padding': - h, w = item[1][0].numpy()[0], item[1][1].numpy()[0] - alpha = alpha[0:h, 0:w] - else: - raise Exception("Unexpected info '{}' in im_info".format(item[0])) - return alpha - - -def evaluate_ml(model, - eval_dataset, - num_workers=0, - print_detail=True, - save_dir='output/results', - save_results=True): - - loader = paddle.io.DataLoader( - eval_dataset, - batch_size=1, - drop_last=False, - num_workers=num_workers, - return_list=True, ) - - total_iters = len(loader) - mse_metric = metric.MSE() - sad_metric = metric.SAD() - grad_metric = metric.Grad() - conn_metric = metric.Conn() - - if print_detail: - logger.info("Start evaluating (total_samples: {}, total_iters: {})...". 
- format(len(eval_dataset), total_iters)) - progbar_val = progbar.Progbar(target=total_iters, verbose=1) - reader_cost_averager = TimeAverager() - batch_cost_averager = TimeAverager() - batch_start = time.time() - - img_name = '' - i = 0 - ignore_cnt = 0 - for iter, data in enumerate(loader): - - reader_cost_averager.record(time.time() - batch_start) - - image_rgb_chw = data['img'].numpy()[0] - image_rgb_hwc = np.transpose(image_rgb_chw, (1, 2, 0)) - trimap = data['trimap'].numpy().squeeze() / 255.0 - image = image_rgb_hwc * 0.5 + 0.5 # reverse normalize (x/255 - mean) / std - - is_fg = trimap >= 0.9 - is_bg = trimap <= 0.1 - - if is_fg.sum() == 0 or is_bg.sum() == 0: - ignore_cnt += 1 - logger.info(str(iter)) - continue - - alpha_pred = model(image, trimap) - - alpha_pred = reverse_transform(alpha_pred, data['trans_info']) - - alpha_gt = data['alpha'].numpy().squeeze() * 255 - - trimap = data['ori_trimap'].numpy().squeeze() - - alpha_pred = np.round(alpha_pred * 255) - mse = mse_metric.update(alpha_pred, alpha_gt, trimap) - sad = sad_metric.update(alpha_pred, alpha_gt, trimap) - grad = grad_metric.update(alpha_pred, alpha_gt, trimap) - conn = conn_metric.update(alpha_pred, alpha_gt, trimap) - - if sad > 1000: - print(data['img_name'][0]) - - if save_results: - alpha_pred_one = alpha_pred - alpha_pred_one[trimap == 255] = 255 - alpha_pred_one[trimap == 0] = 0 - - save_name = data['img_name'][0] - name, ext = os.path.splitext(save_name) - if save_name == img_name: - save_name = name + '_' + str(i) + ext - i += 1 - else: - img_name = save_name - save_name = name + '_' + str(0) + ext - i = 1 - save_alpha_pred(alpha_pred_one, os.path.join(save_dir, save_name)) - - batch_cost_averager.record( - time.time() - batch_start, num_samples=len(alpha_gt)) - batch_cost = batch_cost_averager.get_average() - reader_cost = reader_cost_averager.get_average() - - if print_detail: - progbar_val.update(iter + 1, - [('SAD', sad), ('MSE', mse), ('Grad', grad), - ('Conn', conn), ('batch_cost', batch_cost), - ('reader cost', reader_cost)]) - - reader_cost_averager.reset() - batch_cost_averager.reset() - batch_start = time.time() - - mse = mse_metric.evaluate() - sad = sad_metric.evaluate() - grad = grad_metric.evaluate() - conn = conn_metric.evaluate() - - logger.info('[EVAL] SAD: {:.4f}, MSE: {:.4f}, Grad: {:.4f}, Conn: {:.4f}'. 
- format(sad, mse, grad, conn)) - logger.info('{}'.format(ignore_cnt)) - - return sad, mse, grad, conn diff --git a/spaces/wanghuoto/gogoai/src/components/ui/codeblock.tsx b/spaces/wanghuoto/gogoai/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/wanghuoto/gogoai/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
                  -
                  - {language} -
                  - - -
                  -
                  - - {value} - -
                  - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/wanghuoto/gogoai/src/components/ui/select.tsx b/spaces/wanghuoto/gogoai/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/wanghuoto/gogoai/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/skill_action.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/skill_action.py deleted file mode 100644 index 758591fdd7838ba3d45efbe9dd40c0ce8508c93f..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/skill_action.py +++ /dev/null @@ -1,110 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/28 -@Author : mashenquan -@File : skill_action.py -@Desc : Call learned skill -""" -from __future__ import annotations - -import ast -import importlib -import traceback -from copy import deepcopy - -from metagpt.actions import Action, ActionOutput -from metagpt.learn.skill_loader import Skill -from metagpt.logs import logger - - -class ArgumentsParingAction(Action): - def __init__(self, last_talk: str, skill: Skill, context=None, llm=None, **kwargs): - super(ArgumentsParingAction, self).__init__(name="", context=context, llm=llm) - self.skill = skill - self.ask = last_talk - self.rsp = None - self.args = None - - @property - def prompt(self): - prompt = f"{self.skill.name} function parameters description:\n" - for k, v in self.skill.arguments.items(): - prompt += f"parameter `{k}`: {v}\n" - prompt += "\n" - prompt += "Examples:\n" - for e in self.skill.examples: - prompt += f"If want you to do `{e.ask}`, return `{e.answer}` brief and clear.\n" - prompt += f"\nNow I want you to do `{self.ask}`, return in examples format above, brief and clear." 
- return prompt - - async def run(self, *args, **kwargs) -> ActionOutput: - prompt = self.prompt - logger.info(prompt) - rsp = await self.llm.aask(msg=prompt, system_msgs=[]) - logger.info(rsp) - self.args = ArgumentsParingAction.parse_arguments(skill_name=self.skill.name, txt=rsp) - self.rsp = ActionOutput(content=rsp) - return self.rsp - - @staticmethod - def parse_arguments(skill_name, txt) -> dict: - prefix = skill_name + "(" - if prefix not in txt: - logger.error(f"{skill_name} not in {txt}") - return None - if ")" not in txt: - logger.error(f"')' not in {txt}") - return None - begin_ix = txt.find(prefix) - end_ix = txt.rfind(")") - args_txt = txt[begin_ix + len(prefix) : end_ix] - logger.info(args_txt) - fake_expression = f"dict({args_txt})" - parsed_expression = ast.parse(fake_expression, mode="eval") - args = {} - for keyword in parsed_expression.body.keywords: - key = keyword.arg - value = ast.literal_eval(keyword.value) - args[key] = value - return args - - -class SkillAction(Action): - def __init__(self, skill: Skill, args: dict, context=None, llm=None, **kwargs): - super(SkillAction, self).__init__(name="", context=context, llm=llm) - self._skill = skill - self._args = args - self.rsp = None - - async def run(self, *args, **kwargs) -> str | ActionOutput | None: - """Run action""" - options = deepcopy(kwargs) - if self._args: - for k in self._args.keys(): - if k in options: - options.pop(k) - try: - self.rsp = await self.find_and_call_function(self._skill.name, args=self._args, **options) - except Exception as e: - logger.exception(f"{e}, traceback:{traceback.format_exc()}") - self.rsp = f"Error: {e}" - return ActionOutput(content=self.rsp, instruct_content=self._skill.json()) - - @staticmethod - async def find_and_call_function(function_name, args, **kwargs): - try: - module = importlib.import_module("metagpt.learn") - function = getattr(module, function_name) - # Call the function and return its result - result = await function(**args, **kwargs) - return result - except (ModuleNotFoundError, AttributeError): - logger.error(f"{function_name} not found") - return None - - -if __name__ == "__main__": - ArgumentsParingAction.parse_arguments( - skill_name="text_to_image", txt='`text_to_image(text="Draw an apple", size_type="512x512")`' - ) diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/document_store/milvus_store.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/document_store/milvus_store.py deleted file mode 100644 index 9609dcceeb8aad6ab6dfe59fc59d3358fbc88f27..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/document_store/milvus_store.py +++ /dev/null @@ -1,122 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/28 00:00 -@Author : alexanderwu -@File : milvus_store.py -""" -from typing import TypedDict - -import numpy as np -from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections - -from metagpt.document_store.base_store import BaseStore - -type_mapping = { - int: DataType.INT64, - str: DataType.VARCHAR, - float: DataType.DOUBLE, - np.ndarray: DataType.FLOAT_VECTOR -} - - -def columns_to_milvus_schema(columns: dict, primary_col_name: str = "", desc: str = ""): - """Assumes `columns` maps a field name (str) to a regular Python type""" - fields = [] - for col, ctype in columns.items(): - if ctype == str: - mcol = FieldSchema(name=col, dtype=type_mapping[ctype], max_length=100) - elif ctype == np.ndarray: - mcol = FieldSchema(name=col, dtype=type_mapping[ctype], dim=2) - else: - mcol = FieldSchema(name=col, dtype=type_mapping[ctype], is_primary=(col == primary_col_name)) -
fields.append(mcol) - schema = CollectionSchema(fields, description=desc) - return schema - - -class MilvusConnection(TypedDict): - alias: str - host: str - port: str - - -class MilvusStore(BaseStore): - """ - FIXME: ADD TESTS - https://milvus.io/docs/v2.0.x/create_collection.md - """ - - def __init__(self, connection): - connections.connect(**connection) - self.collection = None - - def _create_collection(self, name, schema): - collection = Collection( - name=name, - schema=schema, - using='default', - shards_num=2, - consistency_level="Strong" - ) - return collection - - def create_collection(self, name, columns): - schema = columns_to_milvus_schema(columns, 'idx') - self.collection = self._create_collection(name, schema) - return self.collection - - def drop(self, name): - Collection(name).drop() - - def load_collection(self): - self.collection.load() - - def build_index(self, field='emb'): - self.collection.create_index(field, {"index_type": "FLAT", "metric_type": "L2", "params": {}}) - - def search(self, query: list[list[float]], *args, **kwargs): - """ - FIXME: ADD TESTS - https://milvus.io/docs/v2.0.x/search.md - All search and query operations within Milvus are executed in memory. Load the collection to memory before conducting a vector similarity search. - Given the note above, is this logic really intended? Loading the whole collection could take quite a long time. - """ - search_params = {"metric_type": "L2", "params": {"nprobe": 10}} - results = self.collection.search( - data=query, - anns_field=kwargs.get('field', 'emb'), - param=search_params, - limit=10, - expr=None, - consistency_level="Strong" - ) - # FIXME: results contain ids, but mapping an id back to its actual value still requires a separate query call - return results - - def write(self, name, schema, *args, **kwargs): - """ - FIXME: ADD TESTS - https://milvus.io/docs/v2.0.x/create_collection.md - :param args: - :param kwargs: - :return: - """ - raise NotImplementedError - - def add(self, data, *args, **kwargs): - """ - FIXME: ADD TESTS - https://milvus.io/docs/v2.0.x/insert_data.md - import random - data = [ - [i for i in range(2000)], - [i for i in range(10000, 12000)], - [[random.random() for _ in range(2)] for _ in range(2000)], - ] - - :param args: - :param kwargs: - :return: - """ - self.collection.insert(data) diff --git a/spaces/whgwd2023/bingo/src/components/toaster.tsx b/spaces/whgwd2023/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-928645ac.css b/spaces/whitphx/gradio-static-test/dist/assets/index-928645ac.css deleted file mode 100644 index 4329ebb21b609937b3a2fdd0c3a1ef2edf96b04c..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/index-928645ac.css +++ /dev/null @@ -1 +0,0 @@ 
-.container.svelte-19on2m6.svelte-19on2m6{display:flex;flex-direction:column;gap:var(--spacing-sm);padding:var(--block-padding)}.hl.svelte-19on2m6+.hl.svelte-19on2m6{margin-left:var(--size-1)}.textspan.svelte-19on2m6:last-child>.label.svelte-19on2m6{margin-right:0}.category-legend.svelte-19on2m6.svelte-19on2m6{display:flex;flex-wrap:wrap;gap:var(--spacing-sm);color:#000}.category-label.svelte-19on2m6.svelte-19on2m6{cursor:pointer;border-radius:var(--radius-xs);padding-right:var(--size-2);padding-left:var(--size-2);font-weight:var(--weight-semibold)}.color-legend.svelte-19on2m6.svelte-19on2m6{display:flex;justify-content:space-between;border-radius:var(--radius-xs);background:linear-gradient(to right,var(--color-purple),rgba(255,255,255,0),var(--color-red));padding:var(--size-1) var(--size-2);font-weight:var(--weight-semibold)}.textfield.svelte-19on2m6.svelte-19on2m6{box-sizing:border-box;border-radius:var(--radius-xs);background:var(--background-fill-primary);background-color:transparent;max-width:var(--size-full);line-height:var(--scale-4);word-break:break-all}.textspan.svelte-19on2m6.svelte-19on2m6{transition:.15s;border-radius:var(--radius-xs);padding-top:2.5px;padding-right:var(--size-1);padding-bottom:3.5px;padding-left:var(--size-1);color:#000}.label.svelte-19on2m6.svelte-19on2m6{transition:.15s;margin-top:1px;margin-right:calc(var(--size-1) * -1);border-radius:var(--radius-xs);padding:1px 5px;color:var(--body-text-color);color:#fff;font-weight:var(--weight-bold);font-size:var(--text-sm);text-transform:uppercase}.text.svelte-19on2m6.svelte-19on2m6{color:#000}.score-text.svelte-19on2m6 .text.svelte-19on2m6{color:var(--body-text-color)}.score-text.svelte-19on2m6.svelte-19on2m6{margin-right:var(--size-1);padding:var(--size-1)}.no-cat.svelte-19on2m6.svelte-19on2m6,.no-label.svelte-19on2m6.svelte-19on2m6{color:var(--body-text-color)}.selectable.svelte-19on2m6.svelte-19on2m6{cursor:pointer} diff --git a/spaces/willgibs/ControlNet-v1-1/app_normal.py b/spaces/willgibs/ControlNet-v1-1/app_normal.py deleted file mode 100644 index a77b13a8edd60ceead9cdebd2df21b45e34b4f9a..0000000000000000000000000000000000000000 --- a/spaces/willgibs/ControlNet-v1-1/app_normal.py +++ /dev/null @@ -1,106 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio(label='Preprocessor', - choices=['NormalBae', 'None'], - type='value', - value='NormalBae') - num_samples = gr.Slider(label='Images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=384, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt 
= gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='normal', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='NormalBae') - demo = create_demo(model.process_normal) - demo.queue().launch() diff --git a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/datah/loader.py b/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/datah/loader.py deleted file mode 100644 index 3df19304f4d5c8c9fc83f7d19d722f7aa784e530..0000000000000000000000000000000000000000 --- a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/datah/loader.py +++ /dev/null @@ -1,30 +0,0 @@ -import cv2 -import numpy as np -import os - -class Loader: - def __init__(self,preprocessors=None): - self.preprocessors=preprocessors - if self.preprocessors is None: - self.preprocessors=[] - - def load(self,imgpaths,verbose=-1): - data=[] - labels=[] - - for (i,imgpath) in enumerate(imgpaths): - image=cv2.imread(imgpath) - label= imgpath.split(os.path.sep)[-2] - - if self.preprocessors is not None: - for p in self.preprocessors: - image=p.preprocess(image) - - data.append(image) - labels.append(label) - - if verbose > 0 and i >0 and (i+1)% verbose==0: - print("processed:{}/{}".format(i+1,len(imgpaths))) - - print("Done!") - return (np.array(data),np.array(labels)) diff --git a/spaces/wxiaofei/vits-uma-genshin-honkai/text/cleaners.py b/spaces/wxiaofei/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/wxiaofei/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', 
'↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
                  -
                  - 选择对话样式 -
                  -
                  -
                    - { - ToneList.map(tone => ( -
                  • onChange?.(tone.type)}> - -
                  • - )) - } -
                  -
                  -
- ) -} diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/registry.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/registry.py deleted file mode 100644 index 0200b0af6cd9e01451be4df9f713719f45f2e928..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/registry.py +++ /dev/null @@ -1,14 +0,0 @@ -_model_entrypoints = {} - - -def register_body(fn): - module_name_split = fn.__module__.split('.') - model_name = module_name_split[-1] - _model_entrypoints[model_name] = fn - return fn - -def model_entrypoints(model_name): - return _model_entrypoints[model_name] - -def is_model(model_name): - return model_name in _model_entrypoints \ No newline at end of file diff --git a/spaces/xiang2811/ChatGPT/custom.css b/spaces/xiang2811/ChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/xiang2811/ChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* Light theme */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* Chat bubbles */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* Tables */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* Inline code */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* Code blocks */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* Code highlight styles */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight
.cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git 
a/spaces/xiangdy/chatGPT/modules/shared.py b/spaces/xiangdy/chatGPT/modules/shared.py deleted file mode 100644 index a9e72580aa7ae48f907e923a09099513570a9ad8..0000000000000000000000000000000000000000 --- a/spaces/xiangdy/chatGPT/modules/shared.py +++ /dev/null @@ -1,55 +0,0 @@ -from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -import queue - -class State: - interrupted = False - multi_api_key = False - completion_url = COMPLETION_URL - balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host): - self.completion_url = f"https://{api_host}/v1/chat/completions" - self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"https://{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1" - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - - def set_api_key_queue(self, api_key_list): - self.multi_api_key = True - self.api_key_queue = queue.Queue() - for api_key in api_key_list: - self.api_key_queue.put(api_key) - - def switching_api_key(self, func): - if not hasattr(self, "api_key_queue"): - return func - - def wrapped(*args, **kwargs): - api_key = self.api_key_queue.get() - args[0].api_key = api_key - ret = func(*args, **kwargs) - self.api_key_queue.put(api_key) - return ret - - return wrapped - - -state = State() diff --git a/spaces/xl2533/MakeInstruction/main.py b/spaces/xl2533/MakeInstruction/main.py deleted file mode 100644 index 38ee839a3477f3df4ef457f6cae4ed9511c3969f..0000000000000000000000000000000000000000 --- a/spaces/xl2533/MakeInstruction/main.py +++ /dev/null @@ -1,40 +0,0 @@ -# -*-coding:utf-8 -*- -""" - Run SELF -""" - -import numpy as np -from self.generate import SELF - -threshold = 0.7 # stop once the last few generated instructions are all highly similar to existing ones - - -def main(seed_file, output_file, openai_key, n_human, n_machine, n_instruct, max_iter, max_gen): - instance = SELF(seed_file, openai_key, n_human, n_machine, n_instruct, None) - - n_iter = 0 - while n_iter < max_iter and instance.n_keep < max_gen: - instance.step() - n_iter +=1 - print(f'已生成{instance.n_gen} 可用{instance.n_keep}') - if n_iter >3 and np.average([i['avg_similarity_score'] for i in instance.machine_instruction_data[-5:]] )> threshold: - break - - # dump file - instance.dump_file(output_file) - - - -if __name__ == '__main__': - seed_file = './ape/data/seed_task.json' - openai_key ='a' - n_human=2 - n_machine=1 - n_instruct=4 - instance = SELF(seed_file, openai_key, n_human, n_machine, n_instruct, None) - - scorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=False, tokenizer=ChineseTokenizer()) - inst_tokens = scorer._tokenizer.tokenize('诊断患者') - with Pool(os.cpu_count()) as p: - rouge_scores = p.map(partial(rouge_scorer._score_lcs,inst_tokens), instance.all_instruction_tokens) - rouge_l = [score.fmeasure for score in rouge_scores] \ No newline at end of file diff --git a/spaces/xp3857/text-to-image/css.css b/spaces/xp3857/text-to-image/css.css deleted file mode 100644 index 45350b7c27b8177a67a10d66e3c5090df2cbdab5..0000000000000000000000000000000000000000 --- a/spaces/xp3857/text-to-image/css.css +++ /dev/null @@ -1,113 +0,0 
@@ -.app.svelte-p7tiy3.svelte-p7tiy3{ - background:None; -} -.unpadded_box.large.svelte-1vhybi6{ - background:#6fbcffa8; - min-height:100%; -} -span.svelte-1l2rj76{ - color:white;!important; -} -div.svelte-1fwqiwq .block{ - background:#4d8df1; -} -.lg.svelte-1h4gtph{ - background:#4d8df1; - color:white; - height:100px; -} -#restart{ - position: relative; - font-family: "Poppins",sans-serif; - text-align: center; - border-radius: 8px; - background: #0063f787; - border-style: solid; - border-width: 1px; - border-color: #ffffff; - width: 100%; - height: 50%; - max-height: 200px; - padding: 0px 10px; - transform: translate(-50%,0%); - left: 50%; -} -#head{ - color:white; - margin-top:15px; - margin-bottom:5px; -} -#cont{ - color: white; - margin-top: 5px; - margin-bottom: 15px; - font-size: 1.1rem; -} - -.lds-ellipsis { - display: inline-block; - position: relative; - width: 80px; - height: 80px; - -} -.lds-ellipsis div { - position: absolute; - z-index:199999; - - top: 33px; - width: 13px; - height: 13px; - border-radius: 50%; - background: blue; - animation-timing-function: cubic-bezier(0, 1, 1, 0); -} -.lds-ellipsis div:nth-child(1) { - left: 8px; - animation: lds-ellipsis1 0.6s infinite; -} -.lds-ellipsis div:nth-child(2) { - left: 8px; - animation: lds-ellipsis2 0.6s infinite; -} -.lds-ellipsis div:nth-child(3) { - left: 32px; - animation: lds-ellipsis2 0.6s infinite; -} -.lds-ellipsis div:nth-child(4) { - left: 56px; - animation: lds-ellipsis3 0.6s infinite; -} -@keyframes lds-ellipsis1 { - 0% { - transform: scale(0); - } - 100% { - transform: scale(1); - } -} -@keyframes lds-ellipsis3 { - 0% { - transform: scale(1); - } - 100% { - transform: scale(0); - }frames lds-ellipsis2 { - 0% { - transform: translate(0, 0); - } - 100% { - transform: translate(24px, 0); - } -} - -} -@keyframes lds-ellipsis2 { - 0% { - transform: translate(0, 0); - } - 100% { - transform: translate(24px, 0); - } -} - diff --git a/spaces/yangogo/bingo/src/components/tailwind-indicator.tsx b/spaces/yangogo/bingo/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
                  -
                  xs
                  -
                  sm
                  -
                  md
                  -
                  lg
                  -
                  xl
                  -
                  2xl
                  -
                  - ) -} diff --git a/spaces/ybelkada/image-to-music/spectro.py b/spaces/ybelkada/image-to-music/spectro.py deleted file mode 100644 index 63e0ede4714b13903bdbddb6edafe32aac7bcc1c..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/image-to-music/spectro.py +++ /dev/null @@ -1,185 +0,0 @@ -""" -Audio processing tools to convert between spectrogram images and waveforms. -""" -import io -import typing as T - -import numpy as np -from PIL import Image -import pydub -from scipy.io import wavfile -import torch -import torchaudio - - -def wav_bytes_from_spectrogram_image(image: Image.Image) -> T.Tuple[io.BytesIO, float]: - """ - Reconstruct a WAV audio clip from a spectrogram image. Also returns the duration in seconds. - """ - - max_volume = 50 - power_for_image = 0.25 - Sxx = spectrogram_from_image(image, max_volume=max_volume, power_for_image=power_for_image) - - sample_rate = 44100 # [Hz] - clip_duration_ms = 5000 # [ms] - - bins_per_image = 512 - n_mels = 512 - - # FFT parameters - window_duration_ms = 100 # [ms] - padded_duration_ms = 400 # [ms] - step_size_ms = 10 # [ms] - - # Derived parameters - num_samples = int(image.width / float(bins_per_image) * clip_duration_ms) * sample_rate - n_fft = int(padded_duration_ms / 1000.0 * sample_rate) - hop_length = int(step_size_ms / 1000.0 * sample_rate) - win_length = int(window_duration_ms / 1000.0 * sample_rate) - - samples = waveform_from_spectrogram( - Sxx=Sxx, - n_fft=n_fft, - hop_length=hop_length, - win_length=win_length, - num_samples=num_samples, - sample_rate=sample_rate, - mel_scale=True, - n_mels=n_mels, - max_mel_iters=200, - num_griffin_lim_iters=32, - ) - - wav_bytes = io.BytesIO() - wavfile.write(wav_bytes, sample_rate, samples.astype(np.int16)) - wav_bytes.seek(0) - - duration_s = float(len(samples)) / sample_rate - - return wav_bytes, duration_s - - -def spectrogram_from_image( - image: Image.Image, max_volume: float = 50, power_for_image: float = 0.25 -) -> np.ndarray: - """ - Compute a spectrogram magnitude array from a spectrogram image. - - TODO(hayk): Add image_from_spectrogram and call this out as the reverse. - """ - # Convert to a numpy array of floats - data = np.array(image).astype(np.float32) - - # Flip Y take a single channel - data = data[::-1, :, 0] - - # Invert - data = 255 - data - - # Rescale to max volume - data = data * max_volume / 255 - - # Reverse the power curve - data = np.power(data, 1 / power_for_image) - - return data - - -def spectrogram_from_waveform( - waveform: np.ndarray, - sample_rate: int, - n_fft: int, - hop_length: int, - win_length: int, - mel_scale: bool = True, - n_mels: int = 512, -) -> np.ndarray: - """ - Compute a spectrogram from a waveform. 
- """ - - spectrogram_func = torchaudio.transforms.Spectrogram( - n_fft=n_fft, - power=None, - hop_length=hop_length, - win_length=win_length, - ) - - waveform_tensor = torch.from_numpy(waveform.astype(np.float32)).reshape(1, -1) - Sxx_complex = spectrogram_func(waveform_tensor).numpy()[0] - - Sxx_mag = np.abs(Sxx_complex) - - if mel_scale: - mel_scaler = torchaudio.transforms.MelScale( - n_mels=n_mels, - sample_rate=sample_rate, - f_min=0, - f_max=10000, - n_stft=n_fft // 2 + 1, - norm=None, - mel_scale="htk", - ) - - Sxx_mag = mel_scaler(torch.from_numpy(Sxx_mag)).numpy() - - return Sxx_mag - - -def waveform_from_spectrogram( - Sxx: np.ndarray, - n_fft: int, - hop_length: int, - win_length: int, - num_samples: int, - sample_rate: int, - mel_scale: bool = True, - n_mels: int = 512, - max_mel_iters: int = 200, - num_griffin_lim_iters: int = 32, - device: str = "cuda:0", -) -> np.ndarray: - """ - Reconstruct a waveform from a spectrogram. - - This is an approximate inverse of spectrogram_from_waveform, using the Griffin-Lim algorithm - to approximate the phase. - """ - Sxx_torch = torch.from_numpy(Sxx).to(device) - - # TODO(hayk): Make this a class that caches the two things - - if mel_scale: - mel_inv_scaler = torchaudio.transforms.InverseMelScale( - n_mels=n_mels, - sample_rate=sample_rate, - f_min=0, - f_max=10000, - n_stft=n_fft // 2 + 1, - norm=None, - mel_scale="htk", - max_iter=max_mel_iters, - ).to(device) - - Sxx_torch = mel_inv_scaler(Sxx_torch) - - griffin_lim = torchaudio.transforms.GriffinLim( - n_fft=n_fft, - win_length=win_length, - hop_length=hop_length, - power=1.0, - n_iter=num_griffin_lim_iters, - ).to(device) - - waveform = griffin_lim(Sxx_torch).cpu().numpy() - - return waveform - - -def mp3_bytes_from_wav_bytes(wav_bytes: io.BytesIO) -> io.BytesIO: - mp3_bytes = io.BytesIO() - sound = pydub.AudioSegment.from_wav(wav_bytes) - sound.export(mp3_bytes, format="mp3") - mp3_bytes.seek(0) - return mp3_bytes \ No newline at end of file diff --git a/spaces/yeqingmei123/face-test/e4e/criteria/__init__.py b/spaces/yeqingmei123/face-test/e4e/criteria/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/nms/gpu_nms.hpp b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/nms/gpu_nms.hpp deleted file mode 100644 index 68b6d42cd88b59496b22a9e77919abe529b09014..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/nms/gpu_nms.hpp +++ /dev/null @@ -1,2 +0,0 @@ -void _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num, - int boxes_dim, float nms_overlap_thresh, int device_id); diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/convert_slow_tokenizer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/convert_slow_tokenizer.py deleted file mode 100644 index a2195d9cae578a7316563f56dd37ab8959c29f7a..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/convert_slow_tokenizer.py +++ /dev/null @@ -1,1318 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Utilities to convert slow tokenizers in their fast tokenizers counterparts. - -All the conversions are grouped here to gather SentencePiece dependencies outside of the fast tokenizers files and -allow to make our dependency on SentencePiece optional. -""" - -import warnings -from typing import Dict, List, Tuple - -from packaging import version -from tokenizers import AddedToken, Regex, Tokenizer, decoders, normalizers, pre_tokenizers, processors -from tokenizers.models import BPE, Unigram, WordPiece - -from .utils import is_protobuf_available, requires_backends -from .utils.import_utils import PROTOBUF_IMPORT_ERROR - - -def import_protobuf(error_message=""): - if is_protobuf_available(): - import google.protobuf - - if version.parse(google.protobuf.__version__) < version.parse("4.0.0"): - from transformers.utils import sentencepiece_model_pb2 - else: - from transformers.utils import sentencepiece_model_pb2_new as sentencepiece_model_pb2 - return sentencepiece_model_pb2 - else: - raise ImportError(PROTOBUF_IMPORT_ERROR.format(error_message)) - - -class SentencePieceExtractor: - """ - Extractor implementation for SentencePiece trained models. https://github.com/google/sentencepiece - """ - - def __init__(self, model: str): - requires_backends(self, "sentencepiece") - from sentencepiece import SentencePieceProcessor - - self.sp = SentencePieceProcessor() - self.sp.Load(model) - - def extract(self, vocab_scores=None) -> Tuple[Dict[str, int], List[Tuple]]: - """ - By default will return vocab and merges with respect to their order, by sending `vocab_scores` we're going to - order the merges with respect to the piece scores instead. 
- """ - sp = self.sp - vocab = {sp.id_to_piece(index): index for index in range(sp.GetPieceSize())} - if vocab_scores is not None: - vocab_scores, reverse = dict(vocab_scores), True - else: - vocab_scores, reverse = vocab, False - - # Merges - merges = [] - for merge, piece_score in vocab_scores.items(): - local = [] - for index in range(1, len(merge)): - piece_l, piece_r = merge[:index], merge[index:] - if piece_l in vocab and piece_r in vocab: - local.append((piece_l, piece_r, piece_score)) - local = sorted(local, key=lambda x: (vocab[x[0]], vocab[x[1]])) - merges.extend(local) - - merges = sorted(merges, key=lambda val: val[2], reverse=reverse) - merges = [(val[0], val[1]) for val in merges] - return vocab, merges - - -def check_number_comma(piece: str) -> bool: - return len(piece) < 2 or piece[-1] != "," or not piece[-2].isdigit() - - -class Converter: - def __init__(self, original_tokenizer): - self.original_tokenizer = original_tokenizer - - def converted(self) -> Tokenizer: - raise NotImplementedError() - - -class BertConverter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.vocab - tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(self.original_tokenizer.unk_token))) - - tokenize_chinese_chars = False - strip_accents = False - do_lower_case = False - if hasattr(self.original_tokenizer, "basic_tokenizer"): - tokenize_chinese_chars = self.original_tokenizer.basic_tokenizer.tokenize_chinese_chars - strip_accents = self.original_tokenizer.basic_tokenizer.strip_accents - do_lower_case = self.original_tokenizer.basic_tokenizer.do_lower_case - - tokenizer.normalizer = normalizers.BertNormalizer( - clean_text=True, - handle_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - lowercase=do_lower_case, - ) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls}:0 $A:0 {sep}:0", - pair=f"{cls}:0 $A:0 {sep}:0 $B:1 {sep}:1", - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - ], - ) - tokenizer.decoder = decoders.WordPiece(prefix="##") - - return tokenizer - - -class SplinterConverter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.vocab - tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(self.original_tokenizer.unk_token))) - - tokenize_chinese_chars = False - strip_accents = False - do_lower_case = False - if hasattr(self.original_tokenizer, "basic_tokenizer"): - tokenize_chinese_chars = self.original_tokenizer.basic_tokenizer.tokenize_chinese_chars - strip_accents = self.original_tokenizer.basic_tokenizer.strip_accents - do_lower_case = self.original_tokenizer.basic_tokenizer.do_lower_case - - tokenizer.normalizer = normalizers.BertNormalizer( - clean_text=True, - handle_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - lowercase=do_lower_case, - ) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - question = str(self.original_tokenizer.question_token) - dot = "." 
- cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - question_token_id = self.original_tokenizer.question_token_id - dot_token_id = self.original_tokenizer.convert_tokens_to_ids(".") - - if self.original_tokenizer.padding_side == "right": - pair = f"{cls}:0 $A:0 {question} {dot} {sep}:0 $B:1 {sep}:1" - else: - pair = f"{cls}:0 $A:0 {sep}:0 $B:1 {question} {dot} {sep}:1" - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls}:0 $A:0 {sep}:0", - pair=pair, - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - (question, question_token_id), - (dot, dot_token_id), - ], - ) - tokenizer.decoder = decoders.WordPiece(prefix="##") - - return tokenizer - - -class FunnelConverter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.vocab - tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(self.original_tokenizer.unk_token))) - - tokenize_chinese_chars = False - strip_accents = False - do_lower_case = False - if hasattr(self.original_tokenizer, "basic_tokenizer"): - tokenize_chinese_chars = self.original_tokenizer.basic_tokenizer.tokenize_chinese_chars - strip_accents = self.original_tokenizer.basic_tokenizer.strip_accents - do_lower_case = self.original_tokenizer.basic_tokenizer.do_lower_case - - tokenizer.normalizer = normalizers.BertNormalizer( - clean_text=True, - handle_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - lowercase=do_lower_case, - ) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls}:2 $A:0 {sep}:0", # token_type_id is 2 for Funnel transformer - pair=f"{cls}:2 $A:0 {sep}:0 $B:1 {sep}:1", - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - ], - ) - tokenizer.decoder = decoders.WordPiece(prefix="##") - - return tokenizer - - -class MPNetConverter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.vocab - tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(self.original_tokenizer.unk_token))) - - tokenize_chinese_chars = False - strip_accents = False - do_lower_case = False - if hasattr(self.original_tokenizer, "basic_tokenizer"): - tokenize_chinese_chars = self.original_tokenizer.basic_tokenizer.tokenize_chinese_chars - strip_accents = self.original_tokenizer.basic_tokenizer.strip_accents - do_lower_case = self.original_tokenizer.basic_tokenizer.do_lower_case - - tokenizer.normalizer = normalizers.BertNormalizer( - clean_text=True, - handle_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - lowercase=do_lower_case, - ) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls}:0 $A:0 {sep}:0", - pair=f"{cls}:0 $A:0 {sep}:0 {sep}:0 $B:1 {sep}:1", # MPNet uses two [SEP] tokens - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - ], - ) - tokenizer.decoder = decoders.WordPiece(prefix="##") - - return tokenizer - - -class OpenAIGPTConverter(Converter): - def converted(self) -> Tokenizer: - vocab = 
self.original_tokenizer.encoder - merges = list(self.original_tokenizer.bpe_ranks.keys()) - unk_token = self.original_tokenizer.unk_token - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - unk_token=str(unk_token), - end_of_word_suffix="", - fuse_unk=False, - ) - ) - - if tokenizer.token_to_id(str(unk_token)) is not None: - tokenizer.add_special_tokens([str(unk_token)]) - - tokenizer.normalizer = normalizers.BertNormalizer(lowercase=True) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - tokenizer.decoder = decoders.BPEDecoder(suffix="") - - return tokenizer - - -class GPT2Converter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.encoder - merges = list(self.original_tokenizer.bpe_ranks.keys()) - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - fuse_unk=False, - ) - ) - - tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=self.original_tokenizer.add_prefix_space) - tokenizer.decoder = decoders.ByteLevel() - if self.original_tokenizer.add_bos_token: - bos = self.original_tokenizer.bos_token - bos_token_id = self.original_tokenizer.bos_token_id - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{bos}:0 $A:0", - pair=f"{bos}:0 $A:0 $B:1", - special_tokens=[ - (bos, bos_token_id), - ], - ) - else: - # XXX trim_offsets=False actually means this post_processor doesn't - # really do anything. - tokenizer.post_processor = processors.ByteLevel(trim_offsets=False) - return tokenizer - - -class HerbertConverter(Converter): - def converted(self) -> Tokenizer: - tokenizer_info_str = "#version:" - token_suffix = "" - - vocab = self.original_tokenizer.encoder - merges = list(self.original_tokenizer.bpe_ranks.keys()) - if tokenizer_info_str in merges[0][0]: - merges = merges[1:] - - tokenizer = Tokenizer( - BPE( - vocab, - merges, - dropout=None, - unk_token=self.original_tokenizer.unk_token, - end_of_word_suffix=token_suffix, - ) - ) - - tokenizer.normalizer = normalizers.BertNormalizer(lowercase=False, strip_accents=False) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - tokenizer.decoder = decoders.BPEDecoder(suffix=token_suffix) - tokenizer.post_processor = processors.BertProcessing( - sep=(self.original_tokenizer.sep_token, self.original_tokenizer.sep_token_id), - cls=(self.original_tokenizer.cls_token, self.original_tokenizer.cls_token_id), - ) - - return tokenizer - - -class RobertaConverter(Converter): - def converted(self) -> Tokenizer: - ot = self.original_tokenizer - vocab = ot.encoder - merges = list(ot.bpe_ranks.keys()) - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - fuse_unk=False, - ) - ) - - tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=ot.add_prefix_space) - tokenizer.decoder = decoders.ByteLevel() - tokenizer.post_processor = processors.RobertaProcessing( - sep=(ot.sep_token, ot.sep_token_id), - cls=(ot.cls_token, ot.cls_token_id), - add_prefix_space=ot.add_prefix_space, - trim_offsets=True, # True by default on Roberta (historical) - ) - - return tokenizer - - -class RoFormerConverter(Converter): - def converted(self) -> Tokenizer: - from .models.roformer.tokenization_utils import JiebaPreTokenizer - - vocab = self.original_tokenizer.vocab - tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(self.original_tokenizer.unk_token))) - - strip_accents = False 
- do_lower_case = False - if hasattr(self.original_tokenizer, "basic_tokenizer"): - strip_accents = self.original_tokenizer.basic_tokenizer.strip_accents - do_lower_case = self.original_tokenizer.basic_tokenizer.do_lower_case - - tokenizer.normalizer = normalizers.BertNormalizer( - clean_text=True, - handle_chinese_chars=False, - strip_accents=strip_accents, - lowercase=do_lower_case, - ) - tokenizer.pre_tokenizer = pre_tokenizers.PreTokenizer.custom(JiebaPreTokenizer(vocab)) - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls}:0 $A:0 {sep}:0", - pair=f"{cls}:0 $A:0 {sep}:0 $B:1 {sep}:1", - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - ], - ) - tokenizer.decoder = decoders.WordPiece(prefix="##") - - return tokenizer - - -class DebertaConverter(Converter): - def converted(self) -> Tokenizer: - ot = self.original_tokenizer - vocab = ot.encoder - merges = list(ot.bpe_ranks.keys()) - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - fuse_unk=False, - ) - ) - - tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=ot.add_prefix_space) - tokenizer.decoder = decoders.ByteLevel() - tokenizer.post_processor = processors.TemplateProcessing( - single="[CLS]:0 $A:0 [SEP]:0", - pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1", - special_tokens=[ - ("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")), - ("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")), - ], - ) - - return tokenizer - - -class SpmConverter(Converter): - def __init__(self, *args): - requires_backends(self, "protobuf") - - super().__init__(*args) - - # from .utils import sentencepiece_model_pb2 as model_pb2 - model_pb2 = import_protobuf() - - m = model_pb2.ModelProto() - with open(self.original_tokenizer.vocab_file, "rb") as f: - m.ParseFromString(f.read()) - self.proto = m - - if self.proto.trainer_spec.byte_fallback: - if not getattr(self, "handle_byte_fallback", None): - warnings.warn( - "The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option" - " which is not implemented in the fast tokenizers. In practice this means that the fast version of the" - " tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these " - "unknown tokens into a sequence of byte tokens matching the original piece of text." 
- ) - - def vocab(self, proto): - return [(piece.piece, piece.score) for piece in proto.pieces] - - def unk_id(self, proto): - return proto.trainer_spec.unk_id - - def tokenizer(self, proto): - model_type = proto.trainer_spec.model_type - vocab_scores = self.vocab(proto) - unk_id = self.unk_id(proto) - - if model_type == 1: - tokenizer = Tokenizer(Unigram(vocab_scores, unk_id)) - elif model_type == 2: - _, merges = SentencePieceExtractor(self.original_tokenizer.vocab_file).extract() - bpe_vocab = {word: i for i, (word, score) in enumerate(vocab_scores)} - tokenizer = Tokenizer( - BPE( - bpe_vocab, - merges, - unk_token=proto.trainer_spec.unk_piece, - fuse_unk=True, - ) - ) - else: - raise Exception( - "You're trying to run a `Unigram` model but you're file was trained with a different algorithm" - ) - - return tokenizer - - def normalizer(self, proto): - precompiled_charsmap = proto.normalizer_spec.precompiled_charsmap - if not precompiled_charsmap: - return normalizers.Sequence([normalizers.Replace(Regex(" {2,}"), " ")]) - else: - return normalizers.Sequence( - [normalizers.Precompiled(precompiled_charsmap), normalizers.Replace(Regex(" {2,}"), " ")] - ) - - def pre_tokenizer(self, replacement, add_prefix_space): - return pre_tokenizers.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space) - - def post_processor(self): - return None - - def decoder(self, replacement, add_prefix_space): - return decoders.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space) - - def converted(self) -> Tokenizer: - tokenizer = self.tokenizer(self.proto) - - # Tokenizer assemble - normalizer = self.normalizer(self.proto) - if normalizer is not None: - tokenizer.normalizer = normalizer - - replacement = "▁" - add_prefix_space = True - pre_tokenizer = self.pre_tokenizer(replacement, add_prefix_space) - if pre_tokenizer is not None: - tokenizer.pre_tokenizer = pre_tokenizer - - tokenizer.decoder = self.decoder(replacement, add_prefix_space) - post_processor = self.post_processor() - if post_processor: - tokenizer.post_processor = post_processor - - return tokenizer - - -class AlbertConverter(SpmConverter): - def vocab(self, proto): - return [ - (piece.piece, piece.score) if check_number_comma(piece.piece) else (piece.piece, piece.score - 100) - for piece in proto.pieces - ] - - def normalizer(self, proto): - list_normalizers = [ - normalizers.Replace("``", '"'), - normalizers.Replace("''", '"'), - ] - if not self.original_tokenizer.keep_accents: - list_normalizers.append(normalizers.NFKD()) - list_normalizers.append(normalizers.StripAccents()) - if self.original_tokenizer.do_lower_case: - list_normalizers.append(normalizers.Lowercase()) - - precompiled_charsmap = proto.normalizer_spec.precompiled_charsmap - - if precompiled_charsmap: - list_normalizers.append(normalizers.Precompiled(precompiled_charsmap)) - - list_normalizers.append(normalizers.Replace(Regex(" {2,}"), " ")) - return normalizers.Sequence(list_normalizers) - - def post_processor(self): - return processors.TemplateProcessing( - single="[CLS]:0 $A:0 [SEP]:0", - pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1", - special_tokens=[ - ("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")), - ("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")), - ], - ) - - -class BarthezConverter(SpmConverter): - def unk_id(self, proto): - unk_id = 3 - return unk_id - - def post_processor(self): - return processors.TemplateProcessing( - single=" $A ", - pair=" $A $B ", - special_tokens=[ - ("", 
self.original_tokenizer.convert_tokens_to_ids("")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class CamembertConverter(SpmConverter): - def vocab(self, proto): - vocab = [ - ("NOTUSED", 0.0), - ("", 0.0), - ("NOTUSED", 0.0), - ("", 0.0), - ("NOTUSED", -100), - ] - # We down-grade the original SentencePiece by -100 to avoid using it and use our added token instead - vocab += [(piece.piece, piece.score) for piece in proto.pieces[1:]] - vocab += [("", 0.0)] - return vocab - - def unk_id(self, proto): - # See vocab unk position - return 3 - - def post_processor(self): - return processors.TemplateProcessing( - single=" $A ", - pair=" $A $B ", - special_tokens=[ - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class DebertaV2Converter(SpmConverter): - def pre_tokenizer(self, replacement, add_prefix_space): - list_pretokenizers = [] - if self.original_tokenizer.split_by_punct: - list_pretokenizers.append(pre_tokenizers.Punctuation(behavior="isolated")) - list_pretokenizers.append(pre_tokenizers.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space)) - return pre_tokenizers.Sequence(list_pretokenizers) - - def normalizer(self, proto): - list_normalizers = [] - if self.original_tokenizer.do_lower_case: - list_normalizers.append(normalizers.Lowercase()) - list_normalizers.append(normalizers.Strip()) - - precompiled_charsmap = proto.normalizer_spec.precompiled_charsmap - if precompiled_charsmap: - list_normalizers.append(normalizers.Precompiled(precompiled_charsmap)) - list_normalizers.append(normalizers.Replace(Regex(" {2,}"), " ")) - - return normalizers.Sequence(list_normalizers) - - def post_processor(self): - return processors.TemplateProcessing( - single="[CLS]:0 $A:0 [SEP]:0", - pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1", - special_tokens=[ - ("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")), - ("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")), - ], - ) - - -class MBartConverter(SpmConverter): - def vocab(self, proto): - vocab = [ - ("", 0.0), - ("", 0.0), - ("", 0.0), - ("", 0.0), - ] - vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]] - vocab += [ - ("ar_AR", 0.0), - ("cs_CZ", 0.0), - ("de_DE", 0.0), - ("en_XX", 0.0), - ("es_XX", 0.0), - ("et_EE", 0.0), - ("fi_FI", 0.0), - ("fr_XX", 0.0), - ("gu_IN", 0.0), - ("hi_IN", 0.0), - ("it_IT", 0.0), - ("ja_XX", 0.0), - ("kk_KZ", 0.0), - ("ko_KR", 0.0), - ("lt_LT", 0.0), - ("lv_LV", 0.0), - ("my_MM", 0.0), - ("ne_NP", 0.0), - ("nl_XX", 0.0), - ("ro_RO", 0.0), - ("ru_RU", 0.0), - ("si_LK", 0.0), - ("tr_TR", 0.0), - ("vi_VN", 0.0), - ("zh_CN", 0.0), - ] - vocab += [("", 0.0)] - return vocab - - def unk_id(self, proto): - return 3 - - def post_processor(self): - return processors.TemplateProcessing( - single="$A en_XX", - pair="$A $B en_XX", - special_tokens=[ - ("en_XX", self.original_tokenizer.convert_tokens_to_ids("en_XX")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class MBart50Converter(SpmConverter): - def vocab(self, proto): - vocab = [ - ("", 0.0), - ("", 0.0), - ("", 0.0), - ("", 0.0), - ] - vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]] - # fmt: off - vocab += [("ar_AR", 0.0), ("cs_CZ", 0.0), ("de_DE", 0.0), ("en_XX", 0.0), ("es_XX", 0.0), ("et_EE", 0.0), ("fi_FI", 0.0), ("fr_XX", 0.0), ("gu_IN", 0.0), ("hi_IN", 0.0), ("it_IT", 0.0), ("ja_XX", 0.0), ("kk_KZ", 0.0), ("ko_KR", 0.0), ("lt_LT", 0.0), ("lv_LV", 0.0), 
("my_MM", 0.0), ("ne_NP", 0.0), ("nl_XX", 0.0), ("ro_RO", 0.0), ("ru_RU", 0.0), ("si_LK", 0.0), ("tr_TR", 0.0), ("vi_VN", 0.0), ("zh_CN", 0.0), ("af_ZA", 0.0), ("az_AZ", 0.0), ("bn_IN", 0.0), ("fa_IR", 0.0), ("he_IL", 0.0), ("hr_HR", 0.0), ("id_ID", 0.0), ("ka_GE", 0.0), ("km_KH", 0.0), ("mk_MK", 0.0), ("ml_IN", 0.0), ("mn_MN", 0.0), ("mr_IN", 0.0), ("pl_PL", 0.0), ("ps_AF", 0.0), ("pt_XX", 0.0), ("sv_SE", 0.0), ("sw_KE", 0.0), ("ta_IN", 0.0), ("te_IN", 0.0), ("th_TH", 0.0), ("tl_XX", 0.0), ("uk_UA", 0.0), ("ur_PK", 0.0), ("xh_ZA", 0.0), ("gl_ES", 0.0), ("sl_SI", 0.0)] - # fmt: on - vocab += [("", 0.0)] - return vocab - - def unk_id(self, proto): - return 3 - - def post_processor(self): - return processors.TemplateProcessing( - single="en_XX $A ", - pair="en_XX $A $B ", - special_tokens=[ - ("en_XX", self.original_tokenizer.convert_tokens_to_ids("en_XX")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class NllbConverter(SpmConverter): - def vocab(self, proto): - vocab = [ - ("", 0.0), - ("", 0.0), - ("", 0.0), - ("", 0.0), - ] - vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]] - vocab += [ - # fmt: off - ('ace_Arab', 0.0), ('ace_Latn', 0.0), ('acm_Arab', 0.0), ('acq_Arab', 0.0), ('aeb_Arab', 0.0), ('afr_Latn', 0.0), ('ajp_Arab', 0.0), ('aka_Latn', 0.0), ('amh_Ethi', 0.0), ('apc_Arab', 0.0), ('arb_Arab', 0.0), ('ars_Arab', 0.0), ('ary_Arab', 0.0), ('arz_Arab', 0.0), ('asm_Beng', 0.0), ('ast_Latn', 0.0), ('awa_Deva', 0.0), ('ayr_Latn', 0.0), ('azb_Arab', 0.0), ('azj_Latn', 0.0), ('bak_Cyrl', 0.0), ('bam_Latn', 0.0), ('ban_Latn', 0.0), ('bel_Cyrl', 0.0), ('bem_Latn', 0.0), ('ben_Beng', 0.0), ('bho_Deva', 0.0), ('bjn_Arab', 0.0), ('bjn_Latn', 0.0), ('bod_Tibt', 0.0), ('bos_Latn', 0.0), ('bug_Latn', 0.0), ('bul_Cyrl', 0.0), ('cat_Latn', 0.0), ('ceb_Latn', 0.0), ('ces_Latn', 0.0), ('cjk_Latn', 0.0), ('ckb_Arab', 0.0), ('crh_Latn', 0.0), ('cym_Latn', 0.0), ('dan_Latn', 0.0), ('deu_Latn', 0.0), ('dik_Latn', 0.0), ('dyu_Latn', 0.0), ('dzo_Tibt', 0.0), ('ell_Grek', 0.0), ('eng_Latn', 0.0), ('epo_Latn', 0.0), ('est_Latn', 0.0), ('eus_Latn', 0.0), ('ewe_Latn', 0.0), ('fao_Latn', 0.0), ('pes_Arab', 0.0), ('fij_Latn', 0.0), ('fin_Latn', 0.0), ('fon_Latn', 0.0), ('fra_Latn', 0.0), ('fur_Latn', 0.0), ('fuv_Latn', 0.0), ('gla_Latn', 0.0), ('gle_Latn', 0.0), ('glg_Latn', 0.0), ('grn_Latn', 0.0), ('guj_Gujr', 0.0), ('hat_Latn', 0.0), ('hau_Latn', 0.0), ('heb_Hebr', 0.0), ('hin_Deva', 0.0), ('hne_Deva', 0.0), ('hrv_Latn', 0.0), ('hun_Latn', 0.0), ('hye_Armn', 0.0), ('ibo_Latn', 0.0), ('ilo_Latn', 0.0), ('ind_Latn', 0.0), ('isl_Latn', 0.0), ('ita_Latn', 0.0), ('jav_Latn', 0.0), ('jpn_Jpan', 0.0), ('kab_Latn', 0.0), ('kac_Latn', 0.0), ('kam_Latn', 0.0), ('kan_Knda', 0.0), ('kas_Arab', 0.0), ('kas_Deva', 0.0), ('kat_Geor', 0.0), ('knc_Arab', 0.0), ('knc_Latn', 0.0), ('kaz_Cyrl', 0.0), ('kbp_Latn', 0.0), ('kea_Latn', 0.0), ('khm_Khmr', 0.0), ('kik_Latn', 0.0), ('kin_Latn', 0.0), ('kir_Cyrl', 0.0), ('kmb_Latn', 0.0), ('kon_Latn', 0.0), ('kor_Hang', 0.0), ('kmr_Latn', 0.0), ('lao_Laoo', 0.0), ('lvs_Latn', 0.0), ('lij_Latn', 0.0), ('lim_Latn', 0.0), ('lin_Latn', 0.0), ('lit_Latn', 0.0), ('lmo_Latn', 0.0), ('ltg_Latn', 0.0), ('ltz_Latn', 0.0), ('lua_Latn', 0.0), ('lug_Latn', 0.0), ('luo_Latn', 0.0), ('lus_Latn', 0.0), ('mag_Deva', 0.0), ('mai_Deva', 0.0), ('mal_Mlym', 0.0), ('mar_Deva', 0.0), ('min_Latn', 0.0), ('mkd_Cyrl', 0.0), ('plt_Latn', 0.0), ('mlt_Latn', 0.0), ('mni_Beng', 0.0), ('khk_Cyrl', 0.0), ('mos_Latn', 0.0), ('mri_Latn', 0.0), ('zsm_Latn', 0.0), 
('mya_Mymr', 0.0), ('nld_Latn', 0.0), ('nno_Latn', 0.0), ('nob_Latn', 0.0), ('npi_Deva', 0.0), ('nso_Latn', 0.0), ('nus_Latn', 0.0), ('nya_Latn', 0.0), ('oci_Latn', 0.0), ('gaz_Latn', 0.0), ('ory_Orya', 0.0), ('pag_Latn', 0.0), ('pan_Guru', 0.0), ('pap_Latn', 0.0), ('pol_Latn', 0.0), ('por_Latn', 0.0), ('prs_Arab', 0.0), ('pbt_Arab', 0.0), ('quy_Latn', 0.0), ('ron_Latn', 0.0), ('run_Latn', 0.0), ('rus_Cyrl', 0.0), ('sag_Latn', 0.0), ('san_Deva', 0.0), ('sat_Beng', 0.0), ('scn_Latn', 0.0), ('shn_Mymr', 0.0), ('sin_Sinh', 0.0), ('slk_Latn', 0.0), ('slv_Latn', 0.0), ('smo_Latn', 0.0), ('sna_Latn', 0.0), ('snd_Arab', 0.0), ('som_Latn', 0.0), ('sot_Latn', 0.0), ('spa_Latn', 0.0), ('als_Latn', 0.0), ('srd_Latn', 0.0), ('srp_Cyrl', 0.0), ('ssw_Latn', 0.0), ('sun_Latn', 0.0), ('swe_Latn', 0.0), ('swh_Latn', 0.0), ('szl_Latn', 0.0), ('tam_Taml', 0.0), ('tat_Cyrl', 0.0), ('tel_Telu', 0.0), ('tgk_Cyrl', 0.0), ('tgl_Latn', 0.0), ('tha_Thai', 0.0), ('tir_Ethi', 0.0), ('taq_Latn', 0.0), ('taq_Tfng', 0.0), ('tpi_Latn', 0.0), ('tsn_Latn', 0.0), ('tso_Latn', 0.0), ('tuk_Latn', 0.0), ('tum_Latn', 0.0), ('tur_Latn', 0.0), ('twi_Latn', 0.0), ('tzm_Tfng', 0.0), ('uig_Arab', 0.0), ('ukr_Cyrl', 0.0), ('umb_Latn', 0.0), ('urd_Arab', 0.0), ('uzn_Latn', 0.0), ('vec_Latn', 0.0), ('vie_Latn', 0.0), ('war_Latn', 0.0), ('wol_Latn', 0.0), ('xho_Latn', 0.0), ('ydd_Hebr', 0.0), ('yor_Latn', 0.0), ('yue_Hant', 0.0), ('zho_Hans', 0.0), ('zho_Hant', 0.0), ('zul_Latn', 0.0) - # fmt: on - ] - vocab += [("", 0.0)] - return vocab - - def unk_id(self, proto): - return 3 - - def post_processor(self): - return processors.TemplateProcessing( - single="eng_Latn $A ", - pair="eng_Latn $A $B ", - special_tokens=[ - ("eng_Latn", self.original_tokenizer.convert_tokens_to_ids("eng_Latn")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class XLMRobertaConverter(SpmConverter): - def vocab(self, proto): - vocab = [ - ("", 0.0), - ("", 0.0), - ("", 0.0), - ("", 0.0), - ] - vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]] - vocab += [("", 0.0)] - return vocab - - def unk_id(self, proto): - unk_id = 3 - return unk_id - - def post_processor(self): - return processors.TemplateProcessing( - single=" $A ", - pair=" $A $B ", - special_tokens=[ - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class XLNetConverter(SpmConverter): - def vocab(self, proto): - return [ - (piece.piece, piece.score) if check_number_comma(piece.piece) else (piece.piece, piece.score - 100) - for piece in proto.pieces - ] - - def normalizer(self, proto): - list_normalizers = [ - normalizers.Replace("``", '"'), - normalizers.Replace("''", '"'), - ] - if not self.original_tokenizer.keep_accents: - list_normalizers.append(normalizers.NFKD()) - list_normalizers.append(normalizers.StripAccents()) - if self.original_tokenizer.do_lower_case: - list_normalizers.append(normalizers.Lowercase()) - - precompiled_charsmap = proto.normalizer_spec.precompiled_charsmap - - if precompiled_charsmap: - list_normalizers.append(normalizers.Precompiled(precompiled_charsmap)) - - list_normalizers.append(normalizers.Replace(Regex(" {2,}"), " ")) - return normalizers.Sequence(list_normalizers) - - def post_processor(self): - return processors.TemplateProcessing( - single="$A:0 :0 :2", - pair="$A:0 :0 $B:1 :1 :2", - special_tokens=[ - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class 
ReformerConverter(SpmConverter): - pass - - -class RemBertConverter(SpmConverter): - # Inspired from AlbertConverter - def normalizer(self, proto): - list_normalizers = [ - normalizers.Replace("``", '"'), - normalizers.Replace("''", '"'), - normalizers.Replace(Regex(" {2,}"), " "), - ] - if not self.original_tokenizer.keep_accents: - list_normalizers.append(normalizers.NFKD()) - list_normalizers.append(normalizers.StripAccents()) - if self.original_tokenizer.do_lower_case: - list_normalizers.append(normalizers.Lowercase()) - - precompiled_charsmap = proto.normalizer_spec.precompiled_charsmap - - if precompiled_charsmap: - list_normalizers.append(normalizers.Precompiled(precompiled_charsmap)) - - return normalizers.Sequence(list_normalizers) - - def post_processor(self): - return processors.TemplateProcessing( - single="[CLS]:0 $A:0 [SEP]:0", - pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1", - special_tokens=[ - ("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")), - ("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")), - ], - ) - - -class BertGenerationConverter(SpmConverter): - pass - - -class PegasusConverter(SpmConverter): - def vocab(self, proto): - vocab = [ - (self.original_tokenizer.pad_token, 0.0), - (self.original_tokenizer.eos_token, 0.0), - ] - - if self.original_tokenizer.mask_token_sent is not None: - vocab += [(self.original_tokenizer.mask_token_sent, 0.0)] - - if ( - self.original_tokenizer.mask_token is not None - and self.original_tokenizer.mask_token_id < self.original_tokenizer.offset - ): - vocab += [(self.original_tokenizer.mask_token, 0.0)] - - vocab += [(f"", -100.0) for i in range(2, self.original_tokenizer.offset)] - vocab += [(piece.piece, piece.score) for piece in proto.pieces[2:]] - return vocab - - def unk_id(self, proto): - return proto.trainer_spec.unk_id + self.original_tokenizer.offset - - def pre_tokenizer(self, replacement, add_prefix_space): - return pre_tokenizers.Sequence( - [ - pre_tokenizers.WhitespaceSplit(), - pre_tokenizers.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space), - ] - ) - - def post_processor(self): - eos = self.original_tokenizer.eos_token - special_tokens = [ - (eos, self.original_tokenizer.eos_token_id), - ] - return processors.TemplateProcessing(single=["$A", eos], pair=["$A", "$B", eos], special_tokens=special_tokens) - - -class T5Converter(SpmConverter): - def vocab(self, proto): - num_extra_ids = self.original_tokenizer._extra_ids - vocab = [(piece.piece, piece.score) for piece in proto.pieces] - vocab += [(f"", 0.0) for i in range(num_extra_ids - 1, -1, -1)] - return vocab - - def post_processor(self): - return processors.TemplateProcessing( - single=["$A", ""], - pair=["$A", "", "$B", ""], - special_tokens=[ - ("", self.original_tokenizer.convert_tokens_to_ids("")), - ], - ) - - -class WhisperConverter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.encoder - merges = list(self.original_tokenizer.bpe_ranks.keys()) - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - fuse_unk=False, - ) - ) - - tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=self.original_tokenizer.add_prefix_space) - tokenizer.decoder = decoders.ByteLevel() - - prefix_token_ids = self.original_tokenizer.prefix_tokens - prefixes = self.original_tokenizer.convert_ids_to_tokens(prefix_token_ids) - eos = self.original_tokenizer.eos_token - eos_token_id = 
self.original_tokenizer.eos_token_id - prefix_template = " ".join([f"{token}:0" for token in prefixes]) - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{prefix_template} $A:0 {eos}:0", - pair=f"{prefix_template} $A:0 $B:1 {eos}:1", - special_tokens=[ - (eos, eos_token_id), - *zip(prefixes, prefix_token_ids), - ], - ) - - return tokenizer - - -class BigBirdConverter(SpmConverter): - def post_processor(self): - return processors.TemplateProcessing( - single="[CLS]:0 $A:0 [SEP]:0", - pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1", - special_tokens=[ - ("[CLS]", self.original_tokenizer.convert_tokens_to_ids("[CLS]")), - ("[SEP]", self.original_tokenizer.convert_tokens_to_ids("[SEP]")), - ], - ) - - -class CLIPConverter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.encoder - merges = list(self.original_tokenizer.bpe_ranks.keys()) - unk_token = self.original_tokenizer.unk_token - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - fuse_unk=False, - unk_token=str(unk_token), - ) - ) - - tokenizer.normalizer = normalizers.Sequence( - [normalizers.NFC(), normalizers.Replace(Regex(r"\s+"), " "), normalizers.Lowercase()] - ) - tokenizer.pre_tokenizer = pre_tokenizers.Sequence( - [ - pre_tokenizers.Split( - Regex(r"""'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+"""), - behavior="removed", - invert=True, - ), - pre_tokenizers.ByteLevel(add_prefix_space=False), - ] - ) - tokenizer.decoder = decoders.ByteLevel() - - # Hack to have a ByteLevel and TemplaceProcessor - tokenizer.post_processor = processors.RobertaProcessing( - sep=(self.original_tokenizer.eos_token, self.original_tokenizer.eos_token_id), - cls=(self.original_tokenizer.bos_token, self.original_tokenizer.bos_token_id), - add_prefix_space=False, - trim_offsets=False, - ) - return tokenizer - - -class LayoutLMv2Converter(Converter): - def converted(self) -> Tokenizer: - vocab = self.original_tokenizer.vocab - tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(self.original_tokenizer.unk_token))) - - tokenize_chinese_chars = False - strip_accents = False - do_lower_case = True - if hasattr(self.original_tokenizer, "basic_tokenizer"): - tokenize_chinese_chars = self.original_tokenizer.basic_tokenizer.tokenize_chinese_chars - strip_accents = self.original_tokenizer.basic_tokenizer.strip_accents - do_lower_case = self.original_tokenizer.basic_tokenizer.do_lower_case - - tokenizer.normalizer = normalizers.BertNormalizer( - clean_text=True, - handle_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - lowercase=do_lower_case, - ) - tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer() - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls}:0 $A:0 {sep}:0", - pair=f"{cls}:0 $A:0 {sep}:0 $B:1 {sep}:1", - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - ], - ) - tokenizer.decoder = decoders.WordPiece(prefix="##") - - return tokenizer - - -class BlenderbotConverter(Converter): - def converted(self) -> Tokenizer: - ot = self.original_tokenizer - vocab = ot.encoder - merges = list(ot.bpe_ranks.keys()) - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - 
                fuse_unk=False,
-            )
-        )
-
-        tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=ot.add_prefix_space)
-        tokenizer.decoder = decoders.ByteLevel()
-        tokenizer.post_processor = processors.TemplateProcessing(
-            single=f"$A:0 {ot.eos_token}:0",
-            special_tokens=[
-                (ot.eos_token, ot.eos_token_id),
-            ],
-        )
-
-        return tokenizer
-
-
-class XGLMConverter(SpmConverter):
-    def vocab(self, proto):
-        vocab = [
-            ("<s>", 0.0),
-            ("<pad>", 0.0),
-            ("</s>", 0.0),
-            ("<unk>", 0.0),
-        ]
-        vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]]
-        # fmt: off
-        vocab += [("<madeupword0>", 0.0), ("<madeupword1>", 0.0), ("<madeupword2>", 0.0), ("<madeupword3>", 0.0), ("<madeupword4>", 0.0), ("<madeupword5>", 0.0), ("<madeupword6>", 0.0)]
-        # fmt: on
-        return vocab
-
-    def unk_id(self, proto):
-        unk_id = 3
-        return unk_id
-
-    def post_processor(self):
-        return processors.TemplateProcessing(
-            single="</s> $A",
-            pair="</s> $A </s> </s> $B",
-            special_tokens=[
-                ("<s>", self.original_tokenizer.convert_tokens_to_ids("<s>")),
-                ("</s>", self.original_tokenizer.convert_tokens_to_ids("</s>")),
-            ],
-        )
-
-
-class LlamaConverter(SpmConverter):
-    handle_byte_fallback = True
-
-    def vocab(self, proto):
-        vocab = [
-            ("<unk>", 0.0),
-            ("<s>", 0.0),
-            ("</s>", 0.0),
-        ]
-        vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]]
-        return vocab
-
-    def unk_id(self, proto):
-        unk_id = 0
-        return unk_id
-
-    def decoder(self, replacement, add_prefix_space):
-        return decoders.Sequence(
-            [
-                decoders.Replace("▁", " "),
-                decoders.ByteFallback(),
-                decoders.Fuse(),
-                decoders.Strip(content=" ", left=1),
-            ]
-        )
-
-    def tokenizer(self, proto):
-        model_type = proto.trainer_spec.model_type
-        vocab_scores = self.vocab(proto)
-        if model_type == 1:
-            import tokenizers
-
-            if version.parse(tokenizers.__version__) < version.parse("0.14.0"):
-                tokenizer = Tokenizer(Unigram(vocab_scores, 0))
-            else:
-                tokenizer = Tokenizer(Unigram(vocab_scores, 0, byte_fallback=True))
-
-        elif model_type == 2:
-            _, merges = SentencePieceExtractor(self.original_tokenizer.vocab_file).extract(vocab_scores)
-            bpe_vocab = {word: i for i, (word, _score) in enumerate(vocab_scores)}
-            tokenizer = Tokenizer(
-                BPE(bpe_vocab, merges, unk_token=proto.trainer_spec.unk_piece, fuse_unk=True, byte_fallback=True)
-            )
-            tokenizer.add_special_tokens(
-                [
-                    AddedToken("<unk>"),
-                    AddedToken("<s>"),
-                    AddedToken("</s>"),
-                ]
-            )
-        else:
-            raise Exception(
-                "You're trying to run a `Unigram` model but your file was trained with a different algorithm"
-            )
-
-        return tokenizer
-
-    def normalizer(self, proto):
-        return normalizers.Sequence(
-            [
-                normalizers.Prepend(prepend="▁"),
-                normalizers.Replace(pattern=" ", content="▁"),
-            ]
-        )
-
-    def pre_tokenizer(self, replacement, add_prefix_space):
-        return None
-
-    def post_processor(self):
-        # the processor is defined in the LlamaTokenizerFast class.
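# --- Editorial sketch (not part of the original diff) ---------------------------------
# Nearly every converter above follows the same recipe: build a `tokenizers` model,
# then attach a TemplateProcessing post-processor that re-inserts the special tokens.
# A minimal, hypothetical illustration of the template syntax (the ids 0 and 2 are
# placeholders for a RoBERTa-style vocabulary):
#
#     from tokenizers import processors
#
#     post = processors.TemplateProcessing(
#         single="<s> $A </s>",
#         pair="<s> $A </s> </s> $B </s>",
#         special_tokens=[("<s>", 0), ("</s>", 2)],
#     )
#
# End to end, these converter classes are reached through convert_slow_tokenizer() at
# the bottom of this module. A hypothetical usage, assuming the stock transformers
# package and a local SentencePiece model file:
#
#     from transformers import LlamaTokenizer
#     from transformers.convert_slow_tokenizer import convert_slow_tokenizer
#
#     slow = LlamaTokenizer("path/to/tokenizer.model")   # hypothetical path
#     fast_backend = convert_slow_tokenizer(slow)        # -> tokenizers.Tokenizer
#     print(fast_backend.encode("Hello world").tokens)
# ---------------------------------------------------------------------------------------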
- return None - - -class MarkupLMConverter(Converter): - def converted(self) -> Tokenizer: - ot = self.original_tokenizer - vocab = ot.encoder - merges = list(ot.bpe_ranks.keys()) - - tokenizer = Tokenizer( - BPE( - vocab=vocab, - merges=merges, - dropout=None, - continuing_subword_prefix="", - end_of_word_suffix="", - fuse_unk=False, - unk_token=self.original_tokenizer.unk_token, - ) - ) - - tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=ot.add_prefix_space) - tokenizer.decoder = decoders.ByteLevel() - - cls = str(self.original_tokenizer.cls_token) - sep = str(self.original_tokenizer.sep_token) - cls_token_id = self.original_tokenizer.cls_token_id - sep_token_id = self.original_tokenizer.sep_token_id - - tokenizer.post_processor = processors.TemplateProcessing( - single=f"{cls} $A {sep}", - pair=f"{cls} $A {sep} $B {sep}", - special_tokens=[ - (cls, cls_token_id), - (sep, sep_token_id), - ], - ) - - return tokenizer - - -SLOW_TO_FAST_CONVERTERS = { - "AlbertTokenizer": AlbertConverter, - "BartTokenizer": RobertaConverter, - "BarthezTokenizer": BarthezConverter, - "BertTokenizer": BertConverter, - "BigBirdTokenizer": BigBirdConverter, - "BlenderbotTokenizer": BlenderbotConverter, - "CamembertTokenizer": CamembertConverter, - "CLIPTokenizer": CLIPConverter, - "CodeGenTokenizer": GPT2Converter, - "ConvBertTokenizer": BertConverter, - "DebertaTokenizer": DebertaConverter, - "DebertaV2Tokenizer": DebertaV2Converter, - "DistilBertTokenizer": BertConverter, - "DPRReaderTokenizer": BertConverter, - "DPRQuestionEncoderTokenizer": BertConverter, - "DPRContextEncoderTokenizer": BertConverter, - "ElectraTokenizer": BertConverter, - "FNetTokenizer": AlbertConverter, - "FunnelTokenizer": FunnelConverter, - "GPT2Tokenizer": GPT2Converter, - "HerbertTokenizer": HerbertConverter, - "LayoutLMTokenizer": BertConverter, - "LayoutLMv2Tokenizer": BertConverter, - "LayoutLMv3Tokenizer": RobertaConverter, - "LayoutXLMTokenizer": XLMRobertaConverter, - "LongformerTokenizer": RobertaConverter, - "LEDTokenizer": RobertaConverter, - "LxmertTokenizer": BertConverter, - "MarkupLMTokenizer": MarkupLMConverter, - "MBartTokenizer": MBartConverter, - "MBart50Tokenizer": MBart50Converter, - "MPNetTokenizer": MPNetConverter, - "MobileBertTokenizer": BertConverter, - "MvpTokenizer": RobertaConverter, - "NllbTokenizer": NllbConverter, - "OpenAIGPTTokenizer": OpenAIGPTConverter, - "PegasusTokenizer": PegasusConverter, - "RealmTokenizer": BertConverter, - "ReformerTokenizer": ReformerConverter, - "RemBertTokenizer": RemBertConverter, - "RetriBertTokenizer": BertConverter, - "RobertaTokenizer": RobertaConverter, - "RoFormerTokenizer": RoFormerConverter, - "SqueezeBertTokenizer": BertConverter, - "T5Tokenizer": T5Converter, - "WhisperTokenizer": WhisperConverter, - "XLMRobertaTokenizer": XLMRobertaConverter, - "XLNetTokenizer": XLNetConverter, - "SplinterTokenizer": SplinterConverter, - "XGLMTokenizer": XGLMConverter, - "LlamaTokenizer": LlamaConverter, - "CodeLlamaTokenizer": LlamaConverter, -} - - -def convert_slow_tokenizer(transformer_tokenizer) -> Tokenizer: - """ - Utilities to convert a slow tokenizer instance in a fast tokenizer instance. - - Args: - transformer_tokenizer ([`~tokenization_utils_base.PreTrainedTokenizer`]): - Instance of a slow tokenizer to convert in the backend tokenizer for - [`~tokenization_utils_base.PreTrainedTokenizerFast`]. 
- - Return: - A instance of [`~tokenizers.Tokenizer`] to be used as the backend tokenizer of a - [`~tokenization_utils_base.PreTrainedTokenizerFast`] - """ - - tokenizer_class_name = transformer_tokenizer.__class__.__name__ - - if tokenizer_class_name not in SLOW_TO_FAST_CONVERTERS: - raise ValueError( - f"An instance of tokenizer class {tokenizer_class_name} cannot be converted in a Fast tokenizer instance." - " No converter was found. Currently available slow->fast convertors:" - f" {list(SLOW_TO_FAST_CONVERTERS.keys())}" - ) - - converter_class = SLOW_TO_FAST_CONVERTERS[tokenizer_class_name] - - return converter_class(transformer_tokenizer).converted() diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/beam_constraints.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/beam_constraints.py deleted file mode 100644 index b53c4512427a8793449da9f68c39a12527721d40..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/beam_constraints.py +++ /dev/null @@ -1,521 +0,0 @@ -from abc import ABC, abstractmethod -from typing import List, Optional - - -class Constraint(ABC): - r"""Abstract base class for all constraints that can be applied during generation. - It must define how the constraint can be satisfied. - - All classes that inherit Constraint must follow the requirement that - - ```py - completed = False - while not completed: - _, completed = constraint.update(constraint.advance()) - ``` - - will always terminate (halt). - """ - - def __init__(self): - # test for the above condition - self.test() - - def test(self): - """ - Tests whether this constraint has been properly defined. - """ - counter = 0 - completed = False - while not completed: - if counter == 1: - self.reset() - advance = self.advance() - if not self.does_advance(advance): - raise Exception( - "Custom Constraint is not defined correctly. self.does_advance(self.advance()) must be true." - ) - - stepped, completed, reset = self.update(advance) - counter += 1 - - if counter > 10000: - raise Exception("update() does not fulfill the constraint.") - - if self.remaining() != 0: - raise Exception("Custom Constraint is not defined correctly.") - - @abstractmethod - def advance(self): - """ - When called, returns the token that would take this constraint one step closer to being fulfilled. - - Return: - token_ids(`torch.tensor`): Must be a tensor of a list of indexable tokens, not some integer. - """ - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - @abstractmethod - def does_advance(self, token_id: int): - """ - Reads in a token and returns whether it creates progress. - """ - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - @abstractmethod - def update(self, token_id: int): - """ - Reads in a token and returns booleans that indicate the progress made by it. This function will update the - state of this object unlikes `does_advance(self, token_id: int)`. - - This isn't to test whether a certain token will advance the progress; it's to update its state as if it has - been generated. This becomes important if token_id != desired token (refer to else statement in - PhrasalConstraint) - - Args: - token_id(`int`): - The id of a newly generated token in the beam search. 
- Return: - stepped(`bool`): - Whether this constraint has become one step closer to being fulfuilled. - completed(`bool`): - Whether this constraint has been completely fulfilled by this token being generated. - reset (`bool`): - Whether this constraint has reset its progress by this token being generated. - """ - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - @abstractmethod - def reset(self): - """ - Resets the state of this constraint to its initialization. We would call this in cases where the fulfillment of - a constraint is abrupted by an unwanted token. - """ - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - @abstractmethod - def remaining(self): - """ - Returns the number of remaining steps of `advance()` in order to complete this constraint. - """ - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - @abstractmethod - def copy(self, stateful=False): - """ - Creates a new instance of this constraint. - - Args: - stateful(`bool`): Whether to not only copy the constraint for new instance, but also its state. - - Return: - constraint(`Constraint`): The same constraint as the one being called from. - """ - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - -class PhrasalConstraint(Constraint): - r""" - [`Constraint`] enforcing that an ordered sequence of tokens is included in the output. - - Args: - token_ids (`List[int]`): - The id of the token that must be generated by the output. - """ - - def __init__(self, token_ids: List[int]): - super(Constraint, self).__init__() - - if not isinstance(token_ids, list) or len(token_ids) == 0: - raise ValueError(f"`token_ids` has to be a non-empty list, but is {token_ids}.") - if any((not isinstance(token_id, int) or token_id < 0) for token_id in token_ids): - raise ValueError(f"Each list in `token_ids` has to be a list of positive integers, but is {token_ids}.") - - self.token_ids = token_ids - - self.seqlen = len(self.token_ids) - self.fulfilled_idx = -1 # the index of the currently fulfilled step - self.completed = False - - def advance(self): - if self.completed: - return None - return self.token_ids[self.fulfilled_idx + 1] - - def does_advance(self, token_id: int): - if not isinstance(token_id, int): - raise ValueError(f"`token_id` has to be an `int`, but is {token_id} of type {type(token_id)}") - - if self.completed: - return False - - return token_id == self.token_ids[self.fulfilled_idx + 1] - - def update(self, token_id: int): - if not isinstance(token_id, int): - raise ValueError(f"`token_id` has to be an `int`, but is {token_id} of type {type(token_id)}") - - stepped = False - completed = False - reset = False - - if self.does_advance(token_id): - self.fulfilled_idx += 1 - stepped = True - if self.fulfilled_idx == (self.seqlen - 1): - completed = True - self.completed = completed - else: - # failed to make progress. 
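# --- Editorial sketch (not part of the original diff) ---------------------------------
# How the advance()/does_advance()/update() contract plays out for PhrasalConstraint,
# shown with made-up token ids (illustrative only):
#
#     c = PhrasalConstraint([5, 9, 2])
#     c.advance()        # -> 5, the next token that makes progress
#     c.update(5)        # -> (stepped=True, completed=False, reset=False)
#     c.does_advance(9)  # -> True; feeding 9 and then 2 would complete the phrase
#     c.update(7)        # wrong token: returns reset=True and the progress is discarded
# ---------------------------------------------------------------------------------------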
- reset = True - self.reset() - return stepped, completed, reset - - def reset(self): - self.completed = False - self.fulfilled_idx = 0 - - def remaining(self): - return self.seqlen - (self.fulfilled_idx + 1) - - def copy(self, stateful=False): - new_constraint = PhrasalConstraint(self.token_ids) - - if stateful: - new_constraint.seq_len = self.seqlen - new_constraint.fulfilled_idx = self.fulfilled_idx - new_constraint.completed = self.completed - - return new_constraint - - -class DisjunctiveTrie: - def __init__(self, nested_token_ids: List[List[int]], no_subsets=True): - r""" - A helper class that builds a trie with the words represented in `nested_token_ids`. - """ - self.max_height = max([len(one) for one in nested_token_ids]) - - root = {} - for token_ids in nested_token_ids: - level = root - for tidx, token_id in enumerate(token_ids): - if token_id not in level: - level[token_id] = {} - - level = level[token_id] - - if no_subsets and self.has_subsets(root, nested_token_ids): - raise ValueError( - "Each list in `nested_token_ids` can't be a complete subset of another list, but is" - f" {nested_token_ids}." - ) - - self.trie = root - - def next_tokens(self, current_seq): - """ - The next possible tokens that will progress the trie, given the current sequence of tokens in `current_seq`. - """ - start = self.trie - - for current_token in current_seq: - start = start[current_token] - - next_tokens = list(start.keys()) - - return next_tokens - - def reached_leaf(self, current_seq): - next_tokens = self.next_tokens(current_seq) - - return len(next_tokens) == 0 - - def count_leaves(self, root): - next_nodes = list(root.values()) - if len(next_nodes) == 0: - return 1 - else: - return sum([self.count_leaves(nn) for nn in next_nodes]) - - def has_subsets(self, trie, nested_token_ids): - """ - Returns whether # of leaves == # of words. Otherwise some word is a subset of another. - """ - leaf_count = self.count_leaves(trie) - return len(nested_token_ids) != leaf_count - - -class DisjunctiveConstraint(Constraint): - r""" - A special [`Constraint`] that is fulfilled by fulfilling just one of several constraints. - - Args: - nested_token_ids (`List[List[int]]`): - A list of words, where each word is a list of ids. This constraint is fulfilled by generating just one from - the list of words. - """ - - def __init__(self, nested_token_ids: List[List[int]]): - super(Constraint, self).__init__() - - if not isinstance(nested_token_ids, list) or len(nested_token_ids) == 0: - raise ValueError(f"`nested_token_ids` has to be a non-empty list, but is {nested_token_ids}.") - if any(not isinstance(token_ids, list) for token_ids in nested_token_ids): - raise ValueError(f"`nested_token_ids` has to be a list of lists, but is {nested_token_ids}.") - if any( - any((not isinstance(token_id, int) or token_id < 0) for token_id in token_ids) - for token_ids in nested_token_ids - ): - raise ValueError( - f"Each list in `nested_token_ids` has to be a list of positive integers, but is {nested_token_ids}." 
- ) - - self.trie = DisjunctiveTrie(nested_token_ids) - self.token_ids = nested_token_ids - - self.seqlen = self.trie.max_height - self.current_seq = [] - self.completed = False - - def advance(self): - token_list = self.trie.next_tokens(self.current_seq) - - if len(token_list) == 0: - return None - else: - return token_list - - def does_advance(self, token_id: int): - if not isinstance(token_id, int): - raise ValueError(f"`token_id` is supposed to be type `int`, but is {token_id} of type {type(token_id)}") - - next_tokens = self.trie.next_tokens(self.current_seq) - - return token_id in next_tokens - - def update(self, token_id: int): - if not isinstance(token_id, int): - raise ValueError(f"`token_id` is supposed to be type `int`, but is {token_id} of type {type(token_id)}") - - stepped = False - completed = False - reset = False - - if self.does_advance(token_id): - self.current_seq.append(token_id) - stepped = True - else: - reset = True - self.reset() - - completed = self.trie.reached_leaf(self.current_seq) - self.completed = completed - - return stepped, completed, reset - - def reset(self): - self.completed = False - self.current_seq = [] - - def remaining(self): - if self.completed: - # since this can be completed without reaching max height - return 0 - else: - return self.seqlen - len(self.current_seq) - - def copy(self, stateful=False): - new_constraint = DisjunctiveConstraint(self.token_ids) - - if stateful: - new_constraint.seq_len = self.seqlen - new_constraint.current_seq = self.current_seq - new_constraint.completed = self.completed - - return new_constraint - - -class ConstraintListState: - r""" - A class for beam scorers to track its progress through a list of constraints. - - Args: - constraints (`List[Constraint]`): - A list of [`Constraint`] objects that must be fulfilled by the beam scorer. - """ - - def __init__(self, constraints: List[Constraint]): - self.constraints = constraints - - # max # of steps required to fulfill a given constraint - self.max_seqlen = max([c.seqlen for c in constraints]) - self.n_constraints = len(constraints) - self.completed = False - - self.init_state() - - def init_state(self): - self.complete_constraints = [] - self.inprogress_constraint = None - self.pending_constraints = [constraint.copy(stateful=False) for constraint in self.constraints] - - def get_bank(self): - add = 0 - if self.inprogress_constraint: - # extra points for having a constraint mid-fulfilled - add += self.max_seqlen - self.inprogress_constraint.remaining() - - return (len(self.complete_constraints) * self.max_seqlen) + add - - def advance(self): - """The list of tokens to generate such that we can make progress. - By "list" we don't mean the list of token that will fully fulfill a constraint. - - Given constraints `c_i = {t_ij | j == # of tokens}`, If we're not in the middle of progressing through a - specific constraint `c_i`, we return: - - `[t_k1 for k in indices of unfulfilled constraints]` - - If we are in the middle of a constraint, then we return: - `[t_ij]`, where `i` is the index of the inprogress constraint, `j` is the next step for the constraint. - - Though we don't care which constraint is fulfilled first, if we are in the progress of fulfilling a constraint, - that's the only one we'll return. 
- """ - token_list = [] - if self.inprogress_constraint is None: - for constraint in self.pending_constraints: # "pending" == "unfulfilled yet" - advance = constraint.advance() - if isinstance(advance, int): - token_list.append(advance) - elif isinstance(advance, list): - token_list.extend(advance) - else: - advance = self.inprogress_constraint.advance() - if isinstance(advance, int): - token_list.append(advance) - elif isinstance(advance, list): - token_list.extend(advance) - - if len(token_list) == 0: - return None - else: - return token_list - - def reset(self, token_ids: Optional[List[int]]): - """ - token_ids: the tokens generated thus far to reset the state of the progress through constraints. - """ - self.init_state() - - if token_ids is not None: - for token in token_ids: - # completes or steps **one** constraint - complete, stepped = self.add(token) - - # the entire list of constraints are fulfilled - if self.completed: - break - - def add(self, token_id: int): - if not isinstance(token_id, int): - raise ValueError(f"`token_id` should be an `int`, but is `{token_id}`.") - - complete, stepped = False, False - - if self.completed: - complete = True - stepped = False - return complete, stepped - - if self.inprogress_constraint is not None: - # In the middle of fulfilling a constraint. If the `token_id` *does* makes an incremental progress to current - # job, simply update the state - - stepped, complete, reset = self.inprogress_constraint.update(token_id) - if reset: - # 1. If the next token breaks the progress, then we must restart. - # e.g. constraint = "I love pies" and sequence so far is "I love" but `token_id` == "books". - - # But that doesn't mean we self.init_state(), since we only reset the state for this particular - # constraint, not the full list of constraints. - - self.pending_constraints.append(self.inprogress_constraint.copy(stateful=False)) - self.inprogress_constraint = None - - if complete: - # 2. If the next token completes the constraint, move it to completed list, set - # inprogress to None. If there are no pending constraints either, then this full list of constraints - # is complete. - - self.complete_constraints.append(self.inprogress_constraint) - self.inprogress_constraint = None - - if len(self.pending_constraints) == 0: - # we're done! - self.completed = True - - else: - # Not in the middle of fulfilling a constraint. So does this `token_id` helps us step towards any of our list - # of constraints? - - for cidx, pending_constraint in enumerate(self.pending_constraints): - if pending_constraint.does_advance(token_id): - stepped, complete, reset = pending_constraint.update(token_id) - - if not stepped: - raise Exception( - "`constraint.update(token_id)` is not yielding incremental progress, " - "even though `constraint.does_advance(token_id)` is true." - ) - - if complete: - self.complete_constraints.append(pending_constraint) - self.inprogress_constraint = None - - if not complete and stepped: - self.inprogress_constraint = pending_constraint - - if complete or stepped: - # If we made any progress at all, then it's at least not a "pending constraint". - - self.pending_constraints = ( - self.pending_constraints[:cidx] + self.pending_constraints[cidx + 1 :] - ) - - if len(self.pending_constraints) == 0 and self.inprogress_constraint is None: - # If there's no longer any pending after this and no inprogress either, then we must be - # complete. 
- - self.completed = True - - break # prevent accidentally stepping through multiple constraints with just one token. - - return complete, stepped - - def copy(self, stateful=True): - new_state = ConstraintListState(self.constraints) # we actually never though self.constraints objects - # throughout this process. So it's at initialization state. - - if stateful: - new_state.complete_constraints = [ - constraint.copy(stateful=True) for constraint in self.complete_constraints - ] - if self.inprogress_constraint is not None: - new_state.inprogress_constraint = self.inprogress_constraint.copy(stateful=True) - new_state.pending_constraints = [constraint.copy() for constraint in self.pending_constraints] - - return new_state diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/feature_extraction_flava.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/feature_extraction_flava.py deleted file mode 100644 index c707b575cef2eff9d3dff7e122cc6a875f3e3931..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/feature_extraction_flava.py +++ /dev/null @@ -1,33 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Meta Platforms authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Feature extractor class for FLAVA.""" - -import warnings - -from ...utils import logging -from .image_processing_flava import FlavaImageProcessor - - -logger = logging.get_logger(__name__) - - -class FlavaFeatureExtractor(FlavaImageProcessor): - def __init__(self, *args, **kwargs) -> None: - warnings.warn( - "The class FlavaFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please" - " use FlavaImageProcessor instead.", - FutureWarning, - ) - super().__init__(*args, **kwargs) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longformer/modeling_tf_longformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longformer/modeling_tf_longformer.py deleted file mode 100644 index 0397c2ba320ec57ecfe3f6f16b9f9937aff27fa8..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longformer/modeling_tf_longformer.py +++ /dev/null @@ -1,2581 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The Allen Institute for AI team and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
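# --- Editorial note (not part of the original diff) -----------------------------------
# The FLAVA feature extractor deleted above is only a deprecation shim around
# FlavaImageProcessor. A hypothetical migration for code that still imports it:
#
#     from transformers import FlavaImageProcessor
#
#     image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")
# ---------------------------------------------------------------------------------------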
-"""Tensorflow Longformer model.""" - - -from __future__ import annotations - -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_utils import ( - TFMaskedLanguageModelingLoss, - TFModelInputType, - TFMultipleChoiceLoss, - TFPreTrainedModel, - TFQuestionAnsweringLoss, - TFSequenceClassificationLoss, - TFTokenClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, -) -from .configuration_longformer import LongformerConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "allenai/longformer-base-4096" -_CONFIG_FOR_DOC = "LongformerConfig" - -LARGE_NEGATIVE = -1e8 - -TF_LONGFORMER_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "allenai/longformer-base-4096", - "allenai/longformer-large-4096", - "allenai/longformer-large-4096-finetuned-triviaqa", - "allenai/longformer-base-4096-extra.pos.embd.only", - "allenai/longformer-large-4096-extra.pos.embd.only", - # See all Longformer models at https://huggingface.co/models?filter=longformer -] - - -@dataclass -class TFLongformerBaseModelOutput(ModelOutput): - """ - Base class for Longformer's outputs, with potential hidden states, local and global attentions. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. 
- global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - last_hidden_state: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -@dataclass -class TFLongformerBaseModelOutputWithPooling(ModelOutput): - """ - Base class for Longformer's outputs that also contains a pooling of the last hidden states. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`): - Last layer hidden-state of the first token of the sequence (classification token) further processed by a - Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence - prediction (classification) objective during pretraining. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. 
- global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - last_hidden_state: tf.Tensor = None - pooler_output: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -@dataclass -class TFLongformerMaskedLMOutput(ModelOutput): - """ - Base class for masked language models outputs. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Masked language modeling (MLM) loss. - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. - global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. 
Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: tf.Tensor | None = None - logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -@dataclass -class TFLongformerQuestionAnsweringModelOutput(ModelOutput): - """ - Base class for outputs of question answering Longformer models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. - start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Span-start scores (before SoftMax). - end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Span-end scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. - global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. 
- """ - - loss: tf.Tensor | None = None - start_logits: tf.Tensor = None - end_logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -@dataclass -class TFLongformerSequenceClassifierOutput(ModelOutput): - """ - Base class for outputs of sentence classification models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. - global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: tf.Tensor | None = None - logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -@dataclass -class TFLongformerMultipleChoiceModelOutput(ModelOutput): - """ - Base class for outputs of multiple choice models. - - Args: - loss (`tf.Tensor` of shape *(1,)*, *optional*, returned when `labels` is provided): - Classification loss. 
- logits (`tf.Tensor` of shape `(batch_size, num_choices)`): - *num_choices* is the second dimension of the input tensors. (see *input_ids* above). - - Classification scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. - global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: tf.Tensor | None = None - logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -@dataclass -class TFLongformerTokenClassifierOutput(ModelOutput): - """ - Base class for outputs of token classification models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided) : - Classification loss. - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`): - Classification scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x + - attention_window + 1)`, where `x` is the number of tokens with global attention mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first `x` values) and to every token in the attention window (remaining `attention_window - + 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the - remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a - token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding - (succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens. - If the attention window contains a token with global attention, the attention weight at the corresponding - index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global - attention, the attention weights to all other tokens in `attentions` is set to 0, the values should be - accessed from `global_attentions`. - global_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`, where `x` - is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: tf.Tensor | None = None - logits: tf.Tensor = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - global_attentions: Tuple[tf.Tensor] | None = None - - -def _compute_global_attention_mask(input_ids_shape, sep_token_indices, before_sep_token=True): - """ - Computes global attention mask by putting attention on all tokens before `sep_token_id` if `before_sep_token is - True` else after `sep_token_id`. 
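-
-     Example (an illustrative sketch; the token ids below are hypothetical and only meant to show the intended
-     masking behaviour for question-answering style inputs):
-
-     ```python
-     # one sequence of length 8 whose separator token (id 2) occurs at positions 3, 4 and 7
-     input_ids = tf.constant([[0, 10, 11, 2, 2, 12, 13, 2]], dtype=tf.int64)
-     sep_token_indices = tf.where(input_ids == 2)
-     # with before_sep_token=True every token strictly before the first separator (the question part)
-     # gets global attention: the returned mask is [[1, 1, 1, 0, 0, 0, 0, 0]]
-     global_mask = _compute_global_attention_mask(shape_list(input_ids), sep_token_indices)
-     ```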
- """ - assert shape_list(sep_token_indices)[1] == 2, "`input_ids` should have two dimensions" - question_end_index = tf.reshape(sep_token_indices, (input_ids_shape[0], 3, 2))[:, 0, 1][:, None] - # bool attention mask with True in locations of global attention - attention_mask = tf.expand_dims(tf.range(input_ids_shape[1], dtype=tf.int64), axis=0) - attention_mask = tf.tile(attention_mask, (input_ids_shape[0], 1)) - if before_sep_token is True: - question_end_index = tf.tile(question_end_index, (1, input_ids_shape[1])) - attention_mask = tf.cast(attention_mask < question_end_index, dtype=question_end_index.dtype) - else: - # last token is separation token and should not be counted and in the middle are two separation tokens - question_end_index = tf.tile(question_end_index + 1, (1, input_ids_shape[1])) - attention_mask = tf.cast( - attention_mask > question_end_index, - dtype=question_end_index.dtype, - ) * tf.cast(attention_mask < input_ids_shape[-1], dtype=question_end_index.dtype) - - return attention_mask - - -# Copied from transformers.models.roberta.modeling_tf_roberta.TFRobertaLMHead with Roberta->Longformer -class TFLongformerLMHead(tf.keras.layers.Layer): - """Longformer Head for masked language modeling.""" - - def __init__(self, config, input_embeddings, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.hidden_size = config.hidden_size - self.dense = tf.keras.layers.Dense( - config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.layer_norm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm") - self.act = get_tf_activation("gelu") - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = input_embeddings - - def build(self, input_shape): - self.bias = self.add_weight(shape=(self.config.vocab_size,), initializer="zeros", trainable=True, name="bias") - - super().build(input_shape) - - def get_output_embeddings(self): - return self.decoder - - def set_output_embeddings(self, value): - self.decoder.weight = value - self.decoder.vocab_size = shape_list(value)[0] - - def get_bias(self): - return {"bias": self.bias} - - def set_bias(self, value): - self.bias = value["bias"] - self.config.vocab_size = shape_list(value["bias"])[0] - - def call(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.layer_norm(hidden_states) - - # project back to size of vocabulary with bias - seq_length = shape_list(tensor=hidden_states)[1] - hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.hidden_size]) - hidden_states = tf.matmul(a=hidden_states, b=self.decoder.weight, transpose_b=True) - hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.config.vocab_size]) - hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.bias) - - return hidden_states - - -class TFLongformerEmbeddings(tf.keras.layers.Layer): - """ - Same as BertEmbeddings with a tiny tweak for positional embeddings indexing and some extra casting. 
- """ - - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.padding_idx = 1 - self.config = config - self.hidden_size = config.hidden_size - self.max_position_embeddings = config.max_position_embeddings - self.initializer_range = config.initializer_range - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def build(self, input_shape: tf.TensorShape): - with tf.name_scope("word_embeddings"): - self.weight = self.add_weight( - name="weight", - shape=[self.config.vocab_size, self.hidden_size], - initializer=get_initializer(self.initializer_range), - ) - - with tf.name_scope("token_type_embeddings"): - self.token_type_embeddings = self.add_weight( - name="embeddings", - shape=[self.config.type_vocab_size, self.hidden_size], - initializer=get_initializer(self.initializer_range), - ) - - with tf.name_scope("position_embeddings"): - self.position_embeddings = self.add_weight( - name="embeddings", - shape=[self.max_position_embeddings, self.hidden_size], - initializer=get_initializer(self.initializer_range), - ) - - super().build(input_shape) - - def create_position_ids_from_input_ids(self, input_ids, past_key_values_length=0): - """ - Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding - symbols are ignored. This is modified from fairseq's `utils.make_positions`. - - Args: - input_ids: tf.Tensor - Returns: tf.Tensor - """ - mask = tf.cast(tf.math.not_equal(input_ids, self.padding_idx), dtype=input_ids.dtype) - incremental_indices = (tf.math.cumsum(mask, axis=1) + past_key_values_length) * mask - - return incremental_indices + self.padding_idx - - def call( - self, - input_ids=None, - position_ids=None, - token_type_ids=None, - inputs_embeds=None, - past_key_values_length=0, - training=False, - ): - """ - Applies embedding based on inputs tensor. - - Returns: - final_embeddings (`tf.Tensor`): output embedding tensor. - """ - assert not (input_ids is None and inputs_embeds is None) - - if input_ids is not None: - check_embeddings_within_bounds(input_ids, self.config.vocab_size) - inputs_embeds = tf.gather(params=self.weight, indices=input_ids) - - input_shape = shape_list(inputs_embeds)[:-1] - - if token_type_ids is None: - token_type_ids = tf.cast(tf.fill(dims=input_shape, value=0), tf.int64) - - if position_ids is None: - if input_ids is not None: - # Create the position ids from the input token ids. Any padded tokens remain padded. 
- position_ids = self.create_position_ids_from_input_ids( - input_ids=input_ids, past_key_values_length=past_key_values_length - ) - else: - position_ids = tf.expand_dims( - tf.range(start=self.padding_idx + 1, limit=input_shape[-1] + self.padding_idx + 1, dtype=tf.int64), - axis=0, - ) - - position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids) - token_type_embeds = tf.gather(params=self.token_type_embeddings, indices=token_type_ids) - final_embeddings = inputs_embeds + position_embeds + token_type_embeds - final_embeddings = self.LayerNorm(inputs=final_embeddings) - final_embeddings = self.dropout(inputs=final_embeddings, training=training) - - return final_embeddings - - -# Copied from transformers.models.bert.modeling_tf_bert.TFBertIntermediate with Bert->Longformer -class TFLongformerIntermediate(tf.keras.layers.Layer): - def __init__(self, config: LongformerConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = get_tf_activation(config.hidden_act) - else: - self.intermediate_act_fn = config.hidden_act - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - - return hidden_states - - -# Copied from transformers.models.bert.modeling_tf_bert.TFBertOutput with Bert->Longformer -class TFLongformerOutput(tf.keras.layers.Layer): - def __init__(self, config: LongformerConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.dropout(inputs=hidden_states, training=training) - hidden_states = self.LayerNorm(inputs=hidden_states + input_tensor) - - return hidden_states - - -# Copied from transformers.models.bert.modeling_tf_bert.TFBertPooler with Bert->Longformer -class TFLongformerPooler(tf.keras.layers.Layer): - def __init__(self, config: LongformerConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, - kernel_initializer=get_initializer(config.initializer_range), - activation="tanh", - name="dense", - ) - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
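-         # shapes: hidden_states is (batch_size, seq_len, hidden_size); the pooled output below is (batch_size, hidden_size)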
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(inputs=first_token_tensor) - - return pooled_output - - -# Copied from transformers.models.bert.modeling_tf_bert.TFBertSelfOutput with Bert->Longformer -class TFLongformerSelfOutput(tf.keras.layers.Layer): - def __init__(self, config: LongformerConfig, **kwargs): - super().__init__(**kwargs) - - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.dropout(inputs=hidden_states, training=training) - hidden_states = self.LayerNorm(inputs=hidden_states + input_tensor) - - return hidden_states - - -class TFLongformerSelfAttention(tf.keras.layers.Layer): - def __init__(self, config, layer_id, **kwargs): - super().__init__(**kwargs) - self.config = config - - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads}" - ) - - self.num_heads = config.num_attention_heads - self.head_dim = int(config.hidden_size / config.num_attention_heads) - self.embed_dim = config.hidden_size - self.query = tf.keras.layers.Dense( - self.embed_dim, - kernel_initializer=get_initializer(config.initializer_range), - name="query", - ) - self.key = tf.keras.layers.Dense( - self.embed_dim, - kernel_initializer=get_initializer(config.initializer_range), - name="key", - ) - self.value = tf.keras.layers.Dense( - self.embed_dim, - kernel_initializer=get_initializer(config.initializer_range), - name="value", - ) - - # separate projection layers for tokens with global attention - self.query_global = tf.keras.layers.Dense( - self.embed_dim, - kernel_initializer=get_initializer(config.initializer_range), - name="query_global", - ) - self.key_global = tf.keras.layers.Dense( - self.embed_dim, - kernel_initializer=get_initializer(config.initializer_range), - name="key_global", - ) - self.value_global = tf.keras.layers.Dense( - self.embed_dim, - kernel_initializer=get_initializer(config.initializer_range), - name="value_global", - ) - self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) - self.global_dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) - self.layer_id = layer_id - attention_window = config.attention_window[self.layer_id] - - assert ( - attention_window % 2 == 0 - ), f"`attention_window` for layer {self.layer_id} has to be an even value. Given {attention_window}" - assert ( - attention_window > 0 - ), f"`attention_window` for layer {self.layer_id} has to be positive. 
Given {attention_window}" - - self.one_sided_attn_window_size = attention_window // 2 - - def build(self, input_shape=None): - if not self.built: - with tf.name_scope("query_global"): - self.query_global.build((self.config.hidden_size,)) - with tf.name_scope("key_global"): - self.key_global.build((self.config.hidden_size,)) - with tf.name_scope("value_global"): - self.value_global.build((self.config.hidden_size,)) - super().build(input_shape) - - def call( - self, - inputs, - training=False, - ): - """ - LongformerSelfAttention expects *len(hidden_states)* to be multiple of *attention_window*. Padding to - *attention_window* happens in LongformerModel.forward to avoid redoing the padding on each layer. - - The *attention_mask* is changed in [`LongformerModel.forward`] from 0, 1, 2 to: - - - -10000: no attention - - 0: local attention - - +10000: global attention - """ - # retrieve input args - ( - hidden_states, - attention_mask, - layer_head_mask, - is_index_masked, - is_index_global_attn, - is_global_attn, - ) = inputs - - # project hidden states - query_vectors = self.query(hidden_states) - key_vectors = self.key(hidden_states) - value_vectors = self.value(hidden_states) - batch_size, seq_len, embed_dim = shape_list(hidden_states) - - tf.debugging.assert_equal( - embed_dim, - self.embed_dim, - message=f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}", - ) - - # normalize query - query_vectors /= tf.math.sqrt(tf.cast(self.head_dim, dtype=query_vectors.dtype)) - query_vectors = tf.reshape(query_vectors, (batch_size, seq_len, self.num_heads, self.head_dim)) - key_vectors = tf.reshape(key_vectors, (batch_size, seq_len, self.num_heads, self.head_dim)) - - # attn_probs = (batch_size, seq_len, num_heads, window*2+1) - attn_scores = self._sliding_chunks_query_key_matmul( - query_vectors, key_vectors, self.one_sided_attn_window_size - ) - - # values to pad for attention probs - remove_from_windowed_attention_mask = attention_mask != 0 - # cast to fp32/fp16 then replace 1's with -inf - float_mask = tf.cast(remove_from_windowed_attention_mask, dtype=query_vectors.dtype) * LARGE_NEGATIVE - - # diagonal mask with zeros everywhere and -inf inplace of padding - diagonal_mask = self._sliding_chunks_query_key_matmul( - tf.ones(shape_list(attention_mask)), - float_mask, - self.one_sided_attn_window_size, - ) - - # pad local attention probs - attn_scores += diagonal_mask - - tf.debugging.assert_equal( - shape_list(attn_scores), - [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1], - message=( - f"attn_probs should be of size ({batch_size}, {seq_len}, {self.num_heads}," - f" {self.one_sided_attn_window_size * 2 + 1}), but is of size {shape_list(attn_scores)}" - ), - ) - - # compute global attn indices required through out forward fn - ( - max_num_global_attn_indices, - is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero, - ) = self._get_global_attn_indices(is_index_global_attn) - - # this function is only relevant for global attention - if is_global_attn: - attn_scores = self._concat_with_global_key_attn_probs( - attn_scores=attn_scores, - query_vectors=query_vectors, - key_vectors=key_vectors, - max_num_global_attn_indices=max_num_global_attn_indices, - is_index_global_attn_nonzero=is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, - ) - - attn_probs = 
stable_softmax(attn_scores, axis=-1) - - # softmax sometimes inserts NaN if all positions are masked, replace them with 0 - # Make sure to create a mask with the proper shape: - # if is_global_attn==True => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1] - # if is_global_attn==False => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1] - if is_global_attn: - masked_index = tf.tile( - is_index_masked[:, :, None, None], - (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1), - ) - else: - masked_index = tf.tile( - is_index_masked[:, :, None, None], - (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + 1), - ) - attn_probs = tf.where( - masked_index, - tf.zeros(shape_list(masked_index), dtype=attn_probs.dtype), - attn_probs, - ) - - if layer_head_mask is not None: - tf.debugging.assert_equal( - shape_list(layer_head_mask), - [self.num_heads], - message=( - f"Head mask for a single layer should be of size {(self.num_heads)}, but is" - f" {shape_list(layer_head_mask)}" - ), - ) - - attn_probs = tf.reshape(layer_head_mask, (1, 1, -1, 1)) * attn_probs - - # apply dropout - attn_probs = self.dropout(attn_probs, training=training) - value_vectors = tf.reshape(value_vectors, (batch_size, seq_len, self.num_heads, self.head_dim)) - - # if global attention, compute sum of global and local attn - - if is_global_attn: - attn_output = self._compute_attn_output_with_global_indices( - value_vectors=value_vectors, - attn_probs=attn_probs, - max_num_global_attn_indices=max_num_global_attn_indices, - is_index_global_attn_nonzero=is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, - ) - else: - attn_output = self._sliding_chunks_matmul_attn_probs_value( - attn_probs, value_vectors, self.one_sided_attn_window_size - ) - - tf.debugging.assert_equal( - shape_list(attn_output), [batch_size, seq_len, self.num_heads, self.head_dim], message="Unexpected size" - ) - - attn_output = tf.reshape(attn_output, (batch_size, seq_len, embed_dim)) - - # compute value for global attention and overwrite to attention output - if is_global_attn: - attn_output, global_attn_probs = self._compute_global_attn_output_from_hidden( - attn_output=attn_output, - hidden_states=hidden_states, - max_num_global_attn_indices=max_num_global_attn_indices, - layer_head_mask=layer_head_mask, - is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, - is_index_global_attn_nonzero=is_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, - is_index_masked=is_index_masked, - training=training, - ) - else: - # Leave attn_output unchanged - global_attn_probs = tf.zeros((batch_size, self.num_heads, max_num_global_attn_indices, seq_len)) - - # make sure that local attention probabilities are set to 0 for indices of global attn - # Make sure to create a mask with the proper shape: - # if is_global_attn==True => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1] - # if is_global_attn==False => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1] - if is_global_attn: - masked_global_attn_index = tf.tile( - is_index_global_attn[:, :, None, None], - (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1), - ) - else: - masked_global_attn_index = tf.tile( - is_index_global_attn[:, :, None, 
None], - (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + 1), - ) - attn_probs = tf.where( - masked_global_attn_index, - tf.zeros(shape_list(masked_global_attn_index), dtype=attn_probs.dtype), - attn_probs, - ) - - outputs = (attn_output, attn_probs, global_attn_probs) - - return outputs - - def _sliding_chunks_query_key_matmul(self, query, key, window_overlap): - """ - Matrix multiplication of query and key tensors using with a sliding window attention pattern. This - implementation splits the input into overlapping chunks of size 2w (e.g. 512 for pretrained Longformer) with an - overlap of size window_overlap - """ - batch_size, seq_len, num_heads, head_dim = shape_list(query) - - tf.debugging.assert_equal( - seq_len % (window_overlap * 2), - 0, - message=f"Sequence length should be multiple of {window_overlap * 2}. Given {seq_len}", - ) - tf.debugging.assert_equal( - shape_list(query), - shape_list(key), - message=( - f"Shape of query and key should be equal, but got query: {shape_list(query)} and key:" - f" {shape_list(key)}" - ), - ) - - chunks_count = seq_len // window_overlap - 1 - - # group batch_size and num_heads dimensions into one, then chunk seq_len into chunks of size window_overlap * 2 - query = tf.reshape( - tf.transpose(query, (0, 2, 1, 3)), - (batch_size * num_heads, seq_len, head_dim), - ) - key = tf.reshape(tf.transpose(key, (0, 2, 1, 3)), (batch_size * num_heads, seq_len, head_dim)) - chunked_query = self._chunk(query, window_overlap) - chunked_key = self._chunk(key, window_overlap) - - # matrix multiplication - # bcxd: batch_size * num_heads x chunks x 2window_overlap x head_dim - # bcyd: batch_size * num_heads x chunks x 2window_overlap x head_dim - # bcxy: batch_size * num_heads x chunks x 2window_overlap x 2window_overlap - chunked_query = tf.cast(chunked_query, dtype=chunked_key.dtype) - chunked_attention_scores = tf.einsum("bcxd,bcyd->bcxy", chunked_query, chunked_key) # multiply - - # convert diagonals into columns - paddings = tf.convert_to_tensor([[0, 0], [0, 0], [0, 1], [0, 0]]) - diagonal_chunked_attention_scores = self._pad_and_transpose_last_two_dims(chunked_attention_scores, paddings) - - # allocate space for the overall attention matrix where the chunks are combined. The last dimension - # has (window_overlap * 2 + 1) columns. The first (window_overlap) columns are the window_overlap lower triangles (attention from a word to - # window_overlap previous words). The following column is attention score from each word to itself, then - # followed by window_overlap columns for the upper triangle. 
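-         # As a rough illustration (hypothetical numbers): with window_overlap = 2 every query position gets
-         # 2 * 2 + 1 = 5 score columns, laid out as [2 preceding tokens | the token itself | 2 following tokens].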
- - # copy parts from diagonal_chunked_attention_scores into the combined matrix of attentions - # - copying the main diagonal and the upper triangle - # TODO: This code is most likely not very efficient and should be improved - diagonal_attn_scores_up_triang = tf.concat( - [ - diagonal_chunked_attention_scores[:, :, :window_overlap, : window_overlap + 1], - diagonal_chunked_attention_scores[:, -1:, window_overlap:, : window_overlap + 1], - ], - axis=1, - ) - - # - copying the lower triangle - diagonal_attn_scores_low_triang = tf.concat( - [ - tf.zeros( - (batch_size * num_heads, 1, window_overlap, window_overlap), - dtype=diagonal_chunked_attention_scores.dtype, - ), - diagonal_chunked_attention_scores[:, :, -(window_overlap + 1) : -1, window_overlap + 1 :], - ], - axis=1, - ) - diagonal_attn_scores_first_chunk = tf.concat( - [ - tf.roll( - diagonal_chunked_attention_scores, - shift=[1, window_overlap], - axis=[2, 3], - )[:, :, :window_overlap, :window_overlap], - tf.zeros( - (batch_size * num_heads, 1, window_overlap, window_overlap), - dtype=diagonal_chunked_attention_scores.dtype, - ), - ], - axis=1, - ) - first_chunk_mask = ( - tf.tile( - tf.range(chunks_count + 1, dtype=tf.int64)[None, :, None, None], - (batch_size * num_heads, 1, window_overlap, window_overlap), - ) - < 1 - ) - diagonal_attn_scores_low_triang = tf.where( - first_chunk_mask, - diagonal_attn_scores_first_chunk, - diagonal_attn_scores_low_triang, - ) - - # merging upper and lower triangle - diagonal_attention_scores = tf.concat( - [diagonal_attn_scores_low_triang, diagonal_attn_scores_up_triang], axis=-1 - ) - - # separate batch_size and num_heads dimensions again - diagonal_attention_scores = tf.transpose( - tf.reshape( - diagonal_attention_scores, - (batch_size, num_heads, seq_len, 2 * window_overlap + 1), - ), - (0, 2, 1, 3), - ) - - diagonal_attention_scores = self._mask_invalid_locations(diagonal_attention_scores, window_overlap) - - return diagonal_attention_scores - - @staticmethod - def _mask_invalid_locations(input_tensor, window_overlap): - # create correct upper triangle bool mask - mask_2d_upper = tf.reverse( - tf.linalg.band_part(tf.ones(shape=(window_overlap, window_overlap + 1)), -1, 0), - axis=[0], - ) - - # pad to full matrix - padding = tf.convert_to_tensor( - [[0, shape_list(input_tensor)[1] - window_overlap], [0, shape_list(input_tensor)[3] - window_overlap - 1]] - ) - - # create lower mask - mask_2d = tf.pad(mask_2d_upper, padding) - - # combine with upper mask - mask_2d = mask_2d + tf.reverse(mask_2d, axis=[0, 1]) - - # broadcast to full matrix - mask_4d = tf.tile(mask_2d[None, :, None, :], (shape_list(input_tensor)[0], 1, 1, 1)) - - # inf tensor used for masking - inf_tensor = -float("inf") * tf.ones_like(input_tensor) - - # mask - input_tensor = tf.where(tf.math.greater(mask_4d, 0), inf_tensor, input_tensor) - - return input_tensor - - def _sliding_chunks_matmul_attn_probs_value(self, attn_probs, value, window_overlap): - """ - Same as _sliding_chunks_query_key_matmul but for attn_probs and value tensors. 
Returned tensor will be of the - same shape as `attn_probs` - """ - - batch_size, seq_len, num_heads, head_dim = shape_list(value) - - tf.debugging.assert_equal( - seq_len % (window_overlap * 2), 0, message="Seq_len has to be multiple of 2 * window_overlap" - ) - tf.debugging.assert_equal( - shape_list(attn_probs)[:3], - shape_list(value)[:3], - message="value and attn_probs must have same dims (except head_dim)", - ) - tf.debugging.assert_equal( - shape_list(attn_probs)[3], - 2 * window_overlap + 1, - message="attn_probs last dim has to be 2 * window_overlap + 1", - ) - - chunks_count = seq_len // window_overlap - 1 - - # group batch_size and num_heads dimensions into one, then chunk seq_len into chunks of size 2 window overlap - chunked_attn_probs = tf.reshape( - tf.transpose(attn_probs, (0, 2, 1, 3)), - ( - batch_size * num_heads, - seq_len // window_overlap, - window_overlap, - 2 * window_overlap + 1, - ), - ) - - # group batch_size and num_heads dimensions into one - value = tf.reshape( - tf.transpose(value, (0, 2, 1, 3)), - (batch_size * num_heads, seq_len, head_dim), - ) - - # pad seq_len with w at the beginning of the sequence and another window overlap at the end - paddings = tf.convert_to_tensor([[0, 0], [window_overlap, window_overlap], [0, 0]]) - padded_value = tf.pad(value, paddings, constant_values=-1) - - # chunk padded_value into chunks of size 3 window overlap and an overlap of size window overlap - frame_size = 3 * window_overlap * head_dim - frame_hop_size = (shape_list(padded_value)[1] * head_dim - frame_size) // chunks_count - chunked_value = tf.signal.frame( - tf.reshape(padded_value, (batch_size * num_heads, -1)), - frame_size, - frame_hop_size, - ) - chunked_value = tf.reshape( - chunked_value, - (batch_size * num_heads, chunks_count + 1, 3 * window_overlap, head_dim), - ) - - tf.debugging.assert_equal( - shape_list(chunked_value), - [batch_size * num_heads, chunks_count + 1, 3 * window_overlap, head_dim], - message="Chunked value has the wrong shape", - ) - - chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) - context = tf.einsum("bcwd,bcdh->bcwh", chunked_attn_probs, chunked_value) - context = tf.transpose( - tf.reshape(context, (batch_size, num_heads, seq_len, head_dim)), - (0, 2, 1, 3), - ) - - return context - - @staticmethod - def _pad_and_transpose_last_two_dims(hidden_states_padded, paddings): - """pads rows and then flips rows and columns""" - hidden_states_padded = tf.pad( - hidden_states_padded, paddings - ) # padding value is not important because it will be overwritten - batch_size, chunk_size, seq_length, hidden_dim = shape_list(hidden_states_padded) - hidden_states_padded = tf.reshape(hidden_states_padded, (batch_size, chunk_size, hidden_dim, seq_length)) - - return hidden_states_padded - - @staticmethod - def _pad_and_diagonalize(chunked_hidden_states): - """ - shift every row 1 step right, converting columns into diagonals. 
- - Example: - - ```python - chunked_hidden_states: [ - 0.4983, - 2.6918, - -0.0071, - 1.0492, - -1.8348, - 0.7672, - 0.2986, - 0.0285, - -0.7584, - 0.4206, - -0.0405, - 0.1599, - 2.0514, - -1.1600, - 0.5372, - 0.2629, - ] - window_overlap = num_rows = 4 - ``` - - (pad & diagonalize) => [ 0.4983, 2.6918, -0.0071, 1.0492, 0.0000, 0.0000, 0.0000 - 0.0000, -1.8348, 0.7672, 0.2986, 0.0285, 0.0000, 0.0000 0.0000, 0.0000, -0.7584, 0.4206, - -0.0405, 0.1599, 0.0000 0.0000, 0.0000, 0.0000, 2.0514, -1.1600, 0.5372, 0.2629 ] - """ - total_num_heads, num_chunks, window_overlap, hidden_dim = shape_list(chunked_hidden_states) - paddings = tf.convert_to_tensor([[0, 0], [0, 0], [0, 0], [0, window_overlap + 1]]) - chunked_hidden_states = tf.pad( - chunked_hidden_states, paddings - ) # total_num_heads x num_chunks x window_overlap x (hidden_dim+window_overlap+1). Padding value is not important because it'll be overwritten - chunked_hidden_states = tf.reshape( - chunked_hidden_states, (total_num_heads, num_chunks, -1) - ) # total_num_heads x num_chunks x window_overlapL+window_overlapwindow_overlap+window_overlap - chunked_hidden_states = chunked_hidden_states[ - :, :, :-window_overlap - ] # total_num_heads x num_chunks x window_overlapL+window_overlapwindow_overlap - chunked_hidden_states = tf.reshape( - chunked_hidden_states, - (total_num_heads, num_chunks, window_overlap, window_overlap + hidden_dim), - ) # total_num_heads x num_chunks, window_overlap x hidden_dim+window_overlap - chunked_hidden_states = chunked_hidden_states[:, :, :, :-1] - - return chunked_hidden_states - - @staticmethod - def _chunk(hidden_states, window_overlap): - """convert into overlapping chunks. Chunk size = 2w, overlap size = w""" - batch_size, seq_length, hidden_dim = shape_list(hidden_states) - num_output_chunks = 2 * (seq_length // (2 * window_overlap)) - 1 - - # define frame size and frame stride (similar to convolution) - frame_hop_size = window_overlap * hidden_dim - frame_size = 2 * frame_hop_size - hidden_states = tf.reshape(hidden_states, (batch_size, seq_length * hidden_dim)) - - # chunk with overlap - chunked_hidden_states = tf.signal.frame(hidden_states, frame_size, frame_hop_size) - - tf.debugging.assert_equal( - shape_list(chunked_hidden_states), - [batch_size, num_output_chunks, frame_size], - message=( - "Make sure chunking is correctly applied. `Chunked hidden states should have output dimension" - f" {[batch_size, frame_size, num_output_chunks]}, but got {shape_list(chunked_hidden_states)}." 
- ), - ) - - chunked_hidden_states = tf.reshape( - chunked_hidden_states, - (batch_size, num_output_chunks, 2 * window_overlap, hidden_dim), - ) - - return chunked_hidden_states - - @staticmethod - def _get_global_attn_indices(is_index_global_attn): - """compute global attn indices required throughout forward pass""" - # helper variable - num_global_attn_indices = tf.math.count_nonzero(is_index_global_attn, axis=1) - num_global_attn_indices = tf.cast(num_global_attn_indices, dtype=tf.constant(1).dtype) - - # max number of global attn indices in batch - max_num_global_attn_indices = tf.reduce_max(num_global_attn_indices) - - # indices of global attn - is_index_global_attn_nonzero = tf.where(is_index_global_attn) - - # helper variable - is_local_index_global_attn = tf.range(max_num_global_attn_indices) < tf.expand_dims( - num_global_attn_indices, axis=-1 - ) - - # location of the non-padding values within global attention indices - is_local_index_global_attn_nonzero = tf.where(is_local_index_global_attn) - - # location of the padding values within global attention indices - is_local_index_no_global_attn_nonzero = tf.where(tf.math.logical_not(is_local_index_global_attn)) - - return ( - max_num_global_attn_indices, - is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero, - ) - - def _concat_with_global_key_attn_probs( - self, - attn_scores, - key_vectors, - query_vectors, - max_num_global_attn_indices, - is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero, - ): - batch_size = shape_list(key_vectors)[0] - - # select global key vectors - global_key_vectors = tf.gather_nd(key_vectors, is_index_global_attn_nonzero) - - # create only global key vectors - key_vectors_only_global = tf.scatter_nd( - is_local_index_global_attn_nonzero, - global_key_vectors, - shape=( - batch_size, - max_num_global_attn_indices, - self.num_heads, - self.head_dim, - ), - ) - - # (batch_size, seq_len, num_heads, max_num_global_attn_indices) - attn_probs_from_global_key = tf.einsum("blhd,bshd->blhs", query_vectors, key_vectors_only_global) - - # (batch_size, max_num_global_attn_indices, seq_len, num_heads) - attn_probs_from_global_key_trans = tf.transpose(attn_probs_from_global_key, (0, 3, 1, 2)) - mask_shape = (shape_list(is_local_index_no_global_attn_nonzero)[0],) + tuple( - shape_list(attn_probs_from_global_key_trans)[-2:] - ) - mask = tf.ones(mask_shape) * -10000.0 - mask = tf.cast(mask, dtype=attn_probs_from_global_key_trans.dtype) - - # scatter mask - attn_probs_from_global_key_trans = tf.tensor_scatter_nd_update( - attn_probs_from_global_key_trans, - is_local_index_no_global_attn_nonzero, - mask, - ) - - # (batch_size, seq_len, num_heads, max_num_global_attn_indices) - attn_probs_from_global_key = tf.transpose(attn_probs_from_global_key_trans, (0, 2, 3, 1)) - - # concat to attn_probs - # (batch_size, seq_len, num_heads, extra attention count + 2*window+1) - attn_scores = tf.concat((attn_probs_from_global_key, attn_scores), axis=-1) - - return attn_scores - - def _compute_attn_output_with_global_indices( - self, - value_vectors, - attn_probs, - max_num_global_attn_indices, - is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero, - ): - batch_size = shape_list(attn_probs)[0] - - # cut local attn probs to global only - attn_probs_only_global = attn_probs[:, :, :, :max_num_global_attn_indices] - - # select global value vectors - global_value_vectors = tf.gather_nd(value_vectors, 
is_index_global_attn_nonzero) - - # create only global value vectors - value_vectors_only_global = tf.scatter_nd( - is_local_index_global_attn_nonzero, - global_value_vectors, - shape=( - batch_size, - max_num_global_attn_indices, - self.num_heads, - self.head_dim, - ), - ) - - # compute attn output only global - attn_output_only_global = tf.einsum("blhs,bshd->blhd", attn_probs_only_global, value_vectors_only_global) - - # reshape attn probs - attn_probs_without_global = attn_probs[:, :, :, max_num_global_attn_indices:] - - # compute attn output with global - attn_output_without_global = self._sliding_chunks_matmul_attn_probs_value( - attn_probs_without_global, value_vectors, self.one_sided_attn_window_size - ) - - return attn_output_only_global + attn_output_without_global - - def _compute_global_attn_output_from_hidden( - self, - attn_output, - hidden_states, - max_num_global_attn_indices, - layer_head_mask, - is_local_index_global_attn_nonzero, - is_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero, - is_index_masked, - training, - ): - batch_size, seq_len = shape_list(hidden_states)[:2] - - # prepare global hidden states - global_attn_hidden_states = tf.gather_nd(hidden_states, is_index_global_attn_nonzero) - global_attn_hidden_states = tf.scatter_nd( - is_local_index_global_attn_nonzero, - global_attn_hidden_states, - shape=(batch_size, max_num_global_attn_indices, self.embed_dim), - ) - - # global key, query, value - global_query_vectors_only_global = self.query_global(global_attn_hidden_states) - global_key_vectors = self.key_global(hidden_states) - global_value_vectors = self.value_global(hidden_states) - - # normalize - global_query_vectors_only_global /= tf.math.sqrt( - tf.cast(self.head_dim, dtype=global_query_vectors_only_global.dtype) - ) - global_query_vectors_only_global = self.reshape_and_transpose(global_query_vectors_only_global, batch_size) - global_key_vectors = self.reshape_and_transpose(global_key_vectors, batch_size) - global_value_vectors = self.reshape_and_transpose(global_value_vectors, batch_size) - - # compute attn scores - global_attn_scores = tf.matmul(global_query_vectors_only_global, global_key_vectors, transpose_b=True) - - tf.debugging.assert_equal( - shape_list(global_attn_scores), - [batch_size * self.num_heads, max_num_global_attn_indices, seq_len], - message=( - "global_attn_scores have the wrong size. Size should be" - f" {(batch_size * self.num_heads, max_num_global_attn_indices, seq_len)}, but is" - f" {shape_list(global_attn_scores)}." 
- ), - ) - - global_attn_scores = tf.reshape( - global_attn_scores, - (batch_size, self.num_heads, max_num_global_attn_indices, seq_len), - ) - global_attn_scores_trans = tf.transpose(global_attn_scores, (0, 2, 1, 3)) - mask_shape = (shape_list(is_local_index_no_global_attn_nonzero)[0],) + tuple( - shape_list(global_attn_scores_trans)[-2:] - ) - global_attn_mask = tf.ones(mask_shape) * -10000.0 - global_attn_mask = tf.cast(global_attn_mask, dtype=global_attn_scores_trans.dtype) - - # scatter mask - global_attn_scores_trans = tf.tensor_scatter_nd_update( - global_attn_scores_trans, - is_local_index_no_global_attn_nonzero, - global_attn_mask, - ) - global_attn_scores = tf.transpose(global_attn_scores_trans, (0, 2, 1, 3)) - - # mask global attn scores - attn_mask = tf.tile(is_index_masked[:, None, None, :], (1, shape_list(global_attn_scores)[1], 1, 1)) - global_attn_scores = tf.where(attn_mask, -10000.0, global_attn_scores) - global_attn_scores = tf.reshape( - global_attn_scores, - (batch_size * self.num_heads, max_num_global_attn_indices, seq_len), - ) - - # compute global attn probs - global_attn_probs_float = stable_softmax(global_attn_scores, axis=-1) - - # apply layer head masking - if layer_head_mask is not None: - tf.debugging.assert_equal( - shape_list(layer_head_mask), - [self.num_heads], - message=( - f"Head mask for a single layer should be of size {(self.num_heads)}, but is" - f" {shape_list(layer_head_mask)}" - ), - ) - global_attn_probs_float = tf.reshape(layer_head_mask, (1, -1, 1, 1)) * tf.reshape( - global_attn_probs_float, (batch_size, self.num_heads, max_num_global_attn_indices, seq_len) - ) - global_attn_probs_float = tf.reshape( - global_attn_probs_float, (batch_size * self.num_heads, max_num_global_attn_indices, seq_len) - ) - - # dropout - global_attn_probs = self.global_dropout(global_attn_probs_float, training=training) - - # global attn output - global_attn_output = tf.matmul(global_attn_probs, global_value_vectors) - - tf.debugging.assert_equal( - shape_list(global_attn_output), - [batch_size * self.num_heads, max_num_global_attn_indices, self.head_dim], - message=( - "global_attn_output tensor has the wrong size. Size should be" - f" {(batch_size * self.num_heads, max_num_global_attn_indices, self.head_dim)}, but is" - f" {shape_list(global_attn_output)}." 
- ), - ) - - global_attn_output = tf.reshape( - global_attn_output, - (batch_size, self.num_heads, max_num_global_attn_indices, self.head_dim), - ) - - # get only non zero global attn output - nonzero_global_attn_output = tf.gather_nd( - tf.transpose(global_attn_output, (0, 2, 1, 3)), - is_local_index_global_attn_nonzero, - ) - nonzero_global_attn_output = tf.reshape( - nonzero_global_attn_output, - (shape_list(is_local_index_global_attn_nonzero)[0], -1), - ) - - # overwrite values with global attention - attn_output = tf.tensor_scatter_nd_update( - attn_output, is_index_global_attn_nonzero, nonzero_global_attn_output - ) - - global_attn_probs = tf.reshape( - global_attn_probs, (batch_size, self.num_heads, max_num_global_attn_indices, seq_len) - ) - - return attn_output, global_attn_probs - - def reshape_and_transpose(self, vector, batch_size): - return tf.reshape( - tf.transpose( - tf.reshape(vector, (batch_size, -1, self.num_heads, self.head_dim)), - (0, 2, 1, 3), - ), - (batch_size * self.num_heads, -1, self.head_dim), - ) - - -class TFLongformerAttention(tf.keras.layers.Layer): - def __init__(self, config, layer_id=0, **kwargs): - super().__init__(**kwargs) - - self.self_attention = TFLongformerSelfAttention(config, layer_id, name="self") - self.dense_output = TFLongformerSelfOutput(config, name="output") - - def prune_heads(self, heads): - raise NotImplementedError - - def call(self, inputs, training=False): - ( - hidden_states, - attention_mask, - layer_head_mask, - is_index_masked, - is_index_global_attn, - is_global_attn, - ) = inputs - - self_outputs = self.self_attention( - [hidden_states, attention_mask, layer_head_mask, is_index_masked, is_index_global_attn, is_global_attn], - training=training, - ) - attention_output = self.dense_output(self_outputs[0], hidden_states, training=training) - outputs = (attention_output,) + self_outputs[1:] - - return outputs - - -class TFLongformerLayer(tf.keras.layers.Layer): - def __init__(self, config, layer_id=0, **kwargs): - super().__init__(**kwargs) - - self.attention = TFLongformerAttention(config, layer_id, name="attention") - self.intermediate = TFLongformerIntermediate(config, name="intermediate") - self.longformer_output = TFLongformerOutput(config, name="output") - - def call(self, inputs, training=False): - ( - hidden_states, - attention_mask, - layer_head_mask, - is_index_masked, - is_index_global_attn, - is_global_attn, - ) = inputs - - attention_outputs = self.attention( - [hidden_states, attention_mask, layer_head_mask, is_index_masked, is_index_global_attn, is_global_attn], - training=training, - ) - attention_output = attention_outputs[0] - intermediate_output = self.intermediate(attention_output) - layer_output = self.longformer_output(intermediate_output, attention_output, training=training) - outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them - - return outputs - - -class TFLongformerEncoder(tf.keras.layers.Layer): - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.output_hidden_states = config.output_hidden_states - self.output_attentions = config.output_attentions - self.layer = [TFLongformerLayer(config, i, name=f"layer_._{i}") for i in range(config.num_hidden_layers)] - - def call( - self, - hidden_states, - attention_mask=None, - head_mask=None, - padding_len=0, - is_index_masked=None, - is_index_global_attn=None, - is_global_attn=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - training=False, - ): - all_hidden_states = () 
if output_hidden_states else None - all_attentions = all_global_attentions = () if output_attentions else None - - for idx, layer_module in enumerate(self.layer): - if output_hidden_states: - hidden_states_to_add = hidden_states[:, :-padding_len] if padding_len > 0 else hidden_states - all_hidden_states = all_hidden_states + (hidden_states_to_add,) - - layer_outputs = layer_module( - [ - hidden_states, - attention_mask, - head_mask[idx] if head_mask is not None else None, - is_index_masked, - is_index_global_attn, - is_global_attn, - ], - training=training, - ) - hidden_states = layer_outputs[0] - - if output_attentions: - # bzs x seq_len x num_attn_heads x (num_global_attn + attention_window_len + 1) => bzs x num_attn_heads x seq_len x (num_global_attn + attention_window_len + 1) - all_attentions = all_attentions + (tf.transpose(layer_outputs[1], (0, 2, 1, 3)),) - - # bzs x num_attn_heads x num_global_attn x seq_len => bzs x num_attn_heads x seq_len x num_global_attn - all_global_attentions = all_global_attentions + (tf.transpose(layer_outputs[2], (0, 1, 3, 2)),) - - # Add last layer - if output_hidden_states: - hidden_states_to_add = hidden_states[:, :-padding_len] if padding_len > 0 else hidden_states - all_hidden_states = all_hidden_states + (hidden_states_to_add,) - - # undo padding - # unpad `hidden_states` because the calling function is expecting a length == input_ids.size(1) - hidden_states = hidden_states[:, :-padding_len] if padding_len > 0 else hidden_states - if output_attentions: - all_attentions = ( - tuple([state[:, :, :-padding_len, :] for state in all_attentions]) - if padding_len > 0 - else all_attentions - ) - - if not return_dict: - return tuple( - v for v in [hidden_states, all_hidden_states, all_attentions, all_global_attentions] if v is not None - ) - - return TFLongformerBaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_attentions, - global_attentions=all_global_attentions, - ) - - -@keras_serializable -class TFLongformerMainLayer(tf.keras.layers.Layer): - config_class = LongformerConfig - - def __init__(self, config, add_pooling_layer=True, **kwargs): - super().__init__(**kwargs) - - if isinstance(config.attention_window, int): - assert config.attention_window % 2 == 0, "`config.attention_window` has to be an even value" - assert config.attention_window > 0, "`config.attention_window` has to be positive" - config.attention_window = [config.attention_window] * config.num_hidden_layers # one value per layer - else: - assert len(config.attention_window) == config.num_hidden_layers, ( - "`len(config.attention_window)` should equal `config.num_hidden_layers`. 
" - f"Expected {config.num_hidden_layers}, given {len(config.attention_window)}" - ) - - self.config = config - self.num_hidden_layers = config.num_hidden_layers - self.initializer_range = config.initializer_range - self.output_attentions = config.output_attentions - self.output_hidden_states = config.output_hidden_states - self.return_dict = config.use_return_dict - self.pad_token_id = config.pad_token_id - self.attention_window = config.attention_window - self.embeddings = TFLongformerEmbeddings(config, name="embeddings") - self.encoder = TFLongformerEncoder(config, name="encoder") - self.pooler = TFLongformerPooler(config, name="pooler") if add_pooling_layer else None - - def get_input_embeddings(self): - return self.embeddings - - def set_input_embeddings(self, value): - self.embeddings.weight = value - self.embeddings.vocab_size = shape_list(value)[0] - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - raise NotImplementedError - - @unpack_inputs - def call( - self, - input_ids=None, - attention_mask=None, - head_mask=None, - global_attention_mask=None, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - training=False, - ): - if input_ids is not None and not isinstance(input_ids, tf.Tensor): - input_ids = tf.convert_to_tensor(input_ids, dtype=tf.int64) - elif input_ids is not None: - input_ids = tf.cast(input_ids, tf.int64) - - if attention_mask is not None and not isinstance(attention_mask, tf.Tensor): - attention_mask = tf.convert_to_tensor(attention_mask, dtype=tf.int64) - elif attention_mask is not None: - attention_mask = tf.cast(attention_mask, tf.int64) - - if global_attention_mask is not None and not isinstance(global_attention_mask, tf.Tensor): - global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64) - elif global_attention_mask is not None: - global_attention_mask = tf.cast(global_attention_mask, tf.int64) - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = shape_list(input_ids) - elif inputs_embeds is not None: - input_shape = shape_list(inputs_embeds)[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if attention_mask is None: - attention_mask = tf.cast(tf.fill(input_shape, 1), tf.int64) - - if token_type_ids is None: - token_type_ids = tf.cast(tf.fill(input_shape, 0), tf.int64) - - # merge `global_attention_mask` and `attention_mask` - if global_attention_mask is not None: - attention_mask = self._merge_to_attention_mask(attention_mask, global_attention_mask) - - ( - padding_len, - input_ids, - attention_mask, - token_type_ids, - position_ids, - inputs_embeds, - ) = self._pad_to_window_size( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - pad_token_id=self.pad_token_id, - ) - - # is index masked or global attention - is_index_masked = tf.math.less(attention_mask, 1) - is_index_global_attn = tf.math.greater(attention_mask, 1) - is_global_attn = tf.math.reduce_any(is_index_global_attn) - - # We create a 3D attention mask from a 2D tensor mask. 
- # Sizes are [batch_size, to_seq_length, 1, 1] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask_shape = shape_list(attention_mask) - extended_attention_mask = tf.reshape(attention_mask, (attention_mask_shape[0], attention_mask_shape[1], 1, 1)) - - # Since attention_mask is 1.0 for positions we want to attend locally and 0.0 for - # masked and global attn positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - extended_attention_mask = tf.cast(tf.math.abs(1 - extended_attention_mask), tf.dtypes.float32) * -10000.0 - embedding_output = self.embeddings( - input_ids, - position_ids, - token_type_ids, - inputs_embeds, - training=training, - ) - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - padding_len=padding_len, - is_index_masked=is_index_masked, - is_index_global_attn=is_index_global_attn, - is_global_attn=is_global_attn, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return ( - sequence_output, - pooled_output, - ) + encoder_outputs[1:] - - return TFLongformerBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - global_attentions=encoder_outputs.global_attentions, - ) - - def _pad_to_window_size( - self, - input_ids, - attention_mask, - token_type_ids, - position_ids, - inputs_embeds, - pad_token_id, - ): - """A helper function to pad tokens and mask to work with implementation of Longformer selfattention.""" - # padding - attention_window = ( - self.attention_window if isinstance(self.attention_window, int) else max(self.attention_window) - ) - - assert attention_window % 2 == 0, f"`attention_window` should be an even value. 
Given {attention_window}" - - input_shape = shape_list(input_ids) if input_ids is not None else shape_list(inputs_embeds) - batch_size, seq_len = input_shape[:2] - padding_len = (attention_window - seq_len % attention_window) % attention_window - - paddings = tf.convert_to_tensor([[0, 0], [0, padding_len]]) - - if input_ids is not None: - input_ids = tf.pad(input_ids, paddings, constant_values=pad_token_id) - - if position_ids is not None: - # pad with position_id = pad_token_id as in modeling_roberta.RobertaEmbeddings - position_ids = tf.pad(position_ids, paddings, constant_values=pad_token_id) - - if inputs_embeds is not None: - if padding_len > 0: - input_ids_padding = tf.cast(tf.fill((batch_size, padding_len), self.pad_token_id), tf.int64) - inputs_embeds_padding = self.embeddings(input_ids_padding) - inputs_embeds = tf.concat([inputs_embeds, inputs_embeds_padding], axis=-2) - - attention_mask = tf.pad(attention_mask, paddings, constant_values=False) # no attention on the padding tokens - token_type_ids = tf.pad(token_type_ids, paddings, constant_values=0) # pad with token_type_id = 0 - - return ( - padding_len, - input_ids, - attention_mask, - token_type_ids, - position_ids, - inputs_embeds, - ) - - @staticmethod - def _merge_to_attention_mask(attention_mask: tf.Tensor, global_attention_mask: tf.Tensor): - # longformer self attention expects attention mask to have 0 (no attn), 1 (local attn), 2 (global attn) - # (global_attention_mask + 1) => 1 for local attention, 2 for global attention - # => final attention_mask => 0 for no attention, 1 for local attention 2 for global attention - if attention_mask is not None: - attention_mask = attention_mask * (global_attention_mask + 1) - else: - # simply use `global_attention_mask` as `attention_mask` - # if no `attention_mask` is given - attention_mask = global_attention_mask + 1 - - return attention_mask - - -class TFLongformerPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = LongformerConfig - base_model_prefix = "longformer" - - @property - def input_signature(self): - sig = super().input_signature - sig["global_attention_mask"] = tf.TensorSpec((None, None), tf.int32, name="global_attention_mask") - return sig - - -LONGFORMER_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! 
If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Parameters: - config ([`LongformerConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - - -LONGFORMER_INPUTS_DOCSTRING = r""" - Args: - input_ids (`np.ndarray` or `tf.Tensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - head_mask (`np.ndarray` or `tf.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - global_attention_mask (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to decide the attention given on each token, local attention or global attention. Tokens with global - attention attends to all other tokens, and all other tokens attend to them. This is important for - task-specific finetuning because it makes the model more flexible at representing the task. For example, - for classification, the token should be given global attention. For QA, all question tokens should also - have global attention. Please refer to the [Longformer paper](https://arxiv.org/abs/2004.05150) for more - details. Mask values selected in `[0, 1]`: - - - 0 for local attention (a sliding window attention), - - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them). - - token_type_ids (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. 
- - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - inputs_embeds (`np.ndarray` or `tf.Tensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False`): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - - -@add_start_docstrings( - "The bare Longformer Model outputting raw hidden-states without any specific head on top.", - LONGFORMER_START_DOCSTRING, -) -class TFLongformerModel(TFLongformerPreTrainedModel): - """ - - This class copies code from [`TFRobertaModel`] and overwrites standard self-attention with longformer - self-attention to provide the ability to process long sequences following the self-attention approach described in - [Longformer: the Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, and - Arman Cohan. Longformer self-attention combines a local (sliding window) and global attention to extend to long - documents without the O(n^2) increase in memory and compute. - - The self-attention module `TFLongformerSelfAttention` implemented here supports the combination of local and global - attention but it lacks support for autoregressive attention and dilated attention. Autoregressive and dilated - attention are more relevant for autoregressive language modeling than finetuning on downstream tasks. Future - release will add support for autoregressive attention, but the support for dilated attention requires a custom CUDA - kernel to be memory and compute efficient. 
- - """ - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.longformer = TFLongformerMainLayer(config, name="longformer") - - @unpack_inputs - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - global_attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[TFLongformerBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - outputs = self.longformer( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - global_attention_mask=global_attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - -@add_start_docstrings( - """Longformer Model with a `language modeling` head on top.""", - LONGFORMER_START_DOCSTRING, -) -class TFLongformerForMaskedLM(TFLongformerPreTrainedModel, TFMaskedLanguageModelingLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.longformer = TFLongformerMainLayer(config, add_pooling_layer=False, name="longformer") - self.lm_head = TFLongformerLMHead(config, self.longformer.embeddings, name="lm_head") - - def get_lm_head(self): - return self.lm_head - - def get_prefix_bias_name(self): - warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) - return self.name + "/" + self.lm_head.name - - @unpack_inputs - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="allenai/longformer-base-4096", - output_type=TFLongformerMaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - mask="", - expected_output="' Paris'", - expected_loss=0.44, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - global_attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFLongformerMaskedLMOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. 
Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - """ - - outputs = self.longformer( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - global_attention_mask=global_attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - prediction_scores = self.lm_head(sequence_output, training=training) - loss = None if labels is None else self.hf_compute_loss(labels, prediction_scores) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - - return ((loss,) + output) if loss is not None else output - - return TFLongformerMaskedLMOutput( - loss=loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -@add_start_docstrings( - """ - Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD / - TriviaQA (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - LONGFORMER_START_DOCSTRING, -) -class TFLongformerForQuestionAnswering(TFLongformerPreTrainedModel, TFQuestionAnsweringLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - self.longformer = TFLongformerMainLayer(config, add_pooling_layer=False, name="longformer") - self.qa_outputs = tf.keras.layers.Dense( - config.num_labels, - kernel_initializer=get_initializer(config.initializer_range), - name="qa_outputs", - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="allenai/longformer-large-4096-finetuned-triviaqa", - output_type=TFLongformerQuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="' puppet'", - expected_loss=0.96, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - global_attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - start_positions: np.ndarray | tf.Tensor | None = None, - end_positions: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFLongformerQuestionAnsweringModelOutput, Tuple[tf.Tensor]]: - r""" - start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (*sequence_length*). 
Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (*sequence_length*). Position outside of the sequence - are not taken into account for computing the loss. - """ - - if input_ids is not None and not isinstance(input_ids, tf.Tensor): - input_ids = tf.convert_to_tensor(input_ids, dtype=tf.int64) - elif input_ids is not None: - input_ids = tf.cast(input_ids, tf.int64) - - if attention_mask is not None and not isinstance(attention_mask, tf.Tensor): - attention_mask = tf.convert_to_tensor(attention_mask, dtype=tf.int64) - elif attention_mask is not None: - attention_mask = tf.cast(attention_mask, tf.int64) - - if global_attention_mask is not None and not isinstance(global_attention_mask, tf.Tensor): - global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64) - elif global_attention_mask is not None: - global_attention_mask = tf.cast(global_attention_mask, tf.int64) - - # set global attention on question tokens - if global_attention_mask is None and input_ids is not None: - if shape_list(tf.where(input_ids == self.config.sep_token_id))[0] != 3 * shape_list(input_ids)[0]: - logger.warning( - f"There should be exactly three separator tokens: {self.config.sep_token_id} in every sample for" - " questions answering. You might also consider to set `global_attention_mask` manually in the" - " forward function to avoid this. This is most likely an error. The global attention is disabled" - " for this forward pass." - ) - global_attention_mask = tf.cast(tf.fill(shape_list(input_ids), value=0), tf.int64) - else: - logger.info("Initializing global attention on question tokens...") - # put global attention on all tokens until `config.sep_token_id` is reached - sep_token_indices = tf.where(input_ids == self.config.sep_token_id) - sep_token_indices = tf.cast(sep_token_indices, dtype=tf.int64) - global_attention_mask = _compute_global_attention_mask(shape_list(input_ids), sep_token_indices) - - outputs = self.longformer( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - global_attention_mask=global_attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = tf.split(logits, 2, axis=-1) - start_logits = tf.squeeze(start_logits, axis=-1) - end_logits = tf.squeeze(end_logits, axis=-1) - loss = None - - if start_positions is not None and end_positions is not None: - labels = {"start_position": start_positions} - labels["end_position"] = end_positions - loss = self.hf_compute_loss(labels, (start_logits, end_logits)) - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - - return ((loss,) + output) if loss is not None else output - - return TFLongformerQuestionAnsweringModelOutput( - loss=loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -class TFLongformerClassificationHead(tf.keras.layers.Layer): - """Head for sentence-level classification tasks.""" - - def 
__init__(self, config, **kwargs): - super().__init__(**kwargs) - self.dense = tf.keras.layers.Dense( - config.hidden_size, - kernel_initializer=get_initializer(config.initializer_range), - activation="tanh", - name="dense", - ) - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - self.out_proj = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="out_proj" - ) - - def call(self, hidden_states, training=False): - hidden_states = hidden_states[:, 0, :] # take token (equiv. to [CLS]) - hidden_states = self.dropout(hidden_states, training=training) - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, training=training) - output = self.out_proj(hidden_states) - return output - - -@add_start_docstrings( - """ - Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the - pooled output) e.g. for GLUE tasks. - """, - LONGFORMER_START_DOCSTRING, -) -class TFLongformerForSequenceClassification(TFLongformerPreTrainedModel, TFSequenceClassificationLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - - self.longformer = TFLongformerMainLayer(config, add_pooling_layer=False, name="longformer") - self.classifier = TFLongformerClassificationHead(config, name="classifier") - - @unpack_inputs - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFLongformerSequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - global_attention_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFLongformerSequenceClassifierOutput, Tuple[tf.Tensor]]: - if input_ids is not None and not isinstance(input_ids, tf.Tensor): - input_ids = tf.convert_to_tensor(input_ids, dtype=tf.int64) - elif input_ids is not None: - input_ids = tf.cast(input_ids, tf.int64) - - if attention_mask is not None and not isinstance(attention_mask, tf.Tensor): - attention_mask = tf.convert_to_tensor(attention_mask, dtype=tf.int64) - elif attention_mask is not None: - attention_mask = tf.cast(attention_mask, tf.int64) - - if global_attention_mask is not None and not isinstance(global_attention_mask, tf.Tensor): - global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64) - elif global_attention_mask is not None: - global_attention_mask = tf.cast(global_attention_mask, tf.int64) - - if global_attention_mask is None and input_ids is not None: - logger.info("Initializing global attention on CLS token...") - # global attention on cls token - global_attention_mask = tf.zeros_like(input_ids) - updates = 
tf.ones(shape_list(input_ids)[0], dtype=tf.int64) - indices = tf.pad( - tensor=tf.expand_dims(tf.range(shape_list(input_ids)[0], dtype=tf.int64), axis=1), - paddings=[[0, 0], [0, 1]], - constant_values=0, - ) - global_attention_mask = tf.tensor_scatter_nd_update( - global_attention_mask, - indices, - updates, - ) - - outputs = self.longformer( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - global_attention_mask=global_attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - logits = self.classifier(sequence_output) - - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFLongformerSequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -@add_start_docstrings( - """ - Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and - a softmax) e.g. for RocStories/SWAG tasks. - """, - LONGFORMER_START_DOCSTRING, -) -class TFLongformerForMultipleChoice(TFLongformerPreTrainedModel, TFMultipleChoiceLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.longformer = TFLongformerMainLayer(config, name="longformer") - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - self.classifier = tf.keras.layers.Dense( - 1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @property - def input_signature(self): - return { - "input_ids": tf.TensorSpec((None, None, None), tf.int32, name="input_ids"), - "attention_mask": tf.TensorSpec((None, None, None), tf.int32, name="attention_mask"), - "global_attention_mask": tf.TensorSpec((None, None, None), tf.int32, name="global_attention_mask"), - } - - @unpack_inputs - @add_start_docstrings_to_model_forward( - LONGFORMER_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length") - ) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFLongformerMultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - global_attention_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFLongformerMultipleChoiceModelOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. 
Indices should be in `[0, ..., num_choices]` - where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above) - """ - - if input_ids is not None: - num_choices = shape_list(input_ids)[1] - seq_length = shape_list(input_ids)[2] - else: - num_choices = shape_list(inputs_embeds)[1] - seq_length = shape_list(inputs_embeds)[2] - - flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None - flat_attention_mask = tf.reshape(attention_mask, (-1, seq_length)) if attention_mask is not None else None - flat_token_type_ids = tf.reshape(token_type_ids, (-1, seq_length)) if token_type_ids is not None else None - flat_position_ids = tf.reshape(position_ids, (-1, seq_length)) if position_ids is not None else None - flat_global_attention_mask = ( - tf.reshape(global_attention_mask, (-1, shape_list(global_attention_mask)[-1])) - if global_attention_mask is not None - else None - ) - flat_inputs_embeds = ( - tf.reshape(inputs_embeds, (-1, seq_length, shape_list(inputs_embeds)[3])) - if inputs_embeds is not None - else None - ) - - outputs = self.longformer( - flat_input_ids, - position_ids=flat_position_ids, - token_type_ids=flat_token_type_ids, - attention_mask=flat_attention_mask, - head_mask=head_mask, - global_attention_mask=flat_global_attention_mask, - inputs_embeds=flat_inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - reshaped_logits = tf.reshape(logits, (-1, num_choices)) - - loss = None if labels is None else self.hf_compute_loss(labels, reshaped_logits) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFLongformerMultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -@add_start_docstrings( - """ - Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. - for Named-Entity-Recognition (NER) tasks. - """, - LONGFORMER_START_DOCSTRING, -) -class TFLongformerForTokenClassification(TFLongformerPreTrainedModel, TFTokenClassificationLoss): - # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - self.longformer = TFLongformerMainLayer(config=config, add_pooling_layer=False, name="longformer") - self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) - self.classifier = tf.keras.layers.Dense( - config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFLongformerTokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - global_attention_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[Union[np.array, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFLongformerTokenClassifierOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. - """ - - outputs = self.longformer( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - global_attention_mask=global_attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - loss = None if labels is None else self.hf_compute_loss(labels, logits) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFLongformerTokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) diff --git a/spaces/ykilcher/apes/torch_utils/training_stats.py b/spaces/ykilcher/apes/torch_utils/training_stats.py deleted file mode 100644 index 26f467f9eaa074ee13de1cf2625cd7da44880847..0000000000000000000000000000000000000000 --- a/spaces/ykilcher/apes/torch_utils/training_stats.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. 
- """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. 
- """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. - return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/fpn.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/fpn.py deleted file mode 100644 index d0bdfc9da8cb7afc9ef421baef2c173a63ff1743..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/fpn.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import math -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import Conv2d, ShapeSpec, get_norm - -from .backbone import Backbone -from .build import BACKBONE_REGISTRY -from .resnet import build_resnet_backbone - -__all__ = ["build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"] - - -class FPN(Backbone): - """ - This module implements :paper:`FPN`. - It creates pyramid features built on top of some input feature maps. - """ - - _fuse_type: torch.jit.Final[str] - - def __init__( - self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. - out_channels (int): number of channels in the output feature maps. - norm (str): the normalization to use. - top_block (nn.Module or None): if provided, an extra operation will - be performed on the output of the last (smallest resolution) - FPN output, and the result will extend the result list. The top_block - further downsamples the feature map. It must have an attribute - "num_levels", meaning the number of extra FPN levels added by - this block, and "in_feature", which is a string representing - its input feature (e.g., p5). - fuse_type (str): types for fusing the top down features and the lateral - ones. It can be "sum" (default), which sums up element-wise; or "avg", - which takes the element-wise mean of the two. - """ - super(FPN, self).__init__() - assert isinstance(bottom_up, Backbone) - assert in_features, in_features - - # Feature map strides and channels from the bottom up network (e.g. ResNet) - input_shapes = bottom_up.output_shape() - strides = [input_shapes[f].stride for f in in_features] - in_channels_per_feature = [input_shapes[f].channels for f in in_features] - - _assert_strides_are_log2_contiguous(strides) - lateral_convs = [] - output_convs = [] - - use_bias = norm == "" - for idx, in_channels in enumerate(in_channels_per_feature): - lateral_norm = get_norm(norm, out_channels) - output_norm = get_norm(norm, out_channels) - - lateral_conv = Conv2d( - in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm - ) - output_conv = Conv2d( - out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - ) - weight_init.c2_xavier_fill(lateral_conv) - weight_init.c2_xavier_fill(output_conv) - stage = int(math.log2(strides[idx])) - self.add_module("fpn_lateral{}".format(stage), lateral_conv) - self.add_module("fpn_output{}".format(stage), output_conv) - - lateral_convs.append(lateral_conv) - output_convs.append(output_conv) - # Place convs into top-down order (from low to high resolution) - # to make the top-down computation in forward clearer. 
- self.lateral_convs = lateral_convs[::-1] - self.output_convs = output_convs[::-1] - self.top_block = top_block - self.in_features = tuple(in_features) - self.bottom_up = bottom_up - # Return feature names are "p", like ["p2", "p3", ..., "p6"] - self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides} - # top block output feature maps. - if self.top_block is not None: - for s in range(stage, stage + self.top_block.num_levels): - self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1) - - self._out_features = list(self._out_feature_strides.keys()) - self._out_feature_channels = {k: out_channels for k in self._out_features} - self._size_divisibility = strides[-1] - assert fuse_type in {"avg", "sum"} - self._fuse_type = fuse_type - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - """ - Args: - input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to - feature map tensor for each feature level in high to low resolution order. - - Returns: - dict[str->Tensor]: - mapping from feature map name to FPN feature map tensor - in high to low resolution order. Returned feature names follow the FPN - paper convention: "p", where stage has stride = 2 ** stage e.g., - ["p2", "p3", ..., "p6"]. - """ - bottom_up_features = self.bottom_up(x) - results = [] - prev_features = self.lateral_convs[0](bottom_up_features[self.in_features[-1]]) - results.append(self.output_convs[0](prev_features)) - - # Reverse feature maps into top-down order (from low to high resolution) - for idx, (lateral_conv, output_conv) in enumerate( - zip(self.lateral_convs, self.output_convs) - ): - # Slicing of ModuleList is not supported https://github.com/pytorch/pytorch/issues/47336 - # Therefore we loop over all modules but skip the first one - if idx > 0: - features = self.in_features[-idx - 1] - features = bottom_up_features[features] - top_down_features = F.interpolate(prev_features, scale_factor=2.0, mode="nearest") - lateral_features = lateral_conv(features) - prev_features = lateral_features + top_down_features - if self._fuse_type == "avg": - prev_features /= 2 - results.insert(0, output_conv(prev_features)) - - if self.top_block is not None: - if self.top_block.in_feature in bottom_up_features: - top_block_in_feature = bottom_up_features[self.top_block.in_feature] - else: - top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)] - results.extend(self.top_block(top_block_in_feature)) - assert len(self._out_features) == len(results) - return {f: res for f, res in zip(self._out_features, results)} - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - -def _assert_strides_are_log2_contiguous(strides): - """ - Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". - """ - for i, stride in enumerate(strides[1:], 1): - assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format( - stride, strides[i - 1] - ) - - -class LastLevelMaxPool(nn.Module): - """ - This module is used in the original FPN to generate a downsampled - P6 feature from P5. 
- """ - - def __init__(self): - super().__init__() - self.num_levels = 1 - self.in_feature = "p5" - - def forward(self, x): - return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)] - - -class LastLevelP6P7(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7 from - C5 feature. - """ - - def __init__(self, in_channels, out_channels, in_feature="res5"): - super().__init__() - self.num_levels = 2 - self.in_feature = in_feature - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -@BACKBONE_REGISTRY.register() -def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelMaxPool(), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - in_channels_p6p7 = bottom_up.output_shape()["res5"].channels - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7(in_channels_p6p7, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/structures/test_imagelist.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/structures/test_imagelist.py deleted file mode 100644 index e446e44a37f5d8f9a68362e4b93a291d314d5d68..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/structures/test_imagelist.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import unittest -from typing import List, Sequence, Tuple -import torch - -from detectron2.structures import ImageList - - -class TestImageList(unittest.TestCase): - def test_imagelist_padding_tracing(self): - # test that the trace does not contain hard-coded constant sizes - def to_imagelist(tensors: Sequence[torch.Tensor]): - image_list = ImageList.from_tensors(tensors, 4) - return image_list.tensor, image_list.image_sizes - - def _tensor(*shape): - return torch.ones(shape, dtype=torch.float32) - - # test CHW (inputs needs padding vs. 
no padding) - for shape in [(3, 10, 10), (3, 12, 12)]: - func = torch.jit.trace(to_imagelist, ([_tensor(*shape)],)) - tensor, image_sizes = func([_tensor(3, 15, 20)]) - self.assertEqual(tensor.shape, (1, 3, 16, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0]) - - # test HW - func = torch.jit.trace(to_imagelist, ([_tensor(10, 10)],)) - tensor, image_sizes = func([_tensor(15, 20)]) - self.assertEqual(tensor.shape, (1, 16, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0]) - - # test 2x CHW - func = torch.jit.trace( - to_imagelist, - ([_tensor(3, 16, 10), _tensor(3, 13, 11)],), - ) - tensor, image_sizes = func([_tensor(3, 25, 20), _tensor(3, 10, 10)]) - self.assertEqual(tensor.shape, (2, 3, 28, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [25, 20], image_sizes[0]) - self.assertEqual(image_sizes[1].tolist(), [10, 10], image_sizes[1]) - # support calling with different spatial sizes, but not with different #images - - def test_imagelist_scriptability(self): - image_nums = 2 - image_tensor = torch.randn((image_nums, 10, 20), dtype=torch.float32) - image_shape = [(10, 20)] * image_nums - - def f(image_tensor, image_shape: List[Tuple[int, int]]): - return ImageList(image_tensor, image_shape) - - ret = f(image_tensor, image_shape) - ret_script = torch.jit.script(f)(image_tensor, image_shape) - - self.assertEqual(len(ret), len(ret_script)) - for i in range(image_nums): - self.assertTrue(torch.equal(ret[i], ret_script[i])) - - def test_imagelist_from_tensors_scriptability(self): - image_tensor_0 = torch.randn(10, 20, dtype=torch.float32) - image_tensor_1 = torch.randn(12, 22, dtype=torch.float32) - inputs = [image_tensor_0, image_tensor_1] - - def f(image_tensor: List[torch.Tensor]): - return ImageList.from_tensors(image_tensor, 10) - - ret = f(inputs) - ret_script = torch.jit.script(f)(inputs) - - self.assertEqual(len(ret), len(ret_script)) - self.assertTrue(torch.equal(ret.tensor, ret_script.tensor)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/younker/chatgpt-turbo/answer_question_turbo_good.py b/spaces/younker/chatgpt-turbo/answer_question_turbo_good.py deleted file mode 100644 index 988ad9979ef243edbeba602d7a14950a9b29b8a3..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/answer_question_turbo_good.py +++ /dev/null @@ -1,69 +0,0 @@ -from utils import get_embedding -from flask import jsonify -from config import * -from flask import current_app - -import openai - -from config import * - -TOP_K = 10 - - -def get_answer_from_files(question, session_id, pinecone_index): - logging.info(f"Getting answer for question: {question}") - - search_query_embedding = get_embedding(question, EMBEDDINGS_MODEL) - - try: - query_response = pinecone_index.query( - namespace=session_id, - top_k=TOP_K, - include_values=False, - include_metadata=True, - vector=search_query_embedding, - ) - logging.info( - f"[get_answer_from_files] received query response from Pinecone: {query_response}") - - files_string = "" - file_text_dict = current_app.config["file_text_dict"] - - for i in range(len(query_response.matches)): - result = query_response.matches[i] - file_chunk_id = result.id - score = result.score - filename = result.metadata["filename"] - file_text = file_text_dict.get(file_chunk_id) - file_string = f"###\n\"{filename}\"\n{file_text}\n" - if score < COSINE_SIM_THRESHOLD and i > 0: - logging.info( - f"[get_answer_from_files] score {score} is below threshold 
{COSINE_SIM_THRESHOLD} and i is {i}, breaking") - break - files_string += file_string - - my_messages = [ - {"role": "system", "content": "Given a question, try to answer it using the content of the file extracts below, and if you cannot answer, or find a relevant file, just output 'I could not find the answer to that question in your files.' If the answer is not contained in the files or if there are no file extracts, respond with \"I couldn't find the answer to that question in your files.\" If the question is not actually a question, respond with \"That is not a valid question.\" In the cases where you can find the answer, first give the answer. Then explain how you found the answer from the source or sources, and use the exact filenames of the source files you mention. Do not make up the names of any other files other than those mentioned in the files context. Give the answer in markdown format. Use the following format:\n\nQuestion: \n\nFiles:\n<###\n\"filename 1\"\nfile text>\n<###\n\"filename 2\"\nfile text>...\n\n Answer: \n\n"}, - {"role": "user", "content": question}, - {"role": "assistant", "content": files_string}, - ] - logging.info(f"[get_answer_from_files] prompt: {my_messages}") - - response = openai.ChatCompletion.create( - model=GENERATIVE_MODEL, - messages=my_messages, - temperature=1, - max_tokens=1000, - top_p=1, - frequency_penalty=0, - presence_penalty=0 - ) - - answer = response['choices'][0]['message']['content'] - logging.info(f"[get_answer_from_files] answer: {answer}") - - return jsonify({"answer": answer}) - - except Exception as e: - logging.info(f"[get_answer_from_files] error: {e}") - return str(e) diff --git a/spaces/ysharma/Low-rank-Adaptation/app.py b/spaces/ysharma/Low-rank-Adaptation/app.py deleted file mode 100644 index ba7e03c207f6df9f1a336cd88d60c27d1c6fab73..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Low-rank-Adaptation/app.py +++ /dev/null @@ -1,121 +0,0 @@ -from diffusers import StableDiffusionPipeline -from lora_diffusion import monkeypatch_lora, tune_lora_scale -import torch -import os, shutil -import gradio as gr -import subprocess - -MODEL_NAME="stabilityai/stable-diffusion-2-1-base" -INSTANCE_DIR="./data_example" -OUTPUT_DIR="./output_example" - -model_id = "stabilityai/stable-diffusion-2-1-base" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") -#prompt = "style of sks, baby lion" -torch.manual_seed(1) -#image = pipe(prompt, num_inference_steps=50, guidance_scale= 7).images[0] #no need -#image # nice. diffusers are cool. 
#no need -#finetuned_lora_weights = "./lora_weight.pt" - -#global var -counter = 0 - -#Getting Lora fine-tuned weights -def monkeypatching(unet_alpha,texten_alpha, in_prompt, wts): #, prompt, pipe): finetuned_lora_weights - print("****** inside monkeypatching *******") - print(f"in_prompt is - {str(in_prompt)}") - global counter - #if model == 'Text-encoder': - unet_wt = wts[-2] - #else: - texten_wt = wts[-1] - print(f"UNET weight is = {unet_wt}, Text-encoder weight is = {texten_wt}") - if counter == 0 : - #if wt == "./lora_playgroundai_wt.pt" : - monkeypatch_lora(pipe.unet, torch.load(unet_wt)) #finetuned_lora_weights - monkeypatch_lora(pipe.text_encoder, torch.load(texten_wt), target_replace_module=["CLIPAttention"]) #text-encoder #"./lora/lora_kiriko.text_encoder.pt" - #tune_lora_scale(pipe.unet, alpha) #1.00) - tune_lora_scale(pipe.unet, unet_alpha) - tune_lora_scale(pipe.text_encoder, texten_alpha) - counter +=1 - #else: - #monkeypatch_lora(pipe.unet, torch.load("./output_example/lora_weight.pt")) #finetuned_lora_weights - #tune_lora_scale(pipe.unet, alpha) #1.00) - #counter +=1 - else : - tune_lora_scale(pipe.unet, unet_alpha) - tune_lora_scale(pipe.text_encoder, texten_alpha) - #tune_lora_scale(pipe.unet, alpha) #1.00) - prompt = str(in_prompt) #"style of hclu, " + str(in_prompt) #"baby lion" - image = pipe(prompt, num_inference_steps=50, guidance_scale=7).images[0] - image.save("./illust_lora.jpg") #"./contents/illust_lora.jpg") - return image - -#{in_prompt} line68 --pl ignore -def accelerate_train_lora(steps, images, in_prompt): - print("*********** inside accelerate_train_lora ***********") - print(f"images are -- {images}") - # path can be retrieved by file_obj.name and original filename can be retrieved with file_obj.orig_name - for file in images: - print(f"file passed -- {file.name}") - os.makedirs(INSTANCE_DIR, exist_ok=True) - shutil.copy( file.name, INSTANCE_DIR) #/{file.orig_name} - #subprocess.Popen(f'accelerate launch {"./train_lora_dreambooth.py"} \ - os.system( f"accelerate launch {'./train_lora_dreambooth.py'} --pretrained_model_name_or_path={MODEL_NAME} --instance_data_dir={INSTANCE_DIR} --output_dir={OUTPUT_DIR} --instance_prompt='{in_prompt}' --train_text_encoder --resolution=512 --train_batch_size=1 --gradient_accumulation_steps=1 --learning_rate='1e-4' --learning_rate_text='5e-5' --color_jitter --lr_scheduler='constant' --lr_warmup_steps=0 --max_train_steps={int(steps)}") #10000 - print("*********** completing accelerate_train_lora ***********") - print(f"files in output_dir -- {os.listdir(OUTPUT_DIR)}") - #lora_trained_weights = "./output_example/lora_weight.pt" - files = os.listdir(OUTPUT_DIR) - file_list = [] - for file in files: #os.listdir(OUTPUT_DIR): - if file.endswith(".pt"): - print("weight files are -- ",os.path.join(f"{OUTPUT_DIR}", file)) - file_list.append(os.path.join(f"{OUTPUT_DIR}", file)) - return file_list #files[1:] - #return f"{OUTPUT_DIR}/*.pt" - -with gr.Blocks() as demo: - gr.Markdown("""

                  LORA (Low-rank Adaptation) for Faster Text-to-Image Diffusion Fine-tuning (UNET+CLIP)

                  - """) - gr.HTML("

                  You can skip the queue by duplicating this space and upgrading to GPU in settings: Duplicate Space

                  ") - #gr.Markdown("""NEW!! : I have fine-tuned the SD model for 15,000 steps using 100 PlaygroundAI images and LORA. You can load this trained model using the example component. Load the weight and start using the Space with the Inference button. Feel free to toggle the Alpha value.""") - gr.Markdown( - """**Main Features**
                  - Fine-tune Stable Diffusion models twice as fast as the Dreambooth method, using Low-rank Adaptation.
                  - Get insanely small end results, easy to share and download.
                  - Easy to use, compatible with diffusers.
                  - Sometimes even better performance than full fine-tuning

                  Please refer to the GitHub repo this Space is based on, here - LORA. You can also refer to this tweet by AK to quote/retweet/like here on Twitter. This Gradio Space is an attempt to explore this novel LORA approach to fine-tuning Stable Diffusion models, using the power and flexibility of Gradio! A higher number of steps results in a longer training time and a better fine-tuned SD model.

                  To use this Space well:
                  - First, upload your set of images (4-9 images are suggested), enter the prompt, enter the number of fine-tuning steps (a value between 2000 and 4000 is suggested), and then press the 'Train LORA model' button. This will produce your fine-tuned model weights.
                  - Modify the previous prompt by adding a suffix to it, set the alpha value using the slider (values nearer to 1 imply stronger overfitting to the uploaded images), and then press the 'Inference' button. This will produce an image using the newly fine-tuned UNet and Text-Encoder LORA models (a minimal sketch of this weight-loading step is shown right after this list).
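The inference step described above boils down to patching the pipeline with the trained low-rank weights and blending them in with an alpha scale. The sketch below mirrors the `lora_diffusion` calls this Space makes (`monkeypatch_lora`, `tune_lora_scale`); the weight file names are placeholder assumptions, not the Space's actual output paths.

```python
# Minimal sketch, not the Space's exact code: load LoRA weights produced by training
# and blend them into a Stable Diffusion pipeline with an alpha scale.
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import monkeypatch_lora, tune_lora_scale

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Patch the UNet and the text encoder with the fine-tuned low-rank weights.
monkeypatch_lora(pipe.unet, torch.load("lora_weight.pt"))  # hypothetical path
monkeypatch_lora(
    pipe.text_encoder,
    torch.load("lora_weight.text_encoder.pt"),  # hypothetical path
    target_replace_module=["CLIPAttention"],
)

# Alpha near 1.0 applies the LoRA fully (risking overfitting to the uploaded images);
# lower values blend it more weakly with the base model.
tune_lora_scale(pipe.unet, 0.5)
tune_lora_scale(pipe.text_encoder, 0.5)

image = pipe("your prompt with the fine-tuned concept",
             num_inference_steps=50, guidance_scale=7).images[0]
image.save("lora_sample.jpg")
```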
                  Bonus:You can download your fine-tuned model weights from the Gradio file component. The smaller size of LORA models (around 3-4 MB files) is the main highlight of this 'Low-rank Adaptation' approach of fine-tuning.""") - - with gr.Row(): - in_images = gr.File(label="Upload images to fine-tune for LORA", file_count="multiple") - with gr.Column(): - b1 = gr.Button(value="Train LORA model") - in_prompt = gr.Textbox(label="Enter a prompt for fine-tuned LORA model", visible=True) - b2 = gr.Button(value="Inference using LORA model") - - with gr.Row(): - out_image = gr.Image(label="Image generated by LORA model") - with gr.Column(): - with gr.Accordion("Advance settings for Training and Inference", open=False): - gr.Markdown("Advance settings for a number of Training Steps and Alpha. Set alpha to 1.0 to fully add LORA. If the LORA seems to have too much effect (i.e., overfitting), set alpha to a lower value. If the LORA seems to have too little effect, set the alpha higher. You can tune these two values to your needs.") - in_steps = gr.Number(label="Enter the number of training steps", value = 2000) - in_alpha_unet = gr.Slider(0.1,1.0, step=0.01, label="Set UNET Alpha level", value=0.5) - in_alpha_texten = gr.Slider(0.1,1.0, step=0.01, label="Set Text-Encoder Alpha level", value=0.5) - #in_model = gr.Radio(["Text-encoder", "Unet"], label="Select the fine-tuned model for inference", value="Text-encoder", type="value") - out_file = gr.File(label="Lora trained model weights", file_count='multiple' ) - - #gr.Examples( - # examples=[[0.65, 0.6, "lion", ["./lora_playgroundai_wt.pt","./lora_playgroundai_wt.pt"], ],], - # inputs=[in_alpha_unet, in_alpha_texten, in_prompt, out_file ], - # outputs=out_image, - # fn=monkeypatching, - # cache_examples=True,) - #gr.Examples( - # examples=[[2500, ['./simba1.jpg', './simba2.jpg', './simba3.jpg', './simba4.jpg'], "baby lion in disney style"]], - # inputs=[in_steps, in_images, in_prompt], - # outputs=out_file, - # fn=accelerate_train_lora, - # cache_examples=False, - # run_on_click=False) - - b1.click(fn = accelerate_train_lora, inputs=[in_steps, in_images, in_prompt] , outputs=out_file) - b2.click(fn = monkeypatching, inputs=[in_alpha_unet, in_alpha_texten, in_prompt, out_file,], outputs=out_image) - -demo.queue(concurrency_count=3) -demo.launch(debug=True, show_error=True,) \ No newline at end of file diff --git a/spaces/ysharma/Zero123PlusDemo/README.md b/spaces/ysharma/Zero123PlusDemo/README.md deleted file mode 100644 index f06a8e2181d5c13e4b8f0a84732259de185ac0dc..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Zero123PlusDemo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Zero123PlusDemo -emoji: 🌖 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysharma/test_speech_to_text/app.py b/spaces/ysharma/test_speech_to_text/app.py deleted file mode 100644 index cee473396931af19fb6d161fb12881b8f337848f..0000000000000000000000000000000000000000 --- a/spaces/ysharma/test_speech_to_text/app.py +++ /dev/null @@ -1,195 +0,0 @@ -import os -import gradio as gr -import whisper -import requests -import tempfile -from neon_tts_plugin_coqui import CoquiTTS - -# Language common in all three multilingual models - English, Chinese, Spanish, and French -# So it would make sense to test the App on these four prominently - -# Whisper: Speech-to-text 
-model = whisper.load_model("base") -model_med = whisper.load_model("medium") -# Languages covered in Whisper - (exhaustive list) : -#"en": "english", "zh": "chinese", "de": "german", "es": "spanish", "ru": "russian", -#"ko": "korean", "fr": "french", "ja": "japanese", "pt": "portuguese", "tr": "turkish", -#"pl": "polish", "ca": "catalan", "nl": "dutch", "ar": "arabic", "sv": "swedish", -#"it": "italian", "id": "indonesian", "hi": "hindi", "fi": "finnish", "vi": "vietnamese", -#"iw": "hebrew", "uk": "ukrainian", "el": "greek", "ms": "malay", "cs": "czech", -#"ro": "romanian", "da": "danish", "hu": "hungarian", "ta": "tamil", "no": "norwegian", -#"th": "thai", "ur": "urdu", "hr": "croatian", "bg": "bulgarian", "lt": "lithuanian", -#"la": "latin", "mi": "maori", "ml": "malayalam", "cy": "welsh", "sk": "slovak", -#"te": "telugu", "fa": "persian", "lv": "latvian", "bn": "bengali", "sr": "serbian", -#"az": "azerbaijani", "sl": "slovenian", "kn": "kannada", "et": "estonian", -#"mk": "macedonian", "br": "breton", "eu": "basque", "is": "icelandic", "hy": "armenian", -#"ne": "nepali", "mn": "mongolian", "bs": "bosnian", "kk": "kazakh", "sq": "albanian", -#"sw": "swahili", "gl": "galician", "mr": "marathi", "pa": "punjabi", "si": "sinhala", -#"km": "khmer", "sn": "shona", "yo": "yoruba", "so": "somali", "af": "afrikaans", -#"oc": "occitan", "ka": "georgian", "be": "belarusian", "tg": "tajik", "sd": "sindhi", -#"gu": "gujarati", "am": "amharic", "yi": "yiddish", "lo": "lao", "uz": "uzbek", -#"fo": "faroese", "ht": "haitian creole", "ps": "pashto", "tk": "turkmen", "nn": "nynorsk", -#"mt": "maltese", "sa": "sanskrit", "lb": "luxembourgish", "my": "myanmar", "bo": "tibetan", -#"tl": "tagalog", "mg": "malagasy", "as": "assamese", "tt": "tatar", "haw": "hawaiian", -#"ln": "lingala", "ha": "hausa", "ba": "bashkir", "jw": "javanese", "su": "sundanese", - - -# LLM : Bloom as inference -API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom" -HF_TOKEN = os.environ["HF_TOKEN"] -headers = {"Authorization": f"Bearer {HF_TOKEN}"} -# Main Languages covered in Bloom are (not exhaustive list): -# English, Chinese, French, Spanish, Portuguese, Arabic, Hindi, Vietnamese, Indonesian, Bengali, Tamil, Telugu - - -# Text-to-Speech -LANGUAGES = list(CoquiTTS.langs.keys()) -coquiTTS = CoquiTTS() -print(f"Languages for Coqui are: {LANGUAGES}") -#Languages for Coqui are: ['en', 'es', 'fr', 'de', 'pl', 'uk', 'ro', 'hu', 'el', 'bg', 'nl', 'fi', 'sl', 'lv', 'ga'] -# en - Engish, es - Spanish, fr - French, de - German, pl - Polish -# uk - Ukrainian, ro - Romanian, hu - Hungarian, el - Greek, bg - Bulgarian, -# nl - dutch, fi - finnish, sl - slovenian, lv - latvian, ga - ?? 
- - -# Driver function -def driver_fun(audio) : - transcribe, translation, lang = whisper_stt(audio) - #text1 = model.transcribe(audio)["text"] - - #For now only taking in English text for Bloom prompting as inference model is not high spec - text_generated = lang_model_response(transcribe, lang) - text_generated_en = lang_model_response(translation, 'en') - - if lang in ['es', 'fr']: - speech = tts(text_generated, lang) - else: - speech = tts(text_generated_en, 'en') #'en') - return transcribe, translation, text_generated, text_generated_en, speech - - -# Whisper - speech-to-text -def whisper_stt(audio): - print("Inside Whisper TTS") - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - lang = max(probs, key=probs.get) - print(f"Detected language: {max(probs, key=probs.get)}") - - # decode the audio - options_transc = whisper.DecodingOptions(fp16 = False, language=lang, task='transcribe') #lang - options_transl = whisper.DecodingOptions(fp16 = False, language='en', task='translate') #lang - result_transc = whisper.decode(model_med, mel, options_transc) - result_transl = whisper.decode(model_med, mel, options_transl) - - # print the recognized text - print(f"transcript is : {result_transc.text}") - print(f"translation is : {result_transl.text}") - - return result_transc.text, result_transl.text, lang - - -# LLM - Bloom Response -def lang_model_response(prompt, prompt_en, language): - print(f"Inside lang_model_response - Prompt is :{prompt}") - p_en = """Question: How are you doing today? - Answer: I am doing good, thanks. - Question: """ - p_es = """Pregunta: Cómo estás hoy? - Responder: Estoy bien, gracias. - Pregunta: """ - p_fr = """Question: Comment vas-tu aujourd'hui? - Réponse: Je vais bien, merci. - Question: """ - - if len(prompt) == 0 or len(prompt_en) == 0 : - prompt = """Question: Can you help me please? - Answer: Sure, I am here for you. 
- Question: What do you do when you don't get what you want?""" - - #if language == 'en': - prompt = p_en + prompt_en + "\n" + "Answer: " - solution_en = query(prompt, 'en') - solution = solution_en - if language == 'es': - prompt = p_es + prompt + "\n" + "Responder: " - solution = query(prompt, 'es') - elif language == 'fr': - prompt = p_fr + prompt + "\n" + "Réponse: " - solution = query(prompt, 'fr') - - return solution, solution_en - -# Bloom API Request -def query(prompt, language): - json_ = {"inputs": prompt, - "parameters": - { - "top_p": 0.90, #0.90 default - "max_new_tokens": 64, - "temperature": 1.1, #1.1 default - "return_full_text": False, - "do_sample": True, - }, - "options": - {"use_cache": True, - "wait_for_model": True, - },} - response = requests.post(API_URL, headers=headers, json=json_) - #print(f"Response is : {response}") - output = response.json() - output_tmp = output[0]['generated_text'] - print(f"Bloom API Response is : {output_tmp}") - if language == 'en': - solution = output_tmp.split("Answer: ")[2].split("\n")[0] - elif language == 'es': - solution = output_tmp.split("Responder: ")[2].split("\n")[0] - elif language == 'fr': - solution = output_tmp.split("Réponse: ")[2].split("\n")[0] - # solution = output_tmp.split(".")[1] - print(f"Final Bloom Response after splits is: {solution}") - return solution - -# Coqui - Text-to-Speech -def tts(text, text_en, language): - print(f"Inside tts - language is : {language}") - coqui_langs = ['en' ,'es' ,'fr' ,'de' ,'pl' ,'uk' ,'ro' ,'hu' ,'bg' ,'nl' ,'fi' ,'sl' ,'lv' ,'ga'] - if language =='en' or language not in coqui_langs: - language = 'en' - text = text_en - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - coquiTTS.get_tts(text, fp, speaker = {"language" : language}) - return fp.name - -demo = gr.Blocks() -with demo: - gr.Markdown("

                  Talk to Your Multilingual AI Assistant

                  ") - gr.Markdown( - """Model pipeline consisting of -
                  - [**Whisper**](https://github.com/openai/whisper) for Speech-to-text,
                  - [**Bloom**](https://huggingface.co/bigscience/bloom) for Text-generation, and
                  - [**CoquiTTS**](https://huggingface.co/coqui) for Text-To-Speech.

                  The front end is built using the [**Gradio Blocks API**](https://gradio.app/docs/#blocks).
                  All three models are multilingual; however, only three languages overlap among them - Spanish (es), French (fr), and English (en). It is therefore suggested to test with these languages to get the best results from this ML app. If an English voice input is given, both textboxes on the left-hand side will show the same transcript. However, if the input is in Spanish or French, the first textbox will show the transcript in that language, while the second will show its English translation.
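The dual-textbox behaviour described above comes from decoding the same mel spectrogram twice with Whisper: once with task="transcribe" in the detected language and once with task="translate" into English. A minimal sketch of that step, assuming "speech.wav" is a local recording and the "medium" model size is an illustrative choice:

```python
# Minimal sketch of the Whisper step behind the two transcript boxes.
import whisper

model = whisper.load_model("medium")

audio = whisper.pad_or_trim(whisper.load_audio("speech.wav"))  # hypothetical input file
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# Detect the spoken language from the spectrogram.
_, probs = model.detect_language(mel)
lang = max(probs, key=probs.get)

# Decode twice: once in the detected language, once translated to English.
transcript = whisper.decode(
    model, mel, whisper.DecodingOptions(fp16=False, language=lang, task="transcribe"))
translation = whisper.decode(
    model, mel, whisper.DecodingOptions(fp16=False, language="en", task="translate"))

print(lang, transcript.text, translation.text)
```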

                  Note: This is a duplicate Space of [ysharma/Talk_to_Multilingual_AI_WhisperBloomCoqui](https://huggingface.co/spaces/ysharma/Talk_to_Multilingual_AI_WhisperBloomCoqui) and might not be maintained over time. Please refer to the original Space for updated results. - """) - with gr.Row(): - with gr.Column(): - in_audio = gr.Audio(source="microphone", type="filepath", label='Record your voice here') #type='filepath' - b1 = gr.Button("Whisper") #- Bloom - Coqui pipeline - out_transcript = gr.Textbox(label= 'As is Transcript using OpenAI Whisper') - out_translation_en = gr.Textbox(label= 'English Translation of audio using OpenAI Whisper') - out_lang = gr.Textbox(visible=False) - with gr.Column(): - b2 = gr.Button("Bloom") #-- Coqui pipeline - out_generated_text = gr.Textbox(label= 'AI response to your query in your preferred language using Bloom! ') - out_generated_text_en = gr.Textbox(label= 'AI response to your query in English using Bloom! ') - b3 = gr.Button("CoquiTTS") #-- pipeline complets - out_audio = gr.Audio(label='AI response in Audio form in your preferred language') - - b1.click(whisper_stt, inputs=[in_audio], outputs=[out_transcript, out_translation_en, out_lang]) - b2.click(lang_model_response, inputs=[out_transcript, out_translation_en, out_lang], outputs=[out_generated_text,out_generated_text_en]) - b3.click(tts,inputs=[out_generated_text,out_generated_text_en,out_lang], outputs=[out_audio]) - -demo.launch(enable_queue=True, debug=True) \ No newline at end of file diff --git a/spaces/yueranseo/mygpt/modules/config.py b/spaces/yueranseo/mygpt/modules/config.py deleted file mode 100644 index c9224996dd7056508519be8cbe906746f362abb0..0000000000000000000000000000000000000000 --- a/spaces/yueranseo/mygpt/modules/config.py +++ /dev/null @@ -1,190 +0,0 @@ -from collections import defaultdict -from contextlib import contextmanager -import os -import logging -import sys -import commentjson as json - -from . import shared -from . 
import presets - - -__all__ = [ - "my_api_key", - "authflag", - "auth_list", - "dockerflag", - "retrieve_proxy", - "log_level", - "advance_docs", - "update_doc_config", - "usage_limit", - "multi_api_key", - "server_name", - "server_port", - "share", - "hide_history_when_not_logged_in", - "default_chuanhu_assistant_model" -] - -# 添加一个统一的config文件,避免文件过多造成的疑惑(优先级最低) -# 同时,也可以为后续支持自定义功能提供config的帮助 -if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) -else: - config = {} - -lang_config = config.get("language", "auto") -language = os.environ.get("LANGUAGE", lang_config) - -hide_history_when_not_logged_in = config.get("hide_history_when_not_logged_in", False) - -if os.path.exists("api_key.txt"): - logging.info("检测到api_key.txt文件,正在进行迁移...") - with open("api_key.txt", "r", encoding="utf-8") as f: - config["openai_api_key"] = f.read().strip() - os.rename("api_key.txt", "api_key(deprecated).txt") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4, ensure_ascii=False) - -if os.path.exists("auth.json"): - logging.info("检测到auth.json文件,正在进行迁移...") - auth_list = [] - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - config["users"] = auth_list - os.rename("auth.json", "auth(deprecated).json") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4, ensure_ascii=False) - -## 处理docker if we are running in Docker -dockerflag = config.get("dockerflag", False) -if os.environ.get("dockerrun") == "yes": - dockerflag = True - -## 处理 api-key 以及 允许的用户列表 -my_api_key = config.get("openai_api_key", "") -my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key) - -xmchat_api_key = config.get("xmchat_api_key", "") -os.environ["XMCHAT_API_KEY"] = xmchat_api_key - -minimax_api_key = config.get("minimax_api_key", "") -os.environ["MINIMAX_API_KEY"] = minimax_api_key -minimax_group_id = config.get("minimax_group_id", "") -os.environ["MINIMAX_GROUP_ID"] = minimax_group_id - - -usage_limit = os.environ.get("USAGE_LIMIT", config.get("usage_limit", 120)) - -## 多账户机制 -multi_api_key = config.get("multi_api_key", False) # 是否开启多账户机制 -if multi_api_key: - api_key_list = config.get("api_key_list", []) - if len(api_key_list) == 0: - logging.error("多账号模式已开启,但api_key_list为空,请检查config.json") - sys.exit(1) - shared.state.set_api_key_queue(api_key_list) - -auth_list = config.get("users", []) # 实际上是使用者的列表 -authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度 - -# 处理自定义的api_host,优先读环境变量的配置,如果存在则自动装配 -api_host = os.environ.get("OPENAI_API_BASE", config.get("openai_api_base", None)) -if api_host is not None: - shared.state.set_api_host(api_host) - -default_chuanhu_assistant_model = config.get("default_chuanhu_assistant_model", "gpt-3.5-turbo") -for x in ["GOOGLE_CSE_ID", "GOOGLE_API_KEY", "WOLFRAM_ALPHA_APPID", "SERPAPI_API_KEY"]: - if config.get(x, None) is not None: - os.environ[x] = config[x] - -@contextmanager -def retrieve_openai_api(api_key = None): - old_api_key = os.environ.get("OPENAI_API_KEY", "") - if api_key is None: - os.environ["OPENAI_API_KEY"] = my_api_key - yield my_api_key - else: - os.environ["OPENAI_API_KEY"] = api_key - yield api_key - os.environ["OPENAI_API_KEY"] = old_api_key - -## 处理log -log_level = config.get("log_level", "INFO") -logging.basicConfig( - level=log_level, 
- format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -## 处理代理: -http_proxy = config.get("http_proxy", "") -https_proxy = config.get("https_proxy", "") -http_proxy = os.environ.get("HTTP_PROXY", http_proxy) -https_proxy = os.environ.get("HTTPS_PROXY", https_proxy) - -# 重置系统变量,在不需要设置的时候不设置环境变量,以免引起全局代理报错 -os.environ["HTTP_PROXY"] = "" -os.environ["HTTPS_PROXY"] = "" - -local_embedding = config.get("local_embedding", False) # 是否使用本地embedding - -@contextmanager -def retrieve_proxy(proxy=None): - """ - 1, 如果proxy = NONE,设置环境变量,并返回最新设置的代理 - 2,如果proxy != NONE,更新当前的代理配置,但是不更新环境变量 - """ - global http_proxy, https_proxy - if proxy is not None: - http_proxy = proxy - https_proxy = proxy - yield http_proxy, https_proxy - else: - old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] - os.environ["HTTP_PROXY"] = http_proxy - os.environ["HTTPS_PROXY"] = https_proxy - yield http_proxy, https_proxy # return new proxy - - # return old proxy - os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var - - -## 处理advance docs -advance_docs = defaultdict(lambda: defaultdict(dict)) -advance_docs.update(config.get("advance_docs", {})) -def update_doc_config(two_column_pdf): - global advance_docs - advance_docs["pdf"]["two_column"] = two_column_pdf - - logging.info(f"更新后的文件参数为:{advance_docs}") - -## 处理gradio.launch参数 -server_name = config.get("server_name", None) -server_port = config.get("server_port", None) -if server_name is None: - if dockerflag: - server_name = "0.0.0.0" - else: - server_name = "127.0.0.1" -if server_port is None: - if dockerflag: - server_port = 7860 - -assert server_port is None or type(server_port) == int, "要求port设置为int类型" - -# 设置默认model -default_model = config.get("default_model", "") -try: - presets.DEFAULT_MODEL = presets.MODELS.index(default_model) -except ValueError: - pass - -share = config.get("share", False) diff --git a/spaces/yuhanbo/chat-gpt/app/store/prompt.ts b/spaces/yuhanbo/chat-gpt/app/store/prompt.ts deleted file mode 100644 index 97da8ca58bd6b0a967fb19d03a6eb6954e952e70..0000000000000000000000000000000000000000 --- a/spaces/yuhanbo/chat-gpt/app/store/prompt.ts +++ /dev/null @@ -1,117 +0,0 @@ -import { create } from "zustand"; -import { persist } from "zustand/middleware"; -import Fuse from "fuse.js"; - -export interface Prompt { - id?: number; - title: string; - content: string; -} - -export interface PromptStore { - latestId: number; - prompts: Map; - - add: (prompt: Prompt) => number; - remove: (id: number) => void; - search: (text: string) => Prompt[]; -} - -export const PROMPT_KEY = "prompt-store"; - -export const SearchService = { - ready: false, - engine: new Fuse([], { keys: ["title"] }), - count: { - builtin: 0, - }, - - init(prompts: Prompt[]) { - if (this.ready) { - return; - } - this.engine.setCollection(prompts); - this.ready = true; - }, - - remove(id: number) { - this.engine.remove((doc) => doc.id === id); - }, - - add(prompt: Prompt) { - this.engine.add(prompt); - }, - - search(text: string) { - const results = this.engine.search(text); - return results.map((v) => v.item); - }, -}; - -export const usePromptStore = create()( - persist( - (set, get) => ({ - latestId: 0, - prompts: new Map(), - - add(prompt) { - const prompts = get().prompts; - prompt.id = get().latestId + 1; - prompts.set(prompt.id, prompt); - - set(() => ({ - latestId: prompt.id!, - prompts: prompts, - })); - - return prompt.id!; - }, - - remove(id) { - const prompts = get().prompts; - prompts.delete(id); - SearchService.remove(id); - - set(() => ({ - 
prompts, - })); - }, - - search(text) { - return SearchService.search(text) as Prompt[]; - }, - }), - { - name: PROMPT_KEY, - version: 1, - onRehydrateStorage(state) { - const PROMPT_URL = "./prompts.json"; - - type PromptList = Array<[string, string]>; - - fetch(PROMPT_URL) - .then((res) => res.json()) - .then((res) => { - const builtinPrompts = [res.en, res.cn] - .map((promptList: PromptList) => { - return promptList.map( - ([title, content]) => - ({ - title, - content, - } as Prompt) - ); - }) - .concat([...(state?.prompts?.values() ?? [])]); - - const allPromptsForSearch = builtinPrompts.reduce( - (pre, cur) => pre.concat(cur), - [] - ); - SearchService.count.builtin = res.en.length + res.cn.length; - SearchService.init(allPromptsForSearch); - }); - }, - } - ) -); diff --git a/spaces/yuhangzang/ContextDet-Demo/csrc/MsDeformAttn/ms_deform_attn_cuda.h b/spaces/yuhangzang/ContextDet-Demo/csrc/MsDeformAttn/ms_deform_attn_cuda.h deleted file mode 100644 index ad1311a78f61303616504eb991aaa9c4a93d9948..0000000000000000000000000000000000000000 --- a/spaces/yuhangzang/ContextDet-Demo/csrc/MsDeformAttn/ms_deform_attn_cuda.h +++ /dev/null @@ -1,33 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/yuichi/pdf-ocr/pdf2text.py b/spaces/yuichi/pdf-ocr/pdf2text.py deleted file mode 100644 index e8586ae5c954c54cc962dfb7f22379e691193566..0000000000000000000000000000000000000000 --- a/spaces/yuichi/pdf-ocr/pdf2text.py +++ /dev/null @@ -1,403 +0,0 @@ -# -*- coding: utf-8 -*- -""" - -easyocr.py - A wrapper for easyocr to convert pdf to images to text -""" - -import logging -from pathlib import Path - -logging.basicConfig( - level=logging.INFO, - format="%(asctime)s %(levelname)s %(message)s", - datefmt="%m/%d/%Y %I:%M:%S", -) - - -import os -import pprint as pp -import re -import shutil -import time -from datetime import date, datetime -from os.path import basename, dirname, join -from pathlib import Path - -from cleantext import clean -from doctr.io import DocumentFile -from doctr.models import ocr_predictor -from libretranslatepy import LibreTranslateAPI -from natsort import natsorted -from spellchecker import SpellChecker -from tqdm.auto import tqdm - - -def simple_rename(filepath, target_ext=".txt"): - _fp = Path(filepath) - basename = _fp.stem - return f"OCR_{basename}_{target_ext}" - - -def rm_local_text_files(name_contains="RESULT_"): - """ - 
rm_local_text_files - remove local text files - - Args: - name_contains (str, optional): [description]. Defaults to "OCR_". - """ - files = [ - f - for f in Path.cwd().iterdir() - if f.is_file() and f.suffix == ".txt" and name_contains in f.name - ] - logging.info(f"removing {len(files)} text files") - for f in files: - os.remove(f) - logging.info("done") - - -def corr( - s: str, - add_space_when_numerics=False, - exceptions=["e.g.", "i.e.", "etc.", "cf.", "vs.", "p."], -) -> str: - """corrects spacing in a string - - Args: - s (str): the string to correct - add_space_when_numerics (bool, optional): [add a space when a period is between two numbers, example 5.73]. Defaults to False. - exceptions (list, optional): [do not change these substrings]. Defaults to ['e.g.', 'i.e.', 'etc.', 'cf.', 'vs.', 'p.']. - - Returns: - str: the corrected string - """ - if add_space_when_numerics: - s = re.sub(r"(\d)\.(\d)", r"\1. \2", s) - - s = re.sub(r"\s+", " ", s) - s = re.sub(r'\s([?.!"](?:\s|$))', r"\1", s) - - # fix space before apostrophe - s = re.sub(r"\s\'", r"'", s) - # fix space after apostrophe - s = re.sub(r"'\s", r"'", s) - # fix space before comma - s = re.sub(r"\s,", r",", s) - - for e in exceptions: - expected_sub = re.sub(r"\s", "", e) - s = s.replace(expected_sub, e) - - return s - - -def fix_punct_spaces(string): - """ - fix_punct_spaces - replace spaces around punctuation with punctuation. For example, "hello , there" -> "hello, there" - - Parameters - ---------- - string : str, required, input string to be corrected - - Returns - ------- - str, corrected string - """ - - fix_spaces = re.compile(r"\s*([?!.,]+(?:\s+[?!.,]+)*)\s*") - string = fix_spaces.sub(lambda x: "{} ".format(x.group(1).replace(" ", "")), string) - string = string.replace(" ' ", "'") - string = string.replace(' " ', '"') - return string.strip() - - -def clean_OCR(ugly_text: str): - """ - clean_OCR - clean the OCR text files. - - Parameters - ---------- - ugly_text : str, required, input string to be cleaned - - Returns - ------- - str, cleaned string - """ - # Remove all the newlines. - cleaned_text = ugly_text.replace("\n", " ") - # Remove all the tabs. - cleaned_text = cleaned_text.replace("\t", " ") - # Remove all the double spaces. - cleaned_text = cleaned_text.replace(" ", " ") - # Remove all the spaces at the beginning of the text. - cleaned_text = cleaned_text.lstrip() - # remove all instances of "- " and " - " - cleaned_text = cleaned_text.replace("- ", "") - cleaned_text = cleaned_text.replace(" -", "") - return fix_punct_spaces(cleaned_text) - - -def move2completed(from_dir, filename, new_folder="completed", verbose=False): - - # this is the better version - old_filepath = join(from_dir, filename) - - new_filedirectory = join(from_dir, new_folder) - - if not os.path.isdir(new_filedirectory): - os.mkdir(new_filedirectory) - if verbose: - print("created new directory for files at: \n", new_filedirectory) - new_filepath = join(new_filedirectory, filename) - - try: - shutil.move(old_filepath, new_filepath) - logging.info("successfully moved the file {} to */completed.".format(filename)) - except: - logging.info( - "ERROR! unable to move file to \n{}. Please investigate".format( - new_filepath - ) - ) - - -"""## pdf2text functions - -""" - - -custom_replace_list = { - "t0": "to", - "'$": "'s", - ",,": ", ", - "_ ": " ", - " '": "'", -} - -replace_corr_exceptions = { - "i. e.": "i.e.", - "e. g.": "e.g.", - "e. 
g": "e.g.", - " ,": ",", -} - - -spell = SpellChecker() - - -def check_word_spelling(word: str) -> bool: - """ - check_word_spelling - check the spelling of a word - - Args: - word (str): word to check - - Returns: - bool: True if word is spelled correctly, False if not - """ - - misspelled = spell.unknown([word]) - - return len(misspelled) == 0 - - -def eval_and_replace(text: str, match_token: str = "- ") -> str: - """ - eval_and_replace - conditionally replace all instances of a substring in a string based on whether the eliminated substring results in a valid word - - Args: - text (str): text to evaluate - match_token (str, optional): token to replace. Defaults to "- ". - - Returns: - str: text with replaced tokens - """ - - if match_token not in text: - return text - else: - while True: - full_before_text = text.split(match_token, maxsplit=1)[0] - before_text = [ - char for char in full_before_text.split()[-1] if char.isalpha() - ] - before_text = "".join(before_text) - full_after_text = text.split(match_token, maxsplit=1)[-1] - after_text = [char for char in full_after_text.split()[0] if char.isalpha()] - after_text = "".join(after_text) - full_text = before_text + after_text - if check_word_spelling(full_text): - text = full_before_text + full_after_text - else: - text = full_before_text + " " + full_after_text - if match_token not in text: - break - return text - - -def cleantxt_ocr(ugly_text, lower=False, lang: str = "en") -> str: - """ - cleantxt_ocr - clean text from OCR - - Args: - ugly_text (str): text to clean - lower (bool, optional): _description_. Defaults to False. - lang (str, optional): _description_. Defaults to "en". - - Returns: - str: cleaned text - """ - # a wrapper for clean text with options different than default - - # https://pypi.org/project/clean-text/ - cleaned_text = clean( - ugly_text, - fix_unicode=True, # fix various unicode errors - to_ascii=True, # transliterate to closest ASCII representation - lower=lower, # lowercase text - no_line_breaks=True, # fully strip line breaks as opposed to only normalizing them - no_urls=True, # replace all URLs with a special token - no_emails=True, # replace all email addresses with a special token - no_phone_numbers=False, # replace all phone numbers with a special token - no_numbers=False, # replace all numbers with a special token - no_digits=False, # replace all digits with a special token - no_currency_symbols=False, # replace all currency symbols with a special token - no_punct=False, # remove punctuations - replace_with_punct="", # instead of removing punctuations you may replace them - replace_with_url="", - replace_with_email="", - replace_with_phone_number="", - replace_with_number="", - replace_with_digit="0", - replace_with_currency_symbol="", - lang=lang, # set to 'de' for German special handling - ) - - return cleaned_text - - -def format_ocr_out(OCR_data): - - if isinstance(OCR_data, list): - text = " ".join(OCR_data) - else: - text = str(OCR_data) - _clean = cleantxt_ocr(text) - return corr(_clean) - - -def postprocess(text: str) -> str: - """to be used after recombining the lines""" - - proc = corr(cleantxt_ocr(text)) - - for k, v in custom_replace_list.items(): - proc = proc.replace(str(k), str(v)) - - proc = corr(proc) - - for k, v in replace_corr_exceptions.items(): - proc = proc.replace(str(k), str(v)) - - return eval_and_replace(proc) - - -def result2text(result, as_text=False) -> str or list: - """Convert OCR result to text""" - - full_doc = [] - for i, page in enumerate(result.pages, start=1): - 
text = "" - for block in page.blocks: - text += "\n\t" - for line in block.lines: - for word in line.words: - # print(dir(word)) - text += word.value + " " - full_doc.append(text) - - return "\n".join(full_doc) if as_text else full_doc - - -def convert_PDF_to_Text( - PDF_file, - ocr_model=None, - max_pages: int = 20, -): - - st = time.perf_counter() - PDF_file = Path(PDF_file) - ocr_model = ocr_predictor(pretrained=True) if ocr_model is None else ocr_model - logging.info(f"starting OCR on {PDF_file.name}") - doc = DocumentFile.from_pdf(PDF_file) - truncated = False - if len(doc) > max_pages: - logging.warning( - f"PDF has {len(doc)} pages, which is more than {max_pages}.. truncating" - ) - doc = doc[:max_pages] - truncated = True - - # Analyze - logging.info(f"running OCR on {len(doc)} pages") - result = ocr_model(doc) - raw_text = result2text(result) - proc_text = [format_ocr_out(r) for r in raw_text] - fin_text = [postprocess(t) for t in proc_text] - - ocr_results = "\n\n".join(fin_text) - - fn_rt = time.perf_counter() - st - - logging.info("OCR complete") - - results_dict = { - "num_pages": len(doc), - "runtime": round(fn_rt, 2), - "date": str(date.today()), - "converted_text": ocr_results, - "truncated": truncated, - "length": len(ocr_results), - } - - return results_dict - - -# @title translation functions - -lt = LibreTranslateAPI("https://translate.astian.org/") - - -def translate_text(text, source_l, target_l="en"): - - return str(lt.translate(text, source_l, target_l)) - - -def translate_doc(filepath, lang_start, lang_end="en", verbose=False): - """translate a document from lang_start to lang_end - - {'code': 'en', 'name': 'English'}, - {'code': 'fr', 'name': 'French'}, - {'code': 'de', 'name': 'German'}, - {'code': 'it', 'name': 'Italian'},""" - - src_folder = dirname(filepath) - src_folder = Path(src_folder) - trgt_folder = src_folder / f"translated_{lang_end}" - trgt_folder.mkdir(exist_ok=True) - with open(filepath, "r", encoding="utf-8", errors="ignore") as f: - foreign_t = f.readlines() - in_name = basename(filepath) - translated_doc = [] - for line in tqdm( - foreign_t, total=len(foreign_t), desc="translating {}...".format(in_name[:10]) - ): - translated_line = translate_text(line, lang_start, lang_end) - translated_doc.append(translated_line) - t_out_name = "[To {}]".format(lang_end) + simple_rename(in_name) + ".txt" - out_path = join(trgt_folder, t_out_name) - with open(out_path, "w", encoding="utf-8", errors="ignore") as f_o: - f_o.writelines(translated_doc) - if verbose: - print("finished translating the document! 
- ", datetime.now()) - return out_path diff --git a/spaces/yunfei0710/gpt-academic/app.py b/spaces/yunfei0710/gpt-academic/app.py deleted file mode 100644 index cd5a39cf16a45fd1f25ff7392a2f7cd179ac259a..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/app.py +++ /dev/null @@ -1,215 +0,0 @@ -import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - -def main(): - import subprocess, sys - subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'gradio-stable-fork']) - import gradio as gr - if gr.__version__ not in ['3.28.3','3.32.3']: assert False, "请用 pip install -r requirements.txt 安装依赖" - from request_llm.bridge_all import predict - from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith - # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY, AVAIL_LLM_MODELS = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY', 'AVAIL_LLM_MODELS') - - # 如果WEB_PORT是-1, 则随机选取WEB端口 - PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT - if not AUTHENTICATION: AUTHENTICATION = None - - from check_proxy import get_current_version - initial_prompt = "Serve me as a writing and programming assistant." - title_html = f"

                  ChatGPT 学术优化 {get_current_version()}

                  " - description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)""" - - # 问询记录, python 版本建议3.9+(越新越好) - import logging - os.makedirs("gpt_log", exist_ok=True) - try:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, encoding="utf-8") - except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO) - print("所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!") - - # 一些普通功能模块 - from core_functional import get_core_functions - functional = get_core_functions() - - # 高级函数插件 - from crazy_functional import get_crazy_functions - crazy_fns = get_crazy_functions() - - # 处理markdown文本格式的转变 - gr.Chatbot.postprocess = format_io - - # 做一些外观色彩上的调整 - from theme import adjust_theme, advanced_css - set_theme = adjust_theme() - - # 代理与自动更新 - from check_proxy import check_proxy, auto_update, warm_up_modules - proxy_info = check_proxy(proxies) - - gr_L1 = lambda: gr.Row().style() - gr_L2 = lambda scale: gr.Column(scale=scale) - if LAYOUT == "TOP-DOWN": - gr_L1 = lambda: DummyWith() - gr_L2 = lambda scale: gr.Row() - CHATBOT_HEIGHT /= 2 - - cancel_handles = [] - with gr.Blocks(title="ChatGPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo: - gr.HTML(title_html) - gr.HTML('''
                  Duplicate Space: After opening this page, please make sure to click the "Duplicate Space" button above! When using the app, first enter your API-KEY in the input box and press Enter.
                  Do not enter your API_KEY or ask questions before duplicating the Space; otherwise your API_KEY is very likely to be harvested by the Space owner!
                  Any number of OpenAI keys and API2D keys can coexist; for example, enter "OpenAI key 1,API2D key 2" and submit to use both model interfaces at the same time.
                  ''') - cookies = gr.State({'api_key': API_KEY, 'llm_model': LLM_MODEL}) - with gr_L1(): - with gr_L2(scale=2): - chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}") - chatbot.style(height=CHATBOT_HEIGHT) - history = gr.State([]) - with gr_L2(scale=1): - with gr.Accordion("输入区", open=True) as area_input_primary: - with gr.Row(): - txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥,输入多个密钥时,用英文逗号间隔。支持OpenAI密钥和API2D密钥共存。").style(container=False) - with gr.Row(): - submitBtn = gr.Button("提交", variant="primary") - with gr.Row(): - resetBtn = gr.Button("重置", variant="secondary"); resetBtn.style(size="sm") - stopBtn = gr.Button("停止", variant="secondary"); stopBtn.style(size="sm") - clearBtn = gr.Button("清除", variant="secondary", visible=False); clearBtn.style(size="sm") - with gr.Row(): - status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}") - with gr.Accordion("基础功能区", open=True) as area_basic_fn: - with gr.Row(): - for k in functional: - if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue - variant = functional[k]["Color"] if "Color" in functional[k] else "secondary" - functional[k]["Button"] = gr.Button(k, variant=variant) - with gr.Accordion("函数插件区", open=True) as area_crazy_fn: - with gr.Row(): - gr.Markdown("注意:以下“红颜色”标识的函数插件需从输入区读取路径作为参数.") - with gr.Row(): - for k in crazy_fns: - if not crazy_fns[k].get("AsButton", True): continue - variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary" - crazy_fns[k]["Button"] = gr.Button(k, variant=variant) - crazy_fns[k]["Button"].style(size="sm") - with gr.Row(): - with gr.Accordion("更多函数插件", open=True): - dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)] - with gr.Row(): - dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False) - with gr.Row(): - plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False, - placeholder="这里是特殊函数插件的高级参数输入区").style(container=False) - with gr.Row(): - switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary") - with gr.Row(): - with gr.Accordion("点击展开“文件上传区”。上传本地文件可供红色函数插件调用。", open=False) as area_file_up: - file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple") - with gr.Accordion("更换模型 & SysPrompt & 交互界面布局", open=(LAYOUT == "TOP-DOWN")): - system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt) - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",) - max_length_sl = gr.Slider(minimum=256, maximum=4096, value=512, step=1, interactive=True, label="Local LLM MaxLength",) - checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区") - md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False) - - gr.Markdown(description) - with gr.Accordion("备选输入区", open=True, visible=False) as area_input_secondary: - with gr.Row(): - txt2 = gr.Textbox(show_label=False, placeholder="Input question here.", label="输入区2").style(container=False) - with gr.Row(): - submitBtn2 = gr.Button("提交", variant="primary") - with gr.Row(): - resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm") - stopBtn2 = gr.Button("停止", variant="secondary"); 
stopBtn2.style(size="sm") - clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm") - # 功能区显示开关与功能区的互动 - def fn_area_visibility(a): - ret = {} - ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))}) - ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))}) - ret.update({area_input_primary: gr.update(visible=("底部输入区" not in a))}) - ret.update({area_input_secondary: gr.update(visible=("底部输入区" in a))}) - ret.update({clearBtn: gr.update(visible=("输入清除键" in a))}) - ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))}) - ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))}) - if "底部输入区" in a: ret.update({txt: gr.update(value="")}) - return ret - checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] ) - # 整理反复出现的控件句柄组合 - input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg] - output_combo = [cookies, chatbot, history, status] - predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=input_combo, outputs=output_combo) - # 提交按钮、重置按钮 - cancel_handles.append(txt.submit(**predict_args)) - cancel_handles.append(txt2.submit(**predict_args)) - cancel_handles.append(submitBtn.click(**predict_args)) - cancel_handles.append(submitBtn2.click(**predict_args)) - resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) - resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) - clearBtn.click(lambda: ("",""), None, [txt, txt2]) - clearBtn2.click(lambda: ("",""), None, [txt, txt2]) - # 基础功能区的回调函数注册 - for k in functional: - if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue - click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo) - cancel_handles.append(click_handle) - # 文件上传区,接收文件后与chatbot的互动 - file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes], [chatbot, txt, txt2]) - # 函数插件-固定按钮区 - for k in crazy_fns: - if not crazy_fns[k].get("AsButton", True): continue - click_handle = crazy_fns[k]["Button"].click(ArgsGeneralWrapper(crazy_fns[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo) - click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]) - cancel_handles.append(click_handle) - # 函数插件-下拉菜单与随变按钮的互动 - def on_dropdown_changed(k): - variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary" - ret = {switchy_bt: gr.update(value=k, variant=variant)} - if crazy_fns[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区 - ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))}) - else: - ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")}) - return ret - dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] ) - def on_md_dropdown_changed(k): - return {chatbot: gr.update(label="当前模型:"+k)} - md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] ) - # 随变按钮的回调函数注册 - def route(k, *args, **kwargs): - if k in [r"打开插件列表", r"请先从插件列表中选择"]: return - yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs) - click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo) - 
click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]) - cancel_handles.append(click_handle) - # 终止按钮的回调函数注册 - stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles) - stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles) - - # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数 - def auto_opentab_delay(): - import threading, webbrowser, time - print(f"如果浏览器没有自动打开,请复制并转到以下URL:") - print(f"\t(亮色主题): http://localhost:{PORT}") - print(f"\t(暗色主题): http://localhost:{PORT}/?__theme=dark") - def open(): - time.sleep(2) # 打开浏览器 - DARK_MODE, = get_conf('DARK_MODE') - if DARK_MODE: webbrowser.open_new_tab(f"http://localhost:{PORT}/?__theme=dark") - else: webbrowser.open_new_tab(f"http://localhost:{PORT}") - threading.Thread(target=open, name="open-browser", daemon=True).start() - threading.Thread(target=auto_update, name="self-upgrade", daemon=True).start() - threading.Thread(target=warm_up_modules, name="warm-up", daemon=True).start() - - auto_opentab_delay() - demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png", blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"]) - - # 如果需要在二级路径下运行 - # CUSTOM_PATH, = get_conf('CUSTOM_PATH') - # if CUSTOM_PATH != "/": - # from toolbox import run_gradio_in_subpath - # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH) - # else: - # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png", - # blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"]) - -if __name__ == "__main__": - main() diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/modules/crepe.py b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/modules/crepe.py deleted file mode 100644 index b58c1680d02fef54497c36bd47a36776cc7f6af5..0000000000000000000000000000000000000000 --- a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/modules/crepe.py +++ /dev/null @@ -1,331 +0,0 @@ -from typing import Optional,Union -try: - from typing import Literal -except Exception as e: - from typing_extensions import Literal -import numpy as np -import torch -import torchcrepe -from torch import nn -from torch.nn import functional as F -import scipy - -#from:https://github.com/fishaudio/fish-diffusion - -def repeat_expand( - content: Union[torch.Tensor, np.ndarray], target_len: int, mode: str = "nearest" -): - """Repeat content to target length. - This is a wrapper of torch.nn.functional.interpolate. - - Args: - content (torch.Tensor): tensor - target_len (int): target length - mode (str, optional): interpolation mode. Defaults to "nearest". - - Returns: - torch.Tensor: tensor - """ - - ndim = content.ndim - - if content.ndim == 1: - content = content[None, None] - elif content.ndim == 2: - content = content[None] - - assert content.ndim == 3 - - is_np = isinstance(content, np.ndarray) - if is_np: - content = torch.from_numpy(content) - - results = torch.nn.functional.interpolate(content, size=target_len, mode=mode) - - if is_np: - results = results.numpy() - - if ndim == 1: - return results[0, 0] - elif ndim == 2: - return results[0] - - -class BasePitchExtractor: - def __init__( - self, - hop_length: int = 512, - f0_min: float = 50.0, - f0_max: float = 1100.0, - keep_zeros: bool = True, - ): - """Base pitch extractor. - - Args: - hop_length (int, optional): Hop length. Defaults to 512. 
- f0_min (float, optional): Minimum f0. Defaults to 50.0. - f0_max (float, optional): Maximum f0. Defaults to 1100.0. - keep_zeros (bool, optional): Whether keep zeros in pitch. Defaults to True. - """ - - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.keep_zeros = keep_zeros - - def __call__(self, x, sampling_rate=44100, pad_to=None): - raise NotImplementedError("BasePitchExtractor is not callable.") - - def post_process(self, x, sampling_rate, f0, pad_to): - if isinstance(f0, np.ndarray): - f0 = torch.from_numpy(f0).float().to(x.device) - - if pad_to is None: - return f0 - - f0 = repeat_expand(f0, pad_to) - - if self.keep_zeros: - return f0 - - vuv_vector = torch.zeros_like(f0) - vuv_vector[f0 > 0.0] = 1.0 - vuv_vector[f0 <= 0.0] = 0.0 - - # Remove 0 frequency and apply linear interpolation - nzindex = torch.nonzero(f0).squeeze() - f0 = torch.index_select(f0, dim=0, index=nzindex).cpu().numpy() - time_org = self.hop_length / sampling_rate * nzindex.cpu().numpy() - time_frame = np.arange(pad_to) * self.hop_length / sampling_rate - - if f0.shape[0] <= 0: - return torch.zeros(pad_to, dtype=torch.float, device=x.device),torch.zeros(pad_to, dtype=torch.float, device=x.device) - - if f0.shape[0] == 1: - return torch.ones(pad_to, dtype=torch.float, device=x.device) * f0[0],torch.ones(pad_to, dtype=torch.float, device=x.device) - - # Probably can be rewritten with torch? - f0 = np.interp(time_frame, time_org, f0, left=f0[0], right=f0[-1]) - vuv_vector = vuv_vector.cpu().numpy() - vuv_vector = np.ceil(scipy.ndimage.zoom(vuv_vector,pad_to/len(vuv_vector),order = 0)) - - return f0,vuv_vector - - -class MaskedAvgPool1d(nn.Module): - def __init__( - self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0 - ): - """An implementation of mean pooling that supports masked values. - - Args: - kernel_size (int): The size of the median pooling window. - stride (int, optional): The stride of the median pooling window. Defaults to None. - padding (int, optional): The padding of the median pooling window. Defaults to 0. 
- """ - - super(MaskedAvgPool1d, self).__init__() - self.kernel_size = kernel_size - self.stride = stride or kernel_size - self.padding = padding - - def forward(self, x, mask=None): - ndim = x.dim() - if ndim == 2: - x = x.unsqueeze(1) - - assert ( - x.dim() == 3 - ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)" - - # Apply the mask by setting masked elements to zero, or make NaNs zero - if mask is None: - mask = ~torch.isnan(x) - - # Ensure mask has the same shape as the input tensor - assert x.shape == mask.shape, "Input tensor and mask must have the same shape" - - masked_x = torch.where(mask, x, torch.zeros_like(x)) - # Create a ones kernel with the same number of channels as the input tensor - ones_kernel = torch.ones(x.size(1), 1, self.kernel_size, device=x.device) - - # Perform sum pooling - sum_pooled = nn.functional.conv1d( - masked_x, - ones_kernel, - stride=self.stride, - padding=self.padding, - groups=x.size(1), - ) - - # Count the non-masked (valid) elements in each pooling window - valid_count = nn.functional.conv1d( - mask.float(), - ones_kernel, - stride=self.stride, - padding=self.padding, - groups=x.size(1), - ) - valid_count = valid_count.clamp(min=1) # Avoid division by zero - - # Perform masked average pooling - avg_pooled = sum_pooled / valid_count - - # Fill zero values with NaNs - avg_pooled[avg_pooled == 0] = float("nan") - - if ndim == 2: - return avg_pooled.squeeze(1) - - return avg_pooled - - -class MaskedMedianPool1d(nn.Module): - def __init__( - self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0 - ): - """An implementation of median pooling that supports masked values. - - This implementation is inspired by the median pooling implementation in - https://gist.github.com/rwightman/f2d3849281624be7c0f11c85c87c1598 - - Args: - kernel_size (int): The size of the median pooling window. - stride (int, optional): The stride of the median pooling window. Defaults to None. - padding (int, optional): The padding of the median pooling window. Defaults to 0. 
- """ - - super(MaskedMedianPool1d, self).__init__() - self.kernel_size = kernel_size - self.stride = stride or kernel_size - self.padding = padding - - def forward(self, x, mask=None): - ndim = x.dim() - if ndim == 2: - x = x.unsqueeze(1) - - assert ( - x.dim() == 3 - ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)" - - if mask is None: - mask = ~torch.isnan(x) - - assert x.shape == mask.shape, "Input tensor and mask must have the same shape" - - masked_x = torch.where(mask, x, torch.zeros_like(x)) - - x = F.pad(masked_x, (self.padding, self.padding), mode="reflect") - mask = F.pad( - mask.float(), (self.padding, self.padding), mode="constant", value=0 - ) - - x = x.unfold(2, self.kernel_size, self.stride) - mask = mask.unfold(2, self.kernel_size, self.stride) - - x = x.contiguous().view(x.size()[:3] + (-1,)) - mask = mask.contiguous().view(mask.size()[:3] + (-1,)).to(x.device) - - # Combine the mask with the input tensor - #x_masked = torch.where(mask.bool(), x, torch.fill_(torch.zeros_like(x),float("inf"))) - x_masked = torch.where(mask.bool(), x, torch.FloatTensor([float("inf")]).to(x.device)) - - # Sort the masked tensor along the last dimension - x_sorted, _ = torch.sort(x_masked, dim=-1) - - # Compute the count of non-masked (valid) values - valid_count = mask.sum(dim=-1) - - # Calculate the index of the median value for each pooling window - median_idx = (torch.div((valid_count - 1), 2, rounding_mode='trunc')).clamp(min=0) - - # Gather the median values using the calculated indices - median_pooled = x_sorted.gather(-1, median_idx.unsqueeze(-1).long()).squeeze(-1) - - # Fill infinite values with NaNs - median_pooled[torch.isinf(median_pooled)] = float("nan") - - if ndim == 2: - return median_pooled.squeeze(1) - - return median_pooled - - -class CrepePitchExtractor(BasePitchExtractor): - def __init__( - self, - hop_length: int = 512, - f0_min: float = 50.0, - f0_max: float = 1100.0, - threshold: float = 0.05, - keep_zeros: bool = False, - device = None, - model: Literal["full", "tiny"] = "full", - use_fast_filters: bool = True, - ): - super().__init__(hop_length, f0_min, f0_max, keep_zeros) - - self.threshold = threshold - self.model = model - self.use_fast_filters = use_fast_filters - self.hop_length = hop_length - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - if self.use_fast_filters: - self.median_filter = MaskedMedianPool1d(3, 1, 1).to(device) - self.mean_filter = MaskedAvgPool1d(3, 1, 1).to(device) - - def __call__(self, x, sampling_rate=44100, pad_to=None): - """Extract pitch using crepe. - - - Args: - x (torch.Tensor): Audio signal, shape (1, T). - sampling_rate (int, optional): Sampling rate. Defaults to 44100. - pad_to (int, optional): Pad to length. Defaults to None. - - Returns: - torch.Tensor: Pitch, shape (T // hop_length,). - """ - - assert x.ndim == 2, f"Expected 2D tensor, got {x.ndim}D tensor." - assert x.shape[0] == 1, f"Expected 1 channel, got {x.shape[0]} channels." 
- - x = x.to(self.dev) - f0, pd = torchcrepe.predict( - x, - sampling_rate, - self.hop_length, - self.f0_min, - self.f0_max, - pad=True, - model=self.model, - batch_size=1024, - device=x.device, - return_periodicity=True, - ) - - # Filter, remove silence, set uv threshold, refer to the original warehouse readme - if self.use_fast_filters: - pd = self.median_filter(pd) - else: - pd = torchcrepe.filter.median(pd, 3) - - pd = torchcrepe.threshold.Silence(-60.0)(pd, x, sampling_rate, 512) - f0 = torchcrepe.threshold.At(self.threshold)(f0, pd) - - if self.use_fast_filters: - f0 = self.mean_filter(f0) - else: - f0 = torchcrepe.filter.mean(f0, 3) - - f0 = torch.where(torch.isnan(f0), torch.full_like(f0, 0), f0)[0] - - if torch.all(f0 == 0): - rtn = f0.cpu().numpy() if pad_to==None else np.zeros(pad_to) - return rtn,rtn - - return self.post_process(x, sampling_rate, f0, pad_to) diff --git a/spaces/zeeba/minima/README.md b/spaces/zeeba/minima/README.md deleted file mode 100644 index 247a9e460c8d2795e3d2f0b9a92e056f3465a135..0000000000000000000000000000000000000000 --- a/spaces/zeeba/minima/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Minima -emoji: 💻 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/inference_ui.py b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/inference_ui.py deleted file mode 100644 index f3b2f390dfbdccfb2a01213017dc85baf3628e79..0000000000000000000000000000000000000000 --- a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/inference_ui.py +++ /dev/null @@ -1,902 +0,0 @@ -import gradio as gr -import os -import time -import json - -from transformers import GenerationConfig - -from ..config import Config -from ..globals import Global -from ..models import get_model, get_tokenizer, get_device -from ..lib.csv_logger import CSVLogger -from ..utils.data import ( - get_available_template_names, - get_available_lora_model_names, - get_info_of_available_lora_model) -from ..utils.prompter import Prompter - -device = get_device() - -default_show_raw = True -inference_output_lines = 12 - - -class LoggingItem: - def __init__(self, label): - self.label = label - - def deserialize(self, value, **kwargs): - return value - - -def prepare_inference(lora_model_name, progress=gr.Progress(track_tqdm=True)): - base_model_name = Global.base_model_name - tokenizer_name = Global.tokenizer_name or Global.base_model_name - - try: - get_tokenizer(tokenizer_name) - get_model(base_model_name, lora_model_name) - return ("", "", gr.Textbox.update(visible=False)) - - except Exception as e: - raise gr.Error(e) - - -def do_inference( - lora_model_name, - prompt_template, - variable_0, variable_1, variable_2, variable_3, - variable_4, variable_5, variable_6, variable_7, - temperature=0.1, - top_p=0.75, - top_k=40, - num_beams=4, - repetition_penalty=1.2, - max_new_tokens=128, - stream_output=False, - show_raw=False, - progress=gr.Progress(track_tqdm=True), -): - base_model_name = Global.base_model_name - - try: - if Global.generation_force_stopped_at is not None: - required_elapsed_time_after_forced_stop = 1 - current_unix_time = time.time() - remaining_time = required_elapsed_time_after_forced_stop - \ - (current_unix_time - Global.generation_force_stopped_at) - if remaining_time > 0: - time.sleep(remaining_time) - Global.generation_force_stopped_at = None - - 
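# Gather the eight template variable slots and render them into a single prompt string through the selected prompt template.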
variables = [variable_0, variable_1, variable_2, variable_3, - variable_4, variable_5, variable_6, variable_7] - prompter = Prompter(prompt_template) - prompt = prompter.generate_prompt(variables) - - generation_config = GenerationConfig( - # to avoid ValueError('`temperature` has to be a strictly positive float, but is 2') - temperature=float(temperature), - top_p=top_p, - top_k=top_k, - repetition_penalty=repetition_penalty, - num_beams=num_beams, - # https://github.com/huggingface/transformers/issues/22405#issuecomment-1485527953 - do_sample=temperature > 0, - ) - - def get_output_for_flagging(output, raw_output, completed=True): - return json.dumps({ - 'base_model': base_model_name, - 'adaptor_model': lora_model_name, - 'prompt': prompt, - 'output': output, - 'completed': completed, - 'raw_output': raw_output, - 'max_new_tokens': max_new_tokens, - 'prompt_template': prompt_template, - 'prompt_template_variables': variables, - 'generation_config': generation_config.to_dict(), - }) - - if Config.ui_dev_mode: - message = f"Hi, I’m currently in UI-development mode and do not have access to resources to process your request. However, this behavior is similar to what will actually happen, so you can try and see how it will work!\n\nBase model: {base_model_name}\nLoRA model: {lora_model_name}\n\nThe following is your prompt:\n\n{prompt}" - print(message) - - if stream_output: - def word_generator(sentence): - lines = message.split('\n') - out = "" - for line in lines: - words = line.split(' ') - for i in range(len(words)): - if out: - out += ' ' - out += words[i] - yield out - out += "\n" - yield out - - output = "" - for partial_sentence in word_generator(message): - output = partial_sentence - yield ( - gr.Textbox.update( - value=output, - lines=inference_output_lines), - json.dumps( - list(range(len(output.split()))), - indent=2), - gr.Textbox.update( - value=get_output_for_flagging( - output, "", completed=False), - visible=True) - ) - time.sleep(0.05) - - yield ( - gr.Textbox.update( - value=output, - lines=inference_output_lines), - json.dumps( - list(range(len(output.split()))), - indent=2), - gr.Textbox.update( - value=get_output_for_flagging( - output, "", completed=True), - visible=True) - ) - - return - time.sleep(1) - yield ( - gr.Textbox.update(value=message, lines=inference_output_lines), - json.dumps(list(range(len(message.split()))), indent=2), - gr.Textbox.update( - value=get_output_for_flagging(message, ""), - visible=True) - ) - return - - tokenizer = get_tokenizer(base_model_name) - model = get_model(base_model_name, lora_model_name) - - def ui_generation_stopping_criteria(input_ids, score, **kwargs): - if Global.should_stop_generating: - return True - return False - - Global.should_stop_generating = False - - generation_args = { - 'model': model, - 'tokenizer': tokenizer, - 'prompt': prompt, - 'generation_config': generation_config, - 'max_new_tokens': max_new_tokens, - 'stopping_criteria': [ui_generation_stopping_criteria], - 'stream_output': stream_output - } - - for (decoded_output, output, completed) in Global.inference_generate_fn(**generation_args): - raw_output_str = str(output) - response = prompter.get_response(decoded_output) - - if Global.should_stop_generating: - return - - yield ( - gr.Textbox.update( - value=response, lines=inference_output_lines), - raw_output_str, - gr.Textbox.update( - value=get_output_for_flagging( - decoded_output, raw_output_str, completed=completed), - visible=True) - ) - - if Global.should_stop_generating: - # If the user stops the 
generation, and then clicks the - # generation button again, they may mysteriously landed - # here, in the previous, should-be-stopped generation - # function call, with the new generation function not be - # called at all. To workaround this, we yield a message - # and setting lines=1, and if the front-end JS detects - # that lines has been set to 1 (rows="1" in HTML), - # it will automatically click the generate button again - # (gr.Textbox.update() does not support updating - # elem_classes or elem_id). - # [WORKAROUND-UI01] - yield ( - gr.Textbox.update( - value="Please retry", lines=1), - None, None) - - return - except Exception as e: - raise gr.Error(str(e)) - - -def handle_stop_generate(): - Global.generation_force_stopped_at = time.time() - Global.should_stop_generating = True - - -def reload_selections(current_lora_model, current_prompt_template): - available_template_names = get_available_template_names() - available_template_names_with_none = available_template_names + ["None"] - - if current_prompt_template not in available_template_names_with_none: - current_prompt_template = None - - current_prompt_template = current_prompt_template or next( - iter(available_template_names_with_none), None) - - default_lora_models = [] - available_lora_models = default_lora_models + get_available_lora_model_names() - available_lora_models = available_lora_models + ["None"] - - current_lora_model = current_lora_model or next( - iter(available_lora_models), None) - - return (gr.Dropdown.update(choices=available_lora_models, value=current_lora_model), - gr.Dropdown.update(choices=available_template_names_with_none, value=current_prompt_template)) - - -def get_warning_message_for_lora_model_and_prompt_template(lora_model, prompt_template): - messages = [] - - lora_mode_info = get_info_of_available_lora_model(lora_model) - - if lora_mode_info and isinstance(lora_mode_info, dict): - model_base_model = lora_mode_info.get("base_model") - if model_base_model and model_base_model != Global.base_model_name: - messages.append( - f"⚠️ This model was trained on top of base model `{model_base_model}`, it might not work properly with the selected base model `{Global.base_model_name}`.") - - model_prompt_template = lora_mode_info.get("prompt_template") - if model_prompt_template and model_prompt_template != prompt_template: - messages.append( - f"This model was trained with prompt template `{model_prompt_template}`.") - - return " ".join(messages) - - -def handle_prompt_template_change(prompt_template, lora_model): - prompter = Prompter(prompt_template) - var_names = prompter.get_variable_names() - human_var_names = [' '.join(word.capitalize() - for word in item.split('_')) for item in var_names] - gr_updates = [gr.Textbox.update( - label=name, visible=True) for name in human_var_names] - while len(gr_updates) < 8: - gr_updates.append(gr.Textbox.update( - label="Not Used", visible=False)) - - model_prompt_template_message_update = gr.Markdown.update( - "", visible=False) - warning_message = get_warning_message_for_lora_model_and_prompt_template( - lora_model, prompt_template) - if warning_message: - model_prompt_template_message_update = gr.Markdown.update( - warning_message, visible=True) - - return [model_prompt_template_message_update] + gr_updates - - -def handle_lora_model_change(lora_model, prompt_template): - lora_mode_info = get_info_of_available_lora_model(lora_model) - - if lora_mode_info and isinstance(lora_mode_info, dict): - model_prompt_template = lora_mode_info.get("prompt_template") - if 
model_prompt_template: - available_template_names = get_available_template_names() - if model_prompt_template in available_template_names: - prompt_template = model_prompt_template - - model_prompt_template_message_update = gr.Markdown.update( - "", visible=False) - warning_message = get_warning_message_for_lora_model_and_prompt_template( - lora_model, prompt_template) - if warning_message: - model_prompt_template_message_update = gr.Markdown.update( - warning_message, visible=True) - - return model_prompt_template_message_update, prompt_template - - -def update_prompt_preview(prompt_template, - variable_0, variable_1, variable_2, variable_3, - variable_4, variable_5, variable_6, variable_7): - variables = [variable_0, variable_1, variable_2, variable_3, - variable_4, variable_5, variable_6, variable_7] - prompter = Prompter(prompt_template) - prompt = prompter.generate_prompt(variables) - return gr.Textbox.update(value=prompt) - - -def inference_ui(): - flagging_dir = os.path.join(Config.data_dir, "flagging", "inference") - if not os.path.exists(flagging_dir): - os.makedirs(flagging_dir) - - flag_callback = CSVLogger() - flag_components = [ - LoggingItem("Base Model"), - LoggingItem("Adaptor Model"), - LoggingItem("Type"), - LoggingItem("Prompt"), - LoggingItem("Output"), - LoggingItem("Completed"), - LoggingItem("Config"), - LoggingItem("Raw Output"), - LoggingItem("Max New Tokens"), - LoggingItem("Prompt Template"), - LoggingItem("Prompt Template Variables"), - LoggingItem("Generation Config"), - ] - flag_callback.setup(flag_components, flagging_dir) - - def get_flag_callback_args(output_for_flagging_str, flag_type): - output_for_flagging = json.loads(output_for_flagging_str) - generation_config = output_for_flagging.get("generation_config", {}) - config = [] - if generation_config.get('do_sample', False): - config.append( - f"Temperature: {generation_config.get('temperature')}") - config.append(f"Top P: {generation_config.get('top_p')}") - config.append(f"Top K: {generation_config.get('top_k')}") - num_beams = generation_config.get('num_beams', 1) - if num_beams > 1: - config.append(f"Beams: {generation_config.get('num_beams')}") - config.append(f"RP: {generation_config.get('repetition_penalty')}") - return [ - output_for_flagging.get("base_model", ""), - output_for_flagging.get("adaptor_model", ""), - flag_type, - output_for_flagging.get("prompt", ""), - output_for_flagging.get("output", ""), - str(output_for_flagging.get("completed", "")), - ", ".join(config), - output_for_flagging.get("raw_output", ""), - str(output_for_flagging.get("max_new_tokens", "")), - output_for_flagging.get("prompt_template", ""), - json.dumps(output_for_flagging.get( - "prompt_template_variables", "")), - json.dumps(output_for_flagging.get("generation_config", "")), - ] - - def get_flag_filename(output_for_flagging_str): - output_for_flagging = json.loads(output_for_flagging_str) - base_model = output_for_flagging.get("base_model", None) - adaptor_model = output_for_flagging.get("adaptor_model", None) - if adaptor_model == "None": - adaptor_model = None - if not base_model: - return "log.csv" - if not adaptor_model: - return f"log-{base_model}.csv" - return f"log-{base_model}#{adaptor_model}.csv" - - things_that_might_timeout = [] - - with gr.Blocks() as inference_ui_blocks: - with gr.Row(elem_classes="disable_while_training"): - with gr.Column(elem_id="inference_lora_model_group"): - model_prompt_template_message = gr.Markdown( - "", visible=False, elem_id="inference_lora_model_prompt_template_message") - 
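# LoRA model / prompt template selectors plus a reload button; the warning Markdown above is shown when the chosen model and template do not match.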
lora_model = gr.Dropdown( - label="LoRA Model", - elem_id="inference_lora_model", - value="None", - allow_custom_value=True, - ) - prompt_template = gr.Dropdown( - label="Prompt Template", - elem_id="inference_prompt_template", - ) - reload_selections_button = gr.Button( - "↻", - elem_id="inference_reload_selections_button" - ) - reload_selections_button.style( - full_width=False, - size="sm") - with gr.Row(elem_classes="disable_while_training"): - with gr.Column(): - with gr.Column(elem_id="inference_prompt_box"): - variable_0 = gr.Textbox( - lines=2, - label="Prompt", - placeholder="Tell me about alpecas and llamas.", - elem_id="inference_variable_0" - ) - variable_1 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_1") - variable_2 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_2") - variable_3 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_3") - variable_4 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_4") - variable_5 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_5") - variable_6 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_6") - variable_7 = gr.Textbox( - lines=2, label="", visible=False, elem_id="inference_variable_7") - - with gr.Accordion("Preview", open=False, elem_id="inference_preview_prompt_container"): - preview_prompt = gr.Textbox( - show_label=False, interactive=False, elem_id="inference_preview_prompt") - update_prompt_preview_btn = gr.Button( - "↻", elem_id="inference_update_prompt_preview_btn") - update_prompt_preview_btn.style(size="sm") - - # with gr.Column(): - # with gr.Row(): - # generate_btn = gr.Button( - # "Generate", variant="primary", label="Generate", elem_id="inference_generate_btn", - # ) - # stop_btn = gr.Button( - # "Stop", variant="stop", label="Stop Iterating", elem_id="inference_stop_btn") - - # with gr.Column(): - with gr.Accordion("Options", open=True, elem_id="inference_options_accordion"): - temperature = gr.Slider( - minimum=0, maximum=2, value=0, step=0.01, - label="Temperature", - elem_id="inference_temperature" - ) - - with gr.Row(elem_classes="inference_options_group"): - top_p = gr.Slider( - minimum=0, maximum=1, value=0.75, step=0.01, - label="Top P", - elem_id="inference_top_p" - ) - - top_k = gr.Slider( - minimum=0, maximum=100, value=40, step=1, - label="Top K", - elem_id="inference_top_k" - ) - - num_beams = gr.Slider( - minimum=1, maximum=5, value=2, step=1, - label="Beams", - elem_id="inference_beams" - ) - - repetition_penalty = gr.Slider( - minimum=0, maximum=2.5, value=1.2, step=0.01, - label="Repetition Penalty", - elem_id="inference_repetition_penalty" - ) - - max_new_tokens = gr.Slider( - minimum=0, maximum=4096, value=128, step=1, - label="Max New Tokens", - elem_id="inference_max_new_tokens" - ) - - with gr.Row(elem_id="inference_options_bottom_group"): - stream_output = gr.Checkbox( - label="Stream Output", - elem_id="inference_stream_output", - value=True - ) - show_raw = gr.Checkbox( - label="Show Raw", - elem_id="inference_show_raw", - value=default_show_raw - ) - - with gr.Column(): - with gr.Row(): - generate_btn = gr.Button( - "Generate", variant="primary", label="Generate", elem_id="inference_generate_btn", - ) - stop_btn = gr.Button( - "Stop", variant="stop", label="Stop Iterating", elem_id="inference_stop_btn") - - with gr.Column(elem_id="inference_output_group_container"): - with gr.Column(elem_id="inference_output_group"): - 
inference_output = gr.Textbox( - lines=inference_output_lines, label="Output", elem_id="inference_output") - inference_output.style(show_copy_button=True) - - with gr.Row(elem_id="inference_flagging_group", variant="panel"): - output_for_flagging = gr.Textbox( - interactive=False, visible=False, - elem_id="inference_output_for_flagging") - flag_btn = gr.Button( - "Flag", elem_id="inference_flag_btn") - flag_up_btn = gr.Button( - "👍", elem_id="inference_flag_up_btn") - flag_down_btn = gr.Button( - "👎", elem_id="inference_flag_down_btn") - flag_output = gr.Markdown( - "", elem_id="inference_flag_output") - flag_btn.click( - lambda d: (flag_callback.flag( - get_flag_callback_args(d, "Flag"), - flag_option="Flag", - username=None, - filename=get_flag_filename(d) - ), "")[1], - inputs=[output_for_flagging], - outputs=[flag_output], - preprocess=False) - flag_up_btn.click( - lambda d: (flag_callback.flag( - get_flag_callback_args(d, "👍"), - flag_option="Up Vote", - username=None, - filename=get_flag_filename(d) - ), "")[1], - inputs=[output_for_flagging], - outputs=[flag_output], - preprocess=False) - flag_down_btn.click( - lambda d: (flag_callback.flag( - get_flag_callback_args(d, "👎"), - flag_option="Down Vote", - username=None, - filename=get_flag_filename(d) - ), "")[1], - inputs=[output_for_flagging], - outputs=[flag_output], - preprocess=False) - - with gr.Accordion( - "Raw Output", - open=not default_show_raw, - visible=default_show_raw, - elem_id="inference_inference_raw_output_accordion" - ) as raw_output_group: - inference_raw_output = gr.Code( - # label="Raw Output", - label="Tensor", - language="json", - lines=8, - interactive=False, - elem_id="inference_raw_output") - - reload_selected_models_btn = gr.Button( - "", elem_id="inference_reload_selected_models_btn") - - show_raw_change_event = show_raw.change( - fn=lambda show_raw: gr.Accordion.update(visible=show_raw), - inputs=[show_raw], - outputs=[raw_output_group]) - things_that_might_timeout.append(show_raw_change_event) - - reload_selections_event = reload_selections_button.click( - reload_selections, - inputs=[lora_model, prompt_template], - outputs=[lora_model, prompt_template], - ) - things_that_might_timeout.append(reload_selections_event) - - prompt_template_change_event = prompt_template.change( - fn=handle_prompt_template_change, - inputs=[prompt_template, lora_model], - outputs=[ - model_prompt_template_message, - variable_0, variable_1, variable_2, variable_3, variable_4, variable_5, variable_6, variable_7]) - things_that_might_timeout.append(prompt_template_change_event) - - reload_selected_models_btn_event = reload_selected_models_btn.click( - fn=handle_prompt_template_change, - inputs=[prompt_template, lora_model], - outputs=[ - model_prompt_template_message, - variable_0, variable_1, variable_2, variable_3, variable_4, variable_5, variable_6, variable_7]) - things_that_might_timeout.append(reload_selected_models_btn_event) - - lora_model_change_event = lora_model.change( - fn=handle_lora_model_change, - inputs=[lora_model, prompt_template], - outputs=[model_prompt_template_message, prompt_template]) - things_that_might_timeout.append(lora_model_change_event) - - generate_event = generate_btn.click( - fn=prepare_inference, - inputs=[lora_model], - outputs=[inference_output, - inference_raw_output, output_for_flagging], - ).then( - fn=do_inference, - inputs=[ - lora_model, - prompt_template, - variable_0, variable_1, variable_2, variable_3, - variable_4, variable_5, variable_6, variable_7, - temperature, - top_p, - 
top_k, - num_beams, - repetition_penalty, - max_new_tokens, - stream_output, - show_raw, - ], - outputs=[inference_output, - inference_raw_output, output_for_flagging], - api_name="inference" - ) - stop_btn.click( - fn=handle_stop_generate, - inputs=None, - outputs=None, - cancels=[generate_event] - ) - - update_prompt_preview_event = update_prompt_preview_btn.click(fn=update_prompt_preview, inputs=[prompt_template, - variable_0, variable_1, variable_2, variable_3, - variable_4, variable_5, variable_6, variable_7,], outputs=preview_prompt) - things_that_might_timeout.append(update_prompt_preview_event) - - stop_timeoutable_btn = gr.Button( - "stop not-responding elements", - elem_id="inference_stop_timeoutable_btn", - elem_classes="foot_stop_timeoutable_btn") - stop_timeoutable_btn.click( - fn=None, inputs=None, outputs=None, cancels=things_that_might_timeout) - - inference_ui_blocks.load(_js=""" - function inference_ui_blocks_js() { - // Auto load options - setTimeout(function () { - document.getElementById('inference_reload_selections_button').click(); - - // Workaround default value not shown. - document.querySelector('#inference_lora_model input').value = - 'None'; - }, 100); - - // Add tooltips - setTimeout(function () { - tippy('#inference_lora_model', { - placement: 'top-start', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Select a LoRA model form your data directory, or type in a model name on HF (e.g.: tloen/alpaca-lora-7b).', - allowHTML: true, - }); - - tippy('#inference_prompt_template', { - placement: 'top-start', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Templates are loaded from the "templates" folder of your data directory. Be sure to select the template that matches your selected LoRA model to get the best results.', - }); - - tippy('#inference_reload_selections_button', { - placement: 'bottom-end', - delay: [500, 0], - animation: 'scale-subtle', - content: 'Press to reload LoRA Model and Prompt Template selections.', - }); - - document - .querySelector('#inference_preview_prompt_container .label-wrap') - .addEventListener('click', function () { - tippy('#inference_preview_prompt', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: 'This is the prompt that will be sent to the language model.', - }); - - const update_btn = document.getElementById( - 'inference_update_prompt_preview_btn' - ); - if (update_btn) update_btn.click(); - }); - - function setTooltipForOptions() { - tippy('#inference_temperature', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Controls randomness: Higher values (e.g., 1.0) make the model generate more diverse and random outputs. As the temperature approaches zero, the model will become deterministic and repetitive.
Setting a value larger than 0 will enable sampling.', - allowHTML: true, - }); - - tippy('#inference_top_p', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Controls diversity via nucleus sampling: only the tokens whose cumulative probability exceeds top_p are considered. 0.5 means half of all likelihood-weighted options are considered.
                  Will only take effect if Temperature is set to > 0.', - allowHTML: true, - }); - - tippy('#inference_top_k', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Controls diversity of the generated text by only considering the top_k tokens with the highest probabilities. This method can lead to more focused and coherent outputs by reducing the impact of low probability tokens.
                  Will only take effect if Temperature is set to > 0.', - allowHTML: true, - }); - - tippy('#inference_beams', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Number of candidate sequences explored in parallel during text generation using beam search. A higher value increases the chances of finding high-quality, coherent output, but may slow down the generation process.', - }); - - tippy('#inference_repetition_penalty', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Applies a penalty to the probability of tokens that have already been generated, discouraging the model from repeating the same words or phrases. The penalty is applied by dividing the token probability by a factor based on the number of times the token has appeared in the generated text.', - }); - - tippy('#inference_max_new_tokens', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'Limits the maximum number of tokens generated in a single iteration.', - }); - - tippy('#inference_stream_output', { - placement: 'right', - delay: [500, 0], - animation: 'scale-subtle', - content: - 'When enabled, generated text will be displayed in real-time as it is being produced by the model, allowing you to observe the text generation process as it unfolds.', - }); - } - setTooltipForOptions(); - - const inference_options_accordion_toggle = document.querySelector( - '#inference_options_accordion .label-wrap' - ); - if (inference_options_accordion_toggle) { - inference_options_accordion_toggle.addEventListener('click', function () { - setTooltipForOptions(); - }); - } - }, 100); - - // Show/hide generate and stop button base on the state. - setTimeout(function () { - // Make the '#inference_output > .wrap' element appear - document.getElementById('inference_stop_btn').click(); - - setTimeout(function () { - const output_wrap_element = document.querySelector( - '#inference_output > .wrap' - ); - function handle_output_wrap_element_class_change() { - if (Array.from(output_wrap_element.classList).includes('hide')) { - document.getElementById('inference_generate_btn').style.display = - 'block'; - document.getElementById('inference_stop_btn').style.display = 'none'; - } else { - document.getElementById('inference_generate_btn').style.display = - 'none'; - document.getElementById('inference_stop_btn').style.display = 'block'; - } - } - new MutationObserver(function (mutationsList, observer) { - handle_output_wrap_element_class_change(); - }).observe(output_wrap_element, { - attributes: true, - attributeFilter: ['class'], - }); - handle_output_wrap_element_class_change(); - }, 500); - }, 0); - - // Reload model selection on possible base model change. 
- setTimeout(function () { - const elem = document.getElementById('main_page_tabs_container'); - if (!elem) return; - - let prevClassList = []; - - new MutationObserver(function (mutationsList, observer) { - const currentPrevClassList = prevClassList; - const currentClassList = Array.from(elem.classList); - prevClassList = Array.from(elem.classList); - - if (!currentPrevClassList.includes('hide')) return; - if (currentClassList.includes('hide')) return; - - const inference_reload_selected_models_btn_elem = document.getElementById('inference_reload_selected_models_btn'); - - if (inference_reload_selected_models_btn_elem) inference_reload_selected_models_btn_elem.click(); - }).observe(elem, { - attributes: true, - attributeFilter: ['class'], - }); - }, 0); - - // Debounced updating the prompt preview. - setTimeout(function () { - function debounce(func, wait) { - let timeout; - return function (...args) { - const context = this; - clearTimeout(timeout); - const fn = () => { - if (document.querySelector('#inference_preview_prompt > .wrap:not(.hide)')) { - // Preview request is still loading, wait for 10ms and try again. - timeout = setTimeout(fn, 10); - return; - } - func.apply(context, args); - }; - timeout = setTimeout(fn, wait); - }; - } - - function update_preview() { - const update_btn = document.getElementById( - 'inference_update_prompt_preview_btn' - ); - if (!update_btn) return; - - update_btn.click(); - } - - for (let i = 0; i < 8; i++) { - const e = document.querySelector(`#inference_variable_${i} textarea`); - if (!e) return; - e.addEventListener('input', debounce(update_preview, 500)); - } - - const prompt_template_selector = document.querySelector( - '#inference_prompt_template .wrap-inner' - ); - - if (prompt_template_selector) { - new MutationObserver( - debounce(function () { - if (prompt_template_selector.classList.contains('showOptions')) return; - update_preview(); - }, 500) - ).observe(prompt_template_selector, { - attributes: true, - attributeFilter: ['class'], - }); - } - }, 100); - - // [WORKAROUND-UI01] - setTimeout(function () { - const inference_output_textarea = document.querySelector( - '#inference_output textarea' - ); - if (!inference_output_textarea) return; - const observer = new MutationObserver(function () { - if (inference_output_textarea.getAttribute('rows') === '1') { - setTimeout(function () { - const inference_generate_btn = document.getElementById( - 'inference_generate_btn' - ); - if (inference_generate_btn) inference_generate_btn.click(); - }, 10); - } - }); - observer.observe(inference_output_textarea, { - attributes: true, - attributeFilter: ['rows'], - }); - }, 100); - - return []; - } - """) diff --git a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/monitor/watch.js b/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/monitor/watch.js deleted file mode 100644 index 1ef14086d25ce94482325ea5e2c2d9b428f3e554..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/monitor/watch.js +++ /dev/null @@ -1,239 +0,0 @@ -module.exports.watch = watch; -module.exports.resetWatchers = resetWatchers; - -var debug = require('debug')('nodemon:watch'); -var debugRoot = require('debug')('nodemon'); -var chokidar = require('chokidar'); -var undefsafe = require('undefsafe'); -var config = require('../config'); -var path = require('path'); -var utils = require('../utils'); -var bus = utils.bus; -var match = require('./match'); -var watchers = []; -var debouncedBus; - -bus.on('reset', resetWatchers); - -function 
resetWatchers() { - debugRoot('resetting watchers'); - watchers.forEach(function (watcher) { - watcher.close(); - }); - watchers = []; -} - -function watch() { - if (watchers.length) { - debug('early exit on watch, still watching (%s)', watchers.length); - return; - } - - var dirs = [].slice.call(config.dirs); - - debugRoot('start watch on: %s', dirs.join(', ')); - const rootIgnored = config.options.ignore; - debugRoot('ignored', rootIgnored); - - var watchedFiles = []; - - const promise = new Promise(function (resolve) { - const dotFilePattern = /[/\\]\./; - var ignored = match.rulesToMonitor( - [], // not needed - Array.from(rootIgnored), - config - ).map(pattern => pattern.slice(1)); - - const addDotFile = dirs.filter(dir => dir.match(dotFilePattern)); - - // don't ignore dotfiles if explicitly watched. - if (addDotFile.length === 0) { - ignored.push(dotFilePattern); - } - - var watchOptions = { - ignorePermissionErrors: true, - ignored: ignored, - persistent: true, - usePolling: config.options.legacyWatch || false, - interval: config.options.pollingInterval, - // note to future developer: I've gone back and forth on adding `cwd` - // to the props and in some cases it fixes bugs but typically it causes - // bugs elsewhere (since nodemon is used is so many ways). the final - // decision is to *not* use it at all and work around it - // cwd: ... - }; - - if (utils.isWindows) { - watchOptions.disableGlobbing = true; - } - - if (process.env.TEST) { - watchOptions.useFsEvents = false; - } - - var watcher = chokidar.watch( - dirs, - Object.assign({}, watchOptions, config.options.watchOptions || {}) - ); - - watcher.ready = false; - - var total = 0; - - watcher.on('change', filterAndRestart); - watcher.on('add', function (file) { - if (watcher.ready) { - return filterAndRestart(file); - } - - watchedFiles.push(file); - bus.emit('watching', file); - debug('chokidar watching: %s', file); - }); - watcher.on('ready', function () { - watchedFiles = Array.from(new Set(watchedFiles)); // ensure no dupes - total = watchedFiles.length; - watcher.ready = true; - resolve(total); - debugRoot('watch is complete'); - }); - - watcher.on('error', function (error) { - if (error.code === 'EINVAL') { - utils.log.error( - 'Internal watch failed. Likely cause: too many ' + - 'files being watched (perhaps from the root of a drive?\n' + - 'See https://github.com/paulmillr/chokidar/issues/229 for details' - ); - } else { - utils.log.error('Internal watch failed: ' + error.message); - process.exit(1); - } - }); - - watchers.push(watcher); - }); - - return promise.catch(e => { - // this is a core error and it should break nodemon - so I have to break - // out of a promise using the setTimeout - setTimeout(() => { - throw e; - }); - }).then(function () { - utils.log.detail(`watching ${watchedFiles.length} file${ - watchedFiles.length === 1 ? 
'' : 's'}`); - return watchedFiles; - }); -} - -function filterAndRestart(files) { - if (!Array.isArray(files)) { - files = [files]; - } - - if (files.length) { - var cwd = process.cwd(); - if (this.options && this.options.cwd) { - cwd = this.options.cwd; - } - - utils.log.detail( - 'files triggering change check: ' + - files - .map(file => { - const res = path.relative(cwd, file); - return res; - }) - .join(', ') - ); - - // make sure the path is right and drop an empty - // filenames (sometimes on windows) - files = files.filter(Boolean).map(file => { - return path.relative(process.cwd(), path.relative(cwd, file)); - }); - - if (utils.isWindows) { - // ensure the drive letter is in uppercase (c:\foo -> C:\foo) - files = files.map(f => { - if (f.indexOf(':') === -1) { return f; } - return f[0].toUpperCase() + f.slice(1); - }); - } - - - debug('filterAndRestart on', files); - - var matched = match( - files, - config.options.monitor, - undefsafe(config, 'options.execOptions.ext') - ); - - debug('matched?', JSON.stringify(matched)); - - // if there's no matches, then test to see if the changed file is the - // running script, if so, let's allow a restart - if (config.options.execOptions && config.options.execOptions.script) { - const script = path.resolve(config.options.execOptions.script); - if (matched.result.length === 0 && script) { - const length = script.length; - files.find(file => { - if (file.substr(-length, length) === script) { - matched = { - result: [file], - total: 1, - }; - return true; - } - }); - } - } - - utils.log.detail( - 'changes after filters (before/after): ' + - [files.length, matched.result.length].join('/') - ); - - // reset the last check so we're only looking at recently modified files - config.lastStarted = Date.now(); - - if (matched.result.length) { - if (config.options.delay > 0) { - utils.log.detail('delaying restart for ' + config.options.delay + 'ms'); - if (debouncedBus === undefined) { - debouncedBus = debounce(restartBus, config.options.delay); - } - debouncedBus(matched); - } else { - return restartBus(matched); - } - } - } -} - -function restartBus(matched) { - utils.log.status('restarting due to changes...'); - matched.result.map(file => { - utils.log.detail(path.relative(process.cwd(), file)); - }); - - if (config.options.verbose) { - utils.log._log(''); - } - - bus.emit('restart', matched.result); -} - -function debounce(fn, delay) { - var timer = null; - return function () { - const context = this; - const args = arguments; - clearTimeout(timer); - timer = setTimeout(() =>fn.apply(context, args), delay); - }; -} diff --git a/spaces/zhang-wei-jian/docker/node_modules/vary/index.js b/spaces/zhang-wei-jian/docker/node_modules/vary/index.js deleted file mode 100644 index 5b5e741279d4b800b0c408c5efbac8de6ece450b..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/vary/index.js +++ /dev/null @@ -1,149 +0,0 @@ -/*! - * vary - * Copyright(c) 2014-2017 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * Module exports. - */ - -module.exports = vary -module.exports.append = append - -/** - * RegExp to match field-name in RFC 7230 sec 3.2 - * - * field-name = token - * token = 1*tchar - * tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" - * / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" - * / DIGIT / ALPHA - * ; any VCHAR, except delimiters - */ - -var FIELD_NAME_REGEXP = /^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/ - -/** - * Append a field to a vary header. 
- * - * @param {String} header - * @param {String|Array} field - * @return {String} - * @public - */ - -function append (header, field) { - if (typeof header !== 'string') { - throw new TypeError('header argument is required') - } - - if (!field) { - throw new TypeError('field argument is required') - } - - // get fields array - var fields = !Array.isArray(field) - ? parse(String(field)) - : field - - // assert on invalid field names - for (var j = 0; j < fields.length; j++) { - if (!FIELD_NAME_REGEXP.test(fields[j])) { - throw new TypeError('field argument contains an invalid header name') - } - } - - // existing, unspecified vary - if (header === '*') { - return header - } - - // enumerate current values - var val = header - var vals = parse(header.toLowerCase()) - - // unspecified vary - if (fields.indexOf('*') !== -1 || vals.indexOf('*') !== -1) { - return '*' - } - - for (var i = 0; i < fields.length; i++) { - var fld = fields[i].toLowerCase() - - // append value (case-preserving) - if (vals.indexOf(fld) === -1) { - vals.push(fld) - val = val - ? val + ', ' + fields[i] - : fields[i] - } - } - - return val -} - -/** - * Parse a vary header into an array. - * - * @param {String} header - * @return {Array} - * @private - */ - -function parse (header) { - var end = 0 - var list = [] - var start = 0 - - // gather tokens - for (var i = 0, len = header.length; i < len; i++) { - switch (header.charCodeAt(i)) { - case 0x20: /* */ - if (start === end) { - start = end = i + 1 - } - break - case 0x2c: /* , */ - list.push(header.substring(start, end)) - start = end = i + 1 - break - default: - end = i + 1 - break - } - } - - // final token - list.push(header.substring(start, end)) - - return list -} - -/** - * Mark that a request is varied on a header field. - * - * @param {Object} res - * @param {String|Array} field - * @public - */ - -function vary (res, field) { - if (!res || !res.getHeader || !res.setHeader) { - // quack quack - throw new TypeError('res argument is required') - } - - // get existing header - var val = res.getHeader('Vary') || '' - var header = Array.isArray(val) - ? val.join(', ') - : String(val) - - // set new header - if ((val = append(header, field))) { - res.setHeader('Vary', val) - } -} diff --git a/spaces/zideliu/styledrop/timm/models/mobilenetv3.py b/spaces/zideliu/styledrop/timm/models/mobilenetv3.py deleted file mode 100644 index 8a48ce728b065033470b297f83378d456554c4f4..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/mobilenetv3.py +++ /dev/null @@ -1,444 +0,0 @@ - -""" MobileNet V3 - -A PyTorch impl of MobileNet-V3, compatible with TF weights from official impl. 
- -Paper: Searching for MobileNetV3 - https://arxiv.org/abs/1905.02244 - -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch -import torch.nn as nn -import torch.nn.functional as F - -from typing import List - -from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD -from .efficientnet_blocks import round_channels, resolve_bn_args, resolve_act_layer, BN_EPS_TF_DEFAULT -from .efficientnet_builder import EfficientNetBuilder, decode_arch_def, efficientnet_init_weights -from .features import FeatureInfo, FeatureHooks -from .helpers import build_model_with_cfg, default_cfg_for_features -from .layers import SelectAdaptivePool2d, Linear, create_conv2d, get_act_fn, hard_sigmoid -from .registry import register_model - -__all__ = ['MobileNetV3'] - - -def _cfg(url='', **kwargs): - return { - 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (1, 1), - 'crop_pct': 0.875, 'interpolation': 'bilinear', - 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, - 'first_conv': 'conv_stem', 'classifier': 'classifier', - **kwargs - } - - -default_cfgs = { - 'mobilenetv3_large_075': _cfg(url=''), - 'mobilenetv3_large_100': _cfg( - interpolation='bicubic', - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth'), - 'mobilenetv3_small_075': _cfg(url=''), - 'mobilenetv3_small_100': _cfg(url=''), - 'mobilenetv3_rw': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_100-35495452.pth', - interpolation='bicubic'), - 'tf_mobilenetv3_large_075': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_075-150ee8b0.pth', - mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), - 'tf_mobilenetv3_large_100': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_100-427764d5.pth', - mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), - 'tf_mobilenetv3_large_minimal_100': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_minimal_100-8596ae28.pth', - mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), - 'tf_mobilenetv3_small_075': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_075-da427f52.pth', - mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), - 'tf_mobilenetv3_small_100': _cfg( - url= 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_100-37f49e2b.pth', - mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), - 'tf_mobilenetv3_small_minimal_100': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_minimal_100-922a7843.pth', - mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD), -} - -_DEBUG = False - - -class MobileNetV3(nn.Module): - """ MobiletNet-V3 - - Based on my EfficientNet implementation and building blocks, this model utilizes the MobileNet-v3 specific - 'efficient head', where global pooling is done before the head convolution without a final batch-norm - layer before the classifier. 
- - Paper: https://arxiv.org/abs/1905.02244 - """ - - def __init__(self, block_args, num_classes=1000, in_chans=3, stem_size=16, num_features=1280, head_bias=True, - channel_multiplier=1.0, pad_type='', act_layer=nn.ReLU, drop_rate=0., drop_path_rate=0., - se_kwargs=None, norm_layer=nn.BatchNorm2d, norm_kwargs=None, global_pool='avg'): - super(MobileNetV3, self).__init__() - - self.num_classes = num_classes - self.num_features = num_features - self.drop_rate = drop_rate - - # Stem - stem_size = round_channels(stem_size, channel_multiplier) - self.conv_stem = create_conv2d(in_chans, stem_size, 3, stride=2, padding=pad_type) - self.bn1 = norm_layer(stem_size, **norm_kwargs) - self.act1 = act_layer(inplace=True) - - # Middle stages (IR/ER/DS Blocks) - builder = EfficientNetBuilder( - channel_multiplier, 8, None, 32, pad_type, act_layer, se_kwargs, - norm_layer, norm_kwargs, drop_path_rate, verbose=_DEBUG) - self.blocks = nn.Sequential(*builder(stem_size, block_args)) - self.feature_info = builder.features - head_chs = builder.in_chs - - # Head + Pooling - self.global_pool = SelectAdaptivePool2d(pool_type=global_pool) - num_pooled_chs = head_chs * self.global_pool.feat_mult() - self.conv_head = create_conv2d(num_pooled_chs, self.num_features, 1, padding=pad_type, bias=head_bias) - self.act2 = act_layer(inplace=True) - self.classifier = Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() - - efficientnet_init_weights(self) - - def as_sequential(self): - layers = [self.conv_stem, self.bn1, self.act1] - layers.extend(self.blocks) - layers.extend([self.global_pool, self.conv_head, self.act2]) - layers.extend([nn.Flatten(), nn.Dropout(self.drop_rate), self.classifier]) - return nn.Sequential(*layers) - - def get_classifier(self): - return self.classifier - - def reset_classifier(self, num_classes, global_pool='avg'): - self.num_classes = num_classes - # cannot meaningfully change pooling of efficient head after creation - self.global_pool = SelectAdaptivePool2d(pool_type=global_pool) - self.classifier = Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - x = self.conv_stem(x) - x = self.bn1(x) - x = self.act1(x) - x = self.blocks(x) - x = self.global_pool(x) - x = self.conv_head(x) - x = self.act2(x) - return x - - def forward(self, x): - x = self.forward_features(x) - if not self.global_pool.is_identity(): - x = x.flatten(1) - if self.drop_rate > 0.: - x = F.dropout(x, p=self.drop_rate, training=self.training) - return self.classifier(x) - - -class MobileNetV3Features(nn.Module): - """ MobileNetV3 Feature Extractor - - A work-in-progress feature extraction module for MobileNet-V3 to use as a backbone for segmentation - and object detection models. 
- """ - - def __init__(self, block_args, out_indices=(0, 1, 2, 3, 4), feature_location='bottleneck', - in_chans=3, stem_size=16, channel_multiplier=1.0, output_stride=32, pad_type='', - act_layer=nn.ReLU, drop_rate=0., drop_path_rate=0., se_kwargs=None, - norm_layer=nn.BatchNorm2d, norm_kwargs=None): - super(MobileNetV3Features, self).__init__() - norm_kwargs = norm_kwargs or {} - self.drop_rate = drop_rate - - # Stem - stem_size = round_channels(stem_size, channel_multiplier) - self.conv_stem = create_conv2d(in_chans, stem_size, 3, stride=2, padding=pad_type) - self.bn1 = norm_layer(stem_size, **norm_kwargs) - self.act1 = act_layer(inplace=True) - - # Middle stages (IR/ER/DS Blocks) - builder = EfficientNetBuilder( - channel_multiplier, 8, None, output_stride, pad_type, act_layer, se_kwargs, - norm_layer, norm_kwargs, drop_path_rate, feature_location=feature_location, verbose=_DEBUG) - self.blocks = nn.Sequential(*builder(stem_size, block_args)) - self.feature_info = FeatureInfo(builder.features, out_indices) - self._stage_out_idx = {v['stage']: i for i, v in enumerate(self.feature_info) if i in out_indices} - - efficientnet_init_weights(self) - - # Register feature extraction hooks with FeatureHooks helper - self.feature_hooks = None - if feature_location != 'bottleneck': - hooks = self.feature_info.get_dicts(keys=('module', 'hook_type')) - self.feature_hooks = FeatureHooks(hooks, self.named_modules()) - - def forward(self, x) -> List[torch.Tensor]: - x = self.conv_stem(x) - x = self.bn1(x) - x = self.act1(x) - if self.feature_hooks is None: - features = [] - if 0 in self._stage_out_idx: - features.append(x) # add stem out - for i, b in enumerate(self.blocks): - x = b(x) - if i + 1 in self._stage_out_idx: - features.append(x) - return features - else: - self.blocks(x) - out = self.feature_hooks.get_output(x.device) - return list(out.values()) - - -def _create_mnv3(model_kwargs, variant, pretrained=False): - features_only = False - model_cls = MobileNetV3 - if model_kwargs.pop('features_only', False): - features_only = True - model_kwargs.pop('num_classes', 0) - model_kwargs.pop('num_features', 0) - model_kwargs.pop('head_conv', None) - model_kwargs.pop('head_bias', None) - model_cls = MobileNetV3Features - model = build_model_with_cfg( - model_cls, variant, pretrained, default_cfg=default_cfgs[variant], - pretrained_strict=not features_only, **model_kwargs) - if features_only: - model.default_cfg = default_cfg_for_features(model.default_cfg) - return model - - -def _gen_mobilenet_v3_rw(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a MobileNet-V3 model. - - Ref impl: ? - Paper: https://arxiv.org/abs/1905.02244 - - Args: - channel_multiplier: multiplier to number of channels per layer. 
- """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16_nre_noskip'], # relu - # stage 1, 112x112 in - ['ir_r1_k3_s2_e4_c24_nre', 'ir_r1_k3_s1_e3_c24_nre'], # relu - # stage 2, 56x56 in - ['ir_r3_k5_s2_e3_c40_se0.25_nre'], # relu - # stage 3, 28x28 in - ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # hard-swish - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112_se0.25'], # hard-swish - # stage 5, 14x14in - ['ir_r3_k5_s2_e6_c160_se0.25'], # hard-swish - # stage 6, 7x7 in - ['cn_r1_k1_s1_c960'], # hard-swish - ] - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - head_bias=False, - channel_multiplier=channel_multiplier, - norm_kwargs=resolve_bn_args(kwargs), - act_layer=resolve_act_layer(kwargs, 'hard_swish'), - se_kwargs=dict(gate_fn=get_act_fn('hard_sigmoid'), reduce_mid=True, divisor=1), - **kwargs, - ) - model = _create_mnv3(model_kwargs, variant, pretrained) - return model - - -def _gen_mobilenet_v3(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a MobileNet-V3 model. - - Ref impl: ? - Paper: https://arxiv.org/abs/1905.02244 - - Args: - channel_multiplier: multiplier to number of channels per layer. - """ - if 'small' in variant: - num_features = 1024 - if 'minimal' in variant: - act_layer = resolve_act_layer(kwargs, 'relu') - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s2_e1_c16'], - # stage 1, 56x56 in - ['ir_r1_k3_s2_e4.5_c24', 'ir_r1_k3_s1_e3.67_c24'], - # stage 2, 28x28 in - ['ir_r1_k3_s2_e4_c40', 'ir_r2_k3_s1_e6_c40'], - # stage 3, 14x14 in - ['ir_r2_k3_s1_e3_c48'], - # stage 4, 14x14in - ['ir_r3_k3_s2_e6_c96'], - # stage 6, 7x7 in - ['cn_r1_k1_s1_c576'], - ] - else: - act_layer = resolve_act_layer(kwargs, 'hard_swish') - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s2_e1_c16_se0.25_nre'], # relu - # stage 1, 56x56 in - ['ir_r1_k3_s2_e4.5_c24_nre', 'ir_r1_k3_s1_e3.67_c24_nre'], # relu - # stage 2, 28x28 in - ['ir_r1_k5_s2_e4_c40_se0.25', 'ir_r2_k5_s1_e6_c40_se0.25'], # hard-swish - # stage 3, 14x14 in - ['ir_r2_k5_s1_e3_c48_se0.25'], # hard-swish - # stage 4, 14x14in - ['ir_r3_k5_s2_e6_c96_se0.25'], # hard-swish - # stage 6, 7x7 in - ['cn_r1_k1_s1_c576'], # hard-swish - ] - else: - num_features = 1280 - if 'minimal' in variant: - act_layer = resolve_act_layer(kwargs, 'relu') - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16'], - # stage 1, 112x112 in - ['ir_r1_k3_s2_e4_c24', 'ir_r1_k3_s1_e3_c24'], - # stage 2, 56x56 in - ['ir_r3_k3_s2_e3_c40'], - # stage 3, 28x28 in - ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112'], - # stage 5, 14x14in - ['ir_r3_k3_s2_e6_c160'], - # stage 6, 7x7 in - ['cn_r1_k1_s1_c960'], - ] - else: - act_layer = resolve_act_layer(kwargs, 'hard_swish') - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16_nre'], # relu - # stage 1, 112x112 in - ['ir_r1_k3_s2_e4_c24_nre', 'ir_r1_k3_s1_e3_c24_nre'], # relu - # stage 2, 56x56 in - ['ir_r3_k5_s2_e3_c40_se0.25_nre'], # relu - # stage 3, 28x28 in - ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # hard-swish - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112_se0.25'], # hard-swish - # stage 5, 14x14in - ['ir_r3_k5_s2_e6_c160_se0.25'], # hard-swish - # stage 6, 7x7 in - ['cn_r1_k1_s1_c960'], # hard-swish - ] - - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - num_features=num_features, - stem_size=16, - channel_multiplier=channel_multiplier, - norm_kwargs=resolve_bn_args(kwargs), - act_layer=act_layer, - 
se_kwargs=dict(act_layer=nn.ReLU, gate_fn=hard_sigmoid, reduce_mid=True, divisor=8), - **kwargs, - ) - model = _create_mnv3(model_kwargs, variant, pretrained) - return model - - -@register_model -def mobilenetv3_large_075(pretrained=False, **kwargs): - """ MobileNet V3 """ - model = _gen_mobilenet_v3('mobilenetv3_large_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -@register_model -def mobilenetv3_large_100(pretrained=False, **kwargs): - """ MobileNet V3 """ - model = _gen_mobilenet_v3('mobilenetv3_large_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -@register_model -def mobilenetv3_small_075(pretrained=False, **kwargs): - """ MobileNet V3 """ - model = _gen_mobilenet_v3('mobilenetv3_small_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -@register_model -def mobilenetv3_small_100(pretrained=False, **kwargs): - """ MobileNet V3 """ - model = _gen_mobilenet_v3('mobilenetv3_small_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -@register_model -def mobilenetv3_rw(pretrained=False, **kwargs): - """ MobileNet V3 """ - if pretrained: - # pretrained model trained with non-default BN epsilon - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - model = _gen_mobilenet_v3_rw('mobilenetv3_rw', 1.0, pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_mobilenetv3_large_075(pretrained=False, **kwargs): - """ MobileNet V3 """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_large_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_mobilenetv3_large_100(pretrained=False, **kwargs): - """ MobileNet V3 """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_large_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_mobilenetv3_large_minimal_100(pretrained=False, **kwargs): - """ MobileNet V3 """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_large_minimal_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_mobilenetv3_small_075(pretrained=False, **kwargs): - """ MobileNet V3 """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_small_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_mobilenetv3_small_100(pretrained=False, **kwargs): - """ MobileNet V3 """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_small_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_mobilenetv3_small_minimal_100(pretrained=False, **kwargs): - """ MobileNet V3 """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_small_minimal_100', 1.0, pretrained=pretrained, **kwargs) - return model
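For reference, a minimal usage sketch of the factory functions defined in the deleted module above — assuming the vendored `timm` package from this Space is importable as `timm.models.mobilenetv3`; it only exercises code paths shown in the file and fetches no pretrained weights:

```python
import torch
from timm.models.mobilenetv3 import mobilenetv3_large_100  # import path assumed from the diff header

# Classification model, randomly initialised (pretrained=False, so no weight download).
model = mobilenetv3_large_100(pretrained=False, num_classes=10)
model.eval()

x = torch.randn(1, 3, 224, 224)  # matches the default_cfg input_size
with torch.no_grad():
    logits = model(x)  # -> torch.Size([1, 10])

# Feature-backbone variant: features_only=True routes _create_mnv3 to
# MobileNetV3Features, which returns a list of intermediate feature maps.
backbone = mobilenetv3_large_100(pretrained=False, features_only=True)
with torch.no_grad():
    feats = backbone(x)  # list of torch.Tensor, one per out_index
```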